# Explicit inverses of some tridiagonal matrices


*Linear Algebra and its Applications* 325 (2001) 7–21, www.elsevier.com/locate/laa

C.M. da Fonseca, J. Petronilho — Depto. de Matemática, Faculdade de Ciências e Tecnologia, Univ. Coimbra, Apartado 3008, 3000 Coimbra, Portugal

Received 28 April 1999; accepted 26 June 2000. Submitted by G. Heinig.

**Abstract.** We give explicit inverses of tridiagonal 2-Toeplitz and 3-Toeplitz matrices which generalize some well-known results concerning the inverse of a tridiagonal Toeplitz matrix. © 2001 Elsevier Science Inc. All rights reserved.

*AMS classification:* 33C45; 42C05

*Keywords:* Invertibility of matrices; r-Toeplitz matrices; Orthogonal polynomials

## 1. Introduction

In recent years the invertibility of nonsingular tridiagonal or block tridiagonal matrices has been widely investigated in different fields of applied linear algebra (for historical notes see [8]), and several numerical methods, more or less efficient, have been developed to give expressions for the entries of the inverse of this kind of matrix. However, explicit inverses are known only in a few cases, in particular when the tridiagonal matrix is symmetric with constant diagonals and subject to some restrictions (cf. [3,8,10]). Furthermore, Lewis [5] gave a different way to compute other explicit inverses of nonsymmetric tridiagonal matrices.

In [1], Gover defines a tridiagonal 2-Toeplitz matrix of order $n$ as a matrix $A=(a_{ij})$ such that $a_{ij}=0$ if $|i-j|>1$ and $a_{ij}=a_{kl}$ if $(i,j)\equiv(k,l)\pmod 2$, i.e.,

$$A=\begin{pmatrix}a_1&b_1&&&&\\c_1&a_2&b_2&&&\\&c_2&a_1&b_1&&\\&&c_1&a_2&b_2&\\&&&c_2&a_1&\ddots\\&&&&\ddots&\ddots\end{pmatrix}_{n\times n}.\qquad(1)$$

Similarly, a tridiagonal 3-Toeplitz matrix ($n\times n$) is of the form

$$B=\begin{pmatrix}a_1&b_1&&&&\\c_1&a_2&b_2&&&\\&c_2&a_3&b_3&&\\&&c_3&a_1&b_1&\\&&&c_1&a_2&\ddots\\&&&&\ddots&\ddots\end{pmatrix}_{n\times n}.\qquad(2)$$

Making use of the theory of orthogonal polynomials, we will give the explicit inverse of tridiagonal 2-Toeplitz and 3-Toeplitz matrices, based on recent results from [1,6,7]. As a corollary, we will obtain the explicit inverse of a tridiagonal Toeplitz matrix, i.e., a tridiagonal matrix with constant diagonals (not necessarily symmetric), and the proofs of the results by Kamps [3,4] involving the sum of all the entries of the inverse can be simplified. Throughout this paper it is assumed that all the matrices are invertible.

## 2. Inverse of a tridiagonal matrix

Let $T$ be an $n\times n$ real nonsingular tridiagonal matrix

$$T=\begin{pmatrix}a_1&b_1&&&0\\c_1&a_2&b_2&&\\&c_2&\ddots&\ddots&\\&&\ddots&\ddots&b_{n-1}\\0&&&c_{n-1}&a_n\end{pmatrix}.\qquad(3)$$

Denote $T^{-1}=(\alpha_{ij})$. In [5] Lewis proved the following result.

**Lemma 2.1.** *Let $T$ be the matrix (3) and assume that $b_l\neq0$ for $l=1,\dots,n-1$. Then $\alpha_{ji}=\gamma_{ij}\alpha_{ij}$ when $i<j$, where*
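The banded periodic structure of (1) and (2) is easy to realize programmatically. The following sketch is our own illustrative helper (not from the paper): it builds the $n\times n$ tridiagonal $r$-Toeplitz matrix whose three diagonals repeat $a_1,\dots,a_r$, $b_1,\dots,b_r$ and $c_1,\dots,c_r$ cyclically, so `r = 2` reproduces (1) and `r = 3` reproduces (2).

```python
def tridiag_r_toeplitz(n, a, b, c):
    """n x n tridiagonal matrix whose diagonal, superdiagonal and
    subdiagonal repeat the entries of a, b, c with period r = len(a)."""
    r = len(a)
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = a[i % r]            # diagonal: a_1, a_2, ..., a_r, a_1, ...
        if i + 1 < n:
            M[i][i + 1] = b[i % r]    # superdiagonal: b_1, b_2, ...
            M[i + 1][i] = c[i % r]    # subdiagonal: c_1, c_2, ...
    return M

# Example: a 5 x 5 matrix of type (1)
A = tridiag_r_toeplitz(5, [1.0, 2.0], [3.0, 4.0], [5.0, 6.0])
```

The same helper with period-3 lists gives a matrix of type (2).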

$$\gamma_{ij}=\prod_{l=i}^{j-1}\frac{c_l}{b_l}.$$

Of course this reduces to $\alpha_{ij}=\alpha_{ji}$ when $T$ is symmetric. Using this lemma, one can prove that there exist two finite sequences $\{u_i\}$ and $\{v_i\}$ $(i=1,\dots,n)$ such that

$$T^{-1}=\begin{pmatrix}u_1v_1&u_1v_2&u_1v_3&\cdots&u_1v_n\\\gamma_{12}u_1v_2&u_2v_2&u_2v_3&\cdots&u_2v_n\\\gamma_{13}u_1v_3&\gamma_{23}u_2v_3&u_3v_3&\cdots&u_3v_n\\\vdots&\vdots&\vdots&\ddots&\vdots\\\gamma_{1,n}u_1v_n&\gamma_{2,n}u_2v_n&\gamma_{3,n}u_3v_n&\cdots&u_nv_n\end{pmatrix}.\qquad(4)$$

Since $\{u_i\}$ and $\{v_i\}$ are only defined up to a multiplicative constant, we take $u_1=1$. The following result shows how to compute $\{u_i\}$ and $\{v_i\}$; the determination of these sequences is very similar to the particular case studied in [8], where the reader may check the details.

**Lemma 2.2.** *(i) The $\{v_i\}$ can be computed from*
$$v_1=\frac1{d_1},\qquad v_k=-\frac{b_{k-1}}{d_k}\,v_{k-1},\quad k=2,\dots,n,$$
*where*
$$d_n=a_n,\qquad d_i=a_i-\frac{b_ic_i}{d_{i+1}},\quad i=n-1,\dots,1.$$
*(ii) The $\{u_i\}$ can be computed from*
$$u_n=\frac1{\delta_nv_n},\qquad u_k=-\frac{b_k}{\delta_k}\,u_{k+1},\quad k=n-1,\dots,1,$$
*where*
$$\delta_1=a_1,\qquad \delta_i=a_i-\frac{b_{i-1}c_{i-1}}{\delta_{i-1}},\quad i=2,\dots,n.$$

Notice that for centro-symmetric matrices, i.e., $a_{ij}=a_{n+1-i,n+1-j}$, we have $v_i=u_{n+1-i}$.

The following proposition is known in the literature. We give a proof based on a reading of [8].

**Theorem 2.1.** *Let $T$ be the $n\times n$ tridiagonal matrix defined in (3) and assume that $b_l\neq0$ for $l=1,\dots,n-1$. Then*

$$(T^{-1})_{ij}=\begin{cases}(-1)^{i+j}\,b_i\cdots b_{j-1}\,\dfrac{d_{j+1}\cdots d_n}{\delta_i\cdots\delta_n}&\text{if }i\le j,\\[2mm](-1)^{i+j}\,c_j\cdots c_{i-1}\,\dfrac{d_{i+1}\cdots d_n}{\delta_j\cdots\delta_n}&\text{if }i>j\end{cases}$$
*(with the convention that an empty product equals 1).*

**Proof.** Let us assume that $i\le j$. Then
$$(T^{-1})_{ij}=u_iv_j=(-1)^{n-i}\,\frac{b_i\cdots b_{n-1}}{\delta_i\cdots\delta_n\,v_n}\;(-1)^{j-1}\,\frac{b_1\cdots b_{j-1}}{d_1\cdots d_j}.$$
But
$$v_n=(-1)^{n-1}\,\frac{b_1\cdots b_{n-1}}{d_1\cdots d_n}.$$
Therefore
$$(T^{-1})_{ij}=(-1)^{n-i}\,\frac{b_i\cdots b_{n-1}}{\delta_i\cdots\delta_n}\;(-1)^{n-1}\,\frac{d_1\cdots d_n}{b_1\cdots b_{n-1}}\;(-1)^{j-1}\,\frac{b_1\cdots b_{j-1}}{d_1\cdots d_j}=(-1)^{i+j}\,b_i\cdots b_{j-1}\,\frac{d_{j+1}\cdots d_n}{\delta_i\cdots\delta_n}.\qquad\square$$

If we set
$$\delta_i=\frac{\theta_i}{\theta_{i-1}},\qquad \theta_0=1,\quad\theta_1=a_1,\qquad(5)$$
we get the recurrence relation $\theta_i=a_i\theta_{i-1}-b_{i-1}c_{i-1}\theta_{i-2}$; and if we now put
$$d_i=\frac{\phi_i}{\phi_{i+1}},\qquad \phi_{n+1}=1,\quad\phi_n=a_n,\qquad(6)$$
we get the recurrence relation $\phi_i=a_i\phi_{i+1}-b_ic_i\phi_{i+2}$. As a consequence, we arrive at
$$(T^{-1})_{ij}=\begin{cases}(-1)^{i+j}\,b_i\cdots b_{j-1}\,\theta_{i-1}\phi_{j+1}/\theta_n&\text{if }i\le j,\\(-1)^{i+j}\,c_j\cdots c_{i-1}\,\theta_{j-1}\phi_{i+1}/\theta_n&\text{if }i>j.\end{cases}\qquad(7)$$

These are the general relations given by Usmani [9], which we will use throughout this paper. Notice that $\theta_n=\delta_1\cdots\delta_n=\det T$.
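Formula (7) translates directly into a short procedure: two linear sweeps produce the $\theta_i$ and $\phi_i$, after which each entry of $T^{-1}$ is a signed product. The sketch below uses our own naming and 1-based/0-based index conventions, and assumes $\theta_n=\det T\neq0$.

```python
def tridiag_inverse(a, b, c):
    """Inverse of the tridiagonal matrix (3) via formula (7).
    a: diagonal (length n), b: superdiagonal, c: subdiagonal (length n-1)."""
    n = len(a)
    # theta_i = a_i theta_{i-1} - b_{i-1} c_{i-1} theta_{i-2},  theta_0 = 1
    theta = [1.0, a[0]]
    for i in range(2, n + 1):
        theta.append(a[i-1]*theta[i-1] - b[i-2]*c[i-2]*theta[i-2])
    # phi_i = a_i phi_{i+1} - b_i c_i phi_{i+2},  phi_{n+1} = 1, phi_n = a_n
    phi = [0.0] * (n + 2)
    phi[n+1], phi[n] = 1.0, a[n-1]
    for i in range(n - 1, 0, -1):
        phi[i] = a[i-1]*phi[i+1] - b[i-1]*c[i-1]*phi[i+2]
    inv = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i <= j:
                prod = 1.0
                for l in range(i, j):          # b_i ... b_{j-1}
                    prod *= b[l-1]
                inv[i-1][j-1] = (-1)**(i+j) * prod * theta[i-1]*phi[j+1]/theta[n]
            else:
                prod = 1.0
                for l in range(j, i):          # c_j ... c_{i-1}
                    prod *= c[l-1]
                inv[i-1][j-1] = (-1)**(i+j) * prod * theta[j-1]*phi[i+1]/theta[n]
    return inv

# Example with an arbitrary nonsingular 4 x 4 tridiagonal matrix
a = [2.0, 3.0, 2.5, 4.0]; b = [1.0, 0.5, 2.0]; c = [0.7, 1.2, 0.3]
Tinv = tridiag_inverse(a, b, c)
```

A quick consistency check is to multiply the original matrix by `Tinv` and compare with the identity.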

## 3. Chebyshev polynomials of the second kind

In what follows it is useful to consider the set of polynomials $\{U_n\}_{n\ge0}$, with each $U_n$ of degree exactly $n$, satisfying the recurrence relation
$$U_{n+1}(x)=2xU_n(x)-U_{n-1}(x),\qquad n\ge1,$$
with initial conditions $U_0=1$ and $U_1=2x$. These polynomials are called Chebyshev polynomials of the second kind, and they have the form
$$U_n(x)=\frac{\sin(n+1)\theta}{\sin\theta},\qquad \cos\theta=x,$$
when $|x|<1$. When $|x|>1$, they have the form
$$U_n(x)=\frac{\sinh(n+1)\theta}{\sinh\theta},\qquad \cosh\theta=x.$$
In particular, $U_n(\pm1)=(\pm1)^n(n+1)$. In any case, if $x\neq\pm1$, then
$$U_n(x)=\frac{r_+^{\,n+1}-r_-^{\,n+1}}{r_+-r_-},$$
where $r_\pm=x\pm\sqrt{x^2-1}$ are the two solutions of the quadratic equation $r^2-2xr+1=0$.

The following proposition is a slight extension of Lemma 2.5 of Meurant [8].

**Lemma 3.1.** *Fix a positive integer $n$ and let $a$ and $b$ be nonzero real numbers such that $U_i(a/2b)\neq0$ for all $i=1,2,\dots,n$. Consider the recurrence relation*
$$\alpha_1=a,\qquad \alpha_i=a-\frac{b^2}{\alpha_{i-1}},\quad i=2,\dots,n.$$
*Under these conditions, if $a=\pm2b$, then*
$$\alpha_i=\pm b\,\frac{i+1}{i}.$$
*Otherwise*
$$\alpha_i=b\,\frac{r_+^{\,i+1}-r_-^{\,i+1}}{r_+^{\,i}-r_-^{\,i}},$$
*where*
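The three representations of $U_n$ above are easy to cross-check numerically. This snippet (our helper names, not from the paper) compares the three-term recurrence with the trigonometric form on $|x|<1$, and with the hyperbolic and $r_\pm$ forms on $x>1$.

```python
import math

def U_rec(n, x):
    """Chebyshev polynomial of the second kind U_n(x), by the recurrence
    U_{n+1} = 2x U_n - U_{n-1}, U_0 = 1, U_1 = 2x."""
    u0, u1 = 1.0, 2.0 * x
    if n == 0:
        return u0
    for _ in range(n - 1):
        u0, u1 = u1, 2.0 * x * u1 - u0
    return u1

n = 7
# |x| < 1: trigonometric form sin((n+1)theta)/sin(theta), cos(theta) = x
x = 0.4
trig = math.sin((n + 1) * math.acos(x)) / math.sin(math.acos(x))
# x > 1: hyperbolic form sinh((n+1)theta)/sinh(theta), cosh(theta) = x
y = 1.6
hyp = math.sinh((n + 1) * math.acosh(y)) / math.sinh(math.acosh(y))
# x != ±1: form in the roots r± of r^2 - 2xr + 1 = 0
rp, rm = y + math.sqrt(y * y - 1), y - math.sqrt(y * y - 1)
roots = (rp**(n + 1) - rm**(n + 1)) / (rp - rm)
```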

$$r_\pm=\frac{a\pm\sqrt{a^2-4b^2}}{2b}$$
*are the two solutions of the quadratic equation $r^2-(a/b)r+1=0$.*

**Proof.** Let us assume that $a\neq\pm2b$ (otherwise it is trivial) and set $\alpha_i=\beta_i/\beta_{i-1}$. Then
$$\beta_i-a\beta_{i-1}+b^2\beta_{i-2}=0,\qquad \beta_0=1,\quad\beta_1=a,$$
hence $\beta_i=b^iU_i(a/2b)$ and the conclusion follows. $\square$

## 4. Inverse of a tridiagonal 2-Toeplitz matrix

In this section we will determine the explicit inverse of a matrix of type (1). By Theorem 2.1, this is equivalent to determining the $\delta_i$ and $d_i$, i.e., by (7) and the transformations (5) and (6), the $\theta_i$ and $\phi_i$. So, to evaluate the $\theta_i$, we have
$$\theta_0=1,\quad\theta_1=a_1,\qquad \theta_{2i}=a_2\theta_{2i-1}-b_1c_1\theta_{2i-2},\qquad \theta_{2i+1}=a_1\theta_{2i}-b_2c_2\theta_{2i-1}.$$
Making the change $z_i=(-1)^i\theta_i$, we get the relations
$$z_0=1,\quad z_1=-a_1,\qquad z_{2i}=-a_2z_{2i-1}-b_1c_1z_{2i-2},\qquad z_{2i+1}=-a_1z_{2i}-b_2c_2z_{2i-1}.$$
According to the results in [6], $z_i=Q_i(0)$, where $Q_i$ is a polynomial of degree exactly $i$, defined by the recurrence relation
$$Q_{i+1}(x)=(x-\beta_i)Q_i(x)-\gamma_iQ_{i-1}(x),\qquad(8)$$
where
$$\beta_{2j}=a_1,\quad\beta_{2j+1}=a_2,\qquad\text{and}\qquad \gamma_{2j}=b_2c_2,\quad\gamma_{2j+1}=b_1c_1,\qquad(9)$$
with initial conditions $Q_0(x)=1$ and $Q_1(x)=x-\beta_0$.
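As a quick sanity check on the change of variable, the following sketch (our variable names) evaluates $Q_i(0)$ through the recurrence (8)–(9) and confirms numerically that $z_i=(-1)^i\theta_i=Q_i(0)$ for the first few indices.

```python
# Parameters of a 2-Toeplitz matrix; only the products b_k c_k matter here.
a1, a2 = 1.3, 2.1
b1c1, b2c2 = 0.6, 0.8
N = 9

# theta_i from its two interleaved recurrences
theta = [1.0, a1]
for i in range(2, N + 1):
    if i % 2 == 0:   # theta_{2i} = a2 theta_{2i-1} - b1 c1 theta_{2i-2}
        theta.append(a2 * theta[i-1] - b1c1 * theta[i-2])
    else:            # theta_{2i+1} = a1 theta_{2i} - b2 c2 theta_{2i-1}
        theta.append(a1 * theta[i-1] - b2c2 * theta[i-2])

# Q_i(0) from (8) with coefficients (9)
beta  = lambda i: a1 if i % 2 == 0 else a2
gamma = lambda i: b2c2 if i % 2 == 0 else b1c1
Q = [1.0, 0.0 - beta(0)]                 # Q_0(0) = 1, Q_1(0) = -beta_0
for i in range(1, N):
    Q.append((0.0 - beta(i)) * Q[i] - gamma(i) * Q[i-1])
```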

The solution of the recurrence relation (8) with coefficients (9) is
$$Q_{2i}(x)=P_i[\pi_2(x)],\qquad Q_{2i+1}(x)=(x-a_1)P_i^*[\pi_2(x)],\qquad(10)$$
where
$$\pi_2(x)=(x-a_1)(x-a_2),\qquad(11)$$
$$P_i^*(x)=\left(\sqrt{b_1b_2c_1c_2}\right)^i\,U_i\!\left(\frac{x-b_1c_1-b_2c_2}{2\sqrt{b_1b_2c_1c_2}}\right),\qquad(12)$$
and
$$P_i(x)=\left(\sqrt{b_1b_2c_1c_2}\right)^i\left[U_i\!\left(\frac{x-b_1c_1-b_2c_2}{2\sqrt{b_1b_2c_1c_2}}\right)+\beta\,U_{i-1}\!\left(\frac{x-b_1c_1-b_2c_2}{2\sqrt{b_1b_2c_1c_2}}\right)\right],\qquad(13)$$
with $\beta^2=b_2c_2/b_1c_1$. We may conclude that $\theta_i=(-1)^iQ_i(0)$.

Now, to evaluate the $\phi_i$, we put $\psi_i=\phi_{n+1-i}$ for $i=1,\dots,n$, and we have to distinguish the cases when the order $n$ of the matrix (1) is odd and when it is even.

First let us suppose that $n$ is odd. Then we have
$$\psi_0=1,\quad\psi_1=a_1,\qquad \psi_{2i}=a_2\psi_{2i-1}-b_2c_2\psi_{2i-2},\qquad \psi_{2i+1}=a_1\psi_{2i}-b_1c_1\psi_{2i-1}.$$
Following the same steps as in the previous case, we obtain $\psi_i=(-1)^i\tilde Q_i(0)$, i.e.,
$$\phi_i=(-1)^i\,\tilde Q_{n+1-i}(0),$$
where $\tilde Q_i(x)$ is the same as in (10) but with $\beta^2=b_1c_1/b_2c_2$ in (13). Notice that if $b_1c_1=b_2c_2$, then $\theta_i=\phi_{n+1-i}$.

If $n$ is even, we get
$$\psi_0=1,\quad\psi_1=a_2,\qquad \psi_{2i}=a_1\psi_{2i-1}-b_1c_1\psi_{2i-2},\qquad \psi_{2i+1}=a_2\psi_{2i}-b_2c_2\psi_{2i-1}.$$
Therefore $\psi_i=(-1)^i\hat Q_i(0)$,

i.e.,
$$\phi_i=(-1)^{i+1}\,\hat Q_{n+1-i}(0),$$
where
$$\hat Q_{2i}(x)=P_i[\pi_2(x)],\qquad \hat Q_{2i+1}(x)=(x-a_2)P_i^*[\pi_2(x)],$$
with $\pi_2(x)$, $P_i(x)$ and $P_i^*(x)$ the same as in (11), (13) and (12), respectively. Observe that if $a_1=a_2$, then $\theta_i=\phi_{n+1-i}$.

We have thus determined completely the $\theta_i$'s and $\phi_i$'s of (7), and hence the inverse of the matrix, in the case of a tridiagonal 2-Toeplitz matrix (1).

**Theorem 4.1.** *Let $A$ be the tridiagonal matrix (1), with $a_1a_2\neq0$ and $b_1b_2c_1c_2>0$. Put $\pi_2(x):=(x-a_1)(x-a_2)$, $\beta:=\sqrt{b_2c_2/(b_1c_1)}$, and let $\{Q_i(\cdot\,;\alpha,\gamma)\}$ be the sequence of polynomials defined by*
$$Q_{2i}(x;\alpha,\gamma)=\left(\sqrt{b_1b_2c_1c_2}\right)^i\left[U_i\!\left(\frac{\pi_2(x)-b_1c_1-b_2c_2}{2\sqrt{b_1b_2c_1c_2}}\right)+\gamma\,U_{i-1}\!\left(\frac{\pi_2(x)-b_1c_1-b_2c_2}{2\sqrt{b_1b_2c_1c_2}}\right)\right],$$
$$Q_{2i+1}(x;\alpha,\gamma)=(x-\alpha)\left(\sqrt{b_1b_2c_1c_2}\right)^i\,U_i\!\left(\frac{\pi_2(x)-b_1c_1-b_2c_2}{2\sqrt{b_1b_2c_1c_2}}\right),$$
*where $\alpha$ and $\gamma$ are parameters. Under these conditions,*
$$(A^{-1})_{ij}=\begin{cases}(-1)^{i+j}\,b_{p_i}^{\lfloor(j-i)/2\rfloor}\,b_{q_i}^{\lfloor(j-i+1)/2\rfloor}\,\theta_{i-1}\phi_{j+1}/\theta_n&\text{if }i\le j,\\(-1)^{i+j}\,c_{p_j}^{\lfloor(i-j)/2\rfloor}\,c_{q_j}^{\lfloor(i-j+1)/2\rfloor}\,\theta_{j-1}\phi_{i+1}/\theta_n&\text{if }i>j,\end{cases}\qquad(14)$$
*where $p_l=(3-(-1)^l)/2$, $q_l=(3+(-1)^l)/2$, $\lfloor z\rfloor$ denotes the greatest integer less than or equal to the real number $z$,*
$$\theta_i=(-1)^iQ_i(0;a_1,\beta),\qquad(15)$$
*and*
$$\phi_i=\begin{cases}(-1)^i\,Q_{n+1-i}(0;a_1,1/\beta)&\text{if }n\text{ is odd},\\(-1)^{i+1}\,Q_{n+1-i}(0;a_2,\beta)&\text{if }n\text{ is even}.\end{cases}$$

**Remark.** As we already noticed, under the conditions of Theorem 4.1, if $n$ is odd and $b_1c_1=b_2c_2$, then $\theta_i=\phi_{n+1-i}$; and if $n$ is even and $a_1=a_2$, then also $\theta_i=\phi_{n+1-i}$.
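To see the sign and exponent bookkeeping of (14) in action, the sketch below builds a $5\times5$ matrix of type (1), computes the $\theta_i$ and $\phi_i$ from their defining recurrences (rather than from the Chebyshev closed forms), and assembles the inverse entrywise from (14). Names and indexing conventions are ours.

```python
n = 5
a1, a2, b1, b2, c1, c2 = 2.0, 3.0, 1.0, 0.5, 0.7, 1.2
a = [a1 if i % 2 == 0 else a2 for i in range(n)]       # a_1, a_2, a_1, ...
b = [b1 if l % 2 == 0 else b2 for l in range(n - 1)]   # b_1, b_2, b_1, ...
c = [c1 if l % 2 == 0 else c2 for l in range(n - 1)]   # c_1, c_2, c_1, ...

theta = [1.0, a[0]]
for i in range(2, n + 1):
    theta.append(a[i-1]*theta[i-1] - b[i-2]*c[i-2]*theta[i-2])
phi = [0.0] * (n + 2)
phi[n+1], phi[n] = 1.0, a[n-1]
for i in range(n - 1, 0, -1):
    phi[i] = a[i-1]*phi[i+1] - b[i-1]*c[i-1]*phi[i+2]

p = lambda l: (3 - (-1)**l) // 2     # p_l = 1 if l even, 2 if l odd
q = lambda l: (3 + (-1)**l) // 2     # q_l = 2 if l even, 1 if l odd
bv = {1: b1, 2: b2}; cv = {1: c1, 2: c2}

def entry(i, j):
    """(A^{-1})_{ij} from formula (14), 1-based indices."""
    if i <= j:
        prod = bv[p(i)]**((j - i)//2) * bv[q(i)]**((j - i + 1)//2)
        return (-1)**(i + j) * prod * theta[i-1]*phi[j+1]/theta[n]
    prod = cv[p(j)]**((i - j)//2) * cv[q(j)]**((i - j + 1)//2)
    return (-1)**(i + j) * prod * theta[j-1]*phi[i+1]/theta[n]

Ainv = [[entry(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
```

The powers of $b_{p_i}$ and $b_{q_i}$ simply count how many times each of $b_1,b_2$ occurs in the alternating product $b_i\cdots b_{j-1}$ of (7).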

As a first application, let us consider a classical problem on the inverse of a tridiagonal Toeplitz matrix. Suppose that $a_1=a_2=a$, $b_1=b_2=b$ and $c_1=c_2=c$ in (1), with $a\neq0$ and $bc>0$. Then we have the $n\times n$ tridiagonal Toeplitz matrix
$$T=\begin{pmatrix}a&b&&&0\\c&a&b&&\\&c&a&\ddots&\\&&\ddots&\ddots&b\\0&&&c&a\end{pmatrix}.\qquad(16)$$
According to (14),
$$(T^{-1})_{ij}=\begin{cases}(-1)^{i+j}\,b^{j-i}\,\theta_{i-1}\phi_{j+1}/\theta_n&\text{if }i\le j,\\(-1)^{i+j}\,c^{i-j}\,\theta_{j-1}\phi_{i+1}/\theta_n&\text{if }i>j.\end{cases}$$
It is well known, and can easily be checked using elementary trigonometry, that
$$U_{2i+1}(x)=2x\,U_i(2x^2-1).$$
Since $\beta=1$ and
$$U_i\!\left(\frac{a^2-2bc}{2bc}\right)=U_i\!\left(2\left(\frac{a}{2\sqrt{bc}}\right)^2-1\right),$$
from (15) we get
$$\theta_i=\left(\sqrt{bc}\right)^i\,U_i\!\left(\frac{a}{2\sqrt{bc}}\right),$$
and, taking into account the above remark, the $\phi_i$'s can be computed from the $\theta_i$'s, giving
$$\phi_i=\theta_{n+1-i}=\left(\sqrt{bc}\right)^{n+1-i}\,U_{n+1-i}\!\left(\frac{a}{2\sqrt{bc}}\right).$$
Therefore, the next result follows immediately.

**Corollary 4.1.** *Let $T$ be the matrix in (16) and define $d:=a/(2\sqrt{bc})$. The inverse is given by*
$$(T^{-1})_{ij}=\begin{cases}(-1)^{i+j}\,\dfrac{b^{j-i}}{(\sqrt{bc}\,)^{j-i+1}}\,\dfrac{U_{i-1}(d)\,U_{n-j}(d)}{U_n(d)}&\text{if }i\le j,\\[2mm](-1)^{i+j}\,\dfrac{c^{i-j}}{(\sqrt{bc}\,)^{i-j+1}}\,\dfrac{U_{j-1}(d)\,U_{n-i}(d)}{U_n(d)}&\text{if }i>j.\end{cases}$$
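Corollary 4.1 can be checked numerically. The sketch below (our helper names) evaluates the Chebyshev entry formula for a $5\times5$ Toeplitz tridiagonal matrix with $bc>0$; multiplying the result by $T$ should recover the identity.

```python
import math

n, a, b, c = 5, 3.0, 1.0, 0.5        # bc > 0
d = a / (2.0 * math.sqrt(b * c))

def U(k, x):
    """Chebyshev U_k(x) by recurrence; U_{-1} = 0 by convention."""
    if k < 0:
        return 0.0
    u0, u1 = 1.0, 2.0 * x
    if k == 0:
        return u0
    for _ in range(k - 1):
        u0, u1 = u1, 2.0 * x * u1 - u0
    return u1

def inv_entry(i, j):
    """(T^{-1})_{ij} from Corollary 4.1, 1-based indices."""
    s = math.sqrt(b * c)
    if i <= j:
        return (-1)**(i+j) * b**(j-i) * U(i-1, d)*U(n-j, d) / (s**(j-i+1) * U(n, d))
    return (-1)**(i+j) * c**(i-j) * U(j-1, d)*U(n-i, d) / (s**(i-j+1) * U(n, d))

Tinv = [[inv_entry(i, j) for j in range(1, n + 1)] for i in range(1, n + 1)]
```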

This result is well known. It can be deduced, e.g., using results in the book of Heinig and Rost [2, p. 28]. The next corollary is Theorem 3.1 of Kamps [3].

**Corollary 4.2.** *Let $\Sigma$ be the $n$-square matrix*
$$\Sigma=\begin{pmatrix}a&b&&&0\\b&a&b&&\\&b&a&\ddots&\\&&\ddots&\ddots&b\\0&&&b&a\end{pmatrix}.\qquad(17)$$
*The inverse is given by*
$$(\Sigma^{-1})_{ij}=\begin{cases}(-1)^{i+j}\,\dfrac1b\,\dfrac{U_{i-1}(a/2b)\,U_{n-j}(a/2b)}{U_n(a/2b)}&\text{if }i\le j,\\[2mm](-1)^{i+j}\,\dfrac1b\,\dfrac{U_{j-1}(a/2b)\,U_{n-i}(a/2b)}{U_n(a/2b)}&\text{if }i>j.\end{cases}$$

We now consider a second application. In [3,4], matrices of type (17), with $a>0$, $b\neq0$ and $a>2|b|$, arose as the covariance matrix of one-dependent random variables $Y_1,\dots,Y_n$ with the same expectation. Let us consider the least squares estimator
$$\hat\mu_{\mathrm{opt}}=\frac{\mathbf 1^{\mathrm t}\Sigma^{-1}Y}{\mathbf 1^{\mathrm t}\Sigma^{-1}\mathbf 1},$$
where $\mathbf 1=(1,\dots,1)^{\mathrm t}$ and $Y=(Y_1,\dots,Y_n)^{\mathrm t}$, which estimates the parameter $\mu$ equal to the common expectation of the $Y_i$'s, with variance
$$V(\hat\mu_{\mathrm{opt}})=\frac1{\mathbf 1^{\mathrm t}\Sigma^{-1}\mathbf 1}.$$
According to Kamps, the estimator $\hat\mu_{\mathrm{opt}}$ is the best unbiased estimator based on $Y$. So the sum of all the entries of the inverse of $\Sigma$, $\mathbf 1^{\mathrm t}\Sigma^{-1}\mathbf 1$, plays an important role in the determination of this estimator and therefore in the computation of the variance $V(\hat\mu_{\mathrm{opt}})$. Kamps used in [3] some sum and product formulas involving different kinds of Chebyshev polynomials. We will prove the same results in a more concise way.

**Corollary 4.3.** *The sum $s_i$ of the $i$th row (or column) of $\Sigma^{-1}$, for $i=1,\dots,n$, is given by*
$$s_i=\frac{1+b(\sigma_{1i}+\sigma_{1,n-i+1})}{a+2b},$$
*where $\sigma_{ij}=(\Sigma^{-1})_{ij}$ when $i\le j$.*

**Proof.** By Corollary 4.2, for $i\le j$,

$$\sigma_{ij}=(-1)^{i+j}\,\frac1b\,\frac{\left(r_+^{\,i}-r_-^{\,i}\right)\left(r_+^{\,n-j+1}-r_-^{\,n-j+1}\right)}{(r_+-r_-)\left(r_+^{\,n+1}-r_-^{\,n+1}\right)},$$
where
$$r_\pm=\frac{a\pm\sqrt{a^2-4b^2}}{2b}.$$
Thus
$$s_i=\sum_{j=1}^{i-1}\sigma_{ji}+\sum_{j=i}^{n}\sigma_{ij}.$$
By the formula for the sum of a geometric progression and the fact that $r_+r_-=1$, we have
$$\sum_{j=1}^{i-1}\sigma_{ji}=\frac{b(\sigma_{1i}-\sigma_{ii})}{a+2b}-\frac1{a+2b}\,\frac{\left(r_+^{\,i-1}-r_-^{\,i-1}\right)\left(r_+^{\,n-i+1}-r_-^{\,n-i+1}\right)}{(r_+-r_-)\left(r_+^{\,n+1}-r_-^{\,n+1}\right)}$$
and
$$\sum_{j=i}^{n}\sigma_{ij}=\frac{b(\sigma_{1,n-i+1}+\sigma_{ii})}{a+2b}+\frac1{a+2b}\,\frac{\left(r_+^{\,i}-r_-^{\,i}\right)\left(r_+^{\,n-i+2}-r_-^{\,n-i+2}\right)}{(r_+-r_-)\left(r_+^{\,n+1}-r_-^{\,n+1}\right)}.$$
Finally,
$$\left(r_+^{\,i}-r_-^{\,i}\right)\left(r_+^{\,n-i+2}-r_-^{\,n-i+2}\right)-\left(r_+^{\,i-1}-r_-^{\,i-1}\right)\left(r_+^{\,n-i+1}-r_-^{\,n-i+1}\right)=(r_+-r_-)\left(r_+^{\,n+1}-r_-^{\,n+1}\right),$$
and the result follows. $\square$

The previous result shows that $s_i$ depends only on the first row of $\Sigma^{-1}$. In fact (as the following result shows), the sum of all the entries of $\Sigma^{-1}$ depends only on $\sigma_{11}$ and $\sigma_{1n}$.

**Corollary 4.4.**
$$\mathbf 1^{\mathrm t}\Sigma^{-1}\mathbf 1=\frac{n+2bs_1}{a+2b}.$$

## 5. Inverse of a tridiagonal 3-Toeplitz matrix

In analogy with the 2-Toeplitz case, we may ask about the inverse of a tridiagonal 3-Toeplitz matrix. To solve this case, we have to compute the $\theta_i$ and $\phi_i$ in (7).
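Corollaries 4.3 and 4.4 are easy to confirm numerically. The sketch below (our construction) builds $\Sigma^{-1}$ for a $6\times6$ matrix of type (17) with $a>2|b|$ via Corollary 4.2 and compares the actual row sums and total sum with the two closed forms.

```python
import math

n, a, b = 6, 3.0, 1.0                 # a > 2|b|
d = a / (2.0 * b)

def U(k, x):
    """Chebyshev U_k(x) by recurrence."""
    u0, u1 = 1.0, 2.0 * x
    if k == 0:
        return u0
    for _ in range(k - 1):
        u0, u1 = u1, 2.0 * x * u1 - u0
    return u1

def sigma(i, j):
    """(Sigma^{-1})_{ij} for i <= j from Corollary 4.2, 1-based."""
    return (-1)**(i + j) * U(i - 1, d) * U(n - j, d) / (b * U(n, d))

# Sigma^{-1} is symmetric, so sort the indices before applying the formula.
Sinv = [[sigma(min(i, j), max(i, j)) for j in range(1, n + 1)]
        for i in range(1, n + 1)]
row_sums = [sum(row) for row in Sinv]
total = sum(row_sums)
```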

For the $\theta_i$ we have
$$\theta_0=1,\quad\theta_1=a_1,\qquad \theta_{3i}=a_3\theta_{3i-1}-b_2c_2\theta_{3i-2},\qquad \theta_{3i+1}=a_1\theta_{3i}-b_3c_3\theta_{3i-1},\qquad \theta_{3i+2}=a_2\theta_{3i+1}-b_1c_1\theta_{3i}.$$
Making the change $z_i=(-1)^i\theta_i$, we get the relations
$$z_0=1,\quad z_1=-a_1,\qquad z_{3i}=-a_3z_{3i-1}-b_2c_2z_{3i-2},\qquad z_{3i+1}=-a_1z_{3i}-b_3c_3z_{3i-1},\qquad z_{3i+2}=-a_2z_{3i+1}-b_1c_1z_{3i}.$$
According to the results in [7], $z_i=Q_i(0)$, where $Q_i$ is a polynomial of degree exactly $i$, defined according to
$$Q_{i+1}(x)=(x-\beta_i)Q_i(x)-\gamma_iQ_{i-1}(x),$$
where
$$\beta_{3j}=a_1,\quad\beta_{3j+1}=a_2,\quad\beta_{3j+2}=a_3,\qquad\text{and}\qquad \gamma_{3j}=b_3c_3,\quad\gamma_{3j+1}=b_1c_1,\quad\gamma_{3j+2}=b_2c_2.\qquad(18)$$
The solution of the recurrence relation (8) with coefficients (18) is
$$Q_{3i}(x)=P_i[\pi_3(x)]+b_3c_3(x-a_2)\,P_{i-1}[\pi_3(x)],$$
$$Q_{3i+1}(x)=(x-a_1)P_i[\pi_3(x)]+b_1c_1b_3c_3\,P_{i-1}[\pi_3(x)],$$
$$Q_{3i+2}(x)=\left[(x-a_1)(x-a_2)-b_1c_1\right]P_i[\pi_3(x)],$$
where
$$\pi_3(x)=\begin{vmatrix}x-a_1&1&1\\b_1c_1&x-a_2&1\\b_3c_3&b_2c_2&x-a_3\end{vmatrix}$$
and
$$P_i(x)=\left(\sqrt{b_1b_2b_3c_1c_2c_3}\right)^i\,U_i\!\left(\frac{x-b_1c_1b_2c_2-b_3c_3}{2\sqrt{b_1b_2b_3c_1c_2c_3}}\right).$$
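The closed form for $Q_{3i}$, $Q_{3i+1}$, $Q_{3i+2}$ can be verified against the three-term recurrence with coefficients (18). The sketch below (our variable names; `al`, `be`, `ga` stand for the products $b_1c_1$, $b_2c_2$, $b_3c_3$) compares the two at an arbitrary point $x$.

```python
import math

a1, a2, a3 = 1.1, 2.3, 0.7
al, be, ga = 0.5, 0.8, 0.6            # b1*c1, b2*c2, b3*c3 (product > 0)
x = 0.4

# Q_i(x) by the recurrence Q_{i+1} = (x - beta_i) Q_i - gamma_i Q_{i-1}
beta  = [a1, a2, a3]                   # beta_{3j}, beta_{3j+1}, beta_{3j+2}
gamma = [ga, al, be]                   # gamma_{3j}, gamma_{3j+1}, gamma_{3j+2}
Q = [1.0, x - a1]
for i in range(1, 12):
    Q.append((x - beta[i % 3]) * Q[i] - gamma[i % 3] * Q[i - 1])

# pi_3(x): the 3 x 3 determinant | x-a1 1 1 ; b1c1 x-a2 1 ; b3c3 b2c2 x-a3 |
pi3 = ((x - a1)*(x - a2)*(x - a3) - al*(x - a3) - be*(x - a1) - ga*(x - a2)
       + al*be + ga)

def U(k, t):
    """Chebyshev U_k(t) by recurrence; U_{-1} = 0."""
    if k < 0:
        return 0.0
    u0, u1 = 1.0, 2.0 * t
    if k == 0:
        return u0
    for _ in range(k - 1):
        u0, u1 = u1, 2.0 * t * u1 - u0
    return u1

E = al * be * ga
def P(i):
    """P_i evaluated at pi_3(x)."""
    return math.sqrt(E)**i * U(i, (pi3 - (al*be + ga)) / (2.0 * math.sqrt(E)))
```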

We may conclude that $\theta_i=(-1)^iQ_i(0)$.

In order to compute the $\phi_i$'s, we define $\psi_i=\phi_{n+1-i}$ for $i=1,\dots,n$. In this situation, we have to distinguish three cases. If $n\equiv0\pmod3$, then
$$\psi_0=1,\quad\psi_1=a_3,\qquad \psi_{3i}=a_1\psi_{3i-1}-b_1c_1\psi_{3i-2},\qquad \psi_{3i+1}=a_3\psi_{3i}-b_3c_3\psi_{3i-1},\qquad \psi_{3i+2}=a_2\psi_{3i+1}-b_2c_2\psi_{3i}.$$
If $n\equiv1\pmod3$, then
$$\psi_0=1,\quad\psi_1=a_1,\qquad \psi_{3i}=a_2\psi_{3i-1}-b_2c_2\psi_{3i-2},\qquad \psi_{3i+1}=a_1\psi_{3i}-b_1c_1\psi_{3i-1},\qquad \psi_{3i+2}=a_3\psi_{3i+1}-b_3c_3\psi_{3i},$$
and if $n\equiv2\pmod3$, then
$$\psi_0=1,\quad\psi_1=a_2,\qquad \psi_{3i}=a_3\psi_{3i-1}-b_3c_3\psi_{3i-2},\qquad \psi_{3i+1}=a_2\psi_{3i}-b_2c_2\psi_{3i-1},\qquad \psi_{3i+2}=a_1\psi_{3i+1}-b_1c_1\psi_{3i}.$$
It is clear that these three recurrence relations can be solved by a process similar to the computation of the $\theta_i$'s (mutatis mutandis). Proceeding in this way, we can state the next proposition.

**Theorem 5.1.** *Let $B$ be the tridiagonal matrix (2), with $a_1a_2a_3\neq0$ and $b_1b_2b_3c_1c_2c_3>0$. Let*
$$\pi_3\!\left(\begin{smallmatrix}a&b&c\\\alpha&\beta&\gamma\end{smallmatrix};\,x\right):=\begin{vmatrix}x-a&1&1\\\alpha&x-b&1\\\gamma&\beta&x-c\end{vmatrix},$$
$$P_i\!\left(\begin{smallmatrix}a&b&c\\\alpha&\beta&\gamma\end{smallmatrix};\,x\right):=\left(\sqrt{\alpha\beta\gamma}\right)^i\,U_i\!\left(\frac1{2\sqrt{\alpha\beta\gamma}}\left[\pi_3\!\left(\begin{smallmatrix}a&b&c\\\alpha&\beta&\gamma\end{smallmatrix};\,x\right)-(\alpha\beta+\gamma)\right]\right),$$

$$Q_{3i}\!\left(\begin{smallmatrix}a&b&c\\\alpha&\beta&\gamma\end{smallmatrix};\,x\right):=P_i\!\left(\begin{smallmatrix}a&b&c\\\alpha&\beta&\gamma\end{smallmatrix};\,x\right)+\gamma(x-b)\,P_{i-1}\!\left(\begin{smallmatrix}a&b&c\\\alpha&\beta&\gamma\end{smallmatrix};\,x\right),$$
$$Q_{3i+1}\!\left(\begin{smallmatrix}a&b&c\\\alpha&\beta&\gamma\end{smallmatrix};\,x\right):=(x-a)\,P_i\!\left(\begin{smallmatrix}a&b&c\\\alpha&\beta&\gamma\end{smallmatrix};\,x\right)+\alpha\gamma\,P_{i-1}\!\left(\begin{smallmatrix}a&b&c\\\alpha&\beta&\gamma\end{smallmatrix};\,x\right),$$
$$Q_{3i+2}\!\left(\begin{smallmatrix}a&b&c\\\alpha&\beta&\gamma\end{smallmatrix};\,x\right):=\left[(x-a)(x-b)-\alpha\right]P_i\!\left(\begin{smallmatrix}a&b&c\\\alpha&\beta&\gamma\end{smallmatrix};\,x\right),$$
*where $a,b,c,\alpha,\beta$ and $\gamma$ are parameters, subject to the restriction $\alpha\beta\gamma>0$. Under these conditions,*
$$(B^{-1})_{ij}=\begin{cases}(-1)^{i+j}\,\beta_i\beta_{i+1}\cdots\beta_{j-1}\,\theta_{i-1}\phi_{j+1}/\theta_n&\text{if }i\le j,\\(-1)^{i+j}\,\gamma_j\gamma_{j+1}\cdots\gamma_{i-1}\,\theta_{j-1}\phi_{i+1}/\theta_n&\text{if }i>j,\end{cases}\qquad(19)$$
*where $\beta_{3l+s+1}=b_{s+1}$, $\gamma_{3l+s+1}=c_{s+1}$ $(s=0,1,2;\ l=0,1,2,\dots)$, and the $\theta_i$'s and $\phi_i$'s are explicitly given by*
$$\theta_i=(-1)^i\,Q_i\!\left(\begin{smallmatrix}a_1&a_2&a_3\\b_1c_1&b_2c_2&b_3c_3\end{smallmatrix};\,0\right)\qquad(20)$$
*and*
$$\phi_i=\begin{cases}(-1)^{n+1-i}\,Q_{n+1-i}\!\left(\begin{smallmatrix}a_3&a_2&a_1\\b_2c_2&b_1c_1&b_3c_3\end{smallmatrix};\,0\right)&\text{if }n\equiv0\pmod3,\\[2mm](-1)^{n+1-i}\,Q_{n+1-i}\!\left(\begin{smallmatrix}a_1&a_3&a_2\\b_3c_3&b_2c_2&b_1c_1\end{smallmatrix};\,0\right)&\text{if }n\equiv1\pmod3,\\[2mm](-1)^{n+1-i}\,Q_{n+1-i}\!\left(\begin{smallmatrix}a_2&a_1&a_3\\b_1c_1&b_3c_3&b_2c_2\end{smallmatrix};\,0\right)&\text{if }n\equiv2\pmod3.\end{cases}$$

**Acknowledgments.** This work was supported by CMUC (Centro de Matemática da Universidade de Coimbra). The authors thank the editor as well as the unknown referees for their helpful remarks, which helped to improve the final version of the paper.

## References

[1] M. Gover, The eigenproblem of a tridiagonal 2-Toeplitz matrix, Linear Algebra Appl. 197/198 (1994).
[2] G. Heinig, K. Rost, Algebraic Methods for Toeplitz-like Matrices and Operators, Operator Theory, vol. 13, Birkhäuser, 1984.
[3] U. Kamps, Chebyshev polynomials and least squares estimation based on one-dependent random variables, Linear Algebra Appl. 112 (1989).
[4] U. Kamps, Relative efficiency of estimates and the use of Chebyshev polynomials in a model of pairwise overlapping samples, Linear Algebra Appl. 127 (1990).
[5] J.W. Lewis, Inversion of tridiagonal matrices, Numer. Math. 38 (1982).
[6] F. Marcellán, J. Petronilho, Eigenproblems for tridiagonal 2-Toeplitz matrices and quadratic polynomial mappings, Linear Algebra Appl. 260 (1997).
[7] F. Marcellán, J. Petronilho, Orthogonal polynomials and cubic polynomial mappings I, Communications in the Analytic Theory of Continued Fractions 8 (2000).
[8] G. Meurant, A review on the inverse of symmetric tridiagonal and block tridiagonal matrices, SIAM J. Matrix Anal. Appl. 13 (3) (1992).
[9] R. Usmani, Inversion of a tridiagonal Jacobi matrix, Linear Algebra Appl. 212/213 (1994).
[10] P. Schlegel, The explicit inverse of a tridiagonal matrix, Math. Comp. 24 (111) (1970) 665.


CONTINUED FRACTIONS AND PELL S EQUATION SEUNG HYUN YANG Abstract. In this REU paper, I will use some important characteristics of continued fractions to give the complete set of solutions to Pell s equation.

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### Determinants LECTURE Calculating the Area of a Parallelogram. Definition Let A be a 2 2 matrix. A = The determinant of A is the number

LECTURE 13 Determinants 1. Calculating the Area of a Parallelogram Definition 13.1. Let A be a matrix. [ a c b d ] The determinant of A is the number det A) = ad bc Now consider the parallelogram formed

### The Inverse of a Square Matrix

These notes closely follow the presentation of the material given in David C Lay s textbook Linear Algebra and its Applications (3rd edition) These notes are intended primarily for in-class presentation

### MATRICES WITH DISPLACEMENT STRUCTURE A SURVEY

MATRICES WITH DISPLACEMENT STRUCTURE A SURVEY PLAMEN KOEV Abstract In the following survey we look at structured matrices with what is referred to as low displacement rank Matrices like Cauchy Vandermonde

### Mathematics Notes for Class 12 chapter 3. Matrices

1 P a g e Mathematics Notes for Class 12 chapter 3. Matrices A matrix is a rectangular arrangement of numbers (real or complex) which may be represented as matrix is enclosed by [ ] or ( ) or Compact form

### Chapter 8. Matrices II: inverses. 8.1 What is an inverse?

Chapter 8 Matrices II: inverses We have learnt how to add subtract and multiply matrices but we have not defined division. The reason is that in general it cannot always be defined. In this chapter, we

### Linear Dependence Tests

Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks

### A characterization of trace zero symmetric nonnegative 5x5 matrices

A characterization of trace zero symmetric nonnegative 5x5 matrices Oren Spector June 1, 009 Abstract The problem of determining necessary and sufficient conditions for a set of real numbers to be the

### MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.

MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α

### v w is orthogonal to both v and w. the three vectors v, w and v w form a right-handed set of vectors.

3. Cross product Definition 3.1. Let v and w be two vectors in R 3. The cross product of v and w, denoted v w, is the vector defined as follows: the length of v w is the area of the parallelogram with

### Minimally Infeasible Set Partitioning Problems with Balanced Constraints

Minimally Infeasible Set Partitioning Problems with alanced Constraints Michele Conforti, Marco Di Summa, Giacomo Zambelli January, 2005 Revised February, 2006 Abstract We study properties of systems of

### The Matrix Elements of a 3 3 Orthogonal Matrix Revisited

Physics 116A Winter 2011 The Matrix Elements of a 3 3 Orthogonal Matrix Revisited 1. Introduction In a class handout entitled, Three-Dimensional Proper and Improper Rotation Matrices, I provided a derivation

### Notes for STA 437/1005 Methods for Multivariate Data

Notes for STA 437/1005 Methods for Multivariate Data Radford M. Neal, 26 November 2010 Random Vectors Notation: Let X be a random vector with p elements, so that X = [X 1,..., X p ], where denotes transpose.

### 1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

The Hadamard Product Elizabeth Million April 12, 2007 1 Introduction and Basic Results As inexperienced mathematicians we may have once thought that the natural definition for matrix multiplication would

### LINEAR ALGEBRA OF PASCAL MATRICES

LINEAR ALGEBRA OF PASCAL MATRICES LINDSAY YATES Abstract. The famous Pascal s triangle appears in many areas of mathematics, such as number theory, combinatorics and algebra. Pascal matrices are derived

### Sign pattern matrices that admit M, N, P or inverse M matrices

Sign pattern matrices that admit M, N, P or inverse M matrices C Mendes Araújo, Juan R Torregrosa CMAT - Centro de Matemática / Dpto de Matemática Aplicada Universidade do Minho / Universidad Politécnica

### Cofactor Expansion: Cramer s Rule

Cofactor Expansion: Cramer s Rule MATH 322, Linear Algebra I J. Robert Buchanan Department of Mathematics Spring 2015 Introduction Today we will focus on developing: an efficient method for calculating

### ( % . This matrix consists of \$ 4 5 " 5' the coefficients of the variables as they appear in the original system. The augmented 3 " 2 2 # 2 " 3 4&

Matrices define matrix We will use matrices to help us solve systems of equations. A matrix is a rectangular array of numbers enclosed in parentheses or brackets. In linear algebra, matrices are important

### Elementary Number Theory We begin with a bit of elementary number theory, which is concerned

CONSTRUCTION OF THE FINITE FIELDS Z p S. R. DOTY Elementary Number Theory We begin with a bit of elementary number theory, which is concerned solely with questions about the set of integers Z = {0, ±1,

### SF2940: Probability theory Lecture 8: Multivariate Normal Distribution

SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2015 Timo Koski Matematisk statistik 24.09.2015 1 / 1 Learning outcomes Random vectors, mean vector, covariance matrix,

### Matrix Inverse and Determinants

DM554 Linear and Integer Programming Lecture 5 and Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1 2 3 4 and Cramer s rule 2 Outline 1 2 3 4 and

### SOME PROPERTIES OF SYMBOL ALGEBRAS OF DEGREE THREE

SOME PROPERTIES OF SYMBOL ALGEBRAS OF DEGREE THREE CRISTINA FLAUT and DIANA SAVIN Communicated by the former editorial board In this paper, we study some properties of the matrix representations of the

### 1 Determinants. Definition 1

Determinants The determinant of a square matrix is a value in R assigned to the matrix, it characterizes matrices which are invertible (det 0) and is related to the volume of a parallelpiped described

### INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL

SOLUTIONS OF THEORETICAL EXERCISES selected from INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL Eighth Edition, Prentice Hall, 2005. Dr. Grigore CĂLUGĂREANU Department of Mathematics

### Chapter 6. Orthogonality

6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be

### CITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION

No: CITY UNIVERSITY LONDON BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION ENGINEERING MATHEMATICS 2 (resit) EX2005 Date: August

### Systems of Linear Equations

Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and

### 5.3 The Cross Product in R 3

53 The Cross Product in R 3 Definition 531 Let u = [u 1, u 2, u 3 ] and v = [v 1, v 2, v 3 ] Then the vector given by [u 2 v 3 u 3 v 2, u 3 v 1 u 1 v 3, u 1 v 2 u 2 v 1 ] is called the cross product (or

### 15.062 Data Mining: Algorithms and Applications Matrix Math Review

.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

### Math 181 Handout 16. Rich Schwartz. March 9, 2010

Math 8 Handout 6 Rich Schwartz March 9, 200 The purpose of this handout is to describe continued fractions and their connection to hyperbolic geometry. The Gauss Map Given any x (0, ) we define γ(x) =

### 4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

### THE SIGN OF A PERMUTATION

THE SIGN OF A PERMUTATION KEITH CONRAD 1. Introduction Throughout this discussion, n 2. Any cycle in S n is a product of transpositions: the identity (1) is (12)(12), and a k-cycle with k 2 can be written

### Quadratic Equations in Finite Fields of Characteristic 2

Quadratic Equations in Finite Fields of Characteristic 2 Klaus Pommerening May 2000 english version February 2012 Quadratic equations over fields of characteristic 2 are solved by the well known quadratic

### Spectral Measure of Large Random Toeplitz Matrices

Spectral Measure of Large Random Toeplitz Matrices Yongwhan Lim June 5, 2012 Definition (Toepliz Matrix) The symmetric Toeplitz matrix is defined to be [X i j ] where 1 i, j n; that is, X 0 X 1 X 2 X n

### α = u v. In other words, Orthogonal Projection

Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

### University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

### ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

### The Determinant: a Means to Calculate Volume

The Determinant: a Means to Calculate Volume Bo Peng August 20, 2007 Abstract This paper gives a definition of the determinant and lists many of its well-known properties Volumes of parallelepipeds are

### Lecture L3 - Vectors, Matrices and Coordinate Transformations

S. Widnall 16.07 Dynamics Fall 2009 Lecture notes based on J. Peraire Version 2.0 Lecture L3 - Vectors, Matrices and Coordinate Transformations By using vectors and defining appropriate operations between