NOTES ON LINEAR ALGEBRA
LIVIU I. NICOLAESCU

CONTENTS

1. Multilinear forms and determinants
   1.1. Multilinear maps
   1.2. The symmetric group
   1.3. Symmetric and skew-symmetric forms
   1.4. The determinant of a square matrix
   1.5. Additional properties of determinants
   1.6. Examples
   1.7. Exercises
2. Spectral decomposition of linear operators
   2.1. Invariants of linear operators
   2.2. The determinant and the characteristic polynomial of an operator
   2.3. Generalized eigenspaces
   2.4. The Jordan normal form of a complex operator
   2.5. Exercises
3. Euclidean spaces
   3.1. Inner products
   3.2. Basic properties of Euclidean spaces
   3.3. Orthonormal systems and the Gram-Schmidt procedure
   3.4. Orthogonal projections
   3.5. Linear functionals and adjoints on Euclidean spaces
   3.6. Exercises
4. Spectral theory of normal operators
   4.1. Normal operators
   4.2. The spectral decomposition of a normal operator
   4.3. The spectral decomposition of a real symmetric operator
   4.4. Exercises
5. Applications
   5.1. Symmetric bilinear forms
   5.2. Nonnegative operators
   5.3. Exercises
6. Elements of linear topology
   6.1. Normed vector spaces
   6.2. Convergent sequences
   6.3. Completeness
   6.4. Continuous maps
   6.5. Series in normed spaces
   6.6. The exponential of a matrix
   6.7. The exponential of a matrix and systems of linear differential equations
   6.8. Closed and open subsets
   6.9. Compactness
   6.10. Exercises
References

Date: Started January 7. Completed on . Last modified on April 28. These are notes for the Honors Algebra Course, Spring.
1. MULTILINEAR FORMS AND DETERMINANTS

In this section we will deal exclusively with finite dimensional vector spaces over the field $\mathbb{F} = \mathbb{R}, \mathbb{C}$. If $U_1, U_2$ are two $\mathbb{F}$-vector spaces, we will denote by $\operatorname{Hom}(U_1, U_2)$ the space of $\mathbb{F}$-linear maps $U_1 \to U_2$.

1.1. Multilinear maps.

Definition 1.1. Suppose that $U_1, \dots, U_k, V$ are $\mathbb{F}$-vector spaces. A map $\Phi : U_1 \times \dots \times U_k \to V$ is called $k$-linear if for any $1 \le i \le k$, any vectors $u_i, v_i \in U_i$, any vectors $u_j \in U_j$, $j \ne i$, and any scalar $\lambda \in \mathbb{F}$ we have
$$\Phi(u_1, \dots, u_{i-1}, u_i + v_i, u_{i+1}, \dots, u_k) = \Phi(u_1, \dots, u_{i-1}, u_i, u_{i+1}, \dots, u_k) + \Phi(u_1, \dots, u_{i-1}, v_i, u_{i+1}, \dots, u_k),$$
$$\Phi(u_1, \dots, u_{i-1}, \lambda u_i, u_{i+1}, \dots, u_k) = \lambda \Phi(u_1, \dots, u_{i-1}, u_i, u_{i+1}, \dots, u_k).$$
In the special case $U_1 = U_2 = \dots = U_k = U$ and $V = \mathbb{F}$, the resulting map $\Phi : \underbrace{U \times \dots \times U}_{k} \to \mathbb{F}$ is called a $k$-linear form on $U$. When $k = 2$, we will refer to $2$-linear forms as bilinear forms. We will denote by $T^k(U)$ the space of $k$-linear forms on $U$.

Example 1.2. Suppose that $U$ is an $\mathbb{F}$-vector space and $U^*$ is its dual, $U^* := \operatorname{Hom}(U, \mathbb{F})$. We have a natural bilinear map
$$\langle -, - \rangle : U^* \times U \to \mathbb{F}, \quad (\alpha, u) \mapsto \langle \alpha, u \rangle := \alpha(u).$$
This bilinear map is called the canonical pairing between the vector space $U$ and its dual.

Example 1.3. Suppose that $A = (a_{ij})_{1 \le i,j \le n}$ is an $n \times n$ matrix with real entries. Define
$$\Phi_A : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}, \quad \Phi_A(x, y) = \sum_{i,j} a_{ij} x_i y_j, \quad x = (x_1, \dots, x_n)^\top, \ y = (y_1, \dots, y_n)^\top.$$
To show that $\Phi_A$ is indeed a bilinear form we need to prove that for any $x, y, z \in \mathbb{R}^n$ and any $\lambda \in \mathbb{R}$ we have
$$\Phi_A(x + z, y) = \Phi_A(x, y) + \Phi_A(z, y), \tag{1.1a}$$
$$\Phi_A(x, y + z) = \Phi_A(x, y) + \Phi_A(x, z), \tag{1.1b}$$
$$\Phi_A(\lambda x, y) = \Phi_A(x, \lambda y) = \lambda \Phi_A(x, y). \tag{1.1c}$$
To verify (1.1a) we observe that
$$\Phi_A(x + z, y) = \sum_{i,j} a_{ij}(x_i + z_i) y_j = \sum_{i,j} (a_{ij} x_i y_j + a_{ij} z_i y_j) = \sum_{i,j} a_{ij} x_i y_j + \sum_{i,j} a_{ij} z_i y_j = \Phi_A(x, y) + \Phi_A(z, y).$$
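As a quick sanity check (a sketch of mine, not part of the notes), the form $\Phi_A$ of Example 1.3 and the additivity identity (1.1a) can be verified numerically:

```python
# Sketch: Phi_A(x, y) = sum_{i,j} a_ij * x_i * y_j from Example 1.3,
# with vectors as plain Python lists (helper name "phi" is mine).

def phi(A, x, y):
    """Evaluate the bilinear form determined by the square matrix A."""
    n = len(A)
    return sum(A[i][j] * x[i] * y[j] for i in range(n) for j in range(n))

A = [[1, 2],
     [3, 4]]
x, y, z = [1, 0], [0, 1], [2, 5]

# (1.1a): Phi_A(x + z, y) = Phi_A(x, y) + Phi_A(z, y).
xpz = [x[i] + z[i] for i in range(2)]
assert phi(A, xpz, y) == phi(A, x, y) + phi(A, z, y)

# Phi_A(e_i, e_j) recovers the matrix entry a_ij (here e_1, e_2).
assert phi(A, [1, 0], [0, 1]) == A[0][1]
```

The last assertion is exactly the observation, made below, that $\Phi_A$ is determined by its values on the basis vectors.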
The equalities (1.1b) and (1.1c) are proved in a similar fashion. Observe that if $e_1, \dots, e_n$ is the natural basis of $\mathbb{R}^n$, then $\Phi_A(e_i, e_j) = a_{ij}$. This shows that $\Phi_A$ is completely determined by its action on the basis vectors $e_1, \dots, e_n$.

Proposition 1.4. For any bilinear form $\Phi \in T^2(\mathbb{R}^n)$ there exists an $n \times n$ real matrix $A$ such that $\Phi = \Phi_A$, where $\Phi_A$ is defined as in Example 1.3.

The proof is left as an exercise.

1.2. The symmetric group.

For any finite sets $A, B$ we denote by $\operatorname{Bij}(A, B)$ the collection of bijective maps $\varphi : A \to B$. We set $S(A) := \operatorname{Bij}(A, A)$. We will refer to $S(A)$ as the symmetric group on $A$ and to its elements as permutations of $A$. Note that if $\varphi, \sigma \in S(A)$ then $\varphi \circ \sigma, \varphi^{-1} \in S(A)$. The composition of two permutations is often referred to as the product of the permutations. We denote by $\mathbb{1}$, or $\mathbb{1}_A$, the identity permutation that does not permute anything, i.e., $\mathbb{1}_A(a) = a$, $\forall a \in A$.

For any finite set $S$ we denote by $|S|$ its cardinality, i.e., the number of elements of $S$. Observe that $\operatorname{Bij}(A, B) \ne \emptyset$ if and only if $|A| = |B|$.

In the special case when $A$ is the discrete interval $A = I_n = \{1, \dots, n\}$ we set $S_n := S(I_n)$. The collection $S_n$ is called the symmetric group on $n$ objects. We will indicate the elements $\varphi \in S_n$ by diagrams of the form
$$\begin{pmatrix} 1 & 2 & \dots & n \\ \varphi_1 & \varphi_2 & \dots & \varphi_n \end{pmatrix}.$$

Proposition 1.5. (a) If $A, B$ are finite sets and $|A| = |B|$, then $|\operatorname{Bij}(A, B)| = |\operatorname{Bij}(B, A)| = |S(A)| = |S(B)|$.
(b) For any positive integer $n$ we have $|S_n| = n! := 1 \cdot 2 \cdots n$.

Proof. (a) Observe that we have a bijective correspondence $\operatorname{Bij}(A, B) \ni \varphi \mapsto \varphi^{-1} \in \operatorname{Bij}(B, A)$, so that $|\operatorname{Bij}(A, B)| = |\operatorname{Bij}(B, A)|$. Next, fix a bijection $\psi : A \to B$. We get a correspondence
$$F_\psi : \operatorname{Bij}(A, A) \to \operatorname{Bij}(A, B), \quad \varphi \mapsto F_\psi(\varphi) = \psi \circ \varphi.$$
This correspondence is injective because
$$F_\psi(\varphi_1) = F_\psi(\varphi_2) \Rightarrow \psi \circ \varphi_1 = \psi \circ \varphi_2 \Rightarrow \psi^{-1} \circ (\psi \circ \varphi_1) = \psi^{-1} \circ (\psi \circ \varphi_2) \Rightarrow \varphi_1 = \varphi_2.$$
This correspondence is also surjective. Indeed, if $\phi \in \operatorname{Bij}(A, B)$ then $\psi^{-1} \circ \phi \in \operatorname{Bij}(A, A)$ and
$$F_\psi(\psi^{-1} \circ \phi) = \psi \circ (\psi^{-1} \circ \phi) = \phi.$$
Thus, $F_\psi$ is a bijection, so that $|S(A)| = |\operatorname{Bij}(A, B)|$.
Finally, we observe that
$$|S(B)| = |\operatorname{Bij}(B, A)| = |\operatorname{Bij}(A, B)| = |S(A)|.$$
This takes care of (a).

To prove (b) we argue by induction. Observe that $|S_1| = 1$ because there exists a single bijection $\{1\} \to \{1\}$. We assume that $|S_{n-1}| = (n-1)!$ and we prove that $|S_n| = n!$. For each $k \in I_n$ we set
$$S_n^k := \{ \varphi \in S_n; \ \varphi(n) = k \}.$$
A permutation $\varphi \in S_n^k$ is uniquely determined by its restriction to $I_n \setminus \{n\} = I_{n-1}$, and this restriction is a bijection $I_{n-1} \to I_n \setminus \{k\}$. Hence
$$|S_n^k| = |\operatorname{Bij}(I_{n-1}, I_n \setminus \{k\})| = |S_{n-1}|,$$
where at the last equality we used part (a). We deduce
$$|S_n| = |S_n^1| + \dots + |S_n^n| = \underbrace{|S_{n-1}| + \dots + |S_{n-1}|}_{n} = n |S_{n-1}| = n(n-1)! = n!,$$
where at the last step we invoked the inductive assumption.

Definition 1.6. An inversion of a permutation $\sigma \in S_n$ is a pair $(i, j) \in I_n \times I_n$ with the following properties: $i < j$ and $\sigma(i) > \sigma(j)$. We denote by $|\sigma|$ the number of inversions of the permutation $\sigma$. The signature of $\sigma$ is then the quantity
$$\operatorname{sign}(\sigma) := (-1)^{|\sigma|} \in \{-1, 1\}.$$
A permutation $\sigma$ is called even/odd if $\operatorname{sign}(\sigma) = \pm 1$. We denote by $S_n^\pm$ the collection of even/odd permutations.

Example 1.7. (a) Consider the permutation $\sigma \in S_5$ given by
$$\sigma = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 5 & 4 & 3 & 2 & 1 \end{pmatrix}.$$
The inversions of $\sigma$ are
$$(1,2), (1,3), (1,4), (1,5), (2,3), (2,4), (2,5), (3,4), (3,5), (4,5),$$
so that $|\sigma| = 10$ and $\operatorname{sign}(\sigma) = 1$.
(b) For any $i \ne j$ in $I_n$ we denote by $\tau_{ij}$ the permutation defined by the equalities
$$\tau_{ij}(k) = \begin{cases} k, & k \ne i, j, \\ j, & k = i, \\ i, & k = j. \end{cases}$$
A transposition is defined to be a permutation of the form $\tau_{ij}$ for some $i < j$. Observe that
$$|\tau_{ij}| = 2|j - i| - 1, \quad \text{so that} \quad \operatorname{sign}(\tau_{ij}) = -1, \quad \forall i \ne j. \tag{1.2}$$
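Definition 1.6 and Example 1.7 translate directly into code; the sketch below (mine, with permutations stored as tuples $p$ with $p[k] = \sigma(k+1)$) counts inversions and evaluates the signature:

```python
# Count inversions and compute sign(sigma) = (-1)^{|sigma|} (Definition 1.6).
# A permutation of I_n is a tuple p of the values sigma(1), ..., sigma(n).

def inversions(p):
    n = len(p)
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])

def sign(p):
    return (-1) ** inversions(p)

# Example 1.7(a): sigma reverses the order of I_5, so every one of the
# 10 = 5*4/2 pairs (i, j), i < j, is an inversion and sign(sigma) = +1.
sigma = (5, 4, 3, 2, 1)
assert inversions(sigma) == 10 and sign(sigma) == 1

# (1.2): the transposition tau_ij has 2|j - i| - 1 inversions, hence sign -1.
tau_24 = (1, 4, 3, 2, 5)   # tau_{24} in S_5: swaps 2 and 4
assert inversions(tau_24) == 2 * (4 - 2) - 1
assert sign(tau_24) == -1
```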
Proposition 1.8. (a) For any $\sigma \in S_n$ we have
$$\operatorname{sign}(\sigma) = \prod_{1 \le i < j \le n} \frac{\sigma(j) - \sigma(i)}{j - i}. \tag{1.3}$$
(b) For any $\varphi, \sigma \in S_n$ we have
$$\operatorname{sign}(\varphi \circ \sigma) = \operatorname{sign}(\varphi) \cdot \operatorname{sign}(\sigma). \tag{1.4}$$
(c) $\operatorname{sign}(\sigma^{-1}) = \operatorname{sign}(\sigma)$.

Proof. (a) Observe that the ratio $\frac{\sigma(j) - \sigma(i)}{j - i}$ is negative if and only if $(i, j)$ is an inversion. Thus the number of negative ratios $\frac{\sigma(j) - \sigma(i)}{j - i}$, $i < j$, is equal to the number of inversions of $\sigma$, so that the product $\prod_{1 \le i < j \le n} \frac{\sigma(j) - \sigma(i)}{j - i}$ has the same sign as the signature of $\sigma$. Hence, to prove (1.3) it suffices to show that
$$\prod_{1 \le i < j \le n} \frac{|\sigma(j) - \sigma(i)|}{|j - i|} = 1, \quad \text{i.e.,} \quad \prod_{i < j} |\sigma(j) - \sigma(i)| = \prod_{i < j} |j - i|. \tag{1.5}$$
This is now obvious because the factors in the left-hand side are exactly the factors in the right-hand side multiplied in a different order. Indeed, for any $i < j$ we can find a unique pair $i' < j'$ such that $\sigma(j') - \sigma(i') = \pm(j - i)$.

(b) Observe that
$$\operatorname{sign}(\varphi) = \prod_{i < j} \frac{\varphi(j) - \varphi(i)}{j - i} = \prod_{i < j} \frac{\varphi(\sigma(j)) - \varphi(\sigma(i))}{\sigma(j) - \sigma(i)},$$
and we deduce
$$\operatorname{sign}(\varphi) \cdot \operatorname{sign}(\sigma) = \prod_{i < j} \frac{\varphi(\sigma(j)) - \varphi(\sigma(i))}{\sigma(j) - \sigma(i)} \cdot \prod_{i < j} \frac{\sigma(j) - \sigma(i)}{j - i} = \prod_{i < j} \frac{\varphi(\sigma(j)) - \varphi(\sigma(i))}{j - i} = \operatorname{sign}(\varphi \circ \sigma).$$
(c) To prove (c) we observe that
$$1 = \operatorname{sign}(\mathbb{1}) = \operatorname{sign}(\sigma^{-1} \circ \sigma) = \operatorname{sign}(\sigma^{-1}) \cdot \operatorname{sign}(\sigma).$$
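The multiplicativity (1.4) is small enough to verify exhaustively over $S_3$; this brute-force check is my own sketch, not an argument from the notes:

```python
# Brute-force check of sign(phi o sigma) = sign(phi) * sign(sigma) over S_3.
from itertools import permutations

def sign(p):
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return (-1) ** inv

def compose(phi, sigma):
    # (phi o sigma)(k) = phi(sigma(k)); tuples are 0-indexed, values 1-based
    return tuple(phi[sigma[k] - 1] for k in range(len(sigma)))

S3 = list(permutations((1, 2, 3)))
assert len(S3) == 6                     # |S_3| = 3! (Proposition 1.5)
assert all(sign(compose(f, s)) == sign(f) * sign(s)
           for f in S3 for s in S3)     # identity (1.4)
```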
1.3. Symmetric and skew-symmetric forms.

Definition 1.9. Let $U$ be an $\mathbb{F}$-vector space, $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$.
(a) A $k$-linear form $\Phi \in T^k(U)$ is called symmetric if for any $u_1, \dots, u_k \in U$ and any permutation $\sigma \in S_k$ we have
$$\Phi(u_{\sigma(1)}, \dots, u_{\sigma(k)}) = \Phi(u_1, \dots, u_k).$$
We denote by $S^k U$ the collection of symmetric $k$-linear forms on $U$.
(b) A $k$-linear form $\Phi \in T^k(U)$ is called skew-symmetric if for any $u_1, \dots, u_k \in U$ and any permutation $\sigma \in S_k$ we have
$$\Phi(u_{\sigma(1)}, \dots, u_{\sigma(k)}) = \operatorname{sign}(\sigma) \Phi(u_1, \dots, u_k).$$
We denote by $\Lambda^k U$ the space of skew-symmetric $k$-linear forms on $U$.

Example 1.10. Suppose that $\Phi \in \Lambda^n U$ and $u_1, \dots, u_n \in U$. The skew-symmetry implies that for any $i < j$ we have
$$\Phi(u_1, \dots, u_{i-1}, u_i, u_{i+1}, \dots, u_{j-1}, u_j, u_{j+1}, \dots, u_n) = -\Phi(u_1, \dots, u_{i-1}, u_j, u_{i+1}, \dots, u_{j-1}, u_i, u_{j+1}, \dots, u_n).$$
Indeed, we have
$$\Phi(u_1, \dots, u_{i-1}, u_j, u_{i+1}, \dots, u_{j-1}, u_i, u_{j+1}, \dots, u_n) = \Phi(u_{\tau_{ij}(1)}, \dots, u_{\tau_{ij}(k)}, \dots, u_{\tau_{ij}(n)})$$
and $\operatorname{sign}(\tau_{ij}) = -1$. In particular, this implies that if $i \ne j$ but $u_i = u_j$, then $\Phi(u_1, \dots, u_n) = 0$.

Proposition 1.11. Suppose that $U$ is an $n$-dimensional $\mathbb{F}$-vector space and $e_1, \dots, e_n$ is a basis of $U$. Then for any scalar $c \in \mathbb{F}$ there exists a unique skew-symmetric $n$-linear form $\Phi \in \Lambda^n U$ such that $\Phi(e_1, \dots, e_n) = c$.

Proof. To understand what is happening we consider first the special case $n = 2$, so $\dim U = 2$. If $\Phi \in \Lambda^2 U$ and $u_1, u_2 \in U$, we can write
$$u_1 = a_{11} e_1 + a_{21} e_2, \quad u_2 = a_{12} e_1 + a_{22} e_2$$
for some scalars $a_{ij} \in \mathbb{F}$, $i, j \in \{1, 2\}$. We have
$$\Phi(u_1, u_2) = \Phi(a_{11} e_1 + a_{21} e_2, a_{12} e_1 + a_{22} e_2) = a_{11} \Phi(e_1, a_{12} e_1 + a_{22} e_2) + a_{21} \Phi(e_2, a_{12} e_1 + a_{22} e_2)$$
$$= a_{11} a_{12} \Phi(e_1, e_1) + a_{11} a_{22} \Phi(e_1, e_2) + a_{21} a_{12} \Phi(e_2, e_1) + a_{21} a_{22} \Phi(e_2, e_2).$$
The skew-symmetry of $\Phi$ implies that
$$\Phi(e_1, e_1) = \Phi(e_2, e_2) = 0, \quad \Phi(e_2, e_1) = -\Phi(e_1, e_2).$$
Hence
$$\Phi(u_1, u_2) = (a_{11} a_{22} - a_{21} a_{12}) \Phi(e_1, e_2).$$
If $\dim U = n$ and $u_1, \dots, u_n \in U$, then we can write
$$u_1 = \sum_{i_1=1}^n a_{i_1 1} e_{i_1}, \ \dots, \ u_k = \sum_{i_k=1}^n a_{i_k k} e_{i_k}, \ \dots$$
so that
$$\Phi(u_1, \dots, u_n) = \Phi\Big( \sum_{i_1=1}^n a_{i_1 1} e_{i_1}, \dots, \sum_{i_n=1}^n a_{i_n n} e_{i_n} \Big) = \sum_{i_1, \dots, i_n = 1}^n a_{i_1 1} \cdots a_{i_n n} \, \Phi(e_{i_1}, \dots, e_{i_n}).$$
Observe that if the indices $i_1, \dots, i_n$ are not pairwise distinct, then $\Phi(e_{i_1}, \dots, e_{i_n}) = 0$. Thus, in the above sum we get contributions only from pairwise distinct choices of indices $i_1, \dots, i_n$. Such a choice corresponds to a permutation $\sigma \in S_n$, $\sigma(k) = i_k$. We deduce that
$$\Phi(u_1, \dots, u_n) = \sum_{\sigma \in S_n} a_{\sigma(1)1} \cdots a_{\sigma(n)n} \, \Phi(e_{\sigma(1)}, \dots, e_{\sigma(n)}) = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) \, a_{\sigma(1)1} \cdots a_{\sigma(n)n} \, \Phi(e_1, \dots, e_n).$$
Thus, $\Phi \in \Lambda^n U$ is uniquely determined by its value on $(e_1, \dots, e_n)$. Conversely, the map
$$(u_1, \dots, u_n) \mapsto c \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) \, a_{\sigma(1)1} \cdots a_{\sigma(n)n}, \quad u_k = \sum_{i=1}^n a_{ik} e_i,$$
is indeed $n$-linear and skew-symmetric. The proof is notationally bushy, but it does not involve any subtle idea, so I will skip it. Instead, I'll leave the proof in the case $n = 2$ as an exercise.

1.4. The determinant of a square matrix.

Consider the vector space $\mathbb{F}^n$ with canonical basis
$$e_1 = (1, 0, \dots, 0)^\top, \ e_2 = (0, 1, 0, \dots, 0)^\top, \ \dots, \ e_n = (0, \dots, 0, 1)^\top.$$
According to Proposition 1.11 there exists a unique $n$-linear skew-symmetric form $\Phi$ on $\mathbb{F}^n$ such that $\Phi(e_1, \dots, e_n) = 1$. We will denote this form by $\det$ and we will refer to it as the determinant form on $\mathbb{F}^n$. The proof of Proposition 1.11 shows that if $u_1, \dots, u_n \in \mathbb{F}^n$, $u_k = (u_{1k}, u_{2k}, \dots, u_{nk})^\top$, $k = 1, \dots, n$, then
$$\det(u_1, \dots, u_n) = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) \, u_{\sigma(1)1} u_{\sigma(2)2} \cdots u_{\sigma(n)n}. \tag{1.6}$$
Note that
$$\det(u_1, \dots, u_n) = \sum_{\varphi \in S_n} \operatorname{sign}(\varphi) \, u_{1\varphi(1)} u_{2\varphi(2)} \cdots u_{n\varphi(n)}. \tag{1.7}$$
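Formula (1.6) can be transcribed literally into code. The sketch below is mine (exponential in $n$, so it is a sanity check rather than a practical algorithm):

```python
# det(u_1, ..., u_n) as the signed sum (1.6) over all permutations.
# M is a list of rows; column k of M holds the vector u_k.
from itertools import permutations

def sign(p):
    n = len(p)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
    return (-1) ** inv

def det(M):
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        # one "rook placement": from column k take the entry in row p[k]
        term = 1
        for k in range(n):
            term *= M[p[k]][k]
        total += sign(p) * term
    return total

assert det([[1, 2], [3, 4]]) == 1 * 4 - 2 * 3        # the 2x2 formula
assert det([[2, 0, 0], [5, 3, 0], [7, 1, 4]]) == 24  # triangular: 2*3*4
```

The second assertion anticipates Proposition 1.15 below: for a triangular matrix only the identity permutation contributes a nonzero term.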
Definition 1.12. Suppose that $A = (a_{ij})_{1 \le i,j \le n}$ is an $n \times n$ matrix with entries in $\mathbb{F}$, which we regard as a linear operator $A : \mathbb{F}^n \to \mathbb{F}^n$. The determinant of $A$ is the scalar
$$\det A := \det(Ae_1, \dots, Ae_n),$$
where $e_1, \dots, e_n$ is the canonical basis of $\mathbb{F}^n$, and $Ae_k = (a_{1k}, a_{2k}, \dots, a_{nk})^\top$ is the $k$-th column of $A$. Thus, according to (1.6) we have
$$\det A = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) \, a_{\sigma(1)1} \cdots a_{\sigma(n)n} \overset{(1.7)}{=} \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) \, a_{1\sigma(1)} \cdots a_{n\sigma(n)}. \tag{1.8}$$

Remark 1.13. Consider a typical summand in the first sum in (1.8), $a_{\sigma(1)1} \cdots a_{\sigma(n)n}$. Observe that the $n$ entries $a_{\sigma(1)1}, a_{\sigma(2)2}, \dots, a_{\sigma(n)n}$ lie on different columns of $A$ and thus occupy all the $n$ columns of $A$. Similarly, these entries lie on different rows of $A$. A collection of $n$ entries such that no two lie on the same row or the same column is called a rook placement.¹ Observe that in order to describe a rook placement, you need to indicate the position of the entry on the first column, by indicating the row $\sigma(1)$ on which it lies, then you need to indicate the position of the entry on the second column, etc. Thus, the sum in (1.8) has one term for each rook placement.

If $A^\top$ denotes the transpose of the $n \times n$ matrix $A$, with entries $a^\top_{ij} = a_{ji}$, we deduce that
$$\det A^\top = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) \, a^\top_{\sigma(1)1} \cdots a^\top_{\sigma(n)n} = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) \, a_{1\sigma(1)} \cdots a_{n\sigma(n)} = \det A. \tag{1.9}$$

Example 1.14. Suppose that $A$ is a $2 \times 2$ matrix
$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}.$$
Then $\det A = a_{11} a_{22} - a_{12} a_{21}$.

Proposition 1.15. If $A$ is an upper triangular $n \times n$ matrix, then $\det A$ is the product of the diagonal entries. A similar result holds if $A$ is lower triangular.

¹ If you are familiar with chess: a rook controls the row and the column at whose intersection it is situated.
Proof. To keep the ideas as transparent as possible, we carry out the proof in the special case $n = 3$. Suppose first that $A$ is upper triangular,
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix},$$
so that
$$Ae_1 = a_{11} e_1, \quad Ae_2 = a_{12} e_1 + a_{22} e_2, \quad Ae_3 = a_{13} e_1 + a_{23} e_2 + a_{33} e_3.$$
Then
$$\det A = \det(Ae_1, Ae_2, Ae_3) = \det(a_{11} e_1, \ a_{12} e_1 + a_{22} e_2, \ a_{13} e_1 + a_{23} e_2 + a_{33} e_3)$$
$$= \underbrace{a_{11} a_{12} \det(e_1, e_1, a_{13} e_1 + a_{23} e_2 + a_{33} e_3)}_{=0} + a_{11} a_{22} \det(e_1, e_2, a_{13} e_1 + a_{23} e_2 + a_{33} e_3)$$
$$= a_{11} a_{22} \Big( \underbrace{\det(e_1, e_2, a_{13} e_1)}_{=0} + \underbrace{\det(e_1, e_2, a_{23} e_2)}_{=0} + \det(e_1, e_2, a_{33} e_3) \Big)$$
$$= a_{11} a_{22} a_{33} \det(e_1, e_2, e_3) = a_{11} a_{22} a_{33}.$$
This proves the proposition when $A$ is upper triangular. If $A$ is lower triangular, then its transpose $A^\top$ is upper triangular and we deduce
$$\det A = \det A^\top = a^\top_{11} a^\top_{22} a^\top_{33} = a_{11} a_{22} a_{33}.$$

Recall that we have a collection of elementary column (row) operations on a matrix. The next result explains the effect of these operations on the determinant of a matrix.

Proposition 1.16. Suppose that $A$ is an $n \times n$ matrix. The following hold.
(a) If the matrix $B$ is obtained from $A$ by multiplying the elements of the $i$-th column of $A$ by the same nonzero scalar $\lambda$, then $\det B = \lambda \det A$.
(b) If the matrix $B$ is obtained from $A$ by switching the order of the columns $i$ and $j$, $i \ne j$, then $\det B = -\det A$.
(c) If the matrix $B$ is obtained from $A$ by adding to the $i$-th column the $j$-th column, $j \ne i$, then $\det B = \det A$.
(d) Similar results hold if we perform row operations of the same type.

Proof. (a) We have
$$\det B = \det(Be_1, \dots, Be_n) = \det(Ae_1, \dots, \lambda Ae_i, \dots, Ae_n) = \lambda \det(Ae_1, \dots, Ae_i, \dots, Ae_n) = \lambda \det A.$$
(b) Observe that for any $\sigma \in S_n$ we have
$$\det(Ae_{\sigma(1)}, \dots, Ae_{\sigma(n)}) = \operatorname{sign}(\sigma) \det(Ae_1, \dots, Ae_n) = \operatorname{sign}(\sigma) \det A.$$
Now observe that the columns of $B$ are
$$Be_1 = Ae_{\tau_{ij}(1)}, \ \dots, \ Be_n = Ae_{\tau_{ij}(n)},$$
and $\operatorname{sign}(\tau_{ij}) = -1$.
For (c) we observe that
$$\det B = \det(Ae_1, \dots, Ae_{i-1}, Ae_i + Ae_j, Ae_{i+1}, \dots, Ae_j, \dots, Ae_n)$$
$$= \det(Ae_1, \dots, Ae_{i-1}, Ae_i, Ae_{i+1}, \dots, Ae_j, \dots, Ae_n) + \underbrace{\det(Ae_1, \dots, Ae_{i-1}, Ae_j, Ae_{i+1}, \dots, Ae_j, \dots, Ae_n)}_{=0} = \det A.$$
Part (d) follows by applying (a), (b), (c) to the transpose of $A$, observing that the rows of $A$ are the columns of $A^\top$, and then using the equality $\det C^\top = \det C$.

The above results represent an efficient method for computing determinants, because we know that by performing elementary row operations on a square matrix we can reduce it to upper triangular form.

Here is a first application of determinants.

Proposition 1.17. Suppose that $A$ is an $n \times n$ matrix with entries in $\mathbb{F}$. Then the following statements are equivalent.
(a) The matrix $A$ is invertible.
(b) $\det A \ne 0$.

Proof. A matrix $A$ is invertible if and only if by performing elementary row operations we can reduce it to an upper triangular matrix $B$ whose diagonal entries are nonzero, i.e., $\det B \ne 0$. By performing elementary row operations the determinant changes by a nonzero factor, so that $\det A \ne 0 \iff \det B \ne 0$.

Corollary 1.18. Suppose that $u_1, \dots, u_n \in \mathbb{F}^n$. The following statements are equivalent.
(a) The vectors $u_1, \dots, u_n$ are linearly independent.
(b) $\det(u_1, \dots, u_n) \ne 0$.

Proof. Consider the linear operator $A : \mathbb{F}^n \to \mathbb{F}^n$ given by $Ae_i = u_i$, $i = 1, \dots, n$. We can tautologically identify it with a matrix, and we have $\det(u_1, \dots, u_n) = \det A$. Now observe that $(u_1, \dots, u_n)$ are linearly independent if and only if $A$ is invertible, and according to the previous proposition this happens if and only if $\det A \ne 0$.

1.5. Additional properties of determinants.

Proposition 1.19. If $A, B$ are two $n \times n$ matrices, then
$$\det AB = \det A \cdot \det B. \tag{1.10}$$

Proof. We have
$$\det AB = \det(ABe_1, \dots, ABe_n) = \det\Big( \sum_{i_1=1}^n b_{i_1 1} Ae_{i_1}, \dots, \sum_{i_n=1}^n b_{i_n n} Ae_{i_n} \Big)$$
$$= \sum_{i_1, \dots, i_n = 1}^n b_{i_1 1} \cdots b_{i_n n} \det(Ae_{i_1}, \dots, Ae_{i_n}).$$
In the above sum, the only nontrivial terms correspond to choices of pairwise distinct indices $i_1, \dots, i_n$. For such a choice, the sequence $i_1, \dots, i_n$ describes a permutation of $I_n$. We deduce
$$\det AB = \sum_{\sigma \in S_n} b_{\sigma(1)1} \cdots b_{\sigma(n)n} \underbrace{\det(Ae_{\sigma(1)}, \dots, Ae_{\sigma(n)})}_{= \operatorname{sign}(\sigma) \det A} = \det A \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) \, b_{\sigma(1)1} \cdots b_{\sigma(n)n} = \det A \cdot \det B.$$

Corollary 1.20. If $A$ is an invertible matrix, then
$$\det A^{-1} = \frac{1}{\det A}.$$

Proof. Indeed, we have $A \cdot A^{-1} = \mathbb{1}$, so that
$$\det A \cdot \det A^{-1} = \det \mathbb{1} = 1.$$

Proposition 1.21. Suppose that $m, n$ are positive integers and $S$ is an $(m+n) \times (m+n)$ matrix that has the block form
$$S = \begin{bmatrix} A & C \\ 0 & B \end{bmatrix},$$
where $A$ is an $m \times m$ matrix, $B$ is an $n \times n$ matrix and $C$ is an $m \times n$ matrix. Then $\det S = \det A \cdot \det B$.

Proof. We denote by $s_{ij}$ the $(i, j)$-entry of $S$, $i, j \in I_{m+n}$. From the block description of $S$ we deduce that
$$j \le m \ \text{and} \ i > m \ \Rightarrow \ s_{ij} = 0. \tag{1.11}$$
We have
$$\det S = \sum_{\sigma \in S_{m+n}} \operatorname{sign}(\sigma) \prod_{i=1}^{m+n} s_{\sigma(i)i}.$$
From (1.11) we deduce that in the above sum the nonzero terms correspond to permutations $\sigma \in S_{m+n}$ such that
$$\sigma(i) \le m, \quad \forall i \le m. \tag{1.12}$$
If $\sigma$ is such a permutation, then its restriction to $I_m$ is a permutation $\alpha$ of $I_m$, and its restriction to $I_{m+n} \setminus I_m$ is a permutation of this set, which we regard as a permutation $\beta$ of $I_n$. Conversely, given $\alpha \in S_m$ and $\beta \in S_n$ we obtain a permutation $\sigma = \alpha \oplus \beta \in S_{m+n}$ satisfying (1.12), given by
$$\alpha \oplus \beta(i) = \begin{cases} \alpha(i), & i \le m, \\ m + \beta(i - m), & i > m. \end{cases}$$
Observe that
$$\operatorname{sign}(\alpha \oplus \beta) = \operatorname{sign}(\alpha) \cdot \operatorname{sign}(\beta),$$
and we deduce
$$\det S = \sum_{\alpha \in S_m, \ \beta \in S_n} \operatorname{sign}(\alpha \oplus \beta) \prod_{i=1}^{m+n} s_{\alpha \oplus \beta(i) i} = \Big( \sum_{\alpha \in S_m} \operatorname{sign}(\alpha) \prod_{i=1}^m s_{\alpha(i)i} \Big) \Big( \sum_{\beta \in S_n} \operatorname{sign}(\beta) \prod_{j=1}^n s_{m+\beta(j), j+m} \Big) = \det A \cdot \det B.$$

Definition 1.22. If $A$ is an $n \times n$ matrix and $i, j \in I_n$, we denote by $A(i, j)$ the matrix obtained from $A$ by removing the $i$-th row and the $j$-th column.

Corollary 1.23. Suppose that the $j$-th column of an $n \times n$ matrix $A$ is sparse, i.e., all the elements on the $j$-th column, with the possible exception of the element on the $i$-th row, are equal to zero. Then
$$\det A = (-1)^{i+j} a_{ij} \det A(i, j).$$

Proof. Observe that if $i = j = 1$ then $A$ has the block form
$$A = \begin{bmatrix} a_{11} & \ast \\ 0 & A(1, 1) \end{bmatrix},$$
and the result follows from Proposition 1.21. We can reduce the general case to this special case by permuting rows and columns of $A$. If we switch the $j$-th column with the $(j-1)$-th column, we can arrange that the $(j-1)$-th column is the sparse column. Iterating this procedure, we deduce after $(j - 1)$ such switches that the first column is the sparse column. By performing $(i - 1)$ row switches we can arrange that the nontrivial element on this sparse column is situated on the first row. Thus, after a total of $i + j - 2$ row and column switches we obtain a new matrix $A'$ with the block form
$$A' = \begin{bmatrix} a_{ij} & \ast \\ 0 & A(i, j) \end{bmatrix}.$$
We have
$$(-1)^{i+j} \det A = (-1)^{i+j-2} \det A = \det A' = a_{ij} \det A(i, j).$$

Corollary 1.24 (Row and column expansion). Fix $j \in I_n$. Then for any $n \times n$ matrix $A$ we have
$$\det A = \sum_{i=1}^n (-1)^{i+j} a_{ij} \det A(i, j) = \sum_{k=1}^n (-1)^{j+k} a_{jk} \det A(j, k).$$
The first equality is referred to as the $j$-th column expansion of $\det A$, while the second equality is referred to as the $j$-th row expansion of $\det A$.

Proof. We prove only the column expansion; the row expansion is obtained by applying the column expansion to the transpose matrix. For simplicity we assume that $j = 1$. We have
$$\det A = \det(Ae_1, Ae_2, \dots, Ae_n) = \det\Big( \sum_{i=1}^n a_{i1} e_i, Ae_2, \dots, Ae_n \Big)$$
$$= \sum_{i=1}^n a_{i1} \det(e_i, Ae_2, \dots, Ae_n).$$
Denote by $A_i$ the matrix whose first column is the basic vector $e_i$, and whose other columns are the corresponding columns $Ae_2, \dots, Ae_n$ of $A$. We can rewrite the last equality as
$$\det A = \sum_{i=1}^n a_{i1} \det A_i.$$
The first column of $A_i$ is sparse, and the submatrix $A_i(i, 1)$ is equal to the submatrix $A(i, 1)$. We deduce from the previous corollary that
$$\det A_i = (-1)^{i+1} \det A_i(i, 1) = (-1)^{i+1} \det A(i, 1).$$
This completes the proof of the column expansion formula.

Corollary 1.25. If $k \ne j$ then
$$\sum_{i=1}^n (-1)^{i+j} a_{ik} \det A(i, j) = 0.$$

Proof. Denote by $A'$ the matrix obtained from $A$ by removing the $j$-th column and replacing it with the $k$-th column of $A$. Thus, in the new matrix $A'$ the $j$-th and the $k$-th columns are identical, so that $\det A' = 0$. On the other hand, $A'(i, j) = A(i, j)$. Expanding $\det A'$ along the $j$-th column we deduce
$$0 = \det A' = \sum_{i=1}^n (-1)^{i+j} a'_{ij} \det A'(i, j) = \sum_{i=1}^n (-1)^{i+j} a_{ik} \det A(i, j).$$

Definition 1.26. For any $n \times n$ matrix $A$ we define the adjoint matrix $\check{A}$ to be the $n \times n$ matrix with entries
$$\check{a}_{ij} = (-1)^{i+j} \det A(j, i), \quad i, j \in I_n.$$

From Corollary 1.24 we deduce that for any $j$ we have
$$\sum_{i=1}^n \check{a}_{ji} a_{ij} = \det A,$$
while Corollary 1.25 implies that for any $j \ne k$ we have
$$\sum_{i=1}^n \check{a}_{ji} a_{ik} = 0.$$
The last two identities can be rewritten in the compact form
$$\check{A} A = (\det A) \mathbb{1}. \tag{1.13}$$
If $A$ is invertible, then from the above equality we conclude that
$$A^{-1} = \frac{1}{\det A} \check{A}. \tag{1.14}$$
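Definition 1.26 and identity (1.13) can be tested on a concrete $3 \times 3$ example; the sketch below is mine, with $\det$ computed via the permutation sum (1.8) and 0-based indices in place of $I_n$:

```python
# Adjoint matrix (Definition 1.26) and the identity adj(A) * A = (det A) * 1.
from itertools import permutations

def det(M):
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        s = (-1) ** sum(1 for i in range(n)
                        for j in range(i + 1, n) if p[i] > p[j])
        term = 1
        for k in range(n):
            term *= M[p[k]][k]
        total += s * term
    return total

def submatrix(A, i, j):
    # A(i, j): delete row i and column j (0-based)
    n = len(A)
    return [[A[r][c] for c in range(n) if c != j] for r in range(n) if r != i]

def adjoint(A):
    # entry (i, j) is (-1)^{i+j} det A(j, i), as in Definition 1.26
    n = len(A)
    return [[(-1) ** (i + j) * det(submatrix(A, j, i)) for j in range(n)]
            for i in range(n)]

A = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
d = det(A)                                       # = 25
Ac = adjoint(A)
prod = [[sum(Ac[i][k] * A[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[d, 0, 0], [0, d, 0], [0, 0, d]]  # identity (1.13)
```

Dividing `Ac` by `d` would give the inverse, as in (1.14).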
Example 1.27. Suppose that $A$ is a $2 \times 2$ matrix
$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}.$$
Then
$$\det A = a_{11} a_{22} - a_{12} a_{21},$$
$$A(1,1) = [a_{22}], \ A(1,2) = [a_{21}], \ A(2,1) = [a_{12}], \ A(2,2) = [a_{11}],$$
$$\check{a}_{11} = \det A(1,1) = a_{22}, \quad \check{a}_{12} = -\det A(2,1) = -a_{12},$$
$$\check{a}_{21} = -\det A(1,2) = -a_{21}, \quad \check{a}_{22} = \det A(2,2) = a_{11},$$
so that
$$\check{A} = \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix},$$
and we observe that
$$\check{A} A = \begin{bmatrix} \det A & 0 \\ 0 & \det A \end{bmatrix}.$$

Proposition 1.28 (Cramer's Rule). Suppose that $A$ is an invertible $n \times n$ matrix and $u, x \in \mathbb{F}^n$ are two column vectors such that $Ax = u$, i.e., $x$ is a solution of the linear system
$$a_{11} x_1 + a_{12} x_2 + \dots + a_{1n} x_n = u_1,$$
$$a_{21} x_1 + a_{22} x_2 + \dots + a_{2n} x_n = u_2,$$
$$\dots$$
$$a_{n1} x_1 + a_{n2} x_2 + \dots + a_{nn} x_n = u_n.$$
Denote by $A_j(u)$ the matrix obtained from $A$ by replacing the $j$-th column with the column vector $u$. Then
$$x_j = \frac{\det A_j(u)}{\det A}, \quad j = 1, \dots, n. \tag{1.15}$$

Proof. By expanding along the $j$-th column of $A_j(u)$ we deduce
$$\det A_j(u) = \sum_{k=1}^n (-1)^{j+k} u_k \det A(k, j). \tag{1.16}$$
On the other hand,
$$(\det A) x = (\check{A} A) x = \check{A} u.$$
Hence
$$(\det A) x_j = \sum_{k=1}^n \check{a}_{jk} u_k = \sum_k (-1)^{k+j} u_k \det A(k, j) \overset{(1.16)}{=} \det A_j(u).$$
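Cramer's rule (1.15) is easy to run on a small system; this worked example is mine (the $2 \times 2$ matrix and right-hand side are arbitrary choices, and `Fraction` keeps the division exact):

```python
# Cramer's rule (1.15) for a 2x2 system A x = u:
# x_j = det A_j(u) / det A, where A_j(u) has column j replaced by u.
from fractions import Fraction

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2, 1],
     [5, 3]]
u = [4, 11]

d = det2(A)   # = 1, so A is invertible (Proposition 1.17)
x = [Fraction(det2([[u[0], A[0][1]], [u[1], A[1][1]]]), d),   # col 1 -> u
     Fraction(det2([[A[0][0], u[0]], [A[1][0], u[1]]]), d)]   # col 2 -> u

# Verify that A x = u.
assert [A[i][0] * x[0] + A[i][1] * x[1] for i in range(2)] == u
```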
1.6. Examples.

To any list of complex numbers $(x_1, \dots, x_n)$ we associate the $n \times n$ matrix
$$V(x_1, \dots, x_n) = \begin{bmatrix} 1 & 1 & \dots & 1 \\ x_1 & x_2 & \dots & x_n \\ \vdots & \vdots & & \vdots \\ x_1^{n-1} & x_2^{n-1} & \dots & x_n^{n-1} \end{bmatrix}. \tag{1.17}$$
This matrix is called the Vandermonde matrix associated to the list of numbers $(x_1, \dots, x_n)$. We want to compute its determinant. Observe first that $\det V(x_1, \dots, x_n) = 0$ if the numbers $x_1, \dots, x_n$ are not distinct. Observe next that
$$\det V(x_1, x_2) = \det \begin{bmatrix} 1 & 1 \\ x_1 & x_2 \end{bmatrix} = x_2 - x_1.$$
Consider now the $3 \times 3$ situation. We have
$$\det V(x_1, x_2, x_3) = \det \begin{bmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ x_1^2 & x_2^2 & x_3^2 \end{bmatrix}.$$
Subtract from the 3rd row the second row multiplied by $x_1$ to deduce
$$\det V(x_1, x_2, x_3) = \det \begin{bmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ 0 & x_2^2 - x_1 x_2 & x_3^2 - x_3 x_1 \end{bmatrix} = \det \begin{bmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ 0 & x_2(x_2 - x_1) & x_3(x_3 - x_1) \end{bmatrix}.$$
Subtract from the 2nd row the first row multiplied by $x_1$ to deduce
$$\det V(x_1, x_2, x_3) = \det \begin{bmatrix} 1 & 1 & 1 \\ 0 & x_2 - x_1 & x_3 - x_1 \\ 0 & x_2(x_2 - x_1) & x_3(x_3 - x_1) \end{bmatrix} = \det \begin{bmatrix} x_2 - x_1 & x_3 - x_1 \\ x_2(x_2 - x_1) & x_3(x_3 - x_1) \end{bmatrix}$$
$$= (x_2 - x_1)(x_3 - x_1) \det \begin{bmatrix} 1 & 1 \\ x_2 & x_3 \end{bmatrix} = (x_2 - x_1)(x_3 - x_1) \det V(x_2, x_3) = (x_2 - x_1)(x_3 - x_1)(x_3 - x_2).$$
We can write the above equalities in a more compact form:
$$\det V(x_1, x_2) = \prod_{1 \le i < j \le 2} (x_j - x_i), \quad \det V(x_1, x_2, x_3) = \prod_{1 \le i < j \le 3} (x_j - x_i). \tag{1.18}$$
A similar row manipulation argument (left to you as an exercise) shows that
$$\det V(x_1, \dots, x_n) = (x_2 - x_1) \cdots (x_n - x_1) \det V(x_2, \dots, x_n). \tag{1.19}$$
We have the following general result.

Proposition 1.29. For any integer $n \ge 2$ and any complex numbers $x_1, \dots, x_n$ we have
$$\det V(x_1, \dots, x_n) = \prod_{1 \le i < j \le n} (x_j - x_i). \tag{1.20}$$
Proof. We will argue by induction on $n$. The case $n = 2$ is contained in (1.18). Assume now that (1.20) is true for $n - 1$. This means that
$$\det V(x_2, \dots, x_n) = \prod_{2 \le i < j \le n} (x_j - x_i).$$
Using this in (1.19) we deduce
$$\det V(x_1, \dots, x_n) = (x_2 - x_1) \cdots (x_n - x_1) \prod_{2 \le i < j \le n} (x_j - x_i) = \prod_{1 \le i < j \le n} (x_j - x_i).$$

Here is a simple application of the above computation.

Corollary 1.30. If $x_1, \dots, x_n$ are distinct complex numbers, then for any complex numbers $r_1, \dots, r_n$ there exists a polynomial of degree $\le n - 1$ uniquely determined by the conditions
$$P(x_1) = r_1, \ \dots, \ P(x_n) = r_n. \tag{1.21}$$

Proof. The polynomial $P$ must have the form
$$P(x) = a_0 + a_1 x + \dots + a_{n-1} x^{n-1},$$
where the coefficients $a_0, \dots, a_{n-1}$ are to be determined. We will do this using (1.21), which can be rewritten as a system of linear equations in which the unknowns are the coefficients $a_0, \dots, a_{n-1}$:
$$a_0 + a_1 x_1 + \dots + a_{n-1} x_1^{n-1} = r_1,$$
$$a_0 + a_1 x_2 + \dots + a_{n-1} x_2^{n-1} = r_2,$$
$$\dots$$
$$a_0 + a_1 x_n + \dots + a_{n-1} x_n^{n-1} = r_n.$$
We can rewrite this in matrix form:
$$\begin{bmatrix} 1 & x_1 & \dots & x_1^{n-1} \\ 1 & x_2 & \dots & x_2^{n-1} \\ \vdots & & & \vdots \\ 1 & x_n & \dots & x_n^{n-1} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \end{bmatrix} = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix},$$
where the coefficient matrix is the transpose of $V(x_1, \dots, x_n)$. Because the numbers $x_1, \dots, x_n$ are distinct, we deduce from (1.9) and (1.20) that its determinant equals $\det V(x_1, \dots, x_n) \ne 0$. Hence the above linear system has a unique solution $a_0, \dots, a_{n-1}$.
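Formula (1.20) can be cross-checked numerically; the sketch below is my own, computing both sides in exact integer arithmetic for one concrete list of nodes:

```python
# Check det V(x_1,...,x_n) = prod_{i<j} (x_j - x_i) for xs = [1, 2, 4, 7].
from itertools import permutations

def det(M):
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        s = (-1) ** sum(1 for i in range(n)
                        for j in range(i + 1, n) if p[i] > p[j])
        term = 1
        for k in range(n):
            term *= M[p[k]][k]
        total += s * term
    return total

def vandermonde(xs):
    # rows 1, x, x^2, ..., x^{n-1}, as in (1.17)
    n = len(xs)
    return [[x ** r for x in xs] for r in range(n)]

xs = [1, 2, 4, 7]
expected = 1
for i in range(len(xs)):
    for j in range(i + 1, len(xs)):
        expected *= xs[j] - xs[i]

assert det(vandermonde(xs)) == expected
```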
1.7. Exercises.

Exercise 1.1. Prove that the map in Example 1.2 is indeed a bilinear map.

Exercise 1.2. Prove Proposition 1.4.

Exercise* 1.3. (a) Show that for any $i \ne j$ in $I_n$ we have $\tau_{ij} \circ \tau_{ij} = \mathbb{1}_{I_n}$.
(b) Prove that for any permutation $\sigma \in S_n$ there exists a sequence of transpositions $\tau_{i_1 j_1}, \dots, \tau_{i_m j_m}$, $m < n$, such that
$$\tau_{i_m j_m} \circ \dots \circ \tau_{i_1 j_1} \circ \sigma = \mathbb{1}_{I_n}.$$
Conclude that any permutation is a product of transpositions.

Exercise 1.4. Decompose the permutation ( ) as a composition of transpositions.

Exercise 1.5. Suppose that $\Phi \in T^2(U)$ is a symmetric bilinear map. Define $Q : U \to \mathbb{F}$ by setting
$$Q(u) = \Phi(u, u), \quad u \in U.$$
Show that for any $u, v \in U$ we have
$$\Phi(u, v) = \frac{1}{4} \big( Q(u + v) - Q(u - v) \big).$$

Exercise 1.6. Prove that the map $\Phi : \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}$, $\Phi(u, v) = u_1 v_2 - u_2 v_1$, is bilinear and skew-symmetric.

Exercise 1.7. (a) Show that a bilinear form $\Phi : U \times U \to \mathbb{F}$ is skew-symmetric if and only if
$$\Phi(u, u) = 0, \quad \forall u \in U.$$
Hint: Expand $\Phi(u + v, u + v)$ using the bilinearity of $\Phi$.
(b) Prove that an $n$-linear form $\Phi \in T^n(U)$ is skew-symmetric if and only if for any $i \ne j$ and any vectors $u_1, \dots, u_n \in U$ such that $u_i = u_j$ we have $\Phi(u_1, \dots, u_n) = 0$. Hint: Use the trick in part (a) and Exercise 1.3.

Exercise 1.8. Compute the determinant of the following $5 \times 5$ matrix.
Exercise 1.9. Fix complex numbers $x$ and $h$. Compute the determinant of the matrix with rows
$$x \ \ h \ \ 1 \ \ 0, \qquad x^2 \ \ hx \ \ h \ \ 1, \qquad x^3 \ \ hx^2 \ \ hx \ \ h.$$
Can you generalize this example?

Exercise 1.10. Prove the equality (1.19).

Exercise 1.11. (a) Consider a degree $n - 1$ polynomial
$$P(x) = a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + \dots + a_1 x + a_0, \quad a_{n-1} \ne 0.$$
Compute the determinant of the following matrix:
$$V' = \begin{bmatrix} 1 & 1 & \dots & 1 \\ x_1 & x_2 & \dots & x_n \\ \vdots & & & \vdots \\ x_1^{n-2} & x_2^{n-2} & \dots & x_n^{n-2} \\ P(x_1) & P(x_2) & \dots & P(x_n) \end{bmatrix}.$$
(b) Compute the determinants of the following $n \times n$ matrices:
$$A = \begin{bmatrix} 1 & 1 & \dots & 1 \\ x_1 & x_2 & \dots & x_n \\ \vdots & & & \vdots \\ x_1^{n-2} & x_2^{n-2} & \dots & x_n^{n-2} \\ x_2 x_3 \cdots x_n & x_1 x_3 x_4 \cdots x_n & \dots & x_1 x_2 \cdots x_{n-1} \end{bmatrix}$$
and
$$B = \begin{bmatrix} 1 & 1 & \dots & 1 \\ x_1 & x_2 & \dots & x_n \\ \vdots & & & \vdots \\ x_1^{n-2} & x_2^{n-2} & \dots & x_n^{n-2} \\ (x_2 + x_3 + \dots + x_n)^{n-1} & (x_1 + x_3 + x_4 + \dots + x_n)^{n-1} & \dots & (x_1 + \dots + x_{n-1})^{n-1} \end{bmatrix}.$$
Hint: To compute $\det B$ it is wise to write $S = x_1 + \dots + x_n$, so that $x_2 + x_3 + \dots + x_n = S - x_1$, $x_1 + x_3 + \dots + x_n = S - x_2$, etc. Next observe that $(S - x)^k$ is a polynomial of degree $k$ in $x$.

Exercise 1.12. Suppose that $A$ is a skew-symmetric $n \times n$ matrix, i.e., $A^\top = -A$. Show that $\det A = 0$ if $n$ is odd.

Exercise 1.13. Suppose that $A = (a_{ij})_{1 \le i,j \le n}$ is an $n \times n$ matrix with complex entries.
(a) Fix complex numbers $x_1, \dots, x_n, y_1, \dots, y_n$ and consider the $n \times n$ matrix $B$ with entries
$$b_{ij} = x_i y_j a_{ij}.$$
Show that
$$\det B = (x_1 y_1 \cdots x_n y_n) \det A.$$
(b) Suppose that $C$ is the $n \times n$ matrix with entries
$$c_{ij} = (-1)^{i+j} a_{ij}.$$
Show that $\det C = \det A$.

Exercise 1.14. (a) Suppose we are given three sequences of numbers $a = (a_k)_{k \ge 1}$, $b = (b_k)_{k \ge 1}$ and $c = (c_k)_{k \ge 1}$. To these sequences we associate a sequence of Jacobi matrices
$$J_n = \begin{bmatrix} a_1 & b_1 & 0 & 0 & \dots & 0 \\ c_1 & a_2 & b_2 & 0 & \dots & 0 \\ 0 & c_2 & a_3 & b_3 & \dots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \dots & 0 & c_{n-1} & a_n \end{bmatrix}. \tag{J}$$
Show that
$$\det J_n = a_n \det J_{n-1} - b_{n-1} c_{n-1} \det J_{n-2}. \tag{1.22}$$
Hint: Expand along the last row.
(b) Suppose that above we have
$$c_k = 1, \quad b_k = 2, \quad a_k = 3, \quad \forall k \ge 1.$$
Compute $\det J_1, \det J_2$. Using (1.22), determine $\det J_3, J_4, J_5, J_6, J_7$. Can you detect a pattern?

Exercise 1.15. Suppose we are given a sequence of polynomials with complex coefficients $(P_n(x))_{n \ge 0}$, $\deg P_n = n$ for all $n \ge 0$,
$$P_n(x) = a_n x^n + \cdots, \quad a_n \ne 0.$$
Denote by $V_n$ the space of polynomials with complex coefficients and degree $\le n$.
(a) Show that the collection $\{P_0(x), \dots, P_n(x)\}$ is a basis of $V_n$.
(b) Show that for any $x_1, \dots, x_n \in \mathbb{C}$ we have
$$\det \begin{bmatrix} P_0(x_1) & P_0(x_2) & \dots & P_0(x_n) \\ P_1(x_1) & P_1(x_2) & \dots & P_1(x_n) \\ \vdots & & & \vdots \\ P_{n-1}(x_1) & P_{n-1}(x_2) & \dots & P_{n-1}(x_n) \end{bmatrix} = a_0 a_1 \cdots a_{n-1} \prod_{i < j} (x_j - x_i).$$

Exercise 1.16. To any polynomial $P(x) = c_0 + c_1 x + \dots + c_{n-1} x^{n-1}$ of degree $\le n - 1$ with complex coefficients we associate the $n \times n$ circulant matrix
$$C_P = \begin{bmatrix} c_0 & c_1 & c_2 & \dots & c_{n-2} & c_{n-1} \\ c_{n-1} & c_0 & c_1 & \dots & c_{n-3} & c_{n-2} \\ c_{n-2} & c_{n-1} & c_0 & \dots & c_{n-4} & c_{n-3} \\ \vdots & & & & & \vdots \\ c_1 & c_2 & c_3 & \dots & c_{n-1} & c_0 \end{bmatrix}.$$
Set $\rho = e^{\frac{2\pi i}{n}}$, $i = \sqrt{-1}$, so that $\rho^n = 1$. Consider the $n \times n$ Vandermonde matrix $V_\rho = V(1, \rho, \dots, \rho^{n-1})$ defined as in (1.17).
(a) Show that for any $j = 1, \dots, n - 1$ we have
$$1 + \rho^j + \rho^{2j} + \dots + \rho^{(n-1)j} = 0.$$
(b) Show that
$$C_P V_\rho = V_\rho \operatorname{Diag}\big( P(1), P(\rho), \dots, P(\rho^{n-1}) \big),$$
where $\operatorname{Diag}(a_1, \dots, a_n)$ denotes the diagonal $n \times n$ matrix with diagonal entries $a_1, \dots, a_n$.
(c) Show that
$$\det C_P = P(1) P(\rho) \cdots P(\rho^{n-1}).$$
(d) Suppose that $P(x) = 1 + 2x + 3x^2 + 4x^3$, so that $C_P$ is a $4 \times 4$ matrix with integer entries and thus $\det C_P$ is an integer. Find this integer. Can you generalize this computation?

Exercise 1.17. Consider the $n \times n$ matrix $A = $
(a) Find the matrices $A^2, A^3, \dots, A^n$.
(b) Compute $(I - A)(I + A + \dots + A^{n-1})$.
(c) Find the inverse of $I - A$.

Exercise 1.18. Let $P(x) = x^d + a_{d-1} x^{d-1} + \dots + a_1 x + a_0$ be a polynomial of degree $d$ with complex coefficients. We denote by $\mathcal{S}$ the collection of sequences of complex numbers, i.e., functions $f : \{0, 1, 2, \dots\} \to \mathbb{C}$, $n \mapsto f(n)$. This is a complex vector space in a standard fashion. We denote by $\mathcal{S}_P$ the subcollection of sequences $f \in \mathcal{S}$ satisfying the recurrence relation
$$f(n + d) + a_{d-1} f(n + d - 1) + \dots + a_1 f(n + 1) + a_0 f(n) = 0, \quad \forall n \ge 0. \tag{R_P}$$
(a) Show that $\mathcal{S}_P$ is a vector subspace of $\mathcal{S}$.
(b) Show that the map $I : \mathcal{S}_P \to \mathbb{C}^d$ which associates to $f \in \mathcal{S}_P$ its initial values $If$,
$$If = (f(0), f(1), \dots, f(d-1))^\top \in \mathbb{C}^d,$$
is an isomorphism of vector spaces.
(c) For any $\lambda \in \mathbb{C}$ we consider the sequence $f_\lambda$ defined by $f_\lambda(n) = \lambda^n$, $n \ge 0$. (Above, it is understood that $\lambda^0 = 1$.) Show that $f_\lambda \in \mathcal{S}_P$ if and only if $P(\lambda) = 0$, i.e., $\lambda$ is a root of $P$.
(d) Suppose $P$ has $d$ distinct roots $\lambda_1, \dots, \lambda_d \in \mathbb{C}$. Show that the collection of sequences $f_{\lambda_1}, \dots, f_{\lambda_d}$ is a basis of $\mathcal{S}_P$.
(e) Consider the Fibonacci sequence $(f(n))_{n \ge 0}$ defined by
$$f(0) = f(1) = 1, \quad f(n + 2) = f(n + 1) + f(n), \quad n \ge 0.$$
Thus, $f(2) = 2$, $f(3) = 3$, $f(4) = 5$, $f(5) = 8$, $f(6) = 13, \dots$. Use the results (a)-(d) above to find a short formula describing $f(n)$.

Exercise 1.19. Let $b, c$ be two distinct complex numbers. Consider the $n \times n$ Jacobi matrix
$$J_n = \begin{bmatrix} b + c & b & 0 & 0 & \dots & 0 \\ c & b + c & b & 0 & \dots & 0 \\ 0 & c & b + c & b & \dots & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \dots & 0 & c & b + c \end{bmatrix}.$$
Find a short formula for $\det J_n$. Hint: Use the results in Exercises 1.14 and 1.18.
2. SPECTRAL DECOMPOSITION OF LINEAR OPERATORS

2.1. Invariants of linear operators.

Suppose that $U$ is an $n$-dimensional $\mathbb{F}$-vector space. We denote by $L(U)$ the space of linear operators (maps) $T : U \to U$. We already know that once we choose a basis $e = (e_1, \dots, e_n)$ of $U$ we can represent $T$ by a matrix
$$A = M(e, T) = (a_{ij})_{1 \le i,j \le n},$$
where the elements of the $k$-th column of $A$ describe the coordinates of $Te_k$ in the basis $e$, i.e.,
$$Te_k = a_{1k} e_1 + \dots + a_{nk} e_n = \sum_{j=1}^n a_{jk} e_j.$$
A priori, there is no good reason for choosing the basis $e = (e_1, \dots, e_n)$ over another basis $f = (f_1, \dots, f_n)$. With respect to this new basis, the operator $T$ is represented by another matrix
$$B = M(f, T) = (b_{ij})_{1 \le i,j \le n}, \quad Tf_k = \sum_{j=1}^n b_{jk} f_j.$$
The basis $f$ is related to the basis $e$ by a transition matrix $C = (c_{ij})_{1 \le i,j \le n}$,
$$f_k = \sum_{j=1}^n c_{jk} e_j.$$
Thus, the $k$-th column of $C$ describes the coordinates of the vector $f_k$ in the basis $e$. Then $C$ is invertible and
$$B = C^{-1} A C. \tag{2.1}$$
The space $U$ has lots of bases, so the same operator $T$ can be represented by many different matrices. The question we want to address in this section can be loosely stated as follows: find bases of $U$ so that, in these bases, the operator $T$ is represented by very simple matrices. We will not define what a "very simple" matrix is, but we will agree that the more zeros a matrix has, the simpler it is. We already know that we can find bases in which the operator $T$ is represented by upper triangular matrices. These have lots of zero entries, but it turns out that we can do much better than this.

The above question is closely related to the concept of invariant of a linear operator. An invariant is, roughly speaking, a quantity naturally associated to the operator that does not change when we change bases.

Definition 2.1. (a) A subspace $V \subset U$ is called an invariant subspace of the linear operator $T \in L(U)$ if
$$Tv \in V, \quad \forall v \in V.$$
(b) A nonzero vector u_0 ∈ U is called an eigenvector of the linear operator T if and only if the linear subspace spanned by u_0 is an invariant subspace of T.

Example 2.2. (a) Suppose that T : U → U is a linear operator. Its null space or kernel

ker T := { u ∈ U ; T u = 0 }

is an invariant subspace of T. Its dimension, dim ker T, is an invariant of T because in its definition we have not mentioned any particular basis. We have already encountered this dimension under a different guise.
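Since dim ker T is defined without mentioning a basis, the nullity of a representing matrix must be unchanged by the transition rule (2.1). A quick numerical sketch, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(1)
# Rank-2 4x4 matrix representing T in a basis e, so dim ker T = 2.
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))
C = rng.standard_normal((4, 4))      # change-of-basis matrix (invertible a.s.)
B = np.linalg.inv(C) @ A @ C         # matrix of T in the new basis, rule (2.1)

def nullity(M):
    return M.shape[0] - np.linalg.matrix_rank(M)

print(nullity(A), nullity(B))        # 2 2 -- the same in every basis
```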
If we choose a basis e = (e_1, ..., e_n) of U and use it to represent T as an n × n matrix A = (a_{ij})_{1≤i,j≤n}, then dim ker T is equal to the nullity of A, i.e., the dimension of the vector space of solutions of the linear system

A x = 0,   x = (x_1, ..., x_n)ᵀ ∈ Fⁿ.

The range

R(T) = { T u ; u ∈ U }

is also an invariant subspace of T. Its dimension dim R(T) can be identified with the rank of the matrix A above. The rank-nullity theorem implies that

dim ker T + dim R(T) = dim U. (2.2)

(b) Suppose that u_0 ∈ U is an eigenvector of T. Then T u_0 ∈ span(u_0), so that there exists λ ∈ F such that T u_0 = λ u_0.

2.2. The determinant and the characteristic polynomial of an operator. Assume again that U is an n-dimensional F-vector space. A more subtle invariant of an operator T ∈ L(U) is its determinant. This is a scalar det T ∈ F. Its definition requires a choice of a basis of U, but the end result is independent of any choice of basis. Here are the details. Fix a basis e = (e_1, ..., e_n) of U. We use it to represent T as an n × n matrix A = (a_{ij})_{1≤i,j≤n}. More precisely, this means that

T e_j = Σ_{i=1}^n a_{ij} e_i,   j = 1, ..., n.

If we choose another basis of U,

f = (f_1, ..., f_n),

then we can represent T by another n × n matrix B = (b_{ij})_{1≤i,j≤n}, i.e.,

T f_j = Σ_{i=1}^n b_{ij} f_i,   j = 1, ..., n.

As we have discussed above, the basis f is obtained from e via a change-of-basis matrix C = (c_{ij})_{1≤i,j≤n}, i.e.,

f_j = Σ_{i=1}^n c_{ij} e_i,   j = 1, ..., n.

Moreover, the matrices A, B, C are related by the transition rule (2.1), B = C^{-1} A C. Thus

det B = det(C^{-1} A C) = det C^{-1} · det A · det C = det A.

The upshot is that the matrices A and B have the same determinant. Thus, no matter what basis of U we choose to represent T as an n × n matrix, the determinant of that matrix is independent of the basis used. This number, denoted by det T, is an invariant of T called the determinant of the operator T. Here is a simple application of this concept.
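First, though, the basis-independence established above is easy to check numerically. A small sketch, assuming numpy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # matrix of T in a basis e
C = rng.standard_normal((4, 4))      # transition matrix to a basis f
B = np.linalg.inv(C) @ A @ C         # matrix of T in the basis f, rule (2.1)
dA, dB = np.linalg.det(A), np.linalg.det(B)
print(abs(dA - dB) <= 1e-6 * max(1.0, abs(dA)))   # True: det is basis-independent
```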
Corollary 2.3. ker T ≠ 0 ⟺ det T = 0.

More generally, for any x ∈ F consider the operator x1 − T : U → U defined by

(x1 − T)u = xu − Tu,   ∀u ∈ U.

We set

P_T(x) = det(x1 − T).

Proposition 2.4. The quantity P_T(x) is a polynomial of degree n = dim U in the variable x.

Proof. Choose a basis e = (e_1, ..., e_n). In this basis T is represented by an n × n matrix A = (a_{ij})_{1≤i,j≤n} and the operator x1 − T is represented by the matrix

xI − A =
[ x − a_11   −a_12     −a_13    ···    −a_1n   ]
[  −a_21    x − a_22   −a_23    ···    −a_2n   ]
[    ⋮          ⋮          ⋮     ⋱       ⋮     ]
[  −a_n1     −a_n2     −a_n3    ···   x − a_nn ]

As explained in Remark 1.13, the determinant of this matrix is a sum of products of certain choices of n entries of this matrix, namely the entries that form a rook placement. Since there are exactly n entries in this matrix that contain the variable x, we see that each product associated to a rook placement of entries is a polynomial in x of degree at most n. There exists exactly one rook placement such that each of the entries of this placement contains the term x. This placement is easily described: it consists of the entries situated on the diagonal of this matrix, and the product associated to these entries is

(x − a_11) ··· (x − a_nn).

Any other rook placement contains at most (n − 1) entries that involve the term x, so the corresponding product of these entries is a polynomial of degree at most n − 1. Hence

det(xI − A) = (x − a_11) ··· (x − a_nn) + polynomial of degree ≤ n − 1.

Hence P_T(x) = det(xI − A) is a polynomial of degree n in x.

Definition 2.5. The polynomial P_T(x) is called the characteristic polynomial of the operator T.

Recall that a number λ ∈ F is called an eigenvalue of the operator T if and only if there exists u ∈ U \ {0} such that Tu = λu, i.e., (λ1 − T)u = 0. Thus λ is an eigenvalue of T if and only if ker(λ1 − T) ≠ 0. Invoking Corollary 2.3 we obtain the following important result.

Corollary 2.6. A scalar λ ∈ F is an eigenvalue of T if and only if it is a root of the characteristic polynomial of T, i.e., P_T(λ) = 0.
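Corollary 2.6 is easy to see in action: numpy can recover the coefficients of det(xI − A), and the roots of that polynomial are exactly the eigenvalues. A sketch, assuming numpy is available:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
coeffs = np.poly(A)            # coefficients of det(xI - A), highest degree first
print(coeffs)                  # [ 1. -5.  6.], i.e. x^2 - 5x + 6 = (x - 2)(x - 3)
roots = np.roots(coeffs)
print(np.sort(roots.real))     # approximately [2. 3.] -- the eigenvalues of A
```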
The collection of eigenvalues of an operator T is called the spectrum of T and is denoted by spec(T). If λ ∈ spec(T), then the subspace ker(λ1 − T) ⊂ U is called the eigenspace corresponding to the eigenvalue λ. From the above corollary and the fundamental theorem of algebra we obtain the following important consequence.

Corollary 2.7. If T : U → U is a linear operator on a complex vector space U, then spec(T) ≠ ∅.

We say that a linear operator T : U → U is triangulable if there exists a basis e = (e_1, ..., e_n) of U such that the matrix A representing T in this basis is upper triangular. We will refer to A as a triangular representation of T. Triangular representations, if they exist, are not unique. We already know that any linear operator on a complex vector space is triangulable.

Corollary 2.8. Suppose that T : U → U is a triangulable operator. Then for any basis e = (e_1, ..., e_n) of U such that the matrix A = (a_{ij})_{1≤i,j≤n} representing T in this basis is upper triangular, we have

P_T(x) = (x − a_11) ··· (x − a_nn).

Thus, the eigenvalues of T are the elements along the diagonal of any triangular representation of T.

2.3. Generalized eigenspaces. Suppose that T : U → U is a linear operator on the n-dimensional F-vector space U and that spec(T) ≠ ∅. Choose an eigenvalue λ ∈ spec(T).

Lemma 2.9. Let k be a positive integer. Then

ker(λ1 − T)^k ⊂ ker(λ1 − T)^{k+1}.

Moreover, if ker(λ1 − T)^k = ker(λ1 − T)^{k+1}, then

ker(λ1 − T)^k = ker(λ1 − T)^{k+1} = ker(λ1 − T)^{k+2} = ker(λ1 − T)^{k+3} = ···.

Proof. Observe that if (λ1 − T)^k u = 0, then

(λ1 − T)^{k+1} u = (λ1 − T)(λ1 − T)^k u = 0,

so that ker(λ1 − T)^k ⊂ ker(λ1 − T)^{k+1}. Suppose that ker(λ1 − T)^k = ker(λ1 − T)^{k+1}. To prove that ker(λ1 − T)^{k+1} = ker(λ1 − T)^{k+2} it suffices to show that

ker(λ1 − T)^{k+2} ⊂ ker(λ1 − T)^{k+1}.

Let v ∈ ker(λ1 − T)^{k+2}. Then

(λ1 − T)^{k+1} (λ1 − T)v = 0,

so that (λ1 − T)v ∈ ker(λ1 − T)^{k+1} = ker(λ1 − T)^k, so that

(λ1 − T)^k (λ1 − T)v = 0,

i.e., v ∈ ker(λ1 − T)^{k+1}. We have thus shown that ker(λ1 − T)^{k+1} = ker(λ1 − T)^{k+2}.
The remaining equalities ker(λ1 − T)^{k+2} = ker(λ1 − T)^{k+3} = ··· are proven in a similar fashion. □
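The stabilization phenomenon in Lemma 2.9 is easy to observe numerically. A sketch using a single nilpotent Jordan block (numpy assumed available):

```python
import numpy as np

n = 4
N = np.diag(np.ones(n - 1), k=1)    # one nilpotent Jordan block, eigenvalue 0
dims = []
for k in range(1, n + 3):
    Nk = np.linalg.matrix_power(N, k)
    dims.append(int(n - np.linalg.matrix_rank(Nk)))   # dim ker N^k
print(dims)   # [1, 2, 3, 4, 4, 4]: strictly increasing, then constant forever
```

Once two consecutive kernels coincide (here at k = 4), the chain freezes, exactly as the lemma predicts.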
Corollary 2.10. For any m ≥ n = dim U we have

ker(λ1 − T)^m = ker(λ1 − T)^n, (2.3a)

R(λ1 − T)^m = R(λ1 − T)^n. (2.3b)

Proof. Consider the sequence of positive integers

d_1(λ) = dim ker(λ1 − T), ..., d_k(λ) = dim ker(λ1 − T)^k, ....

Lemma 2.9 shows that

d_1(λ) ≤ d_2(λ) ≤ ··· ≤ n = dim U.

Thus there must exist k such that d_k(λ) = d_{k+1}(λ). We set

k_0 = min{ k ; d_k(λ) = d_{k+1}(λ) }.

Thus

d_1(λ) < ··· < d_{k_0}(λ) ≤ n,

so that k_0 ≤ n. On the other hand, since d_{k_0}(λ) = d_{k_0+1}(λ) we deduce that

ker(λ1 − T)^{k_0} = ker(λ1 − T)^m,   ∀m ≥ k_0.

Since n ≥ k_0 we deduce

ker(λ1 − T)^n = ker(λ1 − T)^{k_0} = ker(λ1 − T)^m,   ∀m ≥ k_0.

This proves (2.3a). To prove (2.3b) observe that if m > n, then

R(λ1 − T)^m = (λ1 − T)^n( (λ1 − T)^{m−n} U ) ⊂ (λ1 − T)^n( U ) = R(λ1 − T)^n.

On the other hand, the rank-nullity formula (2.2) implies that

dim R(λ1 − T)^n = dim U − dim ker(λ1 − T)^n = dim U − dim ker(λ1 − T)^m = dim R(λ1 − T)^m.

This proves (2.3b). □

Definition 2.11. Let T : U → U be a linear operator on the n-dimensional F-vector space U. Then for any λ ∈ spec(T) the subspace ker(λ1 − T)^n is called the generalized eigenspace of T corresponding to the eigenvalue λ and is denoted by E_λ(T). We will denote its dimension by m_λ(T), or m_λ, and we will refer to it as the multiplicity of the eigenvalue λ.

Proposition 2.12. Let T ∈ L(U), dim_F U = n, and λ ∈ spec(T). Then the generalized eigenspace E_λ(T) is an invariant subspace of T.

Proof. We need to show that T E_λ(T) ⊂ E_λ(T). Let u ∈ E_λ(T), i.e., (λ1 − T)^n u = 0. Clearly

λu − Tu = (λ1 − T)u ∈ ker(λ1 − T)^{n+1} = ker(λ1 − T)^n = E_λ(T),

using (2.3a). Since λu ∈ E_λ(T) we deduce that

Tu = λu − (λu − Tu) ∈ E_λ(T). □
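The multiplicity m_λ(T) = dim ker(λ1 − T)^n just defined can be computed directly for a small example; for the upper triangular matrix below it matches the number of times each eigenvalue appears on the diagonal. A numerical sketch, assuming numpy is available:

```python
import numpy as np

# Upper triangular matrix: 2 appears twice on the diagonal, 5 appears once.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 3.0],
              [0.0, 0.0, 5.0]])
n = A.shape[0]
mult = {}
for lam in (2.0, 5.0):
    M = np.linalg.matrix_power(lam * np.eye(n) - A, n)
    mult[lam] = int(n - np.linalg.matrix_rank(M))    # dim ker (lam*1 - T)^n
print(mult)   # {2.0: 2, 5.0: 1}: multiplicities match the diagonal counts
```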
Theorem 2.13. Suppose that T : U → U is a triangulable operator on the n-dimensional F-vector space U. Then the following hold.

(a) For any λ ∈ spec(T) the multiplicity m_λ is equal to the number of times λ appears along the diagonal of a triangular representation of T.

(b)

det T = Π_{λ∈spec(T)} λ^{m_λ(T)}, (2.4a)

P_T(x) = Π_{λ∈spec(T)} (x − λ)^{m_λ(T)}, (2.4b)

Σ_{λ∈spec(T)} m_λ(T) = deg P_T = dim U = n. (2.4c)

Proof. To prove (a) we will argue by induction on n. For n = 1 the result is trivially true. For the inductive step we assume that the result is true for any triangulable operator on an (n − 1)-dimensional F-vector space V, and we will prove that the same is true for triangulable operators acting on an n-dimensional space U. Let T ∈ L(U) be such an operator. We can then find a basis e = (e_1, ..., e_n) of U such that, in this basis, the operator T is represented by the upper triangular matrix

A =
[ λ_1   ∗    ···   ∗   ]
[  0   λ_2   ···   ∗   ]
[  ⋮          ⋱    ⋮   ]
[  0    0    ···  λ_n  ]

Suppose that λ ∈ spec(T). For simplicity we assume λ = 0; otherwise, we carry out the discussion for the operator T − λ1. Let ν be the number of times 0 appears on the diagonal of A; we have to show that ν = dim ker T^n. Denote by V the subspace spanned by the vectors e_1, ..., e_{n−1}. Observe that V is an invariant subspace of T, i.e., T V ⊂ V. If we denote by S the restriction of T to V, we can regard S as a linear operator S : V → V. The operator S is triangulable because in the basis (e_1, ..., e_{n−1}) of V it is represented by the upper triangular matrix

B =
[ λ_1   ∗    ···    ∗     ]
[  0   λ_2   ···    ∗     ]
[  ⋮          ⋱     ⋮     ]
[  0    0    ···  λ_{n−1} ]

Denote by µ the number of times 0 appears on the diagonal of B. The induction hypothesis implies that

µ = dim ker S^{n−1} = dim ker S^n.

Clearly µ ≤ ν. Note that ker S^n ⊂ ker T^n, so that

µ = dim ker S^n ≤ dim ker T^n.

We distinguish two cases.
Case 1: λ_n ≠ 0. In this case we have µ = ν, so it suffices to show that

ker T^n ⊂ V.

Indeed, if that were the case, we would conclude that ker T^n ⊂ ker S^n, and thus

dim ker T^n = dim ker S^n = µ = ν.

We argue by contradiction. Suppose that there exists u ∈ ker T^n such that u ∉ V. Then we can find v ∈ V and c ∈ F \ {0} such that u = v + c e_n. Note that T^n v ∈ V and T e_n = λ_n e_n + vector in V. Thus

T^n (c e_n) = c λ_n^n e_n + vector in V,

so that

T^n u = c λ_n^n e_n + vector in V ≠ 0.

This contradiction completes the discussion of Case 1.

Case 2: λ_n = 0. In this case we have ν = µ + 1, so we have to show that

dim ker T^n = µ + 1.

We need an auxiliary result.

Lemma 2.14. There exists u ∈ U \ V such that T^n u = 0, so that

dim(V + ker T^n) ≥ dim V + 1 = n. (2.5)

Proof. Set v_n := T e_n. Observe that v_n ∈ V because λ_n = 0. From (2.3b) we deduce that R(S^{n−1}) = R(S^n), so that there exists v_0 ∈ V such that S^{n−1} v_n = S^n v_0. Set u := e_n − v_0. Note that u ∈ U \ V and

T u = v_n − T v_0 = v_n − S v_0.

Now observe that

T^n u = T^{n−1}(v_n − S v_0) = S^{n−1} v_n − S^n v_0 = 0. □

We conclude that

n = dim U ≥ dim(V + ker T^n) ≥ n,

where the last inequality is (2.5), which shows that dim(V + ker T^n) = n. Hence

n = dim(V + ker T^n) = dim(ker T^n) + dim V − dim(V ∩ ker T^n) = dim(ker T^n) + (n − 1) − µ,

where we used dim V = n − 1 and dim(V ∩ ker T^n) = dim ker S^n = µ. Therefore

dim ker T^n = µ + 1 = ν.
This proves (a). The equalities (2.4a), (2.4b), (2.4c) follow easily from (a). □

In the remainder of this section we will assume that F is the field of complex numbers, C. Suppose that U is a complex vector space and T ∈ L(U) is a linear operator. We already know that T is triangulable, and we deduce from the above theorem the following important consequence.

Corollary 2.15. Suppose that T is a linear operator on the complex vector space U. Then

det T = Π_{λ∈spec(T)} λ^{m_λ(T)},   P_T(x) = Π_{λ∈spec(T)} (x − λ)^{m_λ(T)},   dim U = Σ_{λ∈spec(T)} m_λ(T).

For any polynomial with complex coefficients

p(x) = a_0 + a_1 x + ··· + a_n x^n ∈ C[x]

and any linear operator T on a complex vector space U we set

p(T) = a_0 1 + a_1 T + ··· + a_n T^n.

Note that if p(x), q(x) ∈ C[x], and if we set r(x) = p(x)q(x), then r(T) = p(T)q(T).

Theorem 2.16 (Cayley-Hamilton). Suppose T is a linear operator on the complex vector space U. If P_T(x) is the characteristic polynomial of T, then P_T(T) = 0.

Proof. Fix a basis e = (e_1, ..., e_n) in which T is represented by an upper triangular matrix A with diagonal entries λ_1, ..., λ_n. Note that

P_T(x) = det(x1 − T) = Π_{j=1}^n (x − λ_j),

so that

P_T(T) = Π_{j=1}^n (T − λ_j 1).

For j = 1, ..., n we define

U_j := span{e_1, ..., e_j},

and we set U_0 = {0}. Note that for any j = 1, ..., n we have

(T − λ_j 1)U_j ⊂ U_{j−1}.

Thus

Π_{j=1}^n (T − λ_j 1)U_n = Π_{j=1}^{n−1} (T − λ_j 1)( (T − λ_n 1)U_n )
⊂ Π_{j=1}^{n−1} (T − λ_j 1)U_{n−1} ⊂ Π_{j=1}^{n−2} (T − λ_j 1)U_{n−2} ⊂ ··· ⊂ (T − λ_1 1)U_1 ⊂ {0}.

In other words, P_T(T)u = 0 for all u ∈ U. □

Example 2.17. Consider the 2 × 2 matrix

A = [ 3  −2 ]
    [ 2  −1 ]

Its characteristic polynomial is

P_A(x) = det(xI − A) = det [ x − 3     2   ] = (x − 3)(x + 1) + 4 = x² − 2x − 3 + 4 = x² − 2x + 1.
                           [  −2    x + 1  ]

The Cayley-Hamilton theorem shows that

A² − 2A + I = 0.

Let us verify this directly. We have

A² = [ 5  −4 ]   and   2A − I = [ 5  −4 ]
     [ 4  −3 ]                  [ 4  −3 ]

so that indeed A² − 2A + I = 0. We can rewrite the last equality as

A² = 2A − I,

so that

A^{n+2} = 2A^{n+1} − A^n.

We can rewrite this as

A^{n+2} − A^{n+1} = A^{n+1} − A^n = A^n − A^{n−1} = ··· = A − I.

Hence

A^n = (A^n − A^{n−1}) + (A^{n−1} − A^{n−2}) + ··· + (A − I) + I = n(A − I) + I = nA − (n − 1)I.

2.4. The Jordan normal form of a complex operator. Let U be a complex n-dimensional vector space and T : U → U a linear operator. For each eigenvalue λ ∈ spec(T) we denote by E_λ(T) the corresponding generalized eigenspace, i.e.,

u ∈ E_λ(T) ⟺ ∃k > 0 : (T − λ1)^k u = 0.

From Proposition 2.12 we know that E_λ(T) is an invariant subspace of T. Suppose that the spectrum of T consists of l distinct eigenvalues,

spec(T) = { λ_1, ..., λ_l }.

Proposition 2.18.

U = E_{λ_1}(T) ⊕ ··· ⊕ E_{λ_l}(T).
Proof. It suffices to show that

U = E_{λ_1}(T) + ··· + E_{λ_l}(T) (2.6a)

and

dim U = dim E_{λ_1}(T) + ··· + dim E_{λ_l}(T). (2.6b)

The equality (2.6b) follows from (2.4c) since

dim U = m_{λ_1}(T) + ··· + m_{λ_l}(T) = dim E_{λ_1}(T) + ··· + dim E_{λ_l}(T),

so we only need to prove (2.6a). Set

V := E_{λ_1}(T) + ··· + E_{λ_l}(T) ⊂ U.

We have to show that V = U. Note that since each of the generalized eigenspaces E_λ(T) is an invariant subspace of T, so is their sum V. Denote by S the restriction of T to V, which we regard as an operator S : V → V. If λ ∈ spec(T) and v ∈ E_λ(T) ⊂ V, then

(S − λ1)^k v = (T − λ1)^k v = 0

for some k > 0. Thus λ is also an eigenvalue of S and v is also a generalized eigenvector of S. This proves that

spec(T) ⊂ spec(S)   and   E_λ(T) ⊂ E_λ(S), ∀λ ∈ spec(T).

In particular, this implies that

dim U = Σ_{λ∈spec(T)} dim E_λ(T) ≤ Σ_{µ∈spec(S)} dim E_µ(S) = dim V ≤ dim U.

This shows that dim V = dim U and thus V = U. □

For any λ ∈ spec(T) we denote by S_λ the restriction of T to the generalized eigenspace E_λ(T). Since this is an invariant subspace of T we can regard S_λ as a linear operator S_λ : E_λ(T) → E_λ(T). Arguing as in the proof of the above proposition we deduce that E_λ(T) is also a generalized eigenspace of S_λ. Thus, the spectrum of S_λ consists of the single eigenvalue λ and

E_λ(T) = E_λ(S_λ) = ker(λ1 − S_λ)^{dim E_λ(T)} = ker(λ1 − S_λ)^{m_λ(T)}.

Thus, for any u ∈ E_λ(T) we have (λ1 − S_λ)^{m_λ(T)} u = 0, i.e.,

(S_λ − λ1)^{m_λ(T)} = (−1)^{m_λ(T)} (λ1 − S_λ)^{m_λ(T)} = 0.

Definition 2.19. A linear operator N : U → U is called nilpotent if N^k = 0 for some k > 0.

If we set N_λ = S_λ − λ1 we deduce that the operator N_λ is nilpotent.
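The decomposition above can be checked on a small block-triangular example: the generalized eigenspace dimensions add up to dim U. A numerical sketch, assuming numpy is available:

```python
import numpy as np

# Block triangular T: a 2x2 Jordan block with eigenvalue 1, a 1x1 block with 4,
# so U = C^3 should split as E_1(T) + E_4(T) with dimensions 2 and 1.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 4.0]])
n = A.shape[0]
dims = {}
for lam in (1.0, 4.0):
    M = np.linalg.matrix_power(lam * np.eye(n) - A, n)
    dims[lam] = int(n - np.linalg.matrix_rank(M))   # dim E_lam(T)
print(dims)                  # {1.0: 2, 4.0: 1}
print(sum(dims.values()))    # 3 = dim U: the generalized eigenspaces fill up U
```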
Definition 2.20. Let N : U → U be a nilpotent operator on a finite dimensional complex vector space U. A tower of N is an ordered collection T of vectors

u_1, u_2, ..., u_k ∈ U

satisfying the equalities

N u_1 = 0, N u_2 = u_1, ..., N u_k = u_{k−1}.

The vector u_1 is called the bottom of the tower, the vector u_k is called the top of the tower, while the integer k is called the height of the tower.

In Figure 1 we depicted a tower of height 4. Observe that the vectors in a tower are generalized eigenvectors of the corresponding nilpotent operator.

FIGURE 1. Pancaking a tower of height 4.

Towers interact in a rather pleasant way.

Proposition 2.21. Suppose that N : U → U is a nilpotent operator on a complex vector space U and T_1, ..., T_r are towers of N with bottoms b_1, ..., b_r. If the bottom vectors b_1, ..., b_r are linearly independent, then the following hold.

(i) The towers T_1, ..., T_r are mutually disjoint, i.e., T_i ∩ T_j = ∅ if i ≠ j.

(ii) The union T = T_1 ∪ ··· ∪ T_r is a linearly independent family of vectors.

Proof. Denote by k_i the height of the tower T_i and set k = k_1 + ··· + k_r. We will argue by induction on k, the sum of the heights of the towers. For k = 1 the result is trivially true. Assume the result is true for all collections of towers with total height < k and linearly independent bottoms, and let us prove that it is true for collections of towers with total height = k. Denote by V the subspace spanned by the union T. It is an invariant subspace of N, and we denote by S the restriction of N to V. We regard S as a linear operator S : V → V. Denote by T_i′ the tower obtained by removing the top of the tower T_i, and set (see Figure 2)

T′ = T_1′ ∪ ··· ∪ T_r′.

Note that

R(S) = span(T′). (2.7)
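A tower in the sense of the definition above is what matrix-oriented texts often call a Jordan chain. For the 3 × 3 shift matrix one can build a tower of height 3 explicitly and check that it is linearly independent (numpy assumed available):

```python
import numpy as np

# Shift operator on C^3: a nilpotent N with N e1 = 0, N e2 = e1, N e3 = e2.
N = np.diag(np.ones(2), k=1)
u3 = np.array([0.0, 0.0, 1.0])   # top of the tower
u2 = N @ u3                      # N u3 = u2
u1 = N @ u2                      # N u2 = u1
print(N @ u1)                    # [0. 0. 0.]: u1 is the bottom of the tower
chain = np.column_stack([u1, u2, u3])
print(np.linalg.matrix_rank(chain))   # 3: the tower is linearly independent
```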
More informationVector and Matrix Norms
Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty
More information( ) which must be a vector
MATH 37 Linear Transformations from Rn to Rm Dr. Neal, WKU Let T : R n R m be a function which maps vectors from R n to R m. Then T is called a linear transformation if the following two properties are
More informationMethods for Finding Bases
Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,
More informationMath 312 Homework 1 Solutions
Math 31 Homework 1 Solutions Last modified: July 15, 01 This homework is due on Thursday, July 1th, 01 at 1:10pm Please turn it in during class, or in my mailbox in the main math office (next to 4W1) Please
More informationCONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation
Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in
More information1 Introduction to Matrices
1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns
More informationOrthogonal Bases and the QR Algorithm
Orthogonal Bases and the QR Algorithm Orthogonal Bases by Peter J Olver University of Minnesota Throughout, we work in the Euclidean vector space V = R n, the space of column vectors with n real entries
More information17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function
17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function, : V V R, which is symmetric, that is u, v = v, u. bilinear, that is linear (in both factors):
More informationA note on companion matrices
Linear Algebra and its Applications 372 (2003) 325 33 www.elsevier.com/locate/laa A note on companion matrices Miroslav Fiedler Academy of Sciences of the Czech Republic Institute of Computer Science Pod
More informationGROUP ALGEBRAS. ANDREI YAFAEV
GROUP ALGEBRAS. ANDREI YAFAEV We will associate a certain algebra to a finite group and prove that it is semisimple. Then we will apply Wedderburn s theory to its study. Definition 0.1. Let G be a finite
More informationSolutions to Math 51 First Exam January 29, 2015
Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not
More information3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices.
Exercise 1 1. Let A be an n n orthogonal matrix. Then prove that (a) the rows of A form an orthonormal basis of R n. (b) the columns of A form an orthonormal basis of R n. (c) for any two vectors x,y R
More informationModélisation et résolutions numérique et symbolique
Modélisation et résolutions numérique et symbolique via les logiciels Maple et Matlab Jeremy Berthomieu Mohab Safey El Din Stef Graillat Mohab.Safey@lip6.fr Outline Previous course: partial review of what
More informationThe Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression
The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonal-diagonal-orthogonal type matrix decompositions Every
More informationLinear Algebra Notes
Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note
More informationFinite dimensional C -algebras
Finite dimensional C -algebras S. Sundar September 14, 2012 Throughout H, K stand for finite dimensional Hilbert spaces. 1 Spectral theorem for self-adjoint opertors Let A B(H) and let {ξ 1, ξ 2,, ξ n
More informationI. GROUPS: BASIC DEFINITIONS AND EXAMPLES
I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called
More informationDETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH
DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH CHRISTOPHER RH HANUSA AND THOMAS ZASLAVSKY Abstract We investigate the least common multiple of all subdeterminants,
More information5. Linear algebra I: dimension
5. Linear algebra I: dimension 5.1 Some simple results 5.2 Bases and dimension 5.3 Homomorphisms and dimension 1. Some simple results Several observations should be made. Once stated explicitly, the proofs
More informationTheory of Matrices. Chapter 5
Chapter 5 Theory of Matrices As before, F is a field We use F[x] to represent the set of all polynomials of x with coefficients in F We use M m,n (F) and M m,n (F[x]) to denoted the set of m by n matrices
More informationInner products on R n, and more
Inner products on R n, and more Peyam Ryan Tabrizian Friday, April 12th, 2013 1 Introduction You might be wondering: Are there inner products on R n that are not the usual dot product x y = x 1 y 1 + +
More informationIntroduction to Matrix Algebra
Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary
More informationLinear Algebra I. Ronald van Luijk, 2012
Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.
More information1 0 5 3 3 A = 0 0 0 1 3 0 0 0 0 0 0 0 0 0 0
Solutions: Assignment 4.. Find the redundant column vectors of the given matrix A by inspection. Then find a basis of the image of A and a basis of the kernel of A. 5 A The second and third columns are
More informationMath 550 Notes. Chapter 7. Jesse Crawford. Department of Mathematics Tarleton State University. Fall 2010
Math 550 Notes Chapter 7 Jesse Crawford Department of Mathematics Tarleton State University Fall 2010 (Tarleton State University) Math 550 Chapter 7 Fall 2010 1 / 34 Outline 1 Self-Adjoint and Normal Operators
More informationNotes on Orthogonal and Symmetric Matrices MENU, Winter 2013
Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,
More information7 Gaussian Elimination and LU Factorization
7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method
More informationCS3220 Lecture Notes: QR factorization and orthogonal transformations
CS3220 Lecture Notes: QR factorization and orthogonal transformations Steve Marschner Cornell University 11 March 2009 In this lecture I ll talk about orthogonal matrices and their properties, discuss
More information4.5 Linear Dependence and Linear Independence
4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then
More informationx1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.
Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability
More informationHow To Prove The Dirichlet Unit Theorem
Chapter 6 The Dirichlet Unit Theorem As usual, we will be working in the ring B of algebraic integers of a number field L. Two factorizations of an element of B are regarded as essentially the same if
More informationGENERATING SETS KEITH CONRAD
GENERATING SETS KEITH CONRAD 1 Introduction In R n, every vector can be written as a unique linear combination of the standard basis e 1,, e n A notion weaker than a basis is a spanning set: a set of vectors
More informationThe Ideal Class Group
Chapter 5 The Ideal Class Group We will use Minkowski theory, which belongs to the general area of geometry of numbers, to gain insight into the ideal class group of a number field. We have already mentioned
More information1 Determinants and the Solvability of Linear Systems
1 Determinants and the Solvability of Linear Systems In the last section we learned how to use Gaussian elimination to solve linear systems of n equations in n unknowns The section completely side-stepped
More informationMATH 4330/5330, Fourier Analysis Section 11, The Discrete Fourier Transform
MATH 433/533, Fourier Analysis Section 11, The Discrete Fourier Transform Now, instead of considering functions defined on a continuous domain, like the interval [, 1) or the whole real line R, we wish
More informationIdeal Class Group and Units
Chapter 4 Ideal Class Group and Units We are now interested in understanding two aspects of ring of integers of number fields: how principal they are (that is, what is the proportion of principal ideals
More information15.062 Data Mining: Algorithms and Applications Matrix Math Review
.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop
More information