
Section 5.1:

Defn 1. A linear operator $T : V \to V$ on a finite-dimensional vector space $V$ is called diagonalizable if there is an ordered basis $\beta$ for $V$ such that ${}_\beta[T]_\beta$ is a diagonal matrix. A square matrix $A$ is called diagonalizable if $L_A$ is diagonalizable.

Note 1. If $A = {}_B[T]_B$ where $T = L_A$, then to find $\beta$ such that ${}_\beta[I]_B \, {}_B[T]_B \, {}_B[I]_\beta = {}_\beta[T]_\beta$ is diagonal is the same thing as finding an invertible $S$ such that $S^{-1}AS$ is diagonal.

Defn 2. Let $T$ be a linear operator on $V$. A non-zero $v \in V$ is called an eigenvector of $T$ if there exists $\lambda \in F$ such that $T(v) = \lambda v$. The scalar $\lambda$ is called the eigenvalue corresponding to the eigenvector $v$. For $A \in M_n(F)$, a vector $v \in F^n$, $v \neq 0$, is called an eigenvector of $A$ if $v$ is an eigenvector of $L_A$; that is, $Av = \lambda v$.

Theorem 5.1. A linear operator $T$ on a finite-dimensional vector space $V$ is diagonalizable if and only if there is an ordered basis $\beta$ for $V$ consisting of eigenvectors of $T$. Furthermore, if $T$ is diagonalizable and $\beta = \{v_1, v_2, \dots, v_n\}$ is an ordered basis of eigenvectors of $T$, then for $D = {}_\beta[T]_\beta$, $D$ is a diagonal matrix and $D_{j,j}$ is the e-val corresponding to $v_j$, $1 \le j \le n$.

Proof. If $T$ is diagonalizable, then
$${}_\beta[T]_\beta = \begin{pmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{pmatrix}$$
for some basis $\beta = \{v_1, v_2, \dots, v_n\}$, which means that for all $i$, $T(v_i) = \lambda_i v_i$. So each $\lambda_i$ is an eigenvalue corresponding to the eigenvector $v_i$. Conversely, if there exists an ordered basis $\beta = \{v_1, v_2, \dots, v_n\}$ such that each $v_i$ is an e-vctr of $T$, then there exist $\lambda_1, \lambda_2, \dots, \lambda_n$ such that $T(v_i) = \lambda_i v_i$, and so ${}_\beta[T]_\beta = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$.

Example 1.
$${}_B[T]_B = \begin{pmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{pmatrix},$$
where $B = \{(1,0), (0,1)\}$. If $\alpha = \pi/2$, $T$ has no eigenvectors, because the linear transformation is rotation by $\alpha$. But if $\alpha = \pi$, then it does have e-vctrs $(1,0)$ and $(0,1)$. Yet the rotation by $\pi/2$ is invertible (rotate by the negative angle), so invertible and diagonalizable are not the same.
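As a numerical companion (my own illustration, not part of the notes; the matrix $A$ is an arbitrary example with distinct eigenvalues), the following numpy sketch checks Note 1 and Theorem 5.1, and shows that the rotation of Example 1 has no real eigenvalues when $\alpha = \pi/2$:

```python
import numpy as np

# Note 1 / Theorem 5.1: with S built from eigenvectors of A, S^{-1} A S is
# diagonal.  (Example matrix chosen arbitrarily; its eigenvalues are 5 and 2.)
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, S = np.linalg.eig(A)          # columns of S are eigenvectors of A
D = np.linalg.inv(S) @ A @ S
assert np.allclose(D, np.diag(eigvals))

# Example 1: rotation by pi/2 has only the complex eigenvalues +-i,
# so it has no real eigenvectors, yet it is clearly invertible.
alpha = np.pi / 2
R = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])
print(np.linalg.eigvals(R))            # [0.+1.j, 0.-1.j]
```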

Theorem 5.2. Let $A \in M_n(F)$. Then $\lambda$ is an e-val of $A$ if and only if $\det(A - \lambda I_n) = 0$.

Proof. $\lambda$ is an e-val of $A$ $\iff$ there exists $v \neq 0$ such that $Av = \lambda v$ $\iff$ there exists $v \neq 0$ such that $Av - \lambda v = 0$ $\iff$ there exists $v \neq 0$ such that $(A - \lambda I_n)v = 0$ $\iff$ $\det(A - \lambda I_n) = 0$.

Defn 3. For a matrix $A \in M_n(F)$, $f(t) = \det(A - tI_n)$ is the characteristic polynomial of $A$.

Defn 4. Let $T$ be a linear operator on an $n$-dimensional vector space $V$ with ordered basis $\beta$. We define the characteristic polynomial $f(t)$ of $T$ to be the characteristic polynomial of $A = {}_\beta[T]_\beta$. That is, $f(t) = \det(A - tI_n)$.

Note 2. Similar matrices have the same characteristic polynomial, since if $B = S^{-1}AS$, then
$$\det(B - tI_n) = \det(S^{-1}AS - tI_n) = \det(S^{-1}AS - tS^{-1}S) = \det(S^{-1}(A - tI_n)S)$$
$$= \det(S^{-1})\det(A - tI_n)\det(S) = \frac{1}{\det(S)}\det(A - tI_n)\det(S) = \det(A - tI_n).$$
So characteristic polynomials are similarity invariants. If $B_1$ and $B_2$ are two bases of $V$, then ${}_{B_1}[T]_{B_1}$ is similar to ${}_{B_2}[T]_{B_2}$, and so we see that the definition of the characteristic polynomial of $T$ does not depend on the basis used in the representation. So we may say $f(t) = \det(T - tI_n)$.
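A quick sanity check of Note 2 (a sketch under my own choice of random matrices): numpy's `poly` returns the coefficients of $\det(tI - A)$, which agrees with $\det(A - tI)$ up to the factor $(-1)^n$, so similarity invariance can be observed directly:

```python
import numpy as np

# Note 2: similar matrices share a characteristic polynomial.
# np.poly(A) gives the coefficients of det(tI - A) = (-1)^n det(A - tI).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
S = rng.standard_normal((4, 4))        # generically invertible
B = np.linalg.inv(S) @ A @ S           # B is similar to A
assert np.allclose(np.poly(A), np.poly(B))
```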

Theorem 5.3. Let $A \in M_n(F)$.
(1) The characteristic polynomial of $A$ is a polynomial of degree $n$ with leading coefficient $(-1)^n$.
(2) $A$ has at most $n$ distinct eigenvalues.

Proof. First we prove (1). We need a slightly stronger statement for our induction to be of use. Claim: if $B$ is a square $n \times n$ matrix such that, for some permutation $\theta \in S_n$ and some subset $K \subseteq [n]$,
$$(B)_{i,j} = \begin{cases} b_{i,\theta(i)} - t, & i \in K,\ j = \theta(i), \\ b_{i,j}, & i \in K,\ j \neq \theta(i), \\ b_{i,j}, & i \notin K, \end{cases}$$
where each $b_{i,j}$ ($i, j \in [n]$) is a scalar, then $\det(B)$ is a polynomial in $t$ of degree at most $|K|$; furthermore, if $K = [n]$ and $\theta = \mathrm{id}$ (so that the diagonal entries of $B$ are of the form $b_{i,i} - t$), then $\det(B)$ has degree exactly $n$ with leading coefficient $(-1)^n$.

The proof of the claim is by induction on $n$. The base case is $n = 1$: for $A = [a_{1,1}]$ and $B = A - tI_1$, $\det(B) = a_{1,1} - t$, a polynomial of degree 1 with leading coefficient $(-1)^1$; and for $B = A$, $\det(B) = a_{1,1}$, a polynomial of degree 0.

Assume $n > 1$ and that the claim holds for $(n-1) \times (n-1)$ matrices, and let $B$ satisfy the hypothesis of the claim. We compute $\det(B)$ by expanding along row 1:
$$\det(B) = \sum_{i=1}^{n} (-1)^{1+i} (B)_{1,i} \det(B(1|i)),$$
where $B(1|i)$ denotes $B$ with row 1 and column $i$ deleted. Each $B(1|i)$ is an $(n-1) \times (n-1)$ matrix with $|K|$ or $|K| - 1$ entries of the form $b_{r,s} - t$, and it satisfies the induction hypothesis.

If there is an $i \in [n]$ such that $(B)_{1,i}$ is of the form $b_{1,i} - t$ (that is, $1 \in K$ and $i = \theta(1)$), then $B(1|i)$ has $|K| - 1$ entries of the form $b_{r,s} - t$, so $\det(B(1|i))$ has degree at most $|K| - 1$; likewise, for $j \neq i$, $B(1|j)$ has at most $|K| - 1$ such entries, so $\det(B(1|j))$ has degree at most $|K| - 1$. Hence $\det(B)$ has degree at most $1 + (|K| - 1) = |K|$. If additionally $K = [n]$ and $\theta = \mathrm{id}$, then $(B)_{1,1}$ is of the form $b_{1,1} - t$ and $B(1|1)$ has all of its diagonal entries of the form $b_{i,i} - t$; by induction $\det(B(1|1))$ has degree exactly $n - 1$ with leading coefficient $(-1)^{n-1}$, while the remaining terms have degree at most $n - 2$, so $\det(B)$ has degree exactly $n$ and its leading coefficient is $(-1)(-1)^{n-1} = (-1)^n$.

If instead every $(B)_{1,i}$ is of the form $b_{1,i}$ (that is, $1 \notin K$), then each $B(1|i)$ has $|K|$ or $|K| - 1$ entries of the form $b_{r,s} - t$ and satisfies the induction hypothesis, so every term has degree at most $|K|$ and again $\det(B)$ has degree at most $|K|$. This proves the claim.

Applying the claim to $B = A - tI_n$, with $K = [n]$ and $\theta = \mathrm{id}$, shows that the characteristic polynomial of $A$ has degree $n$ with leading coefficient $(-1)^n$, which is (1).

Now (2) follows from the fact from algebra that a polynomial of degree $n$ over a field can have at most $n$ roots.

Theorem 5.4. Let $T$ be a linear operator on a vector space $V$ and let $\lambda$ be an e-val of $T$. A vector $v \in V$ is an e-vctr of $T$ corresponding to $\lambda$ if and only if $v \neq 0$ and $v \in N(T - \lambda I)$.

Proof. $v$ is an e-vctr of $T$ corresponding to $\lambda$ $\iff$ $v \neq 0$ and $T(v) = \lambda v$ $\iff$ $v \neq 0$ and $T(v) - \lambda v = 0$ $\iff$ $v \neq 0$ and $(T - \lambda I)(v) = 0$ $\iff$ $v \neq 0$ and $v \in N(T - \lambda I)$.
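A small numerical illustration of Theorems 5.2 and 5.4 (the symmetric test matrix is my own choice): each eigenvalue $\lambda$ makes $\det(A - \lambda I)$ vanish, and each eigenvector lies in $N(A - \lambda I)$:

```python
import numpy as np

# Theorem 5.2: det(A - lambda I) = 0 at each eigenvalue.
# Theorem 5.4: eigenvectors are exactly the nonzero vectors of N(A - lambda I).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # eigenvalues 1 and 3
lams, V = np.linalg.eig(A)
for lam, v in zip(lams, V.T):
    assert np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0)
    assert np.allclose((A - lam * np.eye(2)) @ v, 0.0)
```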

Section 5.2.

Theorem 5.5. Let $T$ be a linear operator on a vector space $V$ and let $\lambda_1, \lambda_2, \dots, \lambda_k$ be distinct e-vals of $T$. If $v_1, v_2, \dots, v_k$ are e-vctrs of $T$ such that for all $i \in [k]$, $\lambda_i$ corresponds to $v_i$, then $\{v_1, v_2, \dots, v_k\}$ is a linearly independent set.

Proof. The proof is by induction on $k$. Let $k = 1$: $\{v_1\}$ is a linearly independent set since $v_1 \neq 0$. Assume $k > 1$ and the theorem holds for $k - 1$ distinct e-vals and e-vctrs. Now suppose $\lambda_1, \lambda_2, \dots, \lambda_k$ are distinct e-vals of $T$ and $v_1, v_2, \dots, v_k$ are e-vctrs of $T$ such that for all $i \in [k]$, $\lambda_i$ corresponds to $v_i$. We wish to show $\{v_1, v_2, \dots, v_k\}$ is a linearly independent set. Let
$$a_1 v_1 + a_2 v_2 + \cdots + a_k v_k = 0 \tag{1}$$
for some scalars $a_1, \dots, a_k$. Applying $T - \lambda_k I$ to both sides of the equation, we obtain
$$a_1 (T - \lambda_k I)(v_1) + a_2 (T - \lambda_k I)(v_2) + \cdots + a_k (T - \lambda_k I)(v_k) = 0. \tag{2}$$
For $i \in [k-1]$,
$$a_i (T - \lambda_k I)(v_i) = a_i \big(T(v_i) - \lambda_k I(v_i)\big) = a_i (\lambda_i v_i - \lambda_k v_i) = a_i (\lambda_i - \lambda_k) v_i,$$
and
$$a_k (T - \lambda_k I)(v_k) = a_k \big(T(v_k) - \lambda_k I(v_k)\big) = a_k (\lambda_k v_k - \lambda_k v_k) = 0.$$
So (2) becomes
$$a_1 (\lambda_1 - \lambda_k) v_1 + a_2 (\lambda_2 - \lambda_k) v_2 + \cdots + a_{k-1} (\lambda_{k-1} - \lambda_k) v_{k-1} = 0.$$
By induction, $\{v_1, v_2, \dots, v_{k-1}\}$ is a linearly independent set, and so for all $i \in [k-1]$, $a_i (\lambda_i - \lambda_k) = 0$. But since $\lambda_i - \lambda_k \neq 0$, it must be that $a_i = 0$. Now looking back at equation (1), we have $a_k v_k = 0$. But since $v_k$ is not the zero vector, it must be that $a_k = 0$ as well. Therefore $\{v_1, v_2, \dots, v_k\}$ is a linearly independent set.

Cor 1. Let $T$ be a linear operator on an $n$-dimensional vector space $V$. If $T$ has $n$ distinct e-vals, then $T$ is diagonalizable.

Proof. Suppose $\lambda_1, \lambda_2, \dots, \lambda_n$ are distinct e-vals of $T$ with corresponding e-vctrs $v_1, v_2, \dots, v_n$. By Theorem 5.5, $\{v_1, v_2, \dots, v_n\}$ is a linearly independent set of $n$ vectors in the $n$-dimensional space $V$, hence an ordered basis of eigenvectors. By Theorem 5.1, $T$ is diagonalizable.

Defn 5. A polynomial $f(t)$ in $P(F)$ splits over $F$ if there are scalars $c, a_1, \dots, a_n$ such that $f(t) = c(t - a_1)(t - a_2) \cdots (t - a_n)$.

Theorem 5.6. The characteristic polynomial of any diagonalizable linear operator splits.

Proof. Let $T$ be a diagonalizable linear operator on $V$, and suppose $\beta$ is a basis of $V$ such that $D = {}_\beta[T]_\beta$ is diagonal, say $D = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$. If $f(t)$ is the characteristic polynomial of $T$, then
$$f(t) = \det(D - tI) = \det\!\begin{pmatrix} \lambda_1 - t & & \\ & \ddots & \\ & & \lambda_n - t \end{pmatrix} = (\lambda_1 - t)(\lambda_2 - t) \cdots (\lambda_n - t).$$

Defn 6. Let $\lambda$ be an e-val of a linear operator (or matrix) with characteristic polynomial $f(t)$. The algebraic multiplicity (or just multiplicity) of $\lambda$ is the largest positive integer $k$ for which $(t - \lambda)^k$ is a factor of $f(t)$.

Defn 7. Let $T$ be a linear operator on a vector space $V$ and let $\lambda$ be an e-val of $T$. Define $E_\lambda = \{x \in V : T(x) = \lambda x\} = N(T - \lambda I)$. The set $E_\lambda$ is called the eigenspace of $T$ corresponding to $\lambda$. The eigenspace of a matrix $A \in M_n(F)$ is the e-space of $L_A$.

Fact 1. $E_\lambda$ is a subspace.

Proof. Let $a \in F$ and $x, y \in E_\lambda$. Then $T(ax + y) = aT(x) + T(y) = a\lambda x + \lambda y = \lambda(ax + y)$, so $ax + y \in E_\lambda$.
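Since $E_\lambda = N(A - \lambda I)$ (Defn 7), a basis for an eigenspace can be extracted from a null-space computation. A minimal sketch, assuming numpy and an SVD-based null space (the helper name `eigenspace` is mine):

```python
import numpy as np

def eigenspace(A, lam, tol=1e-10):
    """Columns form a basis of E_lam = N(A - lam*I), computed via the SVD."""
    n = A.shape[0]
    _, s, Vt = np.linalg.svd(A - lam * np.eye(n))
    return Vt[s < tol].T               # right-singular vectors with sigma = 0

A = np.diag([2.0, 2.0, 3.0])
print(eigenspace(A, 2.0).shape[1])     # dim E_2 = 2
```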

Theorem 5.7. Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $\lambda$ be an e-val of $T$ having multiplicity $m$. Then $1 \le \dim(E_\lambda) \le m$.

Proof. Let $\{v_1, v_2, \dots, v_p\}$ be a basis of $E_\lambda$, and extend it to a basis $\beta = \{v_1, v_2, \dots, v_p, v_{p+1}, \dots, v_n\}$ of $V$. Let $A = [T]_\beta$. Then
$$A = \begin{pmatrix} \lambda I_p & B \\ O & C \end{pmatrix}$$
since for $i \le p$, $T(v_i) = \lambda v_i$. So
$$A - tI_n = \begin{pmatrix} (\lambda - t) I_p & B \\ O & C - tI_{n-p} \end{pmatrix}.$$
Expanding repeatedly on the first column, we see that $\det(A - tI_n) = (\lambda - t)^p \det(C - tI_{n-p}) = (\lambda - t)^p q(t)$. So the multiplicity of $\lambda$ is greater than or equal to $p$, and $\dim(E_\lambda) = p$.

Lemma 1. Let $T$ be a linear operator on a vector space $V$ and let $\lambda_1, \lambda_2, \dots, \lambda_k$ be distinct e-vals of $T$. For each $i = 1, 2, \dots, k$, let $v_i \in E_{\lambda_i}$. If $v_1 + v_2 + \cdots + v_k = 0$, then $v_i = 0$ for all $i \in [k]$.

Proof. Renumbering if necessary, suppose $v_i \neq 0$ for $1 \le i \le p$ and $v_i = 0$ for $p + 1 \le i \le k$. If $p \ge 1$, then $v_1 + v_2 + \cdots + v_p = 0$ is a nontrivial dependence among e-vctrs corresponding to distinct e-vals, which contradicts Theorem 5.5. Thus $v_i = 0$ for all $i \in [k]$.

Theorem 5.8. Let $T$ be a linear operator on a vector space $V$ and let $\lambda_1, \lambda_2, \dots, \lambda_k$ be distinct e-vals of $T$. For each $i = 1, 2, \dots, k$, let $S_i \subseteq E_{\lambda_i}$ be a finite linearly independent set. Then $S = S_1 \cup S_2 \cup \cdots \cup S_k$ is a linearly independent subset of $V$.

Proof. For all $i$, suppose $S_i = \{v_{i,1}, v_{i,2}, \dots, v_{i,n_i}\}$. Then $S = \{v_{i,j} : 1 \le i \le k,\ 1 \le j \le n_i\}$. Suppose there exist scalars $\{a_{i,j}\}$ such that
$$\sum_{i=1}^{k} \sum_{j=1}^{n_i} a_{i,j} v_{i,j} = 0.$$
For each $i$, let $w_i = \sum_{j=1}^{n_i} a_{i,j} v_{i,j}$. Then $w_i \in E_{\lambda_i}$ for all $i$, and $w_1 + w_2 + \cdots + w_k = 0$. So, by the Lemma, $w_i = 0$ for all $i$. But each $S_i$ is linearly independent, so for all $j$, $a_{i,j} = 0$.
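The inequality $\dim(E_\lambda) \le m$ in Theorem 5.7 can be strict; a standard example (mine, not from the notes) is the shear below, where $\lambda = 1$ has multiplicity 2 but a one-dimensional eigenspace:

```python
import numpy as np

# Theorem 5.7: here lambda = 1 has algebraic multiplicity m = 2, but
# dim(E_1) = n - rank(A - I) = 1, so A is not diagonalizable.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(2 - np.linalg.matrix_rank(A - np.eye(2)))   # 1 = dim(E_1) < 2 = m
```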

Theorem 5.9. Let $T$ be a linear operator on a finite-dimensional vector space $V$ such that the characteristic polynomial of $T$ splits, and let $\lambda_1, \lambda_2, \dots, \lambda_k$ be the distinct e-vals of $T$. Then
(1) $T$ is diagonalizable if and only if the multiplicity of $\lambda_i$ is equal to $\dim(E_{\lambda_i})$ for all $i$.
(2) If $T$ is diagonalizable and $\beta_i$ is an ordered basis for $E_{\lambda_i}$ for each $i$, then $\beta = \beta_1 \cup \beta_2 \cup \cdots \cup \beta_k$ is an ordered basis for $V$ consisting of eigenvectors of $T$.

Proof. For all $i \in [k]$, let $m_i$ be the multiplicity of $\lambda_i$, let $d_i = \dim(E_{\lambda_i})$, and let $\dim(V) = n$.

We show (2) first. Suppose $T$ is diagonalizable, and let $\beta_i$ be a basis for $E_{\lambda_i}$, $i \in [k]$. We know $\beta = \beta_1 \cup \beta_2 \cup \cdots \cup \beta_k$ is a linearly independent set by Theorem 5.8. By Theorem 5.1, there is a basis $\gamma$ of $V$ consisting of eigenvectors of $T$. Let $x \in V$. Then $x \in \mathrm{Span}(\gamma)$, so $x = a_1 v_1 + a_2 v_2 + \cdots + a_n v_n$ where $v_1, v_2, \dots, v_n$ are eigenvectors of $T$. Each $v_i$ lies in $E_{\lambda_j} = \mathrm{Span}(\beta_j)$ for some $j \in [k]$, so it can be expressed as a linear combination of vectors in $\beta_j$. Thus $x \in \mathrm{Span}(\beta)$, and we have $\mathrm{Span}(\beta) = V$.

Now we show (1). ($\Rightarrow$:) We know $d_i \le m_i$ for all $i$, by Theorem 5.7. But since by (2), $\beta$ is a basis, we have
$$n = d_1 + d_2 + \cdots + d_k \le m_1 + m_2 + \cdots + m_k = n.$$
Thus, by the squeeze principle, $d_1 + \cdots + d_k = m_1 + \cdots + m_k$, that is,
$$(m_1 - d_1) + (m_2 - d_2) + \cdots + (m_k - d_k) = 0.$$
But $m_i - d_i \ge 0$ for all $i \in [k]$, and so $m_i = d_i$.

($\Leftarrow$:) Suppose $m_i = d_i$ for all $i$. We know $m_1 + m_2 + \cdots + m_k = n$, since the characteristic polynomial of $T$ splits and by Theorem 5.3 $f(t)$ has degree $n$. Thus $d_1 + d_2 + \cdots + d_k = n$, and if $\beta_i$ is an ordered basis for $E_{\lambda_i}$ for each $i$, then by Theorem 5.8, $\beta = \beta_1 \cup \beta_2 \cup \cdots \cup \beta_k$ is linearly independent. And since $|\beta| = n$, by Corollary 2(b) to Theorem 1.10, $\beta$ is a basis of $V$. Then by Theorem 5.1, $T$ is diagonalizable.

Note 3. Test for diagonalization. A linear operator $T$ on a vector space $V$ of dimension $n$ is diagonalizable if and only if both of the following hold:
(1) The characteristic polynomial splits.
(2) For each eigenvalue $\lambda$ of $T$, the multiplicity of $\lambda$ equals the dimension of $E_\lambda$.
Notice that $E_\lambda = \{x : (T - \lambda I)(x) = 0\} = N(T - \lambda I)$ and $n = \mathrm{nullity}(T - \lambda I) + \mathrm{rank}(T - \lambda I)$, so $\dim(E_\lambda) = \mathrm{nullity}(T - \lambda I) = n - \mathrm{rank}(T - \lambda I)$. (A computational sketch of this test appears after Example 2 below.)

Proof. Assume $T$ is diagonalizable. By Theorem 5.6, the characteristic polynomial of $T$ splits, and then by Theorem 5.9, for each eigenvalue $\lambda$ of $T$, the multiplicity of $\lambda$ equals the dimension of $E_\lambda$. Conversely, if (1) and (2) hold, then by Theorem 5.9 again, $T$ is diagonalizable.

Defn 8. Let $W_1, W_2, \dots, W_k$ be subspaces of a vector space $V$. The sum is
$$\sum_{i=1}^{k} W_i = \{v_1 + v_2 + \cdots + v_k : v_i \in W_i,\ i \in [k]\}.$$

Fact 2. The sum is a subspace.

Defn 9. Let $W_1, W_2, \dots, W_k$ be subspaces of a vector space $V$. We call $V$ the direct sum of $W_1, W_2, \dots, W_k$, written $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$, if $V$ is the sum of $W_1, W_2, \dots, W_k$ and for all $j \in [k]$, $W_j \cap \sum_{i \neq j} W_i = \{0\}$.

Example 2. $V = \mathbb{R}^4$ with
$$W_1 = \{(a, b, 0, 0) : a, b \in \mathbb{R}\}, \quad W_2 = \{(0, 0, c, 0) : c \in \mathbb{R}\}, \quad W_3 = \{(0, 0, 0, d) : d \in \mathbb{R}\}.$$
Then $V = W_1 \oplus W_2 \oplus W_3$.
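The rank form of the test in Note 3 is easy to run numerically. A sketch (the helper is my own, and it works over $\mathbb{C}$, where the characteristic polynomial always splits):

```python
import numpy as np

def is_diagonalizable(A, tol=1e-8):
    """Note 3 over C: multiplicity of each lam must equal n - rank(A - lam*I)."""
    n = A.shape[0]
    lams = np.linalg.eigvals(A)
    for lam in lams:
        m = int(np.sum(np.abs(lams - lam) < tol))            # multiplicity
        d = n - np.linalg.matrix_rank(A - lam * np.eye(n))   # dim E_lam
        if m != d:
            return False
    return True

print(is_diagonalizable(np.diag([1.0, 2.0, 2.0])))  # True
print(is_diagonalizable(np.array([[1.0, 1.0],
                                  [0.0, 1.0]])))    # False (the shear above)
```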

Theorem 5.10. Let $W_1, W_2, \dots, W_k$ be subspaces of a finite-dimensional vector space $V$. The following are equivalent:
(1) $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$.
(2) $V = \sum_{i=1}^{k} W_i$ and, for any $v_i \in W_i$ ($i \in [k]$), if $v_1 + v_2 + \cdots + v_k = 0$ then $v_i = 0$ for all $i$.
(3) Each vector $v \in V$ can be written uniquely as $v = v_1 + v_2 + \cdots + v_k$, where $v_i \in W_i$ for $i \in [k]$.
(4) If $\gamma_i$ is an ordered basis for $W_i$ for each $i \in [k]$, then $\gamma_1 \cup \gamma_2 \cup \cdots \cup \gamma_k$ is an ordered basis for $V$.
(5) For each $i \in [k]$ there is an ordered basis $\gamma_i$ for $W_i$ such that $\gamma_1 \cup \gamma_2 \cup \cdots \cup \gamma_k$ is an ordered basis for $V$.

Proof. (1) $\Rightarrow$ (2): Suppose $v_i \in W_i$ for $i \in [k]$ and $v_1 + v_2 + \cdots + v_k = 0$. Let $1 \le i \le k$. Then $v_i = -\sum_{j \neq i} v_j$, so $v_i \in W_i \cap \sum_{j \neq i} W_j$. Since by (1), $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$, this intersection is $\{0\}$, and we conclude $v_i = 0$.

(2) $\Rightarrow$ (3): By (2), each vector $v \in V$ can be written as $v = v_1 + v_2 + \cdots + v_k$, where $v_i \in W_i$ for $i \in [k]$. To show uniqueness, suppose that $v = v_1 + v_2 + \cdots + v_k$ and $v = w_1 + w_2 + \cdots + w_k$, where $v_i, w_i \in W_i$ for $i \in [k]$. Then we have
$$v_1 + v_2 + \cdots + v_k = w_1 + w_2 + \cdots + w_k \implies (v_1 - w_1) + (v_2 - w_2) + \cdots + (v_k - w_k) = 0.$$
For each $i \in [k]$, $v_i - w_i \in W_i$, so by (2), $v_i - w_i = 0$ and we have $v_i = w_i$.

(3) $\Rightarrow$ (4): Let $i \in [k]$ and let $\gamma_i = \{w_{i,1}, w_{i,2}, \dots, w_{i,n_i}\}$ be an ordered basis for $W_i$. By (3), we know that $\gamma_1 \cup \gamma_2 \cup \cdots \cup \gamma_k$ spans $V$. To show linear independence, we suppose for some scalars $\{a_{i,j} : 1 \le i \le k,\ 1 \le j \le n_i\}$,
$$\sum_{i=1}^{k} \sum_{j=1}^{n_i} a_{i,j} w_{i,j} = 0.$$
Notice that for each $i \in [k]$, $\sum_{j=1}^{n_i} a_{i,j} w_{i,j} \in W_i$. We also have $0 = \sum_{i=1}^{k} 0$ with $0 \in W_i$ for each $i$. So by uniqueness, we have that $\sum_{j=1}^{n_i} a_{i,j} w_{i,j} = 0$ for each $i$. Now since $\gamma_i$ is linearly independent, it must be that $a_{i,j} = 0$ for all $j \in [n_i]$.

(4) $\Rightarrow$ (5): We know that for each $i \in [k]$, $W_i$ has a finite basis $\gamma_i$. Thus $\gamma_1 \cup \gamma_2 \cup \cdots \cup \gamma_k$ is an ordered basis for $V$ by (4).

(5) $\Rightarrow$ (1): Let $\gamma_i = \{w_{i,1}, w_{i,2}, \dots, w_{i,n_i}\}$ be the ordered basis for $W_i$, $i \in [k]$, given by (5). Since $\gamma_1 \cup \gamma_2 \cup \cdots \cup \gamma_k$ spans $V$, we have that $V = \sum_{i=1}^{k} W_i$. Let $j \in [k]$ and consider $v \in W_j \cap \sum_{i=1, i \neq j}^{k} W_i$. Since $v \in W_j$, we have that
$$v = a_{j,1} w_{j,1} + a_{j,2} w_{j,2} + \cdots + a_{j,n_j} w_{j,n_j}.$$
But also
$$v = \sum_{i=1, i \neq j}^{k} x_i$$
for some vectors $x_i \in W_i$, which are linear combinations of the vectors in $\gamma_i$. We then have
$$a_{j,1} w_{j,1} + a_{j,2} w_{j,2} + \cdots + a_{j,n_j} w_{j,n_j} - \sum_{i=1, i \neq j}^{k} x_i = 0,$$
and hence, by the linear independence of $\gamma_1 \cup \gamma_2 \cup \cdots \cup \gamma_k$, all of the coefficients of vectors in $\gamma_1 \cup \gamma_2 \cup \cdots \cup \gamma_k$ are zero. This implies that $v = 0$.
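Condition (5) gives a concrete numerical test: stack bases of the $W_i$ as columns and check that together they form a basis of $V$. A sketch for Example 2 (the column layout is my own):

```python
import numpy as np

# Example 2 via condition (5): the union of bases of W_1, W_2, W_3 is a
# basis of R^4 exactly when the stacked column matrix has full rank.
gamma = np.array([[1, 0, 0, 0],    # basis of W_1 ...
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],    # ... basis of W_2 ...
                  [0, 0, 0, 1]],   # ... basis of W_3
                 dtype=float).T    # columns = gamma_1 u gamma_2 u gamma_3
print(np.linalg.matrix_rank(gamma) == 4)   # True: R^4 is the direct sum
```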

Theorem 5.11. A linear operator $T$ on a finite-dimensional vector space $V$ is diagonalizable if and only if $V$ is the direct sum of the eigenspaces of $T$.

Proof. Let $\lambda_1, \lambda_2, \dots, \lambda_k$ be the distinct eigenvalues of $T$.

($\Rightarrow$) Let $T$ be diagonalizable, and for each $i \in [k]$ let $\gamma_i$ be an ordered basis of $E_{\lambda_i}$. By Theorem 5.9, $\gamma_1 \cup \gamma_2 \cup \cdots \cup \gamma_k$ is an ordered basis for $V$. By Theorem 5.10, $V$ is the direct sum of the $E_{\lambda_i}$'s.

($\Leftarrow$) Suppose $V = \bigoplus_{i=1}^{k} E_{\lambda_i}$. Choose a basis $\gamma_i$ for each $E_{\lambda_i}$. By Theorem 5.10, $\gamma_1 \cup \gamma_2 \cup \cdots \cup \gamma_k$ is an ordered basis for $V$. Since there is a basis for $V$ consisting of e-vctrs of $T$, $T$ is diagonalizable by Theorem 5.1.

Section 5.3: Skip.

Section 5.4: Invariant subspaces and the Cayley-Hamilton Theorem.

Defn 1. Let $T$ be a linear operator on a vector space $V$. A subspace $W$ of $V$ is called a $T$-invariant subspace of $V$ if $T(W) \subseteq W$.

Defn 2. If $T$ is a linear operator on $V$ and $W$ is a $T$-invariant subspace of $V$, then the restriction $T_W$ of $T$ to $W$ is a mapping from $W$ to $W$, and it follows that $T_W$ is a linear operator on $W$.

Lemma 2 (Exercise 21 from Section 4.3). If $M \in M_n(F)$ can be expressed as
$$M = \begin{pmatrix} A & B \\ O & C \end{pmatrix},$$
where $A \in M_r(F)$, $C \in M_s(F)$, $s + r = n$, and $O$ is the $s \times r$ matrix of all zeros, then $\det M = \det A \cdot \det C$.

Proof. The proof is by induction on $r$. If $r = 1$, we form $\det M$ by expanding on column 1; since the only nonzero entry of column 1 is $a_{1,1}$, we get $\det M = a_{1,1} \det M(1|1) = a_{1,1} \det C = \det A \cdot \det C$. Now assume the lemma holds for all such matrices $M$ in which $A$ is $(r-1) \times (r-1)$. Again, we expand on column 1 of $M$:
$$\det M = a_{1,1} \det M(1|1) - a_{2,1} \det M(2|1) + \cdots + (-1)^{r+1} a_{r,1} \det M(r|1).$$
For each $i$, $M(i|1)$ has the form
$$\begin{pmatrix} A(i|1) & B' \\ O & C \end{pmatrix},$$
where $B'$ is a submatrix of $B$. By induction, $\det M(i|1) = \det A(i|1) \cdot \det C$. So we have
$$\det M = a_{1,1} \det A(1|1) \det C - a_{2,1} \det A(2|1) \det C + \cdots + (-1)^{r+1} a_{r,1} \det A(r|1) \det C$$
$$= \det C \left( a_{1,1} \det A(1|1) - a_{2,1} \det A(2|1) + \cdots + (-1)^{r+1} a_{r,1} \det A(r|1) \right) = \det C \cdot \det A.$$
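A quick numerical spot-check of Lemma 2, with randomly chosen blocks (the sizes are arbitrary):

```python
import numpy as np

# Lemma 2: for block upper-triangular M = [[A, B], [O, C]],
# det M = det A * det C.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 3))
C = rng.standard_normal((3, 3))
M = np.block([[A, B], [np.zeros((3, 2)), C]])
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(C))
```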

Theorem 5.21. Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $W$ be a $T$-invariant subspace of $V$. Then the characteristic polynomial of $T_W$ divides the characteristic polynomial of $T$.

Proof. Choose an ordered basis $\gamma = \{v_1, v_2, \dots, v_k\}$ for $W$, and extend it to an ordered basis $\beta = \{v_1, v_2, \dots, v_k, v_{k+1}, \dots, v_n\}$ for $V$. Let $A = [T]_\beta$ and $B_1 = [T_W]_\gamma$. Observe that $A$ can be written in the form
$$A = \begin{pmatrix} B_1 & B_2 \\ O & B_3 \end{pmatrix}.$$
Let $f(t)$ be the characteristic polynomial of $T$ and $g(t)$ the characteristic polynomial of $T_W$. Then
$$f(t) = \det(A - tI_n) = \det\!\begin{pmatrix} B_1 - tI_k & B_2 \\ O & B_3 - tI_{n-k} \end{pmatrix} = g(t) \det(B_3 - tI_{n-k})$$
by the Lemma. Thus $g(t)$ divides $f(t)$.

The following was presented by N. Vankayalapati. He provided a handout.

Defn 3. Let $T$ be a linear operator on a vector space $V$ and let $x$ be a nonzero vector in $V$. The subspace $W = \mathrm{span}(\{x, T(x), T^2(x), \dots\})$ is called the $T$-cyclic subspace of $V$ generated by $x$.

Theorem 5.22. Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $W$ denote the $T$-cyclic subspace of $V$ generated by a nonzero vector $v \in V$. Let $k = \dim(W)$. Then
(a) $\{v, T(v), T^2(v), \dots, T^{k-1}(v)\}$ is a basis for $W$.
(b) If $a_0 v + a_1 T(v) + \cdots + a_{k-1} T^{k-1}(v) + T^k(v) = 0$, then the characteristic polynomial of $T_W$ is $f(t) = (-1)^k (a_0 + a_1 t + \cdots + a_{k-1} t^{k-1} + t^k)$.

The following was presented by J. Stockford. He provided a handout.

Theorem 5.23 (Cayley-Hamilton). Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $f(t)$ be the characteristic polynomial of $T$. Then $f(T) = T_0$, the zero transformation.

The following was presented by Q. Ding. He provided a handout.

Cor 1 (Cayley-Hamilton Theorem for Matrices). Let $A$ be an $n \times n$ matrix, and let $f(t)$ be the characteristic polynomial of $A$. Then $f(A) = O$, the zero matrix.

We did not cover the following theorems.

Theorem 5.24. Let $T$ be a linear operator on a finite-dimensional vector space $V$, and suppose that $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$, where $W_i$ is a $T$-invariant subspace of $V$ for each $i$ ($1 \le i \le k$). Suppose that $f_i(t)$ is the characteristic polynomial of $T_{W_i}$ ($1 \le i \le k$). Then $f_1(t) \cdot f_2(t) \cdots f_k(t)$ is the characteristic polynomial of $T$.
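The matrix form of Cayley-Hamilton is easy to verify numerically. A sketch (the test matrix is mine; `np.poly` returns the coefficients of $\det(tI - A)$, which has the same roots as $\det(A - tI)$, so it too vanishes at $A$):

```python
import numpy as np

# Cayley-Hamilton corollary: evaluating the characteristic polynomial at A
# gives the zero matrix.  Horner's rule evaluates the matrix polynomial.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
coeffs = np.poly(A)                    # here: t^2 - 5t - 2
f_A = np.zeros_like(A)
for c in coeffs:
    f_A = f_A @ A + c * np.eye(2)
assert np.allclose(f_A, 0.0)
```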

Defn 4. Let $B_1 \in M_m(F)$ and $B_2 \in M_n(F)$. We define the direct sum of $B_1$ and $B_2$, denoted $B_1 \oplus B_2$, as the $(m+n) \times (m+n)$ matrix $A$ such that
$$A_{i,j} = \begin{cases} (B_1)_{i,j} & \text{for } 1 \le i, j \le m, \\ (B_2)_{(i-m),(j-m)} & \text{for } m+1 \le i, j \le n+m, \\ 0 & \text{otherwise.} \end{cases}$$
If $B_1, B_2, \dots, B_k$ are square matrices with entries from $F$, then we define the direct sum of $B_1, B_2, \dots, B_k$ recursively by
$$B_1 \oplus B_2 \oplus \cdots \oplus B_k = (B_1 \oplus B_2 \oplus \cdots \oplus B_{k-1}) \oplus B_k.$$
If $A = B_1 \oplus B_2 \oplus \cdots \oplus B_k$, then we often write
$$A = \begin{pmatrix} B_1 & O & \cdots & O \\ O & B_2 & \cdots & O \\ \vdots & \vdots & \ddots & \vdots \\ O & O & \cdots & B_k \end{pmatrix}.$$

Theorem 5.25. Let $T$ be a linear operator on a finite-dimensional vector space $V$, and let $W_1, W_2, \dots, W_k$ be $T$-invariant subspaces of $V$ such that $V = W_1 \oplus W_2 \oplus \cdots \oplus W_k$. For each $i$, let $\beta_i$ be an ordered basis for $W_i$, and let $\beta = \beta_1 \cup \beta_2 \cup \cdots \cup \beta_k$. Let $A = [T]_\beta$ and $B_i = [T_{W_i}]_{\beta_i}$ for each $i$. Then $A = B_1 \oplus B_2 \oplus \cdots \oplus B_k$.
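A minimal sketch of Defn 4 (the helper name `direct_sum` is mine): building $B_1 \oplus \cdots \oplus B_k$ as a block-diagonal matrix:

```python
import numpy as np

def direct_sum(*blocks):
    """Defn 4: place the square blocks B_i along the diagonal, zeros elsewhere."""
    n = sum(b.shape[0] for b in blocks)
    A = np.zeros((n, n))
    pos = 0
    for b in blocks:
        k = b.shape[0]
        A[pos:pos + k, pos:pos + k] = b
        pos += k
    return A

print(direct_sum(np.array([[1.0, 2.0],
                           [3.0, 4.0]]),
                 np.array([[5.0]])))
```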
