590 Notes, July 8, 2014

Theorem (1.11). Let W be a subspace of a finite-dimensional vector space V. Then:

1. W is finite-dimensional, and $\dim(W) \le \dim(V)$.
2. If $\dim(W) = \dim(V)$, then W = V.

Proof of Theorem 1.11. Let $\dim(V) = n$. If $W = \{0\}$, then $\dim(W) = 0 \le n$. Otherwise, W contains a nonzero vector $v_1$, and $\{v_1\}$ is a linearly independent subset of W. We continue choosing vectors $v_2, v_3, \dots, v_k$, if possible, so that $\{v_1, v_2, \dots, v_k\}$ is linearly independent and adjoining any other vector from W results in a linearly dependent set. Since no linearly independent subset of V has more than n vectors, this process stops at some $k \le n$. Then $\{v_1, v_2, \dots, v_k\}$ is a basis for W, so $\dim(W) \le \dim(V)$. If $\dim(W) = n$, then a basis for W is a linearly independent subset of V with exactly n vectors, and thus is a basis for V, so W = V.

Example. Let W be the set of diagonal $n \times n$ matrices, a subspace of $M_{n \times n}(F)$. For $E^{pq} \in M_{n \times n}(F)$ the matrix with $(E^{pq})_{ij} = 1$ if $i = p$ and $j = q$, and $(E^{pq})_{ij} = 0$ otherwise, the set $\{E^{11}, E^{22}, \dots, E^{nn}\}$ is a basis for W. Therefore $\dim(W) = n$.

Corollary. If W is a subspace of a finite-dimensional vector space V, then any basis for W can be extended to a basis for V.

Proof. Since a basis for W is a linearly independent subset of V, part c) of Corollary 2 of the Replacement Theorem gives the result.

The Lagrange Interpolation Formula. Let $c_0, c_1, \dots, c_n$ be distinct scalars (i.e., no pair are equal) in an infinite field. Define the polynomials $f_0(x), f_1(x), \dots, f_n(x)$ in $P(F)$ via

$$f_i(x) = \frac{(x-c_0)(x-c_1)\cdots(x-c_{i-1})(x-c_{i+1})\cdots(x-c_n)}{(c_i-c_0)(c_i-c_1)\cdots(c_i-c_{i-1})(c_i-c_{i+1})\cdots(c_i-c_n)} = \prod_{k \ne i} \frac{x-c_k}{c_i-c_k}.$$

Note that these polynomials have degree n. For example, if our field is R and $c_0 = 1$, $c_1 = 2$, $c_2 = 4$, we get

$$f_0(x) = \frac{(x-2)(x-4)}{(1-2)(1-4)} = \frac{x^2 - 6x + 8}{3} = \frac{1}{3}x^2 - 2x + \frac{8}{3}.$$

Note that

$$f_i(c_j) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j. \end{cases}$$

Thus if we take a linear combination of the $f_i$'s and get zero,

$$\sum_{i=0}^{n} a_i f_i(x) = 0 \quad (\text{the zero polynomial in } P(F)),$$

then necessarily

$$\sum_{i=0}^{n} a_i f_i(c_j) = a_j = 0 \quad \text{for every } j = 0, 1, \dots, n.$$

This is only possible if $a_i = 0$ for $i = 0, 1, \dots, n$. Therefore the set $\{f_0, f_1, \dots, f_n\}$ is a set of $n+1$ linearly independent vectors in $P_n(F)$. Since $\dim(P_n(F)) = n + 1$, we conclude that $\{f_0, f_1, \dots, f_n\}$ is a basis of $P_n(F)$.

Suppose $g(x) \in P_n(F)$. What does g(x) look like in this basis? Well, if $g(x) = \sum_{i=0}^{n} a_i f_i(x)$, then $g(c_j) = \sum_{i=0}^{n} a_i f_i(c_j) = a_j$ for every $j = 0, 1, \dots, n$. Therefore, for any $g(x) \in P_n(F)$:

$$g(x) = \sum_{i=0}^{n} g(c_i) f_i(x). \tag{1}$$

We call (1) the Lagrange Interpolation Formula.
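Here is a small numerical sketch of formula (1), using the nodes $c_0 = 1$, $c_1 = 2$, $c_2 = 4$ from the worked example; the code is our addition to these notes, and the helper name and test polynomial are our own choices:

```python
# Numerical sketch of the Lagrange interpolation formula (1).
# The nodes c_0 = 1, c_1 = 2, c_2 = 4 match the worked example above.

def lagrange_basis(c, i, x):
    """Evaluate f_i(x) = product over k != i of (x - c_k)/(c_i - c_k)."""
    result = 1.0
    for k in range(len(c)):
        if k != i:
            result *= (x - c[k]) / (c[i] - c[k])
    return result

c = [1.0, 2.0, 4.0]

# Check f_i(c_j) = 1 when i == j and 0 otherwise.
for i in range(len(c)):
    print([round(lagrange_basis(c, i, cj), 10) for cj in c])

# Check that g(x) = sum_i g(c_i) f_i(x) for a sample g in P_2(R).
g = lambda x: 3.0 - 2.0 * x + 0.5 * x**2
x = 3.7
approx = sum(g(c[i]) * lagrange_basis(c, i, x) for i in range(len(c)))
print(approx, g(x))  # the two printed values agree
```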

We now state, without proof, a result of Section 1.7:

Theorem. Every vector space has a basis.

On to Chapter 2: Linear Transformations and Matrices.

Section 2.1: Linear Transformations, Null Spaces, and Ranges

A few words on notation:

1. Suppose a function f has domain A and codomain B. Then we write $f : A \to B$. If for $x \in A$ and $y \in B$ we have $f(x) = y$, we write $f : x \mapsto y$.
2. We use the symbol $\forall$ to denote "for all" or "for every". For example, "$\forall v \in V$" reads "for all v in V".

Definition. Let V and W be vector spaces over a field F. We call a function $T : V \to W$ a linear transformation from V to W if, for all $x, y \in V$ and $c \in F$:

1. T(x + y) = T(x) + T(y)
2. T(cx) = cT(x)

Facts:

1. If T is linear, then T(0) = 0.
2. T is linear if and only if T(cx + y) = cT(x) + T(y).
3. If T is linear, then T(x − y) = T(x) − T(y).

Examples of linear operators on $R^2$: projection, rotation, reflection. More explicitly, take reflection about the x-axis: $T : R^2 \to R^2$ via $T : (a, b) \mapsto (a, -b)$. T is linear because

$$T(c(a_1, b_1) + (a_2, b_2)) = T(ca_1 + a_2,\ cb_1 + b_2) = (ca_1 + a_2,\ -cb_1 - b_2) = c(a_1, -b_1) + (a_2, -b_2) = cT(a_1, b_1) + T(a_2, b_2).$$

Some other classic linear operators are differentiation on $P_n(R)$ and integration over an interval on $C(R)$.

Two particular linear transformations appear often and get their own names. For vector spaces V and W over the field F, we define:

- The identity transformation $I_V : V \to V$, defined by $I_V(x) = x$ for all $x \in V$.
- The zero transformation $T_0 : V \to W$, defined by $T_0(x) = 0$ for all $x \in V$.
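As a quick sanity check of Fact 2 above applied to the reflection example, here is a short numerical sketch (our addition; the random samples are arbitrary):

```python
# Check that reflection about the x-axis, T(a, b) = (a, -b), satisfies
# the single linearity condition T(cx + y) = cT(x) + T(y) on random inputs.
import random

def T(v):
    a, b = v
    return (a, -b)

for _ in range(100):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    c = random.uniform(-5, 5)
    lhs = T((c * x[0] + y[0], c * x[1] + y[1]))
    rhs = (c * T(x)[0] + T(y)[0], c * T(x)[1] + T(y)[1])
    assert abs(lhs[0] - rhs[0]) < 1e-9 and abs(lhs[1] - rhs[1]) < 1e-9
print("T(cx + y) = cT(x) + T(y) held on all samples")
```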

Definitions. Let V and W be vector spaces, and let $T : V \to W$ be linear. We define the null space (or kernel) N(T) of T to be the set of all vectors $x \in V$ such that $T(x) = 0$. We define the range (or image) R(T) to be the set of all $y \in W$ such that $y = T(x)$ for some $x \in V$; that is, $R(T) = \{T(x) : x \in V\}$.

Theorem (2.1). Let V and W be vector spaces and $T : V \to W$ be linear. Then N(T) is a subspace of V and R(T) is a subspace of W.

Proof. Let $0_V$ denote the zero vector in V and $0_W$ the zero vector in W. Since T is linear, $T(0_V) = 0_W$, so $0_V \in N(T)$. If $x, y \in N(T)$, then $T(x + y) = T(x) + T(y) = 0_W + 0_W = 0_W$, and if $c \in F$, $T(cx) = cT(x) = c\,0_W = 0_W$, so N(T) is a subspace of V. Similarly, since $T(0_V) = 0_W$, we have $0_W \in R(T)$. If $y_1, y_2 \in R(T)$, then there exist $x_1, x_2 \in V$ such that $y_1 + y_2 = T(x_1) + T(x_2) = T(x_1 + x_2)$, and if $c \in F$, $cy_1 = cT(x_1) = T(cx_1)$, so R(T) is a subspace of W.

Theorem (2.2). Let V and W be vector spaces, and let $T : V \to W$ be linear. If $\beta = \{v_1, v_2, \dots, v_n\}$ is a basis for V, then

$$R(T) = \mathrm{span}(T(\beta)) = \mathrm{span}(\{T(v_1), T(v_2), \dots, T(v_n)\}).$$

Proof. We see that $T(v_i) \in R(T)$ for all $v_i \in \beta$ by the definition of R(T). Since R(T) is a subspace, R(T) contains $\mathrm{span}(\{T(v_1), \dots, T(v_n)\}) = \mathrm{span}(T(\beta))$. Conversely, let $y \in R(T)$. Then there exists $x \in V$ such that $y = T(x)$. Write x in terms of the basis $\beta$, so $x = \sum_{i=1}^n a_i v_i$. Then $y = T(x) = T(\sum_{i=1}^n a_i v_i) = \sum_{i=1}^n a_i T(v_i)$. Therefore $R(T) \subseteq \mathrm{span}(\{T(v_1), \dots, T(v_n)\}) = \mathrm{span}(T(\beta))$.

Example. Define the linear transformation $T : P_2(R) \to M_{2 \times 2}(R)$ via

$$T(f(x)) = \begin{pmatrix} f(0) - f(1) & 0 \\ 0 & f(2) \end{pmatrix}.$$

Since $\beta = \{1, x, x^2\}$ is a basis for $P_2(R)$:

$$R(T) = \mathrm{span}(T(\beta)) = \mathrm{span}(\{T(1), T(x), T(x^2)\}) = \mathrm{span}\left(\left\{\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} -1 & 0 \\ 0 & 2 \end{pmatrix}, \begin{pmatrix} -1 & 0 \\ 0 & 4 \end{pmatrix}\right\}\right) = \mathrm{span}\left(\left\{\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\right\}\right).$$

We have found a basis for R(T), and we see that $\dim(R(T)) = 2$.
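Theorem 2.2 also gives a mechanical way to compute this dimension: flatten each image matrix into a vector in $R^4$ and take the rank. A sketch (our addition, assuming numpy is available):

```python
# dim R(T) for the example above, via Theorem 2.2: R(T) is spanned by the
# images of the basis {1, x, x^2}, so its dimension is the rank of the
# matrix whose columns are those images flattened into R^4.
import numpy as np

def T(a0, a1, a2):
    """Image of f(x) = a0 + a1*x + a2*x^2 under the example transformation."""
    f = lambda t: a0 + a1 * t + a2 * t**2
    return np.array([[f(0) - f(1), 0.0],
                     [0.0,         f(2)]])

images = [T(*e).flatten() for e in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]]
print(np.linalg.matrix_rank(np.column_stack(images)))  # prints 2
```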

Definitions. Let V and W be vector spaces and $T : V \to W$ be linear. If the null space N(T) is finite-dimensional, we define the nullity of T, denoted nullity(T), to be the dimension of N(T). Similarly, if the range R(T) is finite-dimensional, we define the rank of T, denoted rank(T), to be the dimension of R(T).

Theorem (Dimension Theorem). Let V and W be vector spaces and let $T : V \to W$ be linear. If V is finite-dimensional, then

$$\mathrm{nullity}(T) + \mathrm{rank}(T) = \dim(V).$$

Proof. Suppose $\dim(V) = n$. N(T) is a subspace of V, so $\dim(N(T)) = k$ for some $k \le n$. Let $\{v_1, v_2, \dots, v_k\}$ be a basis for N(T), and extend it to a basis $\{v_1, v_2, \dots, v_k, v_{k+1}, \dots, v_n\}$ of V. We will prove that $\{T(v_{k+1}), T(v_{k+2}), \dots, T(v_n)\}$ is a basis for R(T).

Let $y \in R(T)$. Then there exists $x \in V$ such that $y = T(x)$. Writing x as a linear combination of basis vectors, say $x = \sum_{i=1}^n a_i v_i$, we see that

$$y = T(x) = T\left(\sum_{i=1}^n a_i v_i\right) = \sum_{i=1}^n a_i T(v_i) = \sum_{i=k+1}^n a_i T(v_i),$$

since $T(v_i) = 0$ for $1 \le i \le k$. Therefore $\{T(v_{k+1}), T(v_{k+2}), \dots, T(v_n)\}$ spans R(T).

Now we prove $\{T(v_{k+1}), T(v_{k+2}), \dots, T(v_n)\}$ is a linearly independent set. Suppose that $\sum_{i=k+1}^n a_i T(v_i) = 0$. Since $\sum_{i=k+1}^n a_i T(v_i) = T(\sum_{i=k+1}^n a_i v_i)$, we have $\sum_{i=k+1}^n a_i v_i \in N(T)$. Thus

$$\sum_{i=k+1}^n a_i v_i = \sum_{i=1}^k b_i v_i \quad \text{for some } b_1, b_2, \dots, b_k \in F, \qquad \text{i.e.,} \quad \sum_{i=k+1}^n a_i v_i - \sum_{i=1}^k b_i v_i = 0.$$

But $\{v_1, v_2, \dots, v_k, v_{k+1}, \dots, v_n\}$ is a basis, so all the $a_i$'s (and $b_i$'s) are zero. Therefore $\{T(v_{k+1}), \dots, T(v_n)\}$ is a linearly independent set, and the $T(v_i)$'s are distinct, so $\dim(R(T)) = n - k$.

Theorem (2.4). Let V and W be vector spaces, and let $T : V \to W$ be linear. Then T is one-to-one if and only if $N(T) = \{0\}$.

Proof. ($\Leftarrow$) Suppose $N(T) = \{0\}$ and $x, y \in V$ with $T(x) = T(y)$. Then $T(x) - T(y) = T(x - y) = 0$, so $x - y \in N(T)$, so $x - y = 0$, i.e., $x = y$. Therefore T is one-to-one.

($\Rightarrow$) If $x \in N(T)$, then $T(x) = 0$, but also necessarily $T(0) = 0$. Since T is one-to-one, we conclude $x = 0$. So $N(T) = \{0\}$.

Theorem (2.5). Let V and W be vector spaces of equal (finite) dimension, and let $T : V \to W$ be linear. Then the following are equivalent:

a) T is one-to-one.
b) T is onto.
c) rank(T) = dim(V).

Proof. Outline of the proof: a) $\Leftrightarrow$ c) $\Leftrightarrow$ b). From the dimension theorem, we know nullity(T) + rank(T) = dim(V). By Theorem 2.4, a) implies $N(T) = \{0\}$, which implies nullity(T) = 0; thus a) implies c). If we assume c), then by the dimension theorem nullity(T) = 0, and by Theorem 2.4 we get that a) is true; therefore c) implies a). Lastly, we notice that T is onto if and only if dim(R(T)) = dim(W). Since dim(V) = dim(W), we see that b) is true if and only if c) is.

Theorem (2.6). Let V and W be vector spaces over F, and suppose that $\{v_1, v_2, \dots, v_n\}$ is a basis for V. For $w_1, w_2, \dots, w_n$ in W, there exists exactly one linear transformation $T : V \to W$ such that $T(v_i) = w_i$ for $i = 1, 2, \dots, n$.

Proof. Let $x \in V$. Then $x = \sum_{i=1}^n a_i v_i$, where $a_1, a_2, \dots, a_n$ are unique scalars. We define $T : V \to W$ by $T(x) = \sum_{i=1}^n a_i w_i$. (We are sending $v_i$ to $T(v_i) = w_i$ for each i and extending linearly. All that is left to do is prove that this map is linear and unique.)

1. T is linear: Let $u, v \in V$, with $u = \sum_{i=1}^n b_i v_i$ and $v = \sum_{i=1}^n c_i v_i$ for some scalars $b_1, b_2, \dots, b_n$ and $c_1, c_2, \dots, c_n$. It follows that, for any $d \in F$,

$$T(du + v) = T\left(d\sum_{i=1}^n b_i v_i + \sum_{i=1}^n c_i v_i\right) = T\left(\sum_{i=1}^n (db_i + c_i) v_i\right).$$

By the definition of T, we get

$$T\left(\sum_{i=1}^n (db_i + c_i) v_i\right) = \sum_{i=1}^n (db_i + c_i) w_i,$$

and finally

$$\sum_{i=1}^n (db_i + c_i) w_i = d\sum_{i=1}^n b_i w_i + \sum_{i=1}^n c_i w_i = dT(u) + T(v).$$

2. T is unique: Suppose that $U : V \to W$ is linear and $U(v_i) = w_i$ for $i = 1, 2, \dots, n$. Then

$$U(x) = U\left(\sum_{i=1}^n a_i v_i\right) = \sum_{i=1}^n a_i U(v_i) = \sum_{i=1}^n a_i w_i = T(x).$$

Corollary. Let V and W be vector spaces, and suppose that V has a finite basis $\{v_1, v_2, \dots, v_n\}$. If $U, T : V \to W$ are linear and $U(v_i) = T(v_i)$ for $i = 1, 2, \dots, n$, then U = T.

Example. Consider the map (transformation) $T : R^3 \to R^2$ defined by

$$T(a_1, a_2, a_3) = (a_1 - a_2,\ 2a_3).$$

Let $x = (x_1, x_2, x_3)$, $y = (y_1, y_2, y_3) \in R^3$ and let $c \in R$. Then

$$T(cx + y) = T(cx_1 + y_1,\ cx_2 + y_2,\ cx_3 + y_3) = (cx_1 + y_1 - (cx_2 + y_2),\ 2(cx_3 + y_3)) = c(x_1 - x_2,\ 2x_3) + (y_1 - y_2,\ 2y_3) = cT(x) + T(y),$$

so T is linear. The kernel of T, N(T), is the solution set of the equations $a_1 - a_2 = 0$, $2a_3 = 0$, i.e., the set $\{(a_1, a_2, a_3) \in R^3 : a_1 = a_2,\ a_3 = 0\}$, which we can span with the vector (1, 1, 0). Therefore N(T) has dimension one and $\{(1, 1, 0)\}$ is a basis for N(T). By the dimension theorem, we know the image must have dimension 2. Let's confirm this by finding a basis for the image R(T). By Theorem 2.2, we need only compute the vectors T(1, 0, 0), T(0, 1, 0), T(0, 0, 1), which are (1, 0), (−1, 0), (0, 2). These clearly span $R^2$, therefore we can confirm $\dim(R(T)) = 2$. Finally, we see that because the nullity of T is one, T is not one-to-one, and since the rank of T is 2, T is onto.
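This example, viewed as a matrix map, also makes a nice numerical check of the dimension theorem; a sketch (our addition, assuming numpy and scipy are installed):

```python
# The worked example as a matrix map: T(a1, a2, a3) = (a1 - a2, 2*a3) is
# L_A for the matrix A below. We check that (1, 1, 0) lies in the kernel,
# and that nullity + rank = dim(R^3) as the dimension theorem demands.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, -1.0, 0.0],
              [0.0,  0.0, 2.0]])

kernel = null_space(A)                       # orthonormal basis of N(T)
print(kernel.shape[1])                       # 1 = nullity(T)
print(np.allclose(A @ [1.0, 1.0, 0.0], 0))   # True: (1,1,0) is in N(T)
rank = np.linalg.matrix_rank(A)
print(rank, kernel.shape[1] + rank)          # 2 and 3 = dim(R^3)
```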

Section 2.2: The Matrix Representation of a Linear Transformation

Definition. Let V be a finite-dimensional vector space. An ordered basis for V is a basis for V with a specific order; that is, an ordered basis for V is a finite sequence of linearly independent vectors in V that generate V.

Example. If $V = F^3$, then $\beta_1 = \{e_1, e_2, e_3\}$ and $\beta_2 = \{e_3, e_1, e_2\}$ are both ordered bases for V, but $\beta_1 \ne \beta_2$ as ordered bases.

Definition. Let $\beta = \{u_1, u_2, \dots, u_n\}$ be an ordered basis for a finite-dimensional vector space V. For $x \in V$, let $a_1, a_2, \dots, a_n$ be the unique scalars such that $x = \sum_{i=1}^n a_i u_i$. We define the coordinate vector of x relative to $\beta$, denoted by $[x]_\beta$, by

$$[x]_\beta = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.$$

Note that $[u_i]_\beta = e_i$. In fact the map $x \mapsto [x]_\beta$ is a special kind of linear map, called an isomorphism. We will study this later.

Example. Let $V = P_2(R)$ and let $\beta = \{1, x, x^2\}$ be the standard ordered basis for V. Then, for example, if $f(x) = 1 + 5x + 3x^2$,

$$[f]_\beta = \begin{pmatrix} 1 \\ 5 \\ 3 \end{pmatrix}.$$

Now we will see how to represent a linear transformation as a matrix. Suppose that V and W are finite-dimensional vector spaces with ordered bases $\beta = \{v_1, v_2, \dots, v_n\}$ and $\gamma = \{w_1, w_2, \dots, w_m\}$, respectively. Let $T : V \to W$ be linear. Then for each j, $1 \le j \le n$, there exist unique scalars $a_{ij} \in F$, $1 \le i \le m$, such that

$$T(v_j) = \sum_{i=1}^m a_{ij} w_i \quad \text{for } 1 \le j \le n.$$

Definition. For the $a_{ij}$'s as above, we call the $m \times n$ matrix A defined by $(A)_{ij} = a_{ij}$ the matrix representation of T in the ordered bases $\beta$ and $\gamma$, and write $A = [T]_\beta^\gamma$. If V = W and $\beta = \gamma$, we write $A = [T]_\beta$.

Note that the j-th column of A is just $[T(v_j)]_\gamma$.

Example. Let $T : P_3(R) \to P_2(R)$ be defined by $T(f(x)) = f'(x)$. Let $\beta$ and $\gamma$ be the standard ordered bases for $P_3(R)$ and $P_2(R)$, respectively. Then:

$$T(1) = 0 = 0 \cdot 1 + 0x + 0x^2, \qquad [T(1)]_\gamma = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix},$$
$$T(x) = 1 = 1 \cdot 1 + 0x + 0x^2, \qquad [T(x)]_\gamma = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},$$
$$T(x^2) = 2x = 0 \cdot 1 + 2x + 0x^2, \qquad [T(x^2)]_\gamma = \begin{pmatrix} 0 \\ 2 \\ 0 \end{pmatrix},$$
$$T(x^3) = 3x^2 = 0 \cdot 1 + 0x + 3x^2, \qquad [T(x^3)]_\gamma = \begin{pmatrix} 0 \\ 0 \\ 3 \end{pmatrix},$$

and therefore

$$[T]_\beta^\gamma = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.$$

Definition. Let $T, U : V \to W$ be arbitrary functions, where V and W are vector spaces over F, and let $a \in F$. We define $T + U : V \to W$ via $(T + U)(x) = T(x) + U(x)$ for all $x \in V$, and $aT : V \to W$ via $(aT)(x) = aT(x)$ for all $x \in V$.

Theorem (2.7). Let V and W be vector spaces over F and let $T, U : V \to W$ be linear.

a) For all $a \in F$, aT + U is linear.
b) Using the addition and scalar multiplication defined above, the set of all linear transformations from V to W is a vector space over F.

Proof. A tedious but easy exercise. Write this proof for extra credit (due this Friday, June 20) if you are inclined.

Definitions. Let V and W be vector spaces over F. We denote the vector space of all linear transformations from V into W by L(V, W). In the case V = W, we write L(V) instead of L(V, V).
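Returning to the differentiation example for a moment, the matrix representation really does differentiate coordinate vectors; a small sketch (the sample polynomial is our own choice):

```python
# The matrix of differentiation from the example above, applied to the
# coordinate vector of an arbitrary cubic.
import numpy as np

D = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 3.0]])  # [T]^gamma_beta for T(f) = f'

f = np.array([7.0, -1.0, 4.0, 2.0])   # f(x) = 7 - x + 4x^2 + 2x^3
print(D @ f)                           # [-1. 8. 6.]: f'(x) = -1 + 8x + 6x^2
```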

Theorem (2.8). Let V and W be finite-dimensional vector spaces with ordered bases $\beta$ and $\gamma$, respectively, and let $T, U : V \to W$ be linear transformations. Then:

a) $[T + U]_\beta^\gamma = [T]_\beta^\gamma + [U]_\beta^\gamma$
b) $[aT]_\beta^\gamma = a[T]_\beta^\gamma$ for all scalars a.

Proof. We prove part a); the proof of part b) is similar. Let $\beta = \{v_1, v_2, \dots, v_n\}$ and $\gamma = \{w_1, w_2, \dots, w_m\}$. Then there exist scalars $a_{ij}$ and $b_{ij}$ ($1 \le i \le m$, $1 \le j \le n$) such that

$$T(v_j) = \sum_{i=1}^m a_{ij} w_i \quad \text{and} \quad U(v_j) = \sum_{i=1}^m b_{ij} w_i,$$

therefore

$$(U + T)(v_j) = \sum_{i=1}^m a_{ij} w_i + \sum_{i=1}^m b_{ij} w_i = \sum_{i=1}^m (a_{ij} + b_{ij}) w_i,$$

so

$$\left([U + T]_\beta^\gamma\right)_{ij} = a_{ij} + b_{ij} = \left([T]_\beta^\gamma + [U]_\beta^\gamma\right)_{ij}.$$

(If this proof looks funny, just keep in mind all we are doing is showing two matrices are equal.)

Section 2.3: Composition of Linear Transformations and Matrix Multiplication

Compositions of linear maps are linear:

Theorem (2.9). Let V, W, and Z be vector spaces over the same field F and let $T : V \to W$ and $U : W \to Z$ be linear. Then $UT : V \to Z$ is linear.

Proof. Let $x, y \in V$, $c \in F$. Then $UT(cx + y) = U(cT(x) + T(y)) = cUT(x) + UT(y)$.

Furthermore:

Theorem (2.10). Let V be a vector space. Let $T, U_1, U_2 \in L(V)$. Then:

a) $T(U_1 + U_2) = TU_1 + TU_2$ and $(U_1 + U_2)T = U_1T + U_2T$
b) $T(U_1U_2) = (TU_1)U_2$
c) $TI = IT = T$
d) $a(U_1U_2) = (aU_1)U_2 = U_1(aU_2)$ for all scalars a.

Proof. This is a mind-numbing exercise of applying the definition of function composition. Let's just prove b): Let $x \in V$. Then on the one hand,

$$T(U_1U_2)(x) = T((U_1U_2)(x)) = T(U_1(U_2(x))),$$

and on the other hand,

$$(TU_1)U_2(x) = (TU_1)(U_2(x)) = T(U_1(U_2(x))).$$

Therefore $T(U_1U_2) = (TU_1)U_2$. (Note that we didn't even use linearity, or the fact that V and W are vector spaces! Only a) and d) use those facts; b) and c) are true in much more generality.)

Suppose we have these three vector spaces and ordered bases:

- V with ordered basis $\alpha = \{v_1, v_2, \dots, v_n\}$
- W with ordered basis $\beta = \{w_1, w_2, \dots, w_m\}$
- Z with ordered basis $\gamma = \{z_1, z_2, \dots, z_p\}$

and suppose $T : V \to W$ and $U : W \to Z$ are linear maps. Let $B = [T]_\alpha^\beta$ and $A = [U]_\beta^\gamma$. Then

$$(UT)(v_j) = U(T(v_j)) = U\left(\sum_{k=1}^m (B)_{kj} w_k\right) = \sum_{k=1}^m (B)_{kj} U(w_k) = \sum_{k=1}^m (B)_{kj} \sum_{i=1}^p (A)_{ik} z_i = \sum_{i=1}^p \left(\sum_{k=1}^m (A)_{ik}(B)_{kj}\right) z_i = \sum_{i=1}^p (C)_{ij} z_i,$$

where $(C)_{ij} = \sum_{k=1}^m (A)_{ik}(B)_{kj}$. For this reason we define the product of an $m \times n$ matrix A with an $n \times p$ matrix B, denoted AB, to be the $m \times p$ matrix

$$(AB)_{ij} = \sum_{k=1}^n (A)_{ik}(B)_{kj} \quad \text{for } 1 \le i \le m,\ 1 \le j \le p.$$

The way we defined matrix multiplication gives us a theorem for free:

Theorem (2.11). Let V, W, and Z be finite-dimensional vector spaces with ordered bases $\alpha$, $\beta$, $\gamma$, respectively. Let $T : V \to W$ and $U : W \to Z$ be linear transformations. Then

$$[UT]_\alpha^\gamma = [U]_\beta^\gamma [T]_\alpha^\beta.$$

Corollary. Let V be a finite-dimensional vector space with an ordered basis $\beta$. Let $T, U \in L(V)$. Then $[UT]_\beta = [U]_\beta[T]_\beta$.
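A quick numerical illustration of Theorem 2.11 (our addition; the shapes and random matrices are arbitrary):

```python
# Theorem 2.11 numerically: composing L_B (playing [T]^beta_alpha) with
# L_A (playing [U]^gamma_beta) agrees with left multiplication by AB.
# The shapes model T : R^4 -> R^3 followed by U : R^3 -> R^2.
import numpy as np

A = np.random.randn(2, 3)
B = np.random.randn(3, 4)
x = np.random.randn(4)
print(np.allclose(A @ (B @ x), (A @ B) @ x))  # True: [UT] = [U][T]
```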

Definitions. We define the Kronecker delta $\delta_{ij}$ by

$$\delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j. \end{cases}$$

The $n \times n$ identity matrix $I_n$ is defined by $(I_n)_{ij} = \delta_{ij}$.

Theorem (2.12). Let $A \in M_{m \times n}(F)$, $B, C \in M_{n \times p}(F)$, and $D, E \in M_{q \times m}(F)$. Then:

a) A(B + C) = AB + AC and (D + E)A = DA + EA
b) a(AB) = (aA)B = A(aB) for any scalar $a \in F$
c) $I_m A = A = A I_n$
d) If V is an n-dimensional vector space with an ordered basis $\beta$, then $[I_V]_\beta = I_n$.

Proof. Proofs of a) and c) are in the book. We'll prove b) and d).

Proof of b):

$$\left(a(AB)\right)_{ij} = a\sum_{k=1}^n (A)_{ik}(B)_{kj} = \sum_{k=1}^n (a(A)_{ik})(B)_{kj} = \left((aA)B\right)_{ij}$$

and

$$\left(A(aB)\right)_{ij} = \sum_{k=1}^n (A)_{ik}(a(B)_{kj}) = \sum_{k=1}^n (a(A)_{ik})(B)_{kj} = \left((aA)B\right)_{ij}.$$

Proof of d): Let $\beta = \{v_1, v_2, \dots, v_n\}$. Then by definition $I_V(v_i) = v_i$ for $i = 1, 2, \dots, n$, and

$$[v_1]_\beta = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad [v_2]_\beta = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix},$$

and so on. Therefore $[I_V]_\beta = I_n$.

Corollary. Let A be an $m \times n$ matrix, $B_1, B_2, \dots, B_k$ be $n \times p$ matrices, $C_1, C_2, \dots, C_k$ be $q \times m$ matrices, and $a_1, a_2, \dots, a_k$ be scalars. Then

$$A\left(\sum_{i=1}^k a_i B_i\right) = \sum_{i=1}^k a_i A B_i \quad \text{and} \quad \left(\sum_{i=1}^k a_i C_i\right)A = \sum_{i=1}^k a_i C_i A.$$

Proof. Exercise. Hint: use induction and Theorem 2.12.

Note that this corollary is just generalizing parts a) and b) of Theorem 2.12.

Theorem (2.13). Let A be an $m \times n$ matrix and B be an $n \times p$ matrix. For each j ($1 \le j \le p$), let $u_j$ and $v_j$ denote the j-th columns of AB and B, respectively. Then:

a) $u_j = Av_j$
b) $v_j = Be_j$

Proof. Exercise.

Note that this theorem just says that

$$AB = \begin{pmatrix} Av_1 & Av_2 & \cdots & Av_p \end{pmatrix} = \begin{pmatrix} ABe_1 & ABe_2 & \cdots & ABe_p \end{pmatrix}.$$

Theorem (2.14). Let V and W be finite-dimensional vector spaces having ordered bases $\beta$ and $\gamma$, respectively, and let $T : V \to W$ be linear. Then, for each $u \in V$, we have

$$[T(u)]_\gamma = [T]_\beta^\gamma [u]_\beta.$$

Proof. The idea of the proof is that Theorem 2.11 says that $[TU]_\alpha^\gamma = [T]_\beta^\gamma [U]_\alpha^\beta$, so we look for an appropriate map U and an ordered basis $\alpha$ so that we can apply Theorem 2.11. We will also need a map g to make $\alpha$ appear as desired. To this end, we define the linear maps $f : F \to V$ by $f(a) = au$ and $g : F \to W$ by $g(a) = aT(u)$, for all $a \in F$. We let $\alpha = \{1\}$ be the standard ordered basis for F. Then $g = Tf$ and

$$[T(u)]_\gamma = [g(1)]_\gamma = [g]_\alpha^\gamma = [Tf]_\alpha^\gamma = [T]_\beta^\gamma [f]_\alpha^\beta = [T]_\beta^\gamma [f(1)]_\beta = [T]_\beta^\gamma [u]_\beta.$$

Next we make the bridge between matrices and linear maps more explicit. We do this by giving the name $L_A$ to the linear map that corresponds to left multiplication by the matrix A.

Definition. Let A be an $m \times n$ matrix with entries in the field F. We denote by $L_A$ the mapping $L_A : F^n \to F^m$ defined by $L_A(x) = Ax$ for each column vector $x \in F^n$. We call $L_A$ a left-multiplication transformation.
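A sketch of $L_A$ in code, which also illustrates Theorem 2.13's column description of a product (our addition; the matrices are random examples):

```python
# L_A as a Python function, plus Theorem 2.13: the j-th column of AB
# is A applied to the j-th column of B.
import numpy as np

def L(A):
    """Return the map x -> Ax, i.e. the transformation L_A."""
    return lambda x: A @ x

A = np.random.randn(3, 4)
B = np.random.randn(4, 5)
AB = A @ B
for j in range(B.shape[1]):
    assert np.allclose(AB[:, j], L(A)(B[:, j]))  # u_j = A v_j
print("each column of AB is A times the corresponding column of B")
```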

Theorem (2.15). Let A be an $m \times n$ matrix with entries from F. Then the left-multiplication transformation $L_A : F^n \to F^m$ is linear. Furthermore, if B is any other $m \times n$ matrix (with entries in F) and $\beta$ and $\gamma$ are the standard ordered bases for $F^n$ and $F^m$, respectively, then we have the following properties:

a) $[L_A]_\beta^\gamma = A$
b) $L_A = L_B$ if and only if A = B
c) $L_{A+B} = L_A + L_B$ and $L_{aA} = aL_A$ for all $a \in F$
d) If $T : F^n \to F^m$ is linear, then there exists a unique $m \times n$ matrix C such that $T = L_C$; in fact, $C = [T]_\beta^\gamma$
e) If E is an $n \times p$ matrix, then $L_{AE} = L_A L_E$
f) If m = n, then $L_{I_n} = I_{F^n}$.

Proof. Parts a), b), d), e) are in the book, so we will prove c) and f).

Proof of c): Let $x \in F^n$. Then

$$L_{A+B}(x) = [L_{A+B}]_\beta^\gamma [x]_\beta = (A + B)[x]_\beta = A[x]_\beta + B[x]_\beta = [L_A]_\beta^\gamma [x]_\beta + [L_B]_\beta^\gamma [x]_\beta = L_A(x) + L_B(x),$$

and if $a \in F$,

$$L_{aA}(x) = [L_{aA}]_\beta^\gamma [x]_\beta = (aA)[x]_\beta = a(A[x]_\beta) = a([L_A]_\beta^\gamma [x]_\beta) = aL_A(x).$$

Proof of f): If m = n, then $L_{I_n} : F^n \to F^n$, and for any $x \in F^n$,

$$L_{I_n}(x) = [L_{I_n}]_\beta^\gamma [x]_\beta = I_n [x]_\beta = [x]_\beta = x.$$

We can use this correspondence between matrices and linear maps to carry our theorem about associativity of composition of linear maps over to a theorem about associativity of matrix multiplication.

Theorem (2.16). Let A, B, and C be matrices such that A(BC) is defined. Then (AB)C is also defined and A(BC) = (AB)C.

Proof. Notice that for A(BC) to be defined, these must be matrices over the same field, the number of rows of C must match the number of columns of B, and the number of rows of B must match the number of columns of A. These are the same conditions we need to ensure that (AB)C is defined, so (AB)C is defined. Associativity of the matrix product now follows from the associativity of function composition:

$$L_{A(BC)} = L_A L_{BC} = L_A(L_B L_C) = (L_A L_B)L_C = L_{AB} L_C = L_{(AB)C}.$$

Section 2.4: Invertibility and Isomorphisms

Definition. Let V and W be vector spaces, and let $T : V \to W$ be linear. A function $U : W \to V$ is said to be the inverse of T if $TU = I_W$ and $UT = I_V$. If T has an inverse, T is said to be invertible. The inverse is unique and is denoted by $T^{-1}$. Uniqueness follows from this argument: assume $TU = I_W$ and $UT = I_V$, and also $TU' = I_W$ and $U'T = I_V$. Then

$$U = UI_W = U(TU') = (UT)U' = I_V U' = U'.$$

Facts: For all invertible functions T and U,

1. $(TU)^{-1} = U^{-1}T^{-1}$
2. $(T^{-1})^{-1} = T$; so in particular $T^{-1}$ is invertible.

Luckily, the inverse of a linear map is also a linear map:

Theorem (2.17). Let V and W be vector spaces, and let $T : V \to W$ be linear and invertible. Then $T^{-1} : W \to V$ is linear.

Proof. Let $y_1, y_2 \in W$ and $c \in F$. Since T is invertible, T is one-to-one and onto, hence there exist $x_1, x_2 \in V$ such that $T(x_1) = y_1$, $T(x_2) = y_2$, and $x_1 = T^{-1}(y_1)$, $x_2 = T^{-1}(y_2)$. Then

$$T^{-1}(cy_1 + y_2) = T^{-1}(cT(x_1) + T(x_2)) = T^{-1}(T(cx_1 + x_2)) = cx_1 + x_2 = cT^{-1}(y_1) + T^{-1}(y_2).$$

Definition. Let A be an $n \times n$ matrix. Then A is invertible if there exists an $n \times n$ matrix B such that AB = BA = I.

Only square matrices are invertible. This suggests that among the linear maps between finite-dimensional vector spaces V and W, we should only find invertible maps if the dimensions of V and W match.

Lemma. Let T be an invertible linear transformation from V to W. Then V is finite-dimensional if and only if W is finite-dimensional. In this case, dim(V) = dim(W).

Proof. Suppose V is finite-dimensional. Since T is one-to-one, $N(T) = \{0\}$, so by the dimension theorem rank(T) = dim(V). Since T is onto, dim(W) = dim(R(T)) = dim(V), and W is finite-dimensional. On the other hand, if W is finite-dimensional, making the above argument with $T^{-1}$ instead shows that V is finite-dimensional.

Fact: A function T is invertible if and only if T is one-to-one and onto. Also note that Theorem 2.5 says that if V, W are vector spaces of equal and finite dimension and $T : V \to W$ is linear, then tfae (the following are equivalent):

a) T is one-to-one
b) T is onto
c) rank(T) = dim(V)

Therefore, we can reinterpret the conclusion of Theorem 2.5 to read: T is invertible if and only if rank(T) = dim(V).

Theorem (2.18). Let V and W be finite-dimensional vector spaces with ordered bases $\beta$ and $\gamma$, respectively. Let $T : V \to W$ be linear. Then T is invertible if and only if $[T]_\beta^\gamma$ is invertible. Furthermore, $[T^{-1}]_\gamma^\beta = ([T]_\beta^\gamma)^{-1}$.

Proof. Suppose T is invertible. Then V and W have the same dimension, call it n, so $[T]_\beta^\gamma$ is an $n \times n$ matrix, and $TT^{-1} = I_W$ and $T^{-1}T = I_V$. Therefore, for $I_n$ the $n \times n$ identity matrix,

$$I_n = [I_V]_\beta = [T^{-1}T]_\beta = [T^{-1}]_\gamma^\beta [T]_\beta^\gamma,$$

and similarly $[T]_\beta^\gamma [T^{-1}]_\gamma^\beta = I_n$. So $[T]_\beta^\gamma$ is invertible with inverse $[T^{-1}]_\gamma^\beta$.

Conversely, suppose $A = [T]_\beta^\gamma$ is invertible. Then there exists an $n \times n$ matrix B such that $AB = BA = I_n$. By Theorem 2.6 there exists $U \in L(W, V)$ such that

$$U(w_j) = \sum_{i=1}^n B_{ij} v_i \quad \text{for } j = 1, 2, \dots, n,$$

where $\gamma = \{w_1, w_2, \dots, w_n\}$ and $\beta = \{v_1, v_2, \dots, v_n\}$. It follows that $[U]_\gamma^\beta = B$. Furthermore,

$$[UT]_\beta = [U]_\gamma^\beta [T]_\beta^\gamma = BA = I_n = [I_V]_\beta,$$

so $UT = I_V$, and similarly $TU = I_W$.

Question: Why didn't we choose $U = L_B$ in the proof above?

Corollary (1). Let V be a finite-dimensional vector space with an ordered basis $\beta$, and let $T : V \to V$ be linear. Then T is invertible if and only if $[T]_\beta$ is invertible. Furthermore, $[T^{-1}]_\beta = ([T]_\beta)^{-1}$.

Corollary (2). Let A be an $n \times n$ matrix. Then A is invertible if and only if $L_A$ is invertible. Furthermore, $(L_A)^{-1} = L_{A^{-1}}$.

Definitions. Let V and W be vector spaces. We say V is isomorphic to W if there exists an invertible linear transformation $T : V \to W$. Such a linear transformation is called an isomorphism from V onto W.

Isomorphism is an equivalence relation. Essentially, if two vector spaces V and W are isomorphic, then the vectors in V are just the vectors in W given different names, and vice versa.

Examples. The following linear maps are isomorphisms:

1. $T : F^4 \to M_{2 \times 2}(F)$, $T(a, b, c, d) = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$

2. $T : M_{2 \times 2}(F) \to P_3(F)$, $T : \begin{pmatrix} a & b \\ c & d \end{pmatrix} \mapsto a + bx + cx^2 + dx^3$

3. $T : P_3(R) \to M_{2 \times 2}(R)$, $T(f) = \begin{pmatrix} f(2) & f(4) \\ f(6) & f(8) \end{pmatrix}$

At this point we may wonder: for each positive integer n, is each vector space of dimension n isomorphic to every other vector space of dimension n over the same field?

Theorem (2.19). Let V and W be finite-dimensional vector spaces (over the same field). Then V is isomorphic to W if and only if dim(V) = dim(W).

Proof. Suppose V is isomorphic to W. Then there exists an isomorphism $T : V \to W$. Since T is invertible, by the lemma above dim(V) = dim(W).

On the other hand, if dim(V) = dim(W), let their common dimension be n, and let $\beta = \{v_1, v_2, \dots, v_n\}$ be a basis for V and $\gamma = \{w_1, w_2, \dots, w_n\}$ be a basis for W. We claim the linear map T such that $T(v_i) = w_i$ for $i = 1, 2, \dots, n$ is an isomorphism. Firstly, if $T(\sum_{i=1}^n a_i v_i) = \sum_{i=1}^n a_i T(v_i) = \sum_{i=1}^n a_i w_i = 0$, then, since $\gamma$ is a basis for W, all the $a_i$'s are zero; we see that $N(T) = \{0\}$. On the other hand, $R(T) = \mathrm{span}(T(\beta)) = \mathrm{span}(\gamma) = W$. Since T is one-to-one and onto, T is invertible. Therefore T is an isomorphism.

Corollary. Let V be a vector space over F. Then V is isomorphic to $F^n$ if and only if dim(V) = n.

So in terms of just the vector space structure, every finite-dimensional vector space V with dimension n is a relabeling of $F^n$.

Theorem (2.20). Let V and W be finite-dimensional vector spaces over F of dimensions n and m, respectively, and let $\beta$ and $\gamma$ be ordered bases for V and W, respectively. Then the function $\Phi : L(V, W) \to M_{m \times n}(F)$, defined by $\Phi(T) = [T]_\beta^\gamma$ for $T \in L(V, W)$, is an isomorphism.

Proof. By Theorem 2.8, $\Phi$ is linear (i.e., $[cU + T]_\beta^\gamma = c[U]_\beta^\gamma + [T]_\beta^\gamma$). Thus we need to show $\Phi$ is one-to-one and onto. Let $\beta = \{v_1, v_2, \dots, v_n\}$ and $\gamma = \{w_1, w_2, \dots, w_m\}$. To show that $\Phi$ is onto, we show that for each $A \in M_{m \times n}(F)$ there is a map $T \in L(V, W)$ such that $\Phi(T) = A$. Well, by Theorem 2.6 there is a unique map T defined by the two properties: 1) T is linear, and 2) $T(v_j) = \sum_{i=1}^m A_{ij} w_i$ for $1 \le j \le n$. The second property says that $A = [T]_\beta^\gamma$. Since this map T is unique, we have also shown that $\Phi$ is one-to-one.

Now that the relationship between matrices and linear maps is becoming clearer, we rename the transformation $x \mapsto [x]_\beta$.

Definition. Let $\beta$ be an ordered basis for an n-dimensional vector space V over a field F. The standard representation of V with respect to $\beta$ is the function $\phi_\beta : V \to F^n$ defined by $\phi_\beta(x) = [x]_\beta$ for every $x \in V$.

Theorem (2.21). For any finite-dimensional vector space V with ordered basis $\beta$, $\phi_\beta$ is an isomorphism.

Proof. Exercise 12 of Section 2.4.

With Theorem 2.21 in hand we can construct the following commutative diagram, where $A = [T]_\beta^\gamma$:

       V   ----T---->   W
       |                |
    phi_beta        phi_gamma
       v                v
      F^n  ---L_A--->  F^m

By "commutative diagram" we mean that you can start with any $x \in V$ in the top left and either apply T and then $\phi_\gamma$, or first apply $\phi_\beta$ and then $L_A$, and either way the result is the same, i.e., $\phi_\gamma(T(x)) = L_A(\phi_\beta(x))$.
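We can watch the diagram commute on the differentiation example of Section 2.2, where $\phi_\beta$ and $\phi_\gamma$ just read off coefficients in the standard bases (a sketch we are adding; the sample polynomial is ours):

```python
# Checking phi_gamma(T(f)) = L_A(phi_beta(f)) for T(f) = f' on P_3(R),
# with A = [T]^gamma_beta from Section 2.2.
import numpy as np

A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 3.0]])

f = np.array([5.0, 2.0, -3.0, 1.0])   # phi_beta(f) for f = 5 + 2x - 3x^2 + x^3
Tf = np.array([2.0, -6.0, 3.0])        # phi_gamma(T(f)) for f' = 2 - 6x + 3x^2
print(np.allclose(Tf, A @ f))          # True: the diagram commutes here
```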

Section 2.5: The Change of Coordinate Matrix

In Calculus 2 we claim that the level sets of degree-two polynomials in two variables with coefficients in R give conic sections. For example, the set $4x^2 + 9y^2 = 36$ is easily seen to be an ellipse (for example, use the parameterization $x = 3\cos t$, $y = 2\sin t$). But what about something like $2x^2 - 4xy + 5y^2 = 1$? Well, using the coordinates $x', y'$ defined by

$$x' = \tfrac{2}{\sqrt{5}}x + \tfrac{1}{\sqrt{5}}y, \qquad y' = -\tfrac{1}{\sqrt{5}}x + \tfrac{2}{\sqrt{5}}y,$$

we turn the equation $2x^2 - 4xy + 5y^2 = 1$ into the equation $(x')^2 + 6(y')^2 = 1$, which is much more familiar. Geometrically, we have rotated our frame of reference. In other words, we have changed the basis of unit vectors whose linear combinations we use to express any point P in $R^2$. The unit vectors along the $x'$-axis and $y'$-axis form the ordered basis

$$\beta' = \left\{ \frac{1}{\sqrt{5}}\begin{pmatrix} 2 \\ 1 \end{pmatrix},\ \frac{1}{\sqrt{5}}\begin{pmatrix} -1 \\ 2 \end{pmatrix} \right\}.$$

In other words, if we let $\beta$ be the standard ordered basis, we have changed variables from $[P]_\beta = \begin{pmatrix} x \\ y \end{pmatrix}$ to $[P]_{\beta'} = \begin{pmatrix} x' \\ y' \end{pmatrix}$. Notice that the old and new coordinates are related by the equation

$$\begin{pmatrix} x \\ y \end{pmatrix} = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 & -1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} x' \\ y' \end{pmatrix}.$$

The matrix $Q = \frac{1}{\sqrt{5}}\begin{pmatrix} 2 & -1 \\ 1 & 2 \end{pmatrix}$ is exactly $[I]_{\beta'}^{\beta}$, and we have $[v]_\beta = Q[v]_{\beta'}$ for all $v \in R^2$.

Theorem (2.22). Let $\beta$ and $\beta'$ be two ordered bases for a finite-dimensional vector space V, and let $Q = [I_V]_{\beta'}^{\beta}$. Then:

a) Q is invertible.
b) For any $v \in V$, $[v]_\beta = Q[v]_{\beta'}$.

Proof. a) Since the identity map $I_V : V \to V$ is invertible, so is Q, by Theorem 2.18. b) $[v]_\beta = [I_V(v)]_\beta = [I_V]_{\beta'}^{\beta}[v]_{\beta'} = Q[v]_{\beta'}$.
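Returning to the conic for a moment, here is a numerical check (our addition) that the rotated coordinates really simplify the quadratic form:

```python
# Check that 2x^2 - 4xy + 5y^2 equals x'^2 + 6y'^2 at random points, with
# x' = (2x + y)/sqrt(5) and y' = (-x + 2y)/sqrt(5) as above.
import numpy as np

rng = np.random.default_rng(0)
for x, y in rng.standard_normal((100, 2)):
    xp = (2 * x + y) / np.sqrt(5)
    yp = (-x + 2 * y) / np.sqrt(5)
    assert np.isclose(2 * x**2 - 4 * x * y + 5 * y**2, xp**2 + 6 * yp**2)
print("2x^2 - 4xy + 5y^2 = x'^2 + 6y'^2 on all samples")
```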

With this theorem in hand we are ready to change coordinates at will. We call the matrix $Q = [I_V]_{\beta'}^{\beta}$ a change of coordinates matrix; in particular, we say that Q changes $\beta'$-coordinates into $\beta$-coordinates.

For the remainder of this section we consider only linear maps from V into itself. We call such maps linear operators on V. Given a linear operator $T : V \to V$, with ordered bases $\beta$ and $\beta'$ of V, we know we can represent T via both $[T]_\beta$ and $[T]_{\beta'}$. How are these matrices related?

Theorem (2.23). Let T be a linear operator on a finite-dimensional vector space V, and let $\beta$ and $\beta'$ be ordered bases for V. Suppose that Q is the change of coordinate matrix that changes $\beta'$-coordinates into $\beta$-coordinates. Then

$$[T]_{\beta'} = Q^{-1}[T]_\beta Q.$$

Proof. We show $Q[T]_{\beta'} = [T]_\beta Q$. Let $I = I_V$. Then $T = IT = TI$, so

$$Q[T]_{\beta'} = [I]_{\beta'}^{\beta}[T]_{\beta'}^{\beta'} = [IT]_{\beta'}^{\beta} = [TI]_{\beta'}^{\beta} = [T]_\beta^\beta [I]_{\beta'}^{\beta} = [T]_\beta Q.$$

Example. Let T be the linear operator on $R^2$ defined by

$$T\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 2a + b \\ a - 3b \end{pmatrix},$$

and let $\beta = \left\{\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}\right\}$, $\beta' = \left\{\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 2 \end{pmatrix}\right\}$. Then since $T(e_1) = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$ and $T(e_2) = \begin{pmatrix} 1 \\ -3 \end{pmatrix}$, we see that $[T]_\beta = \begin{pmatrix} 2 & 1 \\ 1 & -3 \end{pmatrix}$. We can also see that the change of coordinate matrix that changes $\beta'$-coordinates into $\beta$-coordinates is $Q = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}$. Now suppose we are handed the fact that $Q^{-1} = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}$. Then by Theorem 2.23, we get:

$$[T]_{\beta'} = Q^{-1}[T]_\beta Q = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}\begin{pmatrix} 2 & 1 \\ 1 & -3 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = \begin{pmatrix} 8 & 13 \\ -5 & -9 \end{pmatrix}.$$
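Before verifying by hand, numpy can confirm the arithmetic (a sketch we are adding, not part of the original notes):

```python
# Reproducing [T]_beta' = Q^{-1} [T]_beta Q for the example at hand.
import numpy as np

T_beta = np.array([[2.0, 1.0], [1.0, -3.0]])
Q = np.array([[1.0, 1.0], [1.0, 2.0]])
print(np.linalg.inv(Q) @ T_beta @ Q)  # [[ 8. 13.] [-5. -9.]]
```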

Did we get the right matrix? Well, the first column of $[T]_{\beta'}$ is $\begin{pmatrix} 8 \\ -5 \end{pmatrix}$, so we claim that for $v = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, $[T(v)]_{\beta'} = [T]_{\beta'}[v]_{\beta'} = \begin{pmatrix} 8 \\ -5 \end{pmatrix}$, i.e.,

$$T\begin{pmatrix} 1 \\ 1 \end{pmatrix} = 8\begin{pmatrix} 1 \\ 1 \end{pmatrix} - 5\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 3 \\ -2 \end{pmatrix}.$$

This checks out. Similarly, we claim

$$T\begin{pmatrix} 1 \\ 2 \end{pmatrix} = 13\begin{pmatrix} 1 \\ 1 \end{pmatrix} - 9\begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 4 \\ -5 \end{pmatrix}.$$

This checks out too. Since our map is clearly linear (as it is given by matrix multiplication) and agrees with T on a basis for the domain $R^2$ of T, our map is the desired map.

Example 3 starting on page 113 is a good advertisement for this section; you should work through this example.

Corollary. Let $A \in M_{n \times n}(F)$, and let $\gamma$ be an ordered basis for $F^n$. Then $[L_A]_\gamma = Q^{-1}AQ$, where Q is the $n \times n$ matrix whose j-th column is the j-th vector of $\gamma$.

Definition. Let A and B be matrices in $M_{n \times n}(F)$. We say B is similar to A if there exists an invertible matrix Q such that $B = Q^{-1}AQ$. Note that similarity is an equivalence relation (see Exercise 9 in this section).

Things we need from Chapters 3 and 4

Definitions. Let A be an $m \times n$ matrix. Any of the following three operations on the rows [columns] of A is called an elementary row [column] operation:

1. interchanging any two rows [columns] of A;
2. multiplying any row [column] of A by a nonzero scalar;
3. adding a scalar multiple of a row [column] of A to another row [column].

Any of these three operations is called an elementary operation. Elementary operations are of type 1, type 2, or type 3 according to the list above. We also define elementary matrices:

Definition. An $n \times n$ elementary matrix is a matrix obtained by performing an elementary operation on $I_n$. The elementary matrix is said to be of type 1, type 2, or type 3 according to which type of elementary operation was applied to $I_n$.

Example. A matrix obtained from $I_3$ by interchanging two of its rows, such as

$$\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

is an elementary matrix of type 1.

The next theorem connects elementary matrices to elementary operations:

Theorem (3.1). Let $A \in M_{m \times n}(F)$, and suppose that B is obtained from A by performing an elementary row [column] operation. Then there exists an $m \times m$ [$n \times n$] elementary matrix E such that $B = EA$ [$B = AE$]. In fact, E is obtained from $I_m$ [$I_n$] by performing the same elementary row [column] operation that was performed on A to obtain B. Conversely, if E is an elementary $m \times m$ [$n \times n$] matrix, then EA [AE] is the matrix obtained from A by performing the same elementary row [column] operation that produces E from $I_m$ [$I_n$].

To prove this theorem you have to verify it for each operation, which is super tedious; we'll skip it. The point is, you can produce row operations by left multiplication by an elementary matrix, and you can produce column operations by right multiplication by an elementary matrix.

Theorem (3.2). Elementary matrices are invertible, and the inverse of an elementary matrix is an elementary matrix of the same type.

Proof. Let E be an elementary $n \times n$ matrix. Then E can be obtained by an elementary row operation on $I_n$. By reversing the steps applied to transform $I_n$ into E, we can transform E back into $I_n$; so $I_n$ can be obtained from E by an elementary row operation of the same type. By Theorem 3.1, there is an elementary matrix $\tilde{E}$ such that $\tilde{E}E = I_n$. Therefore, E is invertible and $E^{-1} = \tilde{E}$.

Definition. If $A \in M_{m \times n}(F)$, we define the rank of A, denoted rank(A), to be the rank of the linear map $L_A : F^n \to F^m$.

Note that an $n \times n$ matrix is invertible if and only if its rank is n.

Theorem (3.4). Let A be an $m \times n$ matrix. If P and Q are invertible $m \times m$ and $n \times n$ matrices, respectively, then:

a) rank(AQ) = rank(A)
b) rank(PA) = rank(A)

and therefore

c) rank(PAQ) = rank(A).

Proof. For a),

$$R(L_{AQ}) = R(L_A L_Q) = L_A L_Q(F^n) = L_A(L_Q(F^n)) = L_A(F^n) = R(L_A),$$

since $L_Q$ is onto. So rank(AQ) = dim(R(L_{AQ})) = dim(R(L_A)) = rank(A). The proof of b) is similar. With a) and b) in hand, we have rank(PAQ) = rank(AQ) = rank(A).

Corollary. Elementary row and column operations on a matrix are rank-preserving.

Theorem (3.6). Let A be an $m \times n$ matrix of rank r. Then $r \le m$, $r \le n$, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix

$$D = \begin{pmatrix} I_r & O_1 \\ O_2 & O_3 \end{pmatrix},$$

where $O_1$, $O_2$, and $O_3$ are zero matrices. So $D_{ii} = 1$ for $i \le r$ and $D_{ij} = 0$ otherwise.

The proof of this theorem is not hard, but it is long; see the book for the details.

Corollary. Every invertible matrix is a product of elementary matrices.

Proof. If A is an invertible $n \times n$ matrix, then rank(A) = n. Hence the matrix D above is $I_n$, and there exist invertible matrices B and C (products of elementary matrices corresponding to row and column operations) such that $I_n = BAC$. Let $B = E_p E_{p-1} \cdots E_1$ and $C = G_1 G_2 \cdots G_q$. Then $A = B^{-1} I_n C^{-1} = B^{-1}C^{-1}$, that is,

$$A = E_1^{-1} E_2^{-1} \cdots E_p^{-1} G_q^{-1} G_{q-1}^{-1} \cdots G_1^{-1}.$$

Now we are ready to talk about determinants.

Definition. If $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is a $2 \times 2$ matrix over a field F, we define the determinant of A, denoted by det(A) or |A|, to be the scalar $ad - bc$.

Definition. If $A \in M_{n \times n}(F)$, we define the determinant of A as follows:

1. If n = 1, then $A = (A_{11})$ and $\det(A) = A_{11}$.
2. If $n \ge 2$, we define det(A) recursively:

$$\det(A) = \sum_{j=1}^{n} (-1)^{1+j} A_{1j} \det(\tilde{A}_{1j}),$$

where $\tilde{A}_{ij}$ is the $(n-1) \times (n-1)$ matrix obtained by removing the i-th row and j-th column of A. We call the scalar $(-1)^{i+j}\det(\tilde{A}_{ij})$ the cofactor of $A_{ij}$, and we call the formula above cofactor expansion along the first row of A.
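The recursive definition translates directly into code; a sketch we are adding (this brute-force expansion runs in factorial time, so it is for illustration only):

```python
# A direct implementation of the recursive cofactor definition, expanding
# along the first row. Input is a list of lists over the rationals/reals.
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]  # A-tilde_{1,j+1}
        total += (-1) ** j * A[0][j] * det(minor)          # (-1)^(1+j), 1-based
    return total

print(det([[1, 2], [3, 4]]))                    # -2 = ad - bc
print(det([[2, 0, 1], [1, 3, 0], [0, 1, 4]]))   # 25
```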

In fact, you can compute a determinant by cofactor expansion along any row:

Theorem. Let $A \in M_{n \times n}(F)$. Then for any $i \in \{1, 2, \dots, n\}$,

$$\det(A) = \sum_{j=1}^{n} (-1)^{i+j} A_{ij} \det(\tilde{A}_{ij}).$$

Proof (adapted from Robert A. Beezer's book A First Course in Linear Algebra). We use induction. The n = 1 case is trivial. When n = 2 we just compute the expansion along the second row:

$$\det(A) = (-1)^{2+1} A_{21} \det(\tilde{A}_{21}) + (-1)^{2+2} A_{22} \det(\tilde{A}_{22}) = A_{11}A_{22} - A_{21}A_{12}.$$

For the general case we'll introduce some notation. Let $A_{1i,jl}$ denote the matrix obtained from A by deleting rows 1 and i and columns j and l. As we sum up to a missing column, our index will bump up by one; to make this easier to deal with, we define

$$\epsilon_{lj} = \begin{cases} 0 & \text{if } l < j \\ 1 & \text{if } l > j. \end{cases}$$

Let $i \in \{2, 3, \dots, n\}$. Expanding each $\det(\tilde{A}_{1j})$ along its row $i - 1$ (which is row i of A), then swapping the order of summation:

$$\begin{aligned}
\det(A) &= \sum_{j=1}^{n} (-1)^{j+1} A_{1j} \det(\tilde{A}_{1j}) \\
&= \sum_{j=1}^{n} (-1)^{j+1} A_{1j} \sum_{l \ne j} (-1)^{(i-1)+(l-\epsilon_{lj})} A_{il} \det(A_{1i,jl}) \\
&= \sum_{j=1}^{n} \sum_{l \ne j} (-1)^{j+i+l-\epsilon_{lj}} A_{1j} A_{il} \det(A_{1i,jl}) \\
&= \sum_{l=1}^{n} (-1)^{i+l} A_{il} \sum_{j \ne l} (-1)^{j-\epsilon_{lj}} A_{1j} \det(A_{1i,jl}) \\
&= \sum_{l=1}^{n} (-1)^{i+l} A_{il} \sum_{j \ne l} (-1)^{1+(j-\epsilon_{jl})} A_{1j} \det(A_{1i,lj}) \\
&= \sum_{l=1}^{n} (-1)^{i+l} A_{il} \det(\tilde{A}_{il}),
\end{aligned}$$

where the second-to-last step uses $\epsilon_{lj} = 1 - \epsilon_{jl}$ (so the exponents agree modulo 2) and $A_{1i,jl} = A_{1i,lj}$, and the last step recognizes the inner sum as the first-row cofactor expansion of $\det(\tilde{A}_{il})$.

Theorem (4.3). The determinant of an $n \times n$ matrix is a linear function of each row when the remaining rows are held fixed. In other words, for $1 \le r \le n$ we have

$$\det\begin{pmatrix} a_1 \\ \vdots \\ a_{r-1} \\ u + kv \\ a_{r+1} \\ \vdots \\ a_n \end{pmatrix} = \det\begin{pmatrix} a_1 \\ \vdots \\ a_{r-1} \\ u \\ a_{r+1} \\ \vdots \\ a_n \end{pmatrix} + k\det\begin{pmatrix} a_1 \\ \vdots \\ a_{r-1} \\ v \\ a_{r+1} \\ \vdots \\ a_n \end{pmatrix}.$$

Proof. We induct on n. The n = 1 case: $\det(u + kv) = u + kv = \det(u) + k\det(v)$. Now assume $n \ge 2$ and that the theorem holds in the $n - 1$ case. Let A be an $n \times n$ matrix with rows $a_1, a_2, \dots, a_n$, respectively, and suppose that for some r, $1 \le r \le n$, we have $a_r = u + kv$ for some $u, v \in F^n$ and $k \in F$. Let $u = (b_1, b_2, \dots, b_n)$ and $v = (c_1, c_2, \dots, c_n)$, and let B and C denote the matrices obtained from A by replacing row r of A by u and v, respectively. Then we want to show $\det(A) = \det(B) + k\det(C)$.

Firstly, if r = 1, we get

$$\det(A) = \sum_{j=1}^{n} (-1)^{j+1}(b_j + kc_j)\det(\tilde{A}_{1j}) = \sum_{j=1}^{n} (-1)^{j+1} b_j \det(\tilde{A}_{1j}) + k\sum_{j=1}^{n} (-1)^{j+1} c_j \det(\tilde{A}_{1j}) = \det(B) + k\det(C),$$

since $\tilde{A}_{1j} = \tilde{B}_{1j} = \tilde{C}_{1j}$ when the deleted row is row 1 itself.

Now for r > 1 and $1 \le j \le n$, the rows of $\tilde{A}_{1j}$, $\tilde{B}_{1j}$, $\tilde{C}_{1j}$ are the same except for row $r - 1$. Row $r - 1$ of $\tilde{A}_{1j}$ is

$$(b_1 + kc_1,\ b_2 + kc_2,\ \dots,\ b_{j-1} + kc_{j-1},\ b_{j+1} + kc_{j+1},\ \dots,\ b_n + kc_n),$$

which is the sum of row $r - 1$ of $\tilde{B}_{1j}$ and k times row $r - 1$ of $\tilde{C}_{1j}$. By the induction hypothesis, $\det(\tilde{A}_{1j}) = \det(\tilde{B}_{1j}) + k\det(\tilde{C}_{1j})$. Since $A_{1j} = B_{1j} = C_{1j}$ for each j, we finally have:

$$\det(A) = \sum_{j=1}^{n} (-1)^{j+1} A_{1j}\det(\tilde{A}_{1j}) = \sum_{j=1}^{n} (-1)^{j+1} A_{1j}\left(\det(\tilde{B}_{1j}) + k\det(\tilde{C}_{1j})\right) = \sum_{j=1}^{n} (-1)^{j+1} B_{1j}\det(\tilde{B}_{1j}) + k\sum_{j=1}^{n} (-1)^{j+1} C_{1j}\det(\tilde{C}_{1j}) = \det(B) + k\det(C).$$

Corollary. If $A \in M_{n \times n}(F)$ has a row of all zeros, then det(A) = 0.

Proof. Suppose every entry of the r-th row of $A \in M_{n \times n}(F)$ is zero. Then the r-th row of A is equal to the r-th row of A plus itself. By Theorem 4.3, det(A) = det(A) + det(A), which is only possible if det(A) = 0.

Section 5.1: Eigenvalues and Eigenvectors

Definitions. A linear operator T on a finite-dimensional vector space V is called diagonalizable if there is an ordered basis $\beta$ for V such that $[T]_\beta$ is a diagonal matrix. A square matrix A is called diagonalizable if $L_A$ is diagonalizable.

Definitions. Let T be a linear operator on a vector space V. A nonzero vector $v \in V$ is called an eigenvector of T if there exists a scalar $\lambda$ such that $T(v) = \lambda v$. The scalar $\lambda$ is called the eigenvalue corresponding to the eigenvector v. Let $A \in M_{n \times n}(F)$. A nonzero vector $v \in F^n$ is called an eigenvector of A if v is an eigenvector of $L_A$, i.e., $Av = \lambda v$ for some scalar $\lambda$. The scalar $\lambda$ is called an eigenvalue of A corresponding to the eigenvector v.

Theorem (5.1). A linear operator T on a finite-dimensional vector space V is diagonalizable if and only if there exists an ordered basis $\beta$ for V consisting of eigenvectors of T. Furthermore, if T is diagonalizable, $\beta = \{v_1, v_2, \dots, v_n\}$ is an ordered basis of eigenvectors of T, and $D = [T]_\beta$, then D is a diagonal matrix and $D_{jj}$ is the eigenvalue corresponding to $v_j$ for $1 \le j \le n$.

Proof. A direct consequence of the definitions above.

To diagonalize a matrix or linear operator is to find a basis of eigenvectors and corresponding eigenvalues.

A few more tidbits from Chapters 3 and 4. We begin with a corollary to Theorem 3.6. Recall that Theorem 3.6 states: Let A be an $m \times n$ matrix of rank r. Then $r \le m$, $r \le n$, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix

$$D = \begin{pmatrix} I_r & O_1 \\ O_2 & O_3 \end{pmatrix},$$

where $O_1$, $O_2$, and $O_3$ are zero matrices; so $D_{ii} = 1$ for $i \le r$ and $D_{ij} = 0$ otherwise. The corollary is:

Corollary. Let A be an $m \times n$ matrix. Then $\mathrm{rank}(A^t) = \mathrm{rank}(A)$.

Proof. A consequence of Theorem 3.6 is that there exist invertible matrices (products of elementary matrices) B and C such that $D = BAC$, for D as in Theorem 3.6. Now $D^t = (BAC)^t = C^t A^t B^t$, and since B and C are invertible, so are their transposes (by homework exercise 5 of Section 2.4). Thus $\mathrm{rank}(A^t) = \mathrm{rank}(C^t A^t B^t) = \mathrm{rank}(D^t)$.

But $D^t$ has the same rank as D, and rank(D) = rank(A).

Some corollaries of the theorem that a determinant can be computed by cofactor expansion along any row:

1. If $A \in M_{n \times n}(F)$ has two identical rows, then det(A) = 0.

Proof. By induction. The $2 \times 2$ case is clear. Otherwise, $n \ge 3$; suppose row r of A is the same as row s of A, $r \ne s$. Then we can take the determinant of A by cofactor expansion along a row that is not row r or row s, say row i:

$$\det(A) = \sum_{j=1}^{n} (-1)^{i+j} A_{ij}\det(\tilde{A}_{ij}),$$

and by the induction hypothesis, $\det(\tilde{A}_{ij}) = 0$ for each $j \in \{1, 2, \dots, n\}$, since each $\tilde{A}_{ij}$ still has two identical rows.

2. If A is a triangular matrix, then det(A) is the product of the diagonal entries.

Proof (really a sketch of the proof). Suppose A is upper triangular. Use induction and evaluate the determinant by successively cofactor expanding first on the last row, then the second-to-last row, and so on.

Theorem. If $A \in M_{n \times n}(F)$ and B is a matrix obtained from A by interchanging two rows of A, then det(B) = −det(A).

Proof. Let $A \in M_{n \times n}(F)$ have rows $a_1, a_2, \dots, a_n$, and suppose $B \in M_{n \times n}(F)$ is A but with rows r and s switched (say r < s). Then

$$A = \begin{pmatrix} a_1 \\ \vdots \\ a_r \\ \vdots \\ a_s \\ \vdots \\ a_n \end{pmatrix}, \qquad B = \begin{pmatrix} a_1 \\ \vdots \\ a_s \\ \vdots \\ a_r \\ \vdots \\ a_n \end{pmatrix}.$$

Now, using corollary 1 above (two identical rows force a zero determinant) together with linearity in each row (Theorem 4.3):

$$0 = \det\begin{pmatrix} a_1 \\ \vdots \\ a_r + a_s \\ \vdots \\ a_r + a_s \\ \vdots \\ a_n \end{pmatrix} = \det\begin{pmatrix} a_1 \\ \vdots \\ a_r \\ \vdots \\ a_r + a_s \\ \vdots \\ a_n \end{pmatrix} + \det\begin{pmatrix} a_1 \\ \vdots \\ a_s \\ \vdots \\ a_r + a_s \\ \vdots \\ a_n \end{pmatrix} = \det\begin{pmatrix} a_1 \\ \vdots \\ a_r \\ \vdots \\ a_r \\ \vdots \\ a_n \end{pmatrix} + \det\begin{pmatrix} a_1 \\ \vdots \\ a_r \\ \vdots \\ a_s \\ \vdots \\ a_n \end{pmatrix} + \det\begin{pmatrix} a_1 \\ \vdots \\ a_s \\ \vdots \\ a_r \\ \vdots \\ a_n \end{pmatrix} + \det\begin{pmatrix} a_1 \\ \vdots \\ a_s \\ \vdots \\ a_s \\ \vdots \\ a_n \end{pmatrix} = 0 + \det(A) + \det(B) + 0,$$

so det(B) = −det(A).

Very similarly we can prove:

Theorem. If $A \in M_{n \times n}(F)$ and B is obtained by adding a multiple of one row of A to another row of A, then det(B) = det(A).

Proof. Using the same notation as before, with row r of B equal to $a_r + ka_s$:

$$\det(B) = \det\begin{pmatrix} a_1 \\ \vdots \\ a_r + ka_s \\ \vdots \\ a_s \\ \vdots \\ a_n \end{pmatrix} = \det\begin{pmatrix} a_1 \\ \vdots \\ a_r \\ \vdots \\ a_s \\ \vdots \\ a_n \end{pmatrix} + k\det\begin{pmatrix} a_1 \\ \vdots \\ a_s \\ \vdots \\ a_s \\ \vdots \\ a_n \end{pmatrix},$$

29 that is, det(b) = det(a) + = det(a) Corollary If A M n n (F ) has rank less than n, then det(a) = Proof If A has rank less than n, then the rows of A are linearly dependent because the columns of A t are linearly dependent, and these columns as column vectors span R(L A ) So we may write one row as a linear combination of the others Using linearity of the determinant on this row, we can write det(a) as a sum of scalar multiples of determinants of matrices each of which has at least one row duplicated (say row i equals row j) These determinants are all zero thus det(a) is zero Finally we can prove: Theorem (47) For any A, B M n n (F ), det(ab) = det(a) det(b) Proof Firstly we establish the result for elementary matrices For example, if A is obtained by multiplying the r-th row of I by the nonzero scalar k, then A is diagonal with all diagonal entries one expect for the entry in the r-th row, so det(a) = k det(i) If AB is the matrix obtained by multiplying the r-th row of B by k, then det(ab) = k det(b) Similarly one shows the result for type 1 or type 3 matrices If A is an n n matrix with rank less than n, then det(a) = So the result holds in this case In the remaing case, A has rank n and so is invertible and hence is a product of elementary matrices Therefore, A = E m E 2 E 1 and thus det(ab) = det(e m E 2 E 1 B) = det(e m ) det(e m 1 E 2 E 1 B) = det(e m ) det(e m 1 ) det(e m 2 E 2 E 1 B) = det(e m ) det(e m 1 ) det(e 1 ) det(b) = det(e m E 2 E 1 ) det(b) = det(a) det(b) Corollary A matrix A M n n (F ) is invertible if and only if det(a) Furthermore, if A is invertible, then det(a 1 1 ) = det(a) Proof ) We show the contrapositive: If A is not invertible, then the rank of A is less than n, so det(a) = ) On the other hand, if A is invertible, and det(a) det(a) det(a 1 ) = det(aa 1 ) = det(i) = 1 det(a 1 ) = 1 det(a) Page 29

Theorem (4.8). For any $A \in M_{n \times n}(F)$, $\det(A^t) = \det(A)$.

Proof. If A is not invertible, then rank(A) < n; but rank($A^t$) = rank(A), so $A^t$ is not invertible either, and both determinants are zero. On the other hand, if A is invertible, it is a product of elementary matrices. Each elementary matrix E satisfies $\det(E) = \det(E^t)$ (you can check this). So $A = E_m \cdots E_2 E_1$, and thus

$$\det(A^t) = \det(E_1^t E_2^t \cdots E_m^t) = \det(E_1^t)\det(E_2^t)\cdots\det(E_m^t) = \det(E_1)\det(E_2)\cdots\det(E_m) = \det(E_m E_{m-1} \cdots E_1) = \det(A).$$

Back to Chapter 5.

Notice that $A = \begin{pmatrix} 5 & 0 \\ 4 & 1 \end{pmatrix}$ has eigenvector $x = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ with corresponding eigenvalue $\lambda = 5$, because $Ax = \begin{pmatrix} 5 \\ 5 \end{pmatrix} = \lambda x$. However, a matrix (or linear operator) need not have any eigenvalues: the matrix $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ is $[T]_\beta$ in the standard ordered basis, where T is rotation by the angle $\frac{\pi}{2}$; thus neither T nor A has any real eigenvalues.

Theorem (5.2). Let $A \in M_{n \times n}(F)$. Then a scalar $\lambda$ is an eigenvalue of A if and only if $\det(A - \lambda I_n) = 0$.

Proof. If $\lambda$ is an eigenvalue of A, then for some nonzero vector $x \in F^n$, $Ax = \lambda x$, i.e., $(A - \lambda I_n)x = 0$. Thus $A - \lambda I_n$ is not invertible, so $\det(A - \lambda I_n) = 0$. On the other hand, if $\det(A - \lambda I_n) = 0$, then $A - \lambda I_n$ is not invertible, so there exists a nonzero vector $x \in N(A - \lambda I_n)$, i.e., $\lambda$ is an eigenvalue of A.

Definition. Let $A \in M_{n \times n}(F)$. The polynomial $f(t) = \det(A - tI_n)$ is called the characteristic polynomial of A.

Looking back at $A = \begin{pmatrix} 5 & 0 \\ 4 & 1 \end{pmatrix}$, we see that the characteristic polynomial of A is

$$\det(A - tI) = \det\begin{pmatrix} 5 - t & 0 \\ 4 & 1 - t \end{pmatrix} = t^2 - 6t + 5 = (t - 5)(t - 1).$$

It turns out (Exercise 12) that similar matrices have the same characteristic polynomial. For this reason we define characteristic polynomials for linear operators as follows:

Definition. Let T be a linear operator on an n-dimensional vector space V with ordered basis $\beta$. We define the characteristic polynomial f(t) of T to be the characteristic polynomial of $[T]_\beta$.

Using induction we can easily prove:

Theorem (5.3). Let $A \in M_{n \times n}(F)$. Then:

a) The characteristic polynomial of A is a polynomial of degree n with leading coefficient $(-1)^n$.
b) A has at most n distinct eigenvalues.

The next theorem tells us how to find the eigenvector(s) corresponding to an eigenvalue:

Theorem (5.4). Let T be a linear operator on a vector space V, and let $\lambda$ be an eigenvalue of T. A vector $v \in V$ is an eigenvector of T corresponding to $\lambda$ if and only if $v \ne 0$ and $v \in N(T - \lambda I)$.

Proof. Exercise.

Recall the commutative diagram for Theorem 2.21:

       V   ----T---->   W
       |                |
    phi_beta        phi_gamma
       v                v
      F^n  ---L_A--->  F^m

In the special case where V = W and $\beta = \gamma$, we get:

       V   ----T---->   V
       |                |
    phi_beta        phi_beta
       v                v
      F^n  ---L_A--->  F^n

Now suppose V is a finite-dimensional vector space and $T : V \to V$ is a linear operator with eigenvector $v \in V$ and corresponding eigenvalue $\lambda$; in other words, assume $T(v) = \lambda v$. Choose an ordered basis $\beta$ for V and let $A = [T]_\beta$. Then using the diagram immediately above, we see

$$A\phi_\beta(v) = L_A(\phi_\beta(v)) = \phi_\beta(T(v)) = \phi_\beta(\lambda v) = \lambda\phi_\beta(v).$$

That is, the matrix A has eigenvector $\phi_\beta(v)$ with corresponding eigenvalue $\lambda$. Since the $\phi_\beta$'s are isomorphisms, we can reverse the argument and show that if $\phi_\beta(v)$ is an eigenvector of the matrix A with corresponding eigenvalue $\lambda$, then v is an eigenvector of T with corresponding eigenvalue $\lambda$. The point is, we can find the eigenvalues and eigenvectors of T by finding the eigenvalues and eigenvectors of $A = [T]_\beta$, for any (ordered) basis $\beta$ of V.

Example. Let F = R and let A be a $3 \times 3$ real matrix (the entries of A are illegible in the transcription). Let's find the eigenvalues (if any) of A, and the corresponding eigenvectors. To find the eigenvalues, we find the characteristic polynomial of A and then find its zeroes, which are the eigenvalues of A:

$$\det(A - \lambda I) = -(\lambda - 3)(\lambda - 2)(\lambda - 1) = -\lambda^3 + 6\lambda^2 - 11\lambda + 6,$$

so the eigenvalues are $\lambda = 1, 2, 3$. Now let's find an eigenvector corresponding to $\lambda = 1$. To accomplish this we find the kernel of $L_{A - 1I}$: we take the rref of $A - 1I$ (*), and the resulting matrix tells us that if $(a, b, c)^t \in N(L_{A - 1I})$ then a = b = c. Accordingly, any nonzero vector in $\mathrm{span}\{(1, 1, 1)^t\}$ is an eigenvector of A corresponding to the eigenvalue $\lambda = 1$.

Similarly, we can find a vector that is a basis for the kernel of A − 3I and a vector that is a basis for the kernel of A − 2I (those vectors are likewise illegible in the transcription). These three eigenvectors form a basis $\beta$ for $R^3$, so $L_A$ is diagonalizable, and in that ordered basis

$$[L_A]_\beta = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}.$$

Note (*): rref-ing A − 1I is applying a finite sequence of row operations to A − 1I, i.e., multiplying A − 1I on the left by a product of elementary matrices, which is a product of invertible matrices, i.e., an invertible matrix; call it E. In other words, if rref(A − 1I) = B, then B = E(A − 1I), so if (A − 1I)x = 0 then E(A − 1I)x = 0, and on the other hand, if E(A − 1I)x = 0, we have $(A - 1I)x = E^{-1}0 = 0$. The point is, A − 1I and its rref have the same kernel as left-multiplication transformations, so we are justified in this procedure.
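The same procedure can be run numerically; since the entries of the notes' matrix were lost, the A below is a stand-in of our own choosing with the same eigenvalues 1, 2, 3 (a sketch, not the notes' example):

```python
# Diagonalizing a stand-in matrix with eigenvalues 1, 2, 3. numpy's eig
# finds the roots of det(A - lambda I) and a matching eigenvector for each.
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                           # 1., 2., 3. (in some order)
Q = eigenvectors                              # columns are the eigenvectors
D = np.linalg.inv(Q) @ A @ Q
print(np.allclose(D, np.diag(eigenvalues)))   # True: Q^{-1} A Q is diagonal
```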

Section 5.2: Diagonalizability

Theorem (5.5). Let T be a linear operator on a vector space V, and let $\lambda_1, \lambda_2, \dots, \lambda_k$ be distinct eigenvalues of T. If $v_1, v_2, \dots, v_k$ are eigenvectors of T such that $\lambda_i$ corresponds to $v_i$ ($1 \le i \le k$), then $\{v_1, v_2, \dots, v_k\}$ is linearly independent.

Proof. We use induction on k. If k = 1, $\{v_1\}$ is a set of one nonzero vector and is therefore linearly independent. Now assume the theorem holds for k − 1 distinct eigenvalues, where $k - 1 \ge 1$, and suppose we have k distinct eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_k$ with corresponding eigenvectors $v_1, v_2, \dots, v_k$. We argue by contradiction: assume that $\{v_1, v_2, \dots, v_k\}$ is linearly dependent. By the inductive assumption $\{v_1, \dots, v_{k-1}\}$ is linearly independent, so we can write $v_k$ as a linear combination of $v_1, v_2, \dots, v_{k-1}$, say $v_k = \sum_{i=1}^{k-1} a_i v_i$. Then

$$\lambda_k v_k = T(v_k) = T\left(\sum_{i=1}^{k-1} a_i v_i\right) = \sum_{i=1}^{k-1} a_i \lambda_i v_i,$$

and also $\lambda_k v_k = \lambda_k \sum_{i=1}^{k-1} a_i v_i$, which gives

$$\sum_{i=1}^{k-1} a_i(\lambda_i - \lambda_k)v_i = 0.$$

Since the $v_i$'s are linearly independent for $1 \le i \le k - 1$, this is only possible if $a_i(\lambda_i - \lambda_k) = 0$ for $i = 1, 2, \dots, k - 1$. The $\lambda$'s are all distinct by hypothesis, so we conclude that the $a_i$'s are all zero, which says that $v_k$ is zero, contradicting the assumption that $v_k$ is an eigenvector. We conclude that the set $\{v_1, v_2, \dots, v_k\}$ is linearly independent.

Corollary. Let T be a linear operator on an n-dimensional vector space V. If T has n distinct eigenvalues, then T is diagonalizable.

This "if" is not an "if and only if". For example, any diagonal matrix with a repeated entry on the diagonal is diagonalizable but does not have distinct eigenvalues; the most concrete such example is the identity matrix.

Definition. A polynomial f(t) in P(F) splits over F if there are scalars $c, a_1, a_2, \dots, a_n$ such that

$$f(t) = c(t - a_1)(t - a_2)\cdots(t - a_n).$$

So, for example, $t^2 - 2$ splits over R but not over Q, the field of rationals. Similarly, $t^2 + 1$ splits over C but not over R. If f(t) is a characteristic polynomial of a linear operator or matrix over a field F, the statement "f(t) splits" is understood to mean "f(t) splits over F".

Theorem (5.6). The characteristic polynomial of any diagonalizable linear operator splits.

Proof. If T is a diagonalizable linear operator on an n-dimensional vector space V, then there exists a basis $\beta = \{v_1, v_2, \dots, v_n\}$ of V composed of eigenvectors, with corresponding eigenvalues $\lambda_1, \lambda_2, \dots, \lambda_n$. Then

$$[T]_\beta = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix} \quad \text{and} \quad [T]_\beta - tI = \begin{pmatrix} \lambda_1 - t & & \\ & \ddots & \\ & & \lambda_n - t \end{pmatrix},$$

so

$$\det([T]_\beta - tI) = (\lambda_1 - t)(\lambda_2 - t)\cdots(\lambda_n - t) = (-1)^n(t - \lambda_1)(t - \lambda_2)\cdots(t - \lambda_n).$$

On the other hand, if the characteristic polynomial of T splits, T need not be diagonalizable.

Definition. Let $\lambda$ be an eigenvalue of a linear operator or matrix with characteristic polynomial f(t). The (algebraic) multiplicity of $\lambda$ is the largest positive integer k such that $(t - \lambda)^k$ is a factor of f(t).

For example, if A is a $3 \times 3$ matrix (its entries are illegible in the transcription) with characteristic polynomial $f(t) = -(t - 1)(t - 3)^2$, then $\lambda = 1$ is an eigenvalue of A with multiplicity 1 and $\lambda = 3$ is an eigenvalue of A with multiplicity 2.


Inner Product Spaces and Orthogonality Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

More information

Orthogonal Diagonalization of Symmetric Matrices

Orthogonal Diagonalization of Symmetric Matrices MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding

More information

The Determinant: a Means to Calculate Volume

The Determinant: a Means to Calculate Volume The Determinant: a Means to Calculate Volume Bo Peng August 20, 2007 Abstract This paper gives a definition of the determinant and lists many of its well-known properties Volumes of parallelepipeds are

More information

MAT 200, Midterm Exam Solution. a. (5 points) Compute the determinant of the matrix A =

MAT 200, Midterm Exam Solution. a. (5 points) Compute the determinant of the matrix A = MAT 200, Midterm Exam Solution. (0 points total) a. (5 points) Compute the determinant of the matrix 2 2 0 A = 0 3 0 3 0 Answer: det A = 3. The most efficient way is to develop the determinant along the

More information

4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

More information

Methods for Finding Bases

Methods for Finding Bases Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,

More information

1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES 1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

More information

3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices.

3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices. Exercise 1 1. Let A be an n n orthogonal matrix. Then prove that (a) the rows of A form an orthonormal basis of R n. (b) the columns of A form an orthonormal basis of R n. (c) for any two vectors x,y R

More information

MAT188H1S Lec0101 Burbulla

MAT188H1S Lec0101 Burbulla Winter 206 Linear Transformations A linear transformation T : R m R n is a function that takes vectors in R m to vectors in R n such that and T (u + v) T (u) + T (v) T (k v) k T (v), for all vectors u

More information

Recall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.

Recall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n. ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the n-dimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?

More information

Lecture 4: Partitioned Matrices and Determinants

Lecture 4: Partitioned Matrices and Determinants Lecture 4: Partitioned Matrices and Determinants 1 Elementary row operations Recall the elementary operations on the rows of a matrix, equivalent to premultiplying by an elementary matrix E: (1) multiplying

More information

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1. MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

More information

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.

13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions. 3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms

More information

Applied Linear Algebra I Review page 1

Applied Linear Algebra I Review page 1 Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties

More information

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007)

Linear Maps. Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) MAT067 University of California, Davis Winter 2007 Linear Maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (February 5, 2007) As we have discussed in the lecture on What is Linear Algebra? one of

More information

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

More information

Solutions to Math 51 First Exam January 29, 2015

Solutions to Math 51 First Exam January 29, 2015 Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

MATH 551 - APPLIED MATRIX THEORY

MATH 551 - APPLIED MATRIX THEORY MATH 55 - APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points

More information

Section 5.3. Section 5.3. u m ] l jj. = l jj u j + + l mj u m. v j = [ u 1 u j. l mj

Section 5.3. Section 5.3. u m ] l jj. = l jj u j + + l mj u m. v j = [ u 1 u j. l mj Section 5. l j v j = [ u u j u m ] l jj = l jj u j + + l mj u m. l mj Section 5. 5.. Not orthogonal, the column vectors fail to be perpendicular to each other. 5..2 his matrix is orthogonal. Check that

More information

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in

More information

Solution to Homework 2

Solution to Homework 2 Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

More information

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.

a 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given

More information

Chapter 6. Linear Transformation. 6.1 Intro. to Linear Transformation

Chapter 6. Linear Transformation. 6.1 Intro. to Linear Transformation Chapter 6 Linear Transformation 6 Intro to Linear Transformation Homework: Textbook, 6 Ex, 5, 9,, 5,, 7, 9,5, 55, 57, 6(a,b), 6; page 7- In this section, we discuss linear transformations 89 9 CHAPTER

More information

Solving Linear Systems, Continued and The Inverse of a Matrix

Solving Linear Systems, Continued and The Inverse of a Matrix , Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

More information

LEARNING OBJECTIVES FOR THIS CHAPTER

LEARNING OBJECTIVES FOR THIS CHAPTER CHAPTER 2 American mathematician Paul Halmos (1916 2006), who in 1942 published the first modern linear algebra book. The title of Halmos s book was the same as the title of this chapter. Finite-Dimensional

More information

The Characteristic Polynomial

The Characteristic Polynomial Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem

More information

ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

More information

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1

1 2 3 1 1 2 x = + x 2 + x 4 1 0 1 (d) If the vector b is the sum of the four columns of A, write down the complete solution to Ax = b. 1 2 3 1 1 2 x = + x 2 + x 4 1 0 0 1 0 1 2. (11 points) This problem finds the curve y = C + D 2 t which

More information

Unit 18 Determinants

Unit 18 Determinants Unit 18 Determinants Every square matrix has a number associated with it, called its determinant. In this section, we determine how to calculate this number, and also look at some of the properties of

More information

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

More information

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.

MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column

More information

LS.6 Solution Matrices

LS.6 Solution Matrices LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

More information

Lecture L3 - Vectors, Matrices and Coordinate Transformations

Lecture L3 - Vectors, Matrices and Coordinate Transformations S. Widnall 16.07 Dynamics Fall 2009 Lecture notes based on J. Peraire Version 2.0 Lecture L3 - Vectors, Matrices and Coordinate Transformations By using vectors and defining appropriate operations between

More information

Brief Introduction to Vectors and Matrices

Brief Introduction to Vectors and Matrices CHAPTER 1 Brief Introduction to Vectors and Matrices In this chapter, we will discuss some needed concepts found in introductory course in linear algebra. We will introduce matrix, vector, vector-valued

More information

MAT 242 Test 2 SOLUTIONS, FORM T

MAT 242 Test 2 SOLUTIONS, FORM T MAT 242 Test 2 SOLUTIONS, FORM T 5 3 5 3 3 3 3. Let v =, v 5 2 =, v 3 =, and v 5 4 =. 3 3 7 3 a. [ points] The set { v, v 2, v 3, v 4 } is linearly dependent. Find a nontrivial linear combination of these

More information

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.

x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0. Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability

More information

1 Homework 1. [p 0 q i+j +... + p i 1 q j+1 ] + [p i q j ] + [p i+1 q j 1 +... + p i+j q 0 ]

1 Homework 1. [p 0 q i+j +... + p i 1 q j+1 ] + [p i q j ] + [p i+1 q j 1 +... + p i+j q 0 ] 1 Homework 1 (1) Prove the ideal (3,x) is a maximal ideal in Z[x]. SOLUTION: Suppose we expand this ideal by including another generator polynomial, P / (3, x). Write P = n + x Q with n an integer not

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

DETERMINANTS TERRY A. LORING

DETERMINANTS TERRY A. LORING DETERMINANTS TERRY A. LORING 1. Determinants: a Row Operation By-Product The determinant is best understood in terms of row operations, in my opinion. Most books start by defining the determinant via formulas

More information

Chapter 6. Orthogonality

Chapter 6. Orthogonality 6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be

More information

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES

I. GROUPS: BASIC DEFINITIONS AND EXAMPLES I GROUPS: BASIC DEFINITIONS AND EXAMPLES Definition 1: An operation on a set G is a function : G G G Definition 2: A group is a set G which is equipped with an operation and a special element e G, called

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

Notes on Linear Algebra. Peter J. Cameron

Notes on Linear Algebra. Peter J. Cameron Notes on Linear Algebra Peter J. Cameron ii Preface Linear algebra has two aspects. Abstractly, it is the study of vector spaces over fields, and their linear maps and bilinear forms. Concretely, it is

More information

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued). MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0

More information

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University

Linear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone

More information

University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7. Review University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

More information

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

More information

Lecture 2 Matrix Operations

Lecture 2 Matrix Operations Lecture 2 Matrix Operations transpose, sum & difference, scalar multiplication matrix multiplication, matrix-vector product matrix inverse 2 1 Matrix transpose transpose of m n matrix A, denoted A T or

More information

1 Determinants and the Solvability of Linear Systems

1 Determinants and the Solvability of Linear Systems 1 Determinants and the Solvability of Linear Systems In the last section we learned how to use Gaussian elimination to solve linear systems of n equations in n unknowns The section completely side-stepped

More information

Factorization Theorems

Factorization Theorems Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization

More information

Lecture Notes 2: Matrices as Systems of Linear Equations

Lecture Notes 2: Matrices as Systems of Linear Equations 2: Matrices as Systems of Linear Equations 33A Linear Algebra, Puck Rombach Last updated: April 13, 2016 Systems of Linear Equations Systems of linear equations can represent many things You have probably

More information

Using row reduction to calculate the inverse and the determinant of a square matrix

Using row reduction to calculate the inverse and the determinant of a square matrix Using row reduction to calculate the inverse and the determinant of a square matrix Notes for MATH 0290 Honors by Prof. Anna Vainchtein 1 Inverse of a square matrix An n n square matrix A is called invertible

More information

Similar matrices and Jordan form

Similar matrices and Jordan form Similar matrices and Jordan form We ve nearly covered the entire heart of linear algebra once we ve finished singular value decompositions we ll have seen all the most central topics. A T A is positive

More information

Unified Lecture # 4 Vectors

Unified Lecture # 4 Vectors Fall 2005 Unified Lecture # 4 Vectors These notes were written by J. Peraire as a review of vectors for Dynamics 16.07. They have been adapted for Unified Engineering by R. Radovitzky. References [1] Feynmann,

More information

Mathematics Review for MS Finance Students

Mathematics Review for MS Finance Students Mathematics Review for MS Finance Students Anthony M. Marino Department of Finance and Business Economics Marshall School of Business Lecture 1: Introductory Material Sets The Real Number System Functions,

More information

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form Section 1.3 Matrix Products A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form (scalar #1)(quantity #1) + (scalar #2)(quantity #2) +...

More information

MATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3

MATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3 MATH0 Notebook Fall Semester 06/07 prepared by Professor Jenny Baglivo c Copyright 009 07 by Jenny A. Baglivo. All Rights Reserved. Contents MATH0 Notebook 3. Solving Systems of Linear Equations........................

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary

More information

α = u v. In other words, Orthogonal Projection

α = u v. In other words, Orthogonal Projection Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

More information

PYTHAGOREAN TRIPLES KEITH CONRAD

PYTHAGOREAN TRIPLES KEITH CONRAD PYTHAGOREAN TRIPLES KEITH CONRAD 1. Introduction A Pythagorean triple is a triple of positive integers (a, b, c) where a + b = c. Examples include (3, 4, 5), (5, 1, 13), and (8, 15, 17). Below is an ancient

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length

More information

Finite dimensional C -algebras

Finite dimensional C -algebras Finite dimensional C -algebras S. Sundar September 14, 2012 Throughout H, K stand for finite dimensional Hilbert spaces. 1 Spectral theorem for self-adjoint opertors Let A B(H) and let {ξ 1, ξ 2,, ξ n

More information

MATH PROBLEMS, WITH SOLUTIONS

MATH PROBLEMS, WITH SOLUTIONS MATH PROBLEMS, WITH SOLUTIONS OVIDIU MUNTEANU These are free online notes that I wrote to assist students that wish to test their math skills with some problems that go beyond the usual curriculum. These

More information

Linearly Independent Sets and Linearly Dependent Sets

Linearly Independent Sets and Linearly Dependent Sets These notes closely follow the presentation of the material given in David C. Lay s textbook Linear Algebra and its Applications (3rd edition). These notes are intended primarily for in-class presentation

More information

Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

More information

Chapter 19. General Matrices. An n m matrix is an array. a 11 a 12 a 1m a 21 a 22 a 2m A = a n1 a n2 a nm. The matrix A has n row vectors

Chapter 19. General Matrices. An n m matrix is an array. a 11 a 12 a 1m a 21 a 22 a 2m A = a n1 a n2 a nm. The matrix A has n row vectors Chapter 9. General Matrices An n m matrix is an array a a a m a a a m... = [a ij]. a n a n a nm The matrix A has n row vectors and m column vectors row i (A) = [a i, a i,..., a im ] R m a j a j a nj col

More information

Lecture 1: Schur s Unitary Triangularization Theorem

Lecture 1: Schur s Unitary Triangularization Theorem Lecture 1: Schur s Unitary Triangularization Theorem This lecture introduces the notion of unitary equivalence and presents Schur s theorem and some of its consequences It roughly corresponds to Sections

More information

SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89. by Joseph Collison

SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89. by Joseph Collison SYSTEMS OF EQUATIONS AND MATRICES WITH THE TI-89 by Joseph Collison Copyright 2000 by Joseph Collison All rights reserved Reproduction or translation of any part of this work beyond that permitted by Sections

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 6 Eigenvalues and Eigenvectors 6. Introduction to Eigenvalues Linear equations Ax D b come from steady state problems. Eigenvalues have their greatest importance in dynamic problems. The solution

More information

Vector Spaces 4.4 Spanning and Independence

Vector Spaces 4.4 Spanning and Independence Vector Spaces 4.4 and Independence October 18 Goals Discuss two important basic concepts: Define linear combination of vectors. Define Span(S) of a set S of vectors. Define linear Independence of a set

More information