Matrix Representations of Linear Transformations and Changes of Coordinates


0.1 Subspaces and Bases

0.1.1 Definitions

A subspace $V$ of $\mathbb{R}^n$ is a subset of $\mathbb{R}^n$ that contains the zero element and is closed under addition and scalar multiplication:

(1) $0 \in V$
(2) $u, v \in V \implies u + v \in V$
(3) $u \in V$ and $k \in \mathbb{R} \implies ku \in V$

Equivalently, $V$ is a subspace if $au + bv \in V$ for all $a, b \in \mathbb{R}$ and all $u, v \in V$. (You should try to prove that this is equivalent to the first statement.)

Example 0.1. Let $V = \{(t, 3t, 2t) \mid t \in \mathbb{R}\}$. Then $V$ is a subspace of $\mathbb{R}^3$:

(1) $0 \in V$, because we can take $t = 0$.
(2) If $u, v \in V$, then $u = (s, 3s, 2s)$ and $v = (t, 3t, 2t)$ for some real numbers $s$ and $t$. But then
$$u + v = (s + t,\ 3s + 3t,\ 2s + 2t) = (s + t,\ 3(s + t),\ 2(s + t)) = (t', 3t', 2t') \in V, \quad \text{where } t' = s + t \in \mathbb{R}.$$
(3) If $u \in V$, then $u = (t, 3t, 2t)$ for some $t \in \mathbb{R}$, so if $k \in \mathbb{R}$, then
$$ku = (kt,\ 3(kt),\ 2(kt)) = (t', 3t', 2t') \in V, \quad \text{where } t' = kt \in \mathbb{R}.$$

Example 0.2. The unit circle $S^1$ in $\mathbb{R}^2$ is not a subspace: it doesn't contain $0 = (0, 0)$, and, for example, $(1, 0)$ and $(0, 1)$ lie in $S^1$ but $(1, 0) + (0, 1) = (1, 1)$ does not. Similarly, $(1, 0)$ lies in $S^1$ but $2(1, 0) = (2, 0)$ does not.

A linear combination of vectors $v_1, \dots, v_k \in \mathbb{R}^n$ is the finite sum
$$a_1 v_1 + \cdots + a_k v_k \tag{0.1}$$
which is a vector in $\mathbb{R}^n$ (because $\mathbb{R}^n$ is a subspace of itself, right?). The $a_i \in \mathbb{R}$ are called the coefficients of the linear combination. If $a_1 = \cdots = a_k = 0$, then the linear combination is said to be trivial. In particular, considering the special case of the zero vector $0 \in \mathbb{R}^n$, we note that $0$ may always be represented as a linear combination of any vectors $u_1, \dots, u_k \in \mathbb{R}^n$:
$$0 u_1 + \cdots + 0 u_k = 0.$$
This representation is called the trivial representation of $0$ by $u_1, \dots, u_k$. If, on the other hand, there are vectors $u_1, \dots, u_k \in \mathbb{R}^n$ and scalars $a_1, \dots, a_k \in \mathbb{R}$, with at least one $a_i \neq 0$, such that
$$a_1 u_1 + \cdots + a_k u_k = 0,$$
then that linear combination is called a nontrivial representation of $0$.
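Before moving on, here is a quick symbolic check of Example 0.1 (our sketch, not part of the original notes; the helper name `in_V` is ours):

```python
import sympy as sp

s, t, k = sp.symbols('s t k', real=True)

# Membership test for V = {(t', 3t', 2t')}: the second and third
# entries must be 3 and 2 times the first, respectively.
def in_V(w):
    return sp.simplify(w[1] - 3*w[0]) == 0 and sp.simplify(w[2] - 2*w[0]) == 0

u = sp.Matrix([s, 3*s, 2*s])   # a generic element of V
v = sp.Matrix([t, 3*t, 2*t])   # another generic element of V

print(in_V(sp.Matrix([0, 0, 0])))  # True: 0 lies in V
print(in_V(u + v))                 # True: closed under addition
print(in_V(k * u))                 # True: closed under scalar multiplication
```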

Using linear combinations we can generate subspaces, as follows. If $S$ is a nonempty subset of $\mathbb{R}^n$, then the span of $S$ is given by
$$\operatorname{span}(S) := \{v \in \mathbb{R}^n \mid v \text{ is a linear combination of vectors in } S\}. \tag{0.2}$$
The span of the empty set $\emptyset$ is by definition
$$\operatorname{span}(\emptyset) := \{0\}. \tag{0.3}$$

Remark 0.3. We showed in class that $\operatorname{span}(S)$ is always a subspace of $\mathbb{R}^n$ (well, we showed this for $S$ a finite collection of vectors $S = \{u_1, \dots, u_k\}$, but you should check that it's true for any $S$).

Let $V := \operatorname{span}(S)$ be the subspace of $\mathbb{R}^n$ spanned by some $S \subseteq \mathbb{R}^n$. Then $S$ is said to generate or span $V$, and to be a generating or spanning set for $V$. If $V$ is already known to be a subspace, then finding a spanning set $S$ for $V$ can be useful, because it is often easier to work with the smaller spanning set than with the entire subspace $V$, for example if we are trying to understand the behavior of linear transformations on $V$.

Example 0.4. Let $S$ be the unit circle in $\mathbb{R}^3$ which lies in the $x$-$y$ plane. Then $\operatorname{span}(S)$ is the entire $x$-$y$ plane.

Example 0.5. Let $S = \{(x, y, z) \in \mathbb{R}^3 \mid x = y = 0,\ 1 < z < 3\}$. Then $\operatorname{span}(S)$ is the $z$-axis.

A nonempty subset $S$ of a vector space $\mathbb{R}^n$ is said to be linearly independent if, taking any finite number of distinct vectors $u_1, \dots, u_k \in S$, we have for all $a_1, \dots, a_k \in \mathbb{R}$ that
$$a_1 u_1 + a_2 u_2 + \cdots + a_k u_k = 0 \implies a_1 = \cdots = a_k = 0.$$
That is, $S$ is linearly independent if the only representation of $0 \in \mathbb{R}^n$ by vectors in $S$ is the trivial one. In this case, the vectors $u_1, \dots, u_k$ themselves are also said to be linearly independent. Otherwise, if there is at least one nontrivial representation of $0$ by vectors in $S$, then $S$ is said to be linearly dependent.

Example 0.6. The vectors $u = (1, 2)$ and $v = (0, -1)$ in $\mathbb{R}^2$ are linearly independent, because if
$$au + bv = 0, \quad \text{that is,} \quad a(1, 2) + b(0, -1) = (0, 0),$$
then $(a, 2a - b) = (0, 0)$, which gives a system of equations:
$$\begin{aligned} a &= 0 \\ 2a - b &= 0 \end{aligned} \qquad \text{or} \qquad \begin{bmatrix} 1 & 0 \\ 2 & -1 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$
But the matrix $\begin{bmatrix} 1 & 0 \\ 2 & -1 \end{bmatrix}$ is invertible, in fact it is its own inverse, so that left-multiplying both sides by it gives
$$\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 2 & -1 \end{bmatrix} \begin{bmatrix} 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$
which means $a = b = 0$.
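A minimal numerical check of Example 0.6 (our sketch, not part of the notes): the columns of a matrix are linearly independent exactly when the homogeneous system has only the trivial solution, which we can test via the rank.

```python
import numpy as np

# Columns are u = (1, 2) and v = (0, -1) from Example 0.6.
A = np.array([[1,  0],
              [2, -1]])

# The columns are linearly independent iff Ax = 0 has only the trivial
# solution, i.e. iff rank(A) equals the number of columns.
print(np.linalg.matrix_rank(A) == A.shape[1])  # True
```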

Example 0.7. The vectors $(1, 2, 3), (4, 5, 6), (7, 8, 9) \in \mathbb{R}^3$ are not linearly independent, because
$$1(1, 2, 3) - 2(4, 5, 6) + 1(7, 8, 9) = (0, 0, 0).$$
That is, we have found $a = 1$, $b = -2$ and $c = 1$, not all of which are zero, such that $a(1, 2, 3) + b(4, 5, 6) + c(7, 8, 9) = (0, 0, 0)$.

Given $S \subseteq V$, a nonzero vector $v \in S$ is said to be an essentially unique linear combination of the vectors in $S$ if, up to order of terms, there is one and only one way to express $v$ as a linear combination of vectors in $S$. That is, if there are $a_1, \dots, a_k, b_1, \dots, b_l \in \mathbb{R} \setminus \{0\}$, distinct $u_1, \dots, u_k \in S$ and distinct $v_1, \dots, v_l \in S$ such that
$$v = a_1 u_1 + \cdots + a_k u_k = b_1 v_1 + \cdots + b_l v_l,$$
then, re-indexing the $b_i$'s if necessary, $k = l$, and $a_i = b_i$ and $u_i = v_i$ for all $i = 1, \dots, k$.

If $V$ is a subspace of $\mathbb{R}^n$, then a subset $\beta$ of $V$ is called a basis for $V$ if it is linearly independent and spans $V$. We also say that the vectors of $\beta$ form a basis for $V$. Equivalently, as explained in Theorem 0.11 below, $\beta$ is a basis if every nonzero vector $v \in V$ is an essentially unique linear combination of vectors in $\beta$.

Remark 0.8. In the context of inner product spaces $V$ of infinite dimension, there is a difference between a vector space basis, the Hamel basis of $V$, and an orthonormal basis for $V$, the Hilbert basis for $V$: though the two always exist, they are not always equal unless $\dim(V) < \infty$.

The dimension of a subspace $V$ of $\mathbb{R}^n$ is the cardinality of any basis $\beta$ for $V$, i.e. the number of elements in $\beta$ (which may in principle be infinite), and is denoted $\dim(V)$. This is a well-defined concept, by Theorem 0.13 below, since all bases have the same size. $V$ is finite-dimensional if it is the zero vector space $\{0\}$ or if it has a basis of finite cardinality. Otherwise, if its basis has infinite cardinality, it is called infinite-dimensional. In the former case, $\dim(V) = k < \infty$ for some $k \in \mathbb{N}$, and $V$ is said to be $k$-dimensional, while in the latter case $\dim(V) = \kappa$, where $\kappa$ is a cardinal number, and $V$ is said to be $\kappa$-dimensional.

Remark 0.9. Bases are not unique. For example, $\beta = \{e_1, e_2\}$ and $\gamma = \{(1, 1), (1, 0)\}$ are both bases for $\mathbb{R}^2$.

If $V$ is finite-dimensional, say of dimension $k$, then an ordered basis for $V$ is a finite sequence or $k$-tuple $(v_1, \dots, v_k)$ of linearly independent vectors $v_1, \dots, v_k \in V$ such that $\{v_1, \dots, v_k\}$ is a basis for $V$. If $V$ is infinite-dimensional but with a countable basis, then an ordered basis is a sequence $(v_n)_{n \in \mathbb{N}}$ such that the set $\{v_n \mid n \in \mathbb{N}\}$ is a basis for $V$.
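A kernel computation recovers the nontrivial coefficients in Example 0.7 automatically (our sketch, not part of the notes):

```python
import sympy as sp

# Columns are the vectors of Example 0.7.
A = sp.Matrix([[1, 4, 7],
               [2, 5, 8],
               [3, 6, 9]])

# Any nonzero kernel vector gives a nontrivial representation of 0 by
# the columns: here (1, -2, 1), i.e. 1*c1 - 2*c2 + 1*c3 = 0.
print(A.nullspace())  # [Matrix([[1], [-2], [1]])]
```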

0.1.2 Properties of Bases

Theorem 0.10. Vectors $v_1, \dots, v_k \in \mathbb{R}^n$ are linearly independent iff no $v_i$ is a linear combination of the other $v_j$.

Proof: Let $v_1, \dots, v_k \in \mathbb{R}^n$ be linearly independent and suppose that $v_k = c_1 v_1 + \cdots + c_{k-1} v_{k-1}$ (we may suppose it is $v_k$ that is a linear combination of the others, else we can simply re-index so that this is the case). Then
$$c_1 v_1 + \cdots + c_{k-1} v_{k-1} + (-1) v_k = 0.$$
But this contradicts linear independence, since $-1 \neq 0$. Hence $v_k$ cannot be a linear combination of the other $v_j$. By re-indexing the $v_i$ we can conclude this for all $v_i$.

Conversely, suppose $v_1, \dots, v_k$ are linearly dependent, i.e. there are scalars $c_1, \dots, c_k \in \mathbb{R}$, not all zero, such that $c_1 v_1 + \cdots + c_k v_k = 0$. Say $c_i \neq 0$. Then
$$v_i = \left(-\frac{c_1}{c_i}\right) v_1 + \cdots + \left(-\frac{c_{i-1}}{c_i}\right) v_{i-1} + \left(-\frac{c_{i+1}}{c_i}\right) v_{i+1} + \cdots + \left(-\frac{c_k}{c_i}\right) v_k,$$
so that $v_i$ is a linear combination of the other $v_j$. This is the contrapositive of the equivalent statement: if no $v_i$ is a linear combination of the other $v_j$, then $v_1, \dots, v_k$ are linearly independent.

Theorem 0.11. Let $V$ be a subspace of $\mathbb{R}^n$. Then a collection $\beta = \{v_1, \dots, v_k\}$ is a basis for $V$ iff every vector $v \in V$ has an essentially unique expression as a linear combination of the basis vectors $v_i$.

Proof: Suppose $\beta$ is a basis and suppose that $v$ has two representations as a linear combination of the $v_i$:
$$v = c_1 v_1 + \cdots + c_k v_k = d_1 v_1 + \cdots + d_k v_k.$$
Then
$$0 = v - v = (c_1 - d_1) v_1 + \cdots + (c_k - d_k) v_k,$$
so by linear independence we must have $c_1 - d_1 = \cdots = c_k - d_k = 0$, or $c_i = d_i$ for all $i$, and so $v$ has only one expression as a linear combination of basis vectors, up to order of the $v_i$.

Conversely, suppose every $v \in V$ has an essentially unique expression as a linear combination of the $v_i$. Then clearly $\beta$ is a spanning set for $V$, and moreover the $v_i$ are linearly independent: for note, since $0 v_1 + \cdots + 0 v_k = 0$, by uniqueness of representations we must have
$$c_1 v_1 + \cdots + c_k v_k = 0 \implies c_1 = \cdots = c_k = 0.$$
Thus $\beta$ is a basis.

Theorem 0.12 (Replacement Theorem). Let $V$ be a subspace of $\mathbb{R}^n$ and let $v_1, \dots, v_p$ and $w_1, \dots, w_q$ be vectors in $V$. If $v_1, \dots, v_p$ are linearly independent and $w_1, \dots, w_q$ span $V$, then $p \leq q$.

Proof: Let $A = [w_1 \ \cdots \ w_q] \in \mathbb{R}^{n \times q}$ be the matrix whose columns are the $w_j$, and let $B = [v_1 \ \cdots \ v_p] \in \mathbb{R}^{n \times p}$ be the matrix whose columns are the $v_i$. Then note that
$$\{v_1, \dots, v_p\} \subseteq V = \operatorname{span}(w_1, \dots, w_q) = \operatorname{im} A.$$
Thus, there exist $u_1, \dots, u_p \in \mathbb{R}^q$ such that $A u_i = v_i$. Consequently,
$$B = [v_1 \ \cdots \ v_p] = [A u_1 \ \cdots \ A u_p] = A [u_1 \ \cdots \ u_p] = AC,$$
where $C = [u_1 \ \cdots \ u_p] \in \mathbb{R}^{q \times p}$. Now, since $v_1, \dots, v_p$ are linearly independent, $c_1 v_1 + \cdots + c_p v_p = 0$ implies all $c_i = 0$; i.e. $Bc = 0$ implies $c = 0$, or $\ker B = \{0\}$. But you will notice that $\ker C \subseteq \ker B$, since if $x \in \ker C$, then the fact that $B = AC$ implies $Bx = (AC)x = A(Cx) = A0 = 0$, so $x \in \ker B$. Since $\ker B = \{0\}$, this means that $\ker C = \{0\}$ as well. But then $C$ must have at least as many rows as columns, i.e. $p \leq q$, because $\operatorname{rref}(C)$ must have the form $\begin{bmatrix} I_p \\ O \end{bmatrix}$, possibly with no $O$ submatrix, but in any case with $I_p$ in the top portion.

Theorem 0.13. Let $V$ be a subspace of $\mathbb{R}^n$. Then all bases for $V$ have the same size.

Proof: By the previous theorem, two bases $\beta = \{v_1, \dots, v_p\}$ and $\gamma = \{w_1, \dots, w_q\}$ for $V$ both span $V$ and both are linearly independent, so we have $p \leq q$ and $q \leq p$. Therefore $p = q$.

Corollary 0.14. All bases for $\mathbb{R}^n$ have $n$ vectors.

Proof: Notice that $\rho = \{e_1, \dots, e_n\}$ forms a basis for $\mathbb{R}^n$: first, the elementary vectors $e_i$ span $\mathbb{R}^n$, since if $x = (a_1, \dots, a_n) \in \mathbb{R}^n$, then
$$x = (a_1, \dots, a_n) = a_1 (1, 0, \dots, 0) + a_2 (0, 1, 0, \dots, 0) + \cdots + a_n (0, \dots, 0, 1) = a_1 e_1 + \cdots + a_n e_n \in \operatorname{span}(e_1, \dots, e_n).$$
Also, $e_1, \dots, e_n$ are linearly independent, for if
$$0 = c_1 e_1 + \cdots + c_n e_n = [e_1 \ \cdots \ e_n]\, c = I_n c = c,$$
then $c = (c_1, \dots, c_n) \in \ker I_n = \{0\}$, so $c_1 = \cdots = c_n = 0$. Since $|\rho| = n$, all bases for $\mathbb{R}^n$ have $n$ vectors by the previous theorem.

Theorem 0.15 (Characterizations of Bases). If $V$ is a subspace of $\mathbb{R}^n$ and $\dim(V) = k$, then

(1) There are at most $k$ linearly independent vectors in $V$. Consequently, a basis is a maximal linearly independent set in $V$.
(2) At least $k$ vectors are needed to span $V$. Thus a basis is a minimal spanning set.
(3) If $k$ vectors in $V$ are linearly independent, then they form a basis for $V$.
(4) If $k$ vectors span $V$, then they form a basis for $V$.

Proof: (1) If $v_1, \dots, v_p \in V$ are linearly independent and $w_1, \dots, w_k \in V$ form a basis for $V$, then $p \leq k$ by the Replacement Theorem.

(2) If $v_1, \dots, v_p \in V$ span $V$ and $w_1, \dots, w_k \in V$ form a basis for $V$, then again we must have $k \leq p$ by the Replacement Theorem.

(3) If $v_1, \dots, v_k \in V$ are linearly independent, we must show they also span $V$. Pick $v \in V$ and note that by (1) the vectors $v_1, \dots, v_k, v$ are linearly dependent, because there are $k + 1$ of them. So there are scalars $c_1, \dots, c_k, c$, not all zero, with $c_1 v_1 + \cdots + c_k v_k + cv = 0$. We must have $c \neq 0$, else the $v_i$ alone would be linearly dependent, and then $v = -(c_1/c) v_1 - \cdots - (c_k/c) v_k \in \operatorname{span}(v_1, \dots, v_k)$.

(4) If $v_1, \dots, v_k$ span $V$ but are not linearly independent, then say $v_k \in \operatorname{span}(v_1, \dots, v_{k-1})$. But in this case $V = \operatorname{span}(v_1, \dots, v_{k-1})$, contradicting (2).

Theorem 0.16. If $A \in \mathbb{R}^{m \times n}$, then $\dim(\operatorname{im} A) = \operatorname{rank} A$.

Proof: This follows from Theorem 0.15 in Systems of Linear Equations, since if $B = \operatorname{rref}(A)$, then
$$\operatorname{rank} A = \operatorname{rank} B = \#\text{ of columns of the form } e_i \text{ in } B = \#\text{ of nonredundant columns in } A.$$

Theorem 0.17 (Rank-Nullity Theorem). If $A \in \mathbb{R}^{m \times n}$, then
$$\dim(\ker A) + \dim(\operatorname{im} A) = n \tag{0.4}$$
or
$$\operatorname{null} A + \operatorname{rank} A = n. \tag{0.5}$$

Proof: If $B = \operatorname{rref}(A)$, then
$$\dim(\ker A) = n - \#\text{ of leading 1s} = n - \operatorname{rank} A.$$
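The Rank-Nullity Theorem is easy to check computationally (our sketch, with an arbitrary example matrix of our choosing):

```python
import sympy as sp

# An arbitrary 3x4 example matrix (not from the notes).
A = sp.Matrix([[1, 2, 3, 4],
               [2, 4, 6, 8],
               [1, 0, 1, 0]])

rank = A.rank()                  # dim(im A)
nullity = len(A.nullspace())     # dim(ker A), via a basis of the kernel
print(rank + nullity == A.cols)  # True: rank + nullity = n
```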

0.2 Coordinate Representations of Vectors and Matrix Representations of Linear Transformations

0.2.1 Definitions

If $\beta = (v_1, \dots, v_k)$ is an ordered basis for a subspace $V$ of $\mathbb{R}^n$, then we know that for any vector $v \in V$ there are unique scalars $a_1, \dots, a_k \in \mathbb{R}$ such that
$$v = a_1 v_1 + \cdots + a_k v_k.$$
The coordinate vector of $v$ with respect to, or relative to, $\beta$ is defined to be the (column) vector in $\mathbb{R}^k$ consisting of the scalars $a_i$:
$$[v]_\beta := \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \end{bmatrix} \tag{0.6}$$
and the coordinate map, also called the standard representation of $V$ with respect to $\beta$, is given by
$$\varphi_\beta : V \to \mathbb{R}^k \tag{0.7}$$
$$\varphi_\beta(x) = [x]_\beta. \tag{0.8}$$

Example 0.18. Let $v = (5, 7, 9) \in \mathbb{R}^3$ and let $\beta = (v_1, v_2)$ be the ordered basis for $V = \operatorname{span}(v_1, v_2)$, where $v_1 = (1, 1, 1)$ and $v_2 = (1, 2, 3)$. Can you express $v$ as a linear combination of $v_1$ and $v_2$? In other words, does $v$ lie in $V$? If so, find $[v]_\beta$.

Solution: To find out whether $v$ lies in $V$, we must see if there are scalars $a, b \in \mathbb{R}$ such that $v = a v_1 + b v_2$. Note that if we treat $v$ and the $v_i$ as column vectors, we get a matrix equation:
$$v = a v_1 + b v_2 = [v_1 \ v_2] \begin{bmatrix} a \\ b \end{bmatrix}.$$
This is a system of equations $Ax = b$ with
$$A = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{bmatrix}, \qquad x = \begin{bmatrix} a \\ b \end{bmatrix}, \qquad b = \begin{bmatrix} 5 \\ 7 \\ 9 \end{bmatrix}.$$
The augmented matrix $[A \mid b]$ reduces as follows:
$$\operatorname{rref}([A \mid b]) = \begin{bmatrix} 1 & 0 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}.$$
This means that $a = 3$ and $b = 2$, so that $v = 3 v_1 + 2 v_2$; thus $v$ lies in $V$, and moreover
$$[v]_\beta = \begin{bmatrix} 3 \\ 2 \end{bmatrix}.$$
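Example 0.18 can be reproduced numerically with a least-squares solve (our sketch, not part of the notes): a zero residual certifies that $v$ lies in the span, and the solution is then the coordinate vector.

```python
import numpy as np

B = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])            # columns v1 = (1,1,1), v2 = (1,2,3)
v = np.array([5., 7., 9.])

# Solve B x = v in the least-squares sense; if B x reproduces v
# exactly, then v lies in span(v1, v2) and x = [v]_beta.
x, *_ = np.linalg.lstsq(B, v, rcond=None)
print(x)                            # [3. 2.]
print(np.allclose(B @ x, v))        # True, so v = 3 v1 + 2 v2
```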

In general, if $\beta = (v_1, \dots, v_k)$ is a basis for a subspace $V$ of $\mathbb{R}^n$ and $v \in V$, then the coordinate map gives us a matrix equation if we treat all the vectors $v, v_1, \dots, v_k$ as column vectors:
$$v = a_1 v_1 + \cdots + a_k v_k = [v_1 \ \cdots \ v_k] \begin{bmatrix} a_1 \\ \vdots \\ a_k \end{bmatrix} = B [v]_\beta,$$
where $B = [v_1 \ \cdots \ v_k]$ is the $n \times k$ matrix with columns $v_j$. If $V = \mathbb{R}^n$, then $B$ will be an $n \times n$ matrix whose columns are linearly independent. Therefore $\operatorname{im} B = \mathbb{R}^n$, so that by the Rank-Nullity Theorem $\ker B = \{0\}$, which means $B$ represents an injective and surjective linear transformation, and is therefore invertible. In this case, we can solve for $[v]_\beta$ rather easily:
$$[v]_\beta = B^{-1} v. \tag{0.9}$$

Let $V$ and $W$ be finite-dimensional subspaces of $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively, with ordered bases $\beta = (v_1, \dots, v_k)$ and $\gamma = (w_1, \dots, w_l)$, respectively. If there exist (and there do exist) unique scalars $a_{ij} \in \mathbb{R}$ such that
$$T(v_j) = \sum_{i=1}^{l} a_{ij} w_i \qquad \text{for } j = 1, \dots, k, \tag{0.10}$$
then the matrix representation of a linear transformation $T \in L(V, W)$ in the ordered bases $\beta$ and $\gamma$ is the $l \times k$ matrix $A$ defined by $A_{ij} = a_{ij}$:
$$A = [T]_\beta^\gamma := \big[ [T(v_1)]_\gamma \ [T(v_2)]_\gamma \ \cdots \ [T(v_k)]_\gamma \big] = \begin{bmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & & \vdots \\ a_{l1} & \cdots & a_{lk} \end{bmatrix}.$$
Note that $[T(v)]_\gamma = \varphi_\gamma(T(v)) = (\varphi_\gamma \circ T)(v)$.

Notation 0.19. If $V = W$ and $\gamma = \beta$, we write $[T]_\beta$ instead of $[T]_\beta^\beta$.

Example 0.20. Let $T \in L(\mathbb{R}^2, \mathbb{R}^3)$ be given by $T(x, y) = (x + y,\ 2x - y,\ 3x + 5y)$. In terms of matrices and column vectors, $T$ behaves as follows:
$$T \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 2 & -1 \\ 3 & 5 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}.$$
But this matrix, call it $A$, is actually the representation of $T$ with respect to the standard ordered bases $\rho_2 = (e_1, e_2)$ and $\rho_3 = (e_1, e_2, e_3)$; that is, $A = [T]_{\rho_2}^{\rho_3}$. What if we were to choose different bases for $\mathbb{R}^2$ and $\mathbb{R}^3$? Say,
$$\beta = \big( (1, 1),\ (0, 1) \big), \qquad \gamma = \big( (1, 1, 1),\ (1, 0, 1),\ (0, 0, 1) \big).$$

How would $T$ look with respect to these bases? Let us first find the coordinate representations of $T(1, 1)$ and $T(0, 1)$ with respect to $\gamma$. Note that $T(1, 1) = (2, 1, 8)$ and $T(0, 1) = (1, -1, 5)$, and to find $[(2, 1, 8)]_\gamma$ and $[(1, -1, 5)]_\gamma$ we have to solve the equations
$$\begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \\ 8 \end{bmatrix} \qquad \text{and} \qquad \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \\ 5 \end{bmatrix}.$$
If $B$ is the matrix $\begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}$, then
$$B^{-1} = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -1 & 0 \\ -1 & 0 & 1 \end{bmatrix},$$
so
$$[(2, 1, 8)]_\gamma = B^{-1}\begin{bmatrix} 2 \\ 1 \\ 8 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 6 \end{bmatrix} \qquad \text{and} \qquad [(1, -1, 5)]_\gamma = B^{-1}\begin{bmatrix} 1 \\ -1 \\ 5 \end{bmatrix} = \begin{bmatrix} -1 \\ 2 \\ 4 \end{bmatrix}.$$
Let us verify this:
$$1(1, 1, 1) + 1(1, 0, 1) + 6(0, 0, 1) = (2, 1, 8)$$
and
$$-1(1, 1, 1) + 2(1, 0, 1) + 4(0, 0, 1) = (1, -1, 5),$$
so indeed we have found $[T(1, 1)]_\gamma$ and $[T(0, 1)]_\gamma$, and therefore
$$[T]_\beta^\gamma = \big[ [T(1, 1)]_\gamma \ [T(0, 1)]_\gamma \big] = \begin{bmatrix} 1 & -1 \\ 1 & 2 \\ 6 & 4 \end{bmatrix}.$$

0.2.2 Properties of Coordinate and Matrix Representations

Theorem 0.21 (Linearity of Coordinates). Let $\beta$ be a basis for a subspace $V$ of $\mathbb{R}^n$. Then, for all $x, y \in V$ and all $k \in \mathbb{R}$, we have

(1) $[x + y]_\beta = [x]_\beta + [y]_\beta$
(2) $[kx]_\beta = k[x]_\beta$

Proof: (1) On the one hand, $x + y = B[x + y]_\beta$, and on the other $x = B[x]_\beta$ and $y = B[y]_\beta$, so $x + y = B[x]_\beta + B[y]_\beta$. Thus
$$B[x + y]_\beta = x + y = B[x]_\beta + B[y]_\beta = B\big([x]_\beta + [y]_\beta\big),$$
so that, subtracting the right-hand side from both sides, we get
$$B\big([x + y]_\beta - ([x]_\beta + [y]_\beta)\big) = 0.$$
Now, $B$'s columns are basis vectors, so they are linearly independent, which means $Bx = 0$ has only the trivial solution: for if $\beta = (v_1, \dots, v_k)$ and $x = (x_1, \dots, x_k)$, then
$$0 = Bx = x_1 v_1 + \cdots + x_k v_k \implies x_1 = \cdots = x_k = 0, \text{ i.e. } x = 0.$$
But this means the kernel of $B$ is $\{0\}$, so that
$$[x + y]_\beta - ([x]_\beta + [y]_\beta) = 0, \qquad \text{or} \qquad [x + y]_\beta = [x]_\beta + [y]_\beta.$$
The proof of (2) follows even more straightforwardly: first note that if $[x]_\beta = (a_1, \dots, a_k)^T$, then $x = a_1 v_1 + \cdots + a_k v_k$, so that $kx = ka_1 v_1 + \cdots + ka_k v_k$, and therefore
$$[kx]_\beta = (ka_1, \dots, ka_k)^T = k\,(a_1, \dots, a_k)^T = k[x]_\beta.$$
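Returning to Example 0.20, the whole computation collapses to one matrix product: each column $[T(b_j)]_\gamma$ is $B_\gamma^{-1}(A b_j)$, so all columns at once are $B_\gamma^{-1} A B_\beta$. A minimal NumPy sketch (ours, not part of the notes; `Bbeta` and `Bgamma` are our names for the basis matrices):

```python
import numpy as np

A = np.array([[1,  1],
              [2, -1],
              [3,  5]], dtype=float)      # [T] in standard coordinates

Bbeta = np.array([[1, 0],
                  [1, 1]], dtype=float)   # columns: beta = ((1,1), (0,1))
Bgamma = np.array([[1, 1, 0],
                   [1, 0, 0],
                   [1, 1, 1]], dtype=float)  # columns: gamma

# [T]_beta^gamma: send beta-coordinates to standard coordinates, apply
# T, then convert the result to gamma-coordinates.
D = np.linalg.inv(Bgamma) @ A @ Bbeta
print(D)   # [[ 1. -1.]  [ 1.  2.]  [ 6.  4.]]
```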

Corollary 0.22. The coordinate maps $\varphi_\beta$ are linear, i.e. $\varphi_\beta \in L(V, \mathbb{R}^k)$, and further they are isomorphisms, that is, they are invertible, and so $\varphi_\beta \in GL(V, \mathbb{R}^k)$.

Proof: Linearity was shown in the previous theorem. To see that $\varphi_\beta$ is an isomorphism, note first that $\varphi_\beta$ takes bases to bases: if $\beta = (v_1, \dots, v_k)$ is a basis for $V$, then $\varphi_\beta(v_i) = e_i$, since
$$v_i = 0 v_1 + \cdots + 1 v_i + \cdots + 0 v_k.$$
Thus it takes $\beta$ to the standard basis $\rho_k = (e_1, \dots, e_k)$ for $\mathbb{R}^k$. Consequently, it is surjective, because if $x = (x_1, \dots, x_k) \in \mathbb{R}^k$, then
$$x = x_1 e_1 + \cdots + x_k e_k = x_1 \varphi_\beta(v_1) + \cdots + x_k \varphi_\beta(v_k) = \varphi_\beta(x_1 v_1 + \cdots + x_k v_k).$$
If we let $v = x_1 v_1 + \cdots + x_k v_k$, then we see that $v \in V$ satisfies $\varphi_\beta(v) = x$. But $\varphi_\beta$ is also injective: if $v \in \ker \varphi_\beta$, then $v = a_1 v_1 + \cdots + a_k v_k$ for some $a_i$, so that
$$0 = \varphi_\beta(v) = \varphi_\beta(a_1 v_1 + \cdots + a_k v_k) = a_1 \varphi_\beta(v_1) + \cdots + a_k \varphi_\beta(v_k) = a_1 e_1 + \cdots + a_k e_k.$$
By the linear independence of the $e_i$ we must have $a_1 = \cdots = a_k = 0$, and so $v = 0$. Thus $\varphi_\beta$ is also injective.

Theorem 0.23. Let $V$ and $W$ be finite-dimensional subspaces of $\mathbb{R}^n$ and $\mathbb{R}^m$ having ordered bases $\beta = (v_1, \dots, v_k)$ and $\gamma = (w_1, \dots, w_l)$, respectively, and let $T \in L(V, W)$. Then for all $v \in V$ we have
$$[T(v)]_\gamma = [T]_\beta^\gamma\, [v]_\beta. \tag{0.11}$$
In other words, if $D = [T]_\beta^\gamma$ is the matrix representation of $T$ in $\beta$ and $\gamma$ coordinates, with $T_D \in L(\mathbb{R}^k, \mathbb{R}^l)$ the corresponding matrix multiplication map, if $A$ is the matrix representation of $T$ in standard coordinates, with corresponding $T_A$, and if $\varphi_\beta \in GL(V, \mathbb{R}^k)$ and $\varphi_\gamma \in GL(W, \mathbb{R}^l)$ are the respective coordinate maps, with matrix representations $B^{-1}$ and $C^{-1}$, where $B = [v_1 \ \cdots \ v_k]$ and $C = [w_1 \ \cdots \ w_l]$, then
$$\varphi_\gamma \circ T = T_D \circ \varphi_\beta \qquad \text{or, in terms of matrices,} \qquad C^{-1} A = D B^{-1}, \tag{0.12}$$
i.e. the square diagram commutes: applying $T$ and then $\varphi_\gamma$ gives the same map $V \to \mathbb{R}^l$ as applying $\varphi_\beta$ and then $T_D$.

Proof: If $\beta = (v_1, \dots, v_k)$ is an ordered basis for $V$ and $\gamma = (w_1, \dots, w_l)$ is an ordered basis for $W$, then let
$$[T]_\beta^\gamma = \big[ [T(v_1)]_\gamma \ \cdots \ [T(v_k)]_\gamma \big] = \begin{bmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & & \vdots \\ a_{l1} & \cdots & a_{lk} \end{bmatrix}.$$
Now, for all $u \in V$ there are unique $b_1, \dots, b_k \in \mathbb{R}$ such that $u = b_1 v_1 + \cdots + b_k v_k$. Therefore
$$T(u) = T(b_1 v_1 + \cdots + b_k v_k) = b_1 T(v_1) + \cdots + b_k T(v_k).$$

So, by the linearity of $\varphi_\gamma$,
$$[T(u)]_\gamma = \varphi_\gamma\big(T(u)\big) = \varphi_\gamma\big(b_1 T(v_1) + \cdots + b_k T(v_k)\big) = b_1 \varphi_\gamma\big(T(v_1)\big) + \cdots + b_k \varphi_\gamma\big(T(v_k)\big)$$
$$= b_1 [T(v_1)]_\gamma + \cdots + b_k [T(v_k)]_\gamma = \big[ [T(v_1)]_\gamma \ \cdots \ [T(v_k)]_\gamma \big] \begin{bmatrix} b_1 \\ \vdots \\ b_k \end{bmatrix} = [T]_\beta^\gamma\, [u]_\beta.$$
This shows that $\varphi_\gamma \circ T = T_D \circ \varphi_\beta$, since $[T(u)]_\gamma = (\varphi_\gamma \circ T)(u)$ and $[T]_\beta^\gamma [u]_\beta = (T_D \circ \varphi_\beta)(u)$. Finally, since $\varphi_\beta(x) = B^{-1}x$ and $\varphi_\gamma(y) = C^{-1}y$, we have the equivalent statement $C^{-1}A = DB^{-1}$.

Remark 0.24. Two square matrices $A, B \in \mathbb{R}^{n \times n}$ are said to be similar, denoted $A \sim B$, if there exists an invertible matrix $P \in GL(n, \mathbb{R})$ such that $A = PBP^{-1}$. Similarity is an equivalence relation (it is reflexive, symmetric and transitive):

(1) (reflexivity) $A \sim A$, because we can take $P = I_n$, so that $A = I_n A I_n^{-1}$.
(2) (symmetry) If $A \sim B$, then there is an invertible $P$ such that $A = PBP^{-1}$. But then left-multiplying by $P^{-1}$ and right-multiplying by $P$ gives $P^{-1}AP = B$, so since $P^{-1}$ is invertible, we have $B \sim A$.
(3) (transitivity) If $A \sim B$ and $B \sim C$, then there are invertible matrices $P$ and $Q$ such that $A = PBP^{-1}$ and $B = QCQ^{-1}$. Plugging the second into the first gives
$$A = PBP^{-1} = P(QCQ^{-1})P^{-1} = (PQ)C(Q^{-1}P^{-1}) = (PQ)C(PQ)^{-1},$$
so since $PQ$ is invertible, with inverse $Q^{-1}P^{-1}$, we have $A \sim C$.

In the previous theorem, if we take $W = V$ and $\gamma = \beta$, we'll have $B = C$, so that $B^{-1}A = DB^{-1}$, or $A = BDB^{-1}$, which shows that $A \sim D$, i.e.
$$[T]_\rho \sim [T]_\beta.$$
Since similarity is an equivalence relation, if $\beta$ and $\gamma$ are any two bases for $V$, then $[T]_\beta \sim [T]_\rho$ and $[T]_\rho \sim [T]_\gamma$ give
$$[T]_\beta \sim [T]_\gamma.$$
This demonstrates that if we represent a linear transformation $T \in L(V, V)$ with respect to two different bases, then the corresponding matrices are similar. Indeed, the invertible matrix $P$ is $B^{-1}C$, because $[T]_\beta = B^{-1}[T]_\rho B$ and $[T]_\rho = C[T]_\gamma C^{-1}$, so that
$$[T]_\beta = B^{-1}[T]_\rho B = B^{-1}C[T]_\gamma C^{-1}B = (B^{-1}C)[T]_\gamma (B^{-1}C)^{-1}.$$
We will show below that the converse is also true: if two matrices are similar, then they represent the same linear transformation, possibly with respect to different bases!

Theorem 0.25. If $V, W, Z$ are finite-dimensional subspaces of $\mathbb{R}^n$, $\mathbb{R}^m$ and $\mathbb{R}^p$, respectively, with ordered bases $\alpha$, $\beta$ and $\gamma$, respectively, and if $T \in L(V, W)$ and $U \in L(W, Z)$, then
$$[U \circ T]_\alpha^\gamma = [U]_\beta^\gamma\, [T]_\alpha^\beta. \tag{0.13}$$

Proof: By the previous two theorems we have
$$[U \circ T]_\alpha^\gamma = \big[ [(U \circ T)(v_1)]_\gamma \ \cdots \ [(U \circ T)(v_k)]_\gamma \big] = \big[ [U(T(v_1))]_\gamma \ \cdots \ [U(T(v_k))]_\gamma \big]$$
$$= \big[ [U]_\beta^\gamma [T(v_1)]_\beta \ \cdots \ [U]_\beta^\gamma [T(v_k)]_\beta \big] = [U]_\beta^\gamma \big[ [T(v_1)]_\beta \ \cdots \ [T(v_k)]_\beta \big] = [U]_\beta^\gamma\, [T]_\alpha^\beta,$$
which completes the proof.

Corollary 0.26. If $V$ is a $k$-dimensional subspace of $\mathbb{R}^n$ with an ordered basis $\beta$, and if $I \in L(V, V)$ is the identity operator, then $[I]_\beta = I_k \in \mathbb{R}^{k \times k}$.

Proof: Given any $T \in L(V)$, we have $T \circ I = T$, so that $[T]_\beta [I]_\beta = [T \circ I]_\beta = [T]_\beta$, and similarly $[T]_\beta = [I]_\beta [T]_\beta$. Hence, taking $T = I$, we will have $[I]_\beta^2 = [I]_\beta$. Note also that $[I]_\beta$ is invertible with inverse $[I^{-1}]_\beta = [I]_\beta$, because
$$[I^{-1}]_\beta [I]_\beta = [I^{-1} \circ I]_\beta = [I]_\beta = [I \circ I^{-1}]_\beta = [I]_\beta [I^{-1}]_\beta.$$
But then $[I]_\beta = I_k$, because any $A \in \mathbb{R}^{k \times k}$ that is invertible and satisfies $A^2 = A$ will satisfy $A = I_k$:
$$A = A I_k = A(AA^{-1}) = (AA)A^{-1} = A^2 A^{-1} = AA^{-1} = I_k.$$
An alternative proof follows directly from the definition, since $I(v_i) = v_i$, so that $[I(v_i)]_\beta = e_i$, whence
$$[I]_\beta = \big[ [I(v_1)]_\beta \ \cdots \ [I(v_k)]_\beta \big] = [e_1 \ \cdots \ e_k] = I_k.$$

Theorem 0.27. If $V$ and $W$ are subspaces of $\mathbb{R}^n$ with ordered bases $\beta = (b_1, \dots, b_k)$ and $\gamma = (c_1, \dots, c_l)$, respectively, then the function
$$\Phi : L(V, W) \to \mathbb{R}^{l \times k} \tag{0.14}$$
$$\Phi(T) = [T]_\beta^\gamma \tag{0.15}$$
is an isomorphism, that is, $\Phi \in GL\big(L(V, W), \mathbb{R}^{l \times k}\big)$, and consequently the space of linear transformations is isomorphic to the space of all $l \times k$ matrices:
$$L(V, W) \cong \mathbb{R}^{l \times k}. \tag{0.16}$$

Proof: First, $\Phi$ is linear: there exist scalars $r_{ij}, s_{ij} \in \mathbb{R}$, for $i = 1, \dots, l$ and $j = 1, \dots, k$, such that for any $T, U \in L(V, W)$ we have
$$T(b_j) = r_{1j} c_1 + \cdots + r_{lj} c_l \qquad \text{and} \qquad U(b_j) = s_{1j} c_1 + \cdots + s_{lj} c_l.$$
Hence, for all $s, t \in \mathbb{R}$ and $j = 1, \dots, k$ we have
$$(sT + tU)(b_j) = sT(b_j) + tU(b_j) = s\sum_{i=1}^{l} r_{ij} c_i + t\sum_{i=1}^{l} s_{ij} c_i = \sum_{i=1}^{l} (s r_{ij} + t s_{ij}) c_i.$$
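Theorem 0.25 is also easy to verify numerically (our sketch, with assumed random data: the basis matrices below stand in for bases $\alpha$, $\beta$, $\gamma$, and random square matrices are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)

T = rng.standard_normal((2, 3))    # T: R^3 -> R^2 in standard coordinates
U = rng.standard_normal((4, 2))    # U: R^2 -> R^4 in standard coordinates
Ba = rng.standard_normal((3, 3))   # columns: basis alpha of R^3
Bb = rng.standard_normal((2, 2))   # columns: basis beta of R^2
Bg = rng.standard_normal((4, 4))   # columns: basis gamma of R^4

def rep(M, dom, cod):
    """Matrix of M relative to the bases given by dom and cod."""
    return np.linalg.inv(cod) @ M @ dom

lhs = rep(U @ T, Ba, Bg)               # [U o T]_alpha^gamma
rhs = rep(U, Bb, Bg) @ rep(T, Ba, Bb)  # [U]_beta^gamma [T]_alpha^beta
print(np.allclose(lhs, rhs))           # True
```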

As a consequence of this and the rules of matrix addition and scalar multiplication, we have
$$\Phi(sT + tU) = [sT + tU]_\beta^\gamma = \begin{bmatrix} s r_{11} + t s_{11} & \cdots & s r_{1k} + t s_{1k} \\ \vdots & & \vdots \\ s r_{l1} + t s_{l1} & \cdots & s r_{lk} + t s_{lk} \end{bmatrix} = s \begin{bmatrix} r_{11} & \cdots & r_{1k} \\ \vdots & & \vdots \\ r_{l1} & \cdots & r_{lk} \end{bmatrix} + t \begin{bmatrix} s_{11} & \cdots & s_{1k} \\ \vdots & & \vdots \\ s_{l1} & \cdots & s_{lk} \end{bmatrix} = s[T]_\beta^\gamma + t[U]_\beta^\gamma = s\Phi(T) + t\Phi(U).$$
Moreover, $\Phi$ is bijective, since for every $A \in \mathbb{R}^{l \times k}$ there is a unique linear transformation $T \in L(V, W)$ such that $\Phi(T) = A$, namely the unique $T \in L(V, W)$ satisfying
$$T(b_j) = A_{1j} c_1 + \cdots + A_{lj} c_l \qquad \text{for } j = 1, \dots, k.$$
This makes $\Phi$ onto, and also 1-1, because $\ker(\Phi) = \{T_0\}$, the zero operator: for $O \in \mathbb{R}^{l \times k}$, the only $T \in L(V, W)$ satisfying $\Phi(T) = O$ is the unique transformation defined by $T(b_j) = 0 c_1 + \cdots + 0 c_l = 0$ for $j = 1, \dots, k$, namely $T_0$.

Theorem 0.28. Let $V$ and $W$ be subspaces of $\mathbb{R}^n$ of the same dimension $k$, with ordered bases $\beta = (b_1, \dots, b_k)$ and $\gamma = (c_1, \dots, c_k)$, respectively, and let $T \in L(V, W)$. Then $T$ is an isomorphism iff $[T]_\beta^\gamma$ is invertible; that is, $T \in GL(V, W)$ iff $[T]_\beta^\gamma \in GL(k, \mathbb{R})$. In this case,
$$[T^{-1}]_\gamma^\beta = \big([T]_\beta^\gamma\big)^{-1}. \tag{0.17}$$

Proof: If $T \in GL(V, W)$ and $\dim(V) = \dim(W) = k$, then $T^{-1} \in L(W, V)$, and $T \circ T^{-1} = I_W$ and $T^{-1} \circ T = I_V$, so that by Theorem 0.25 and Corollary 0.26 we have
$$[T]_\beta^\gamma\, [T^{-1}]_\gamma^\beta = [T \circ T^{-1}]_\gamma = [I_W]_\gamma = I_k = [I_V]_\beta = [T^{-1} \circ T]_\beta = [T^{-1}]_\gamma^\beta\, [T]_\beta^\gamma,$$
so that $[T]_\beta^\gamma$ is invertible with inverse $[T^{-1}]_\gamma^\beta$, and by the uniqueness of the multiplicative inverse in $\mathbb{R}^{k \times k}$ we have $[T^{-1}]_\gamma^\beta = \big([T]_\beta^\gamma\big)^{-1}$.

Conversely, if $A = [T]_\beta^\gamma$ is invertible, there is a $k \times k$ matrix $B$ such that $AB = BA = I_k$. Define $U \in L(W, V)$ on the basis elements by $U(c_j) = \sum_{i=1}^{k} B_{ij} b_i$, and extend $U$ by linearity. Then $B = [U]_\gamma^\beta$. To show that $U = T^{-1}$, note that
$$[U \circ T]_\beta = [U]_\gamma^\beta\, [T]_\beta^\gamma = BA = I_k = [I_V]_\beta \qquad \text{and} \qquad [T \circ U]_\gamma = [T]_\beta^\gamma\, [U]_\gamma^\beta = AB = I_k = [I_W]_\gamma.$$
But since $\Phi$ is an isomorphism, and therefore 1-1, we must have $U \circ T = I_V$ and $T \circ U = I_W$. By the uniqueness of the inverse, however, $U = T^{-1}$, and $T$ is an isomorphism.
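Equation (0.17) can be spot-checked numerically (our sketch with assumed random data; `A` plays the role of $T$'s standard matrix, `B` and `C` hold the bases $\beta$ and $\gamma$ as columns):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random square matrices are invertible with probability 1.
A = rng.standard_normal((3, 3))    # standard matrix of an isomorphism T
B = rng.standard_normal((3, 3))    # columns: basis beta of V
C = rng.standard_normal((3, 3))    # columns: basis gamma of W

T_rep    = np.linalg.inv(C) @ A @ B                 # [T]_beta^gamma
Tinv_rep = np.linalg.inv(B) @ np.linalg.inv(A) @ C  # [T^-1]_gamma^beta
print(np.allclose(Tinv_rep, np.linalg.inv(T_rep)))  # True
```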

0.3 Change of Coordinates

0.3.1 Definitions

We now define the change of coordinates, or change of basis, operator. If $V$ is a $k$-dimensional subspace of $\mathbb{R}^n$ and $\beta = (b_1, \dots, b_k)$ and $\gamma = (c_1, \dots, c_k)$ are two ordered bases for $V$, then the coordinate maps $\varphi_\beta, \varphi_\gamma \in GL(V, \mathbb{R}^k)$, which are isomorphisms by Corollary 0.22 above, may be used to define a change of coordinates operator $\varphi_{\beta,\gamma} \in GL(\mathbb{R}^k, \mathbb{R}^k)$ changing $\beta$ coordinates into $\gamma$ coordinates, that is, having the property
$$\varphi_{\beta,\gamma}\big([v]_\beta\big) = [v]_\gamma. \tag{0.18}$$
We define the operator as follows:
$$\varphi_{\beta,\gamma} := \varphi_\gamma \circ \varphi_\beta^{-1}. \tag{0.19}$$
The three maps form a commutative triangle: $\varphi_\beta$ and $\varphi_\gamma$ each map $V$ isomorphically onto $\mathbb{R}^k$, and $\varphi_{\beta,\gamma} = \varphi_\gamma \circ \varphi_\beta^{-1}$ connects the two copies of $\mathbb{R}^k$. As we will see below, the change of coordinates operator has a matrix representation
$$M_{\beta,\gamma} = [\varphi_{\beta,\gamma}]_\rho = \big[ [b_1]_\gamma \ [b_2]_\gamma \ \cdots \ [b_k]_\gamma \big]. \tag{0.20}$$

0.3.2 Properties of Change-of-Coordinate Maps and Matrices

Theorem 0.29 (Change of Coordinate Matrix). Let $V$ be a $k$-dimensional subspace of $\mathbb{R}^n$ and let $\beta = (b_1, \dots, b_k)$ and $\gamma = (c_1, \dots, c_k)$ be two ordered bases for $V$. Since $\varphi_\beta$ and $\varphi_\gamma$ are isomorphisms, the change of basis operator $\varphi_{\beta,\gamma} := \varphi_\gamma \circ \varphi_\beta^{-1} \in L(\mathbb{R}^k, \mathbb{R}^k)$, changing $\beta$ coordinates into $\gamma$ coordinates, is an isomorphism. Its matrix representation
$$M_{\beta,\gamma} := [\varphi_{\beta,\gamma}]_\rho \in \mathbb{R}^{k \times k}, \tag{0.21}$$
where $\rho = (e_1, \dots, e_k)$ is the standard ordered basis for $\mathbb{R}^k$, is called the change of coordinate matrix, and it satisfies the following conditions:

1. $M_{\beta,\gamma} = \big[ [b_1]_\gamma \ [b_2]_\gamma \ \cdots \ [b_k]_\gamma \big]$
2. $[v]_\gamma = M_{\beta,\gamma}\, [v]_\beta$ for all $v \in V$
3. $M_{\beta,\gamma}$ is invertible and $M_{\beta,\gamma}^{-1} = M_{\gamma,\beta}$
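For $V = \mathbb{R}^k$ the change of coordinate matrix is simply $M_{\beta,\gamma} = C^{-1}B$, where $B$ and $C$ hold the bases as columns. A minimal NumPy sketch (ours, with assumed example bases):

```python
import numpy as np

# Assumed example bases for R^2 (our choice): columns of B are beta,
# columns of C are gamma.
B = np.array([[1., 1.],
              [1., 0.]])
C = np.array([[2., 1.],
              [1., 1.]])

M = np.linalg.inv(C) @ B      # M_{beta,gamma}: columns are [b_i]_gamma

v_beta = np.array([2., -1.])  # beta-coordinates of some v
v = B @ v_beta                # the same v in standard coordinates
print(np.allclose(M @ v_beta, np.linalg.inv(C) @ v))  # True: [v]_gamma
```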

Proof: The first point is shown as follows:
$$M_{\beta,\gamma} = [\varphi_{\beta,\gamma}]_\rho = \big[ \varphi_{\beta,\gamma}(e_1) \ \cdots \ \varphi_{\beta,\gamma}(e_k) \big] = \big[ (\varphi_\gamma \circ \varphi_\beta^{-1})(e_1) \ \cdots \ (\varphi_\gamma \circ \varphi_\beta^{-1})(e_k) \big]$$
$$= \big[ \varphi_\gamma(b_1) \ \cdots \ \varphi_\gamma(b_k) \big] = \big[ [b_1]_\gamma \ \cdots \ [b_k]_\gamma \big],$$
since $\varphi_\beta^{-1}(e_i) = b_i$. Alternatively, by Theorem 0.25 and Theorem 0.28 we have
$$M_{\beta,\gamma} = [\varphi_{\beta,\gamma}]_\rho = [\varphi_\gamma \circ \varphi_\beta^{-1}]_\rho = [\varphi_\gamma]_\beta^\rho\, \big([\varphi_\beta]_\beta^\rho\big)^{-1} = [\varphi_\gamma]_\beta^\rho\, I_k^{-1} = [\varphi_\gamma]_\beta^\rho = \big[ [b_1]_\gamma \ \cdots \ [b_k]_\gamma \big].$$
The second point follows from Theorem 0.23, since
$$[v]_\gamma = \varphi_\gamma(v) = \big(\varphi_\gamma \circ (\varphi_\beta^{-1} \circ \varphi_\beta)\big)(v) = \big((\varphi_\gamma \circ \varphi_\beta^{-1}) \circ \varphi_\beta\big)(v) = \varphi_{\beta,\gamma}\big([v]_\beta\big) = [\varphi_{\beta,\gamma}]_\rho\, [v]_\beta = M_{\beta,\gamma}\, [v]_\beta.$$
And the last point follows from the fact that $\varphi_\beta$ and $\varphi_\gamma$ are isomorphisms, so that $\varphi_{\beta,\gamma}$ is an isomorphism, and hence $\varphi_{\beta,\gamma}^{-1} \in L(\mathbb{R}^k)$ is an isomorphism; because the triangle above commutes, we must have
$$\varphi_{\beta,\gamma}^{-1} = (\varphi_\gamma \circ \varphi_\beta^{-1})^{-1} = \varphi_\beta \circ \varphi_\gamma^{-1} = \varphi_{\gamma,\beta},$$
so that by (1), or alternatively by Theorem 0.28,
$$M_{\beta,\gamma}^{-1} = \big([\varphi_{\beta,\gamma}]_\rho\big)^{-1} = [\varphi_{\beta,\gamma}^{-1}]_\rho = [\varphi_{\gamma,\beta}]_\rho = M_{\gamma,\beta}.$$

Corollary 0.30 (Change of Basis). Let $V$ and $W$ be subspaces of $\mathbb{R}^n$ and let $(\beta, \gamma)$ and $(\beta', \gamma')$ be pairs of ordered bases for $V$ and $W$, respectively. If $T \in L(V, W)$, then
$$[T]_{\beta'}^{\gamma'} = M_{\gamma,\gamma'}\, [T]_\beta^\gamma\, M_{\beta',\beta} \tag{0.22}$$
$$= M_{\gamma,\gamma'}\, [T]_\beta^\gamma\, M_{\beta,\beta'}^{-1}, \tag{0.23}$$
where $M_{\gamma,\gamma'}$ and $M_{\beta,\beta'}$ are change of coordinate matrices. That is, the corresponding square commutes: converting $\beta'$-coordinates to $\beta$-coordinates and then applying $[T]_\beta^\gamma$ and converting to $\gamma'$-coordinates gives the same result as applying $[T]_{\beta'}^{\gamma'}$ directly.

Proof: This follows from the fact that if $\beta = (b_1, \dots, b_k)$, $\beta' = (b_1', \dots, b_k')$, $\gamma = (c_1, \dots, c_l)$ and $\gamma' = (c_1', \dots, c_l')$, then $\varphi_{\gamma'} = \varphi_{\gamma,\gamma'} \circ \varphi_\gamma$ and $\varphi_{\beta'}^{-1} = \varphi_\beta^{-1} \circ \varphi_{\beta,\beta'}^{-1}$, so that
$$[T]_{\beta'}^{\gamma'} = [\varphi_{\gamma'} \circ T \circ \varphi_{\beta'}^{-1}]_\rho = [\varphi_{\gamma,\gamma'} \circ (\varphi_\gamma \circ T \circ \varphi_\beta^{-1}) \circ \varphi_{\beta,\beta'}^{-1}]_\rho = M_{\gamma,\gamma'}\, [T]_\beta^\gamma\, M_{\beta,\beta'}^{-1},$$
which completes the proof.

Corollary 0.31 (Change of Basis for a Linear Operator). If $V$ is a subspace of $\mathbb{R}^n$ with ordered bases $\beta$ and $\gamma$, and $T \in L(V)$, then
$$[T]_\gamma = M_{\beta,\gamma}\, [T]_\beta\, M_{\beta,\gamma}^{-1}, \tag{0.24}$$
where $M_{\beta,\gamma}$ is the change of coordinates matrix.

Corollary 0.32. If we are given any two of the following:

(1) an invertible matrix $A \in \mathbb{R}^{n \times n}$,
(2) an ordered basis $\beta$ for $\mathbb{R}^n$,
(3) an ordered basis $\gamma$ for $\mathbb{R}^n$,

then the third is uniquely determined by the equation $A = M_{\beta,\gamma}$, where $M_{\beta,\gamma}$ is the change of coordinates matrix of the previous theorem.

Proof: Suppose we have $A = M_{\beta,\gamma} = \big[ [b_1]_\gamma \ [b_2]_\gamma \ \cdots \ [b_n]_\gamma \big]$ and we know $A$ and $\gamma$. Then $[b_i]_\gamma$ is the $i$th column of $A$, so $b_i = A_{1i} c_1 + \cdots + A_{ni} c_n$, and $\beta$ is uniquely determined. If $\beta$ and $\gamma$ are given, then by the previous theorem $M_{\beta,\gamma}$ is given by $M_{\beta,\gamma} = \big[ [b_1]_\gamma \ \cdots \ [b_n]_\gamma \big]$. Lastly, if $A$ and $\beta$ are given, then $\gamma$ is determined by the first case applied to $A^{-1} = M_{\gamma,\beta}$.
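Finally, a numerical check of Corollary 0.31 ties everything together (our sketch, with assumed example data): the two basis representations of an operator are similar via the change of coordinate matrix.

```python
import numpy as np

A = np.array([[1., 1.],
              [2., -1.]])    # an operator T on R^2, standard coordinates
B = np.array([[1., 1.],
              [1., 0.]])     # columns: basis beta
C = np.array([[2., 1.],
              [1., 1.]])     # columns: basis gamma

T_beta  = np.linalg.inv(B) @ A @ B   # [T]_beta
T_gamma = np.linalg.inv(C) @ A @ C   # [T]_gamma
M = np.linalg.inv(C) @ B             # M_{beta,gamma}

# Corollary 0.31: [T]_gamma = M [T]_beta M^{-1}
print(np.allclose(T_gamma, M @ T_beta @ np.linalg.inv(M)))  # True
```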
