Matrix Representations of Linear Transformations and Changes of Coordinates


0.1 Subspaces and Bases

0.1.1 Definitions

A subspace $V$ of $\mathbb{R}^n$ is a subset of $\mathbb{R}^n$ that contains the zero element and is closed under addition and scalar multiplication:

(1) $0 \in V$
(2) $u, v \in V \implies u + v \in V$
(3) $u \in V$ and $k \in \mathbb{R} \implies ku \in V$

Equivalently, $V$ is a subspace if $au + bv \in V$ for all $a, b \in \mathbb{R}$ and all $u, v \in V$. (You should try to prove that this is an equivalent statement to the first.)

Example 0.1 Let $V = \{(t, 3t, -2t) \mid t \in \mathbb{R}\}$. Then $V$ is a subspace of $\mathbb{R}^3$:

(1) $0 \in V$, because we can take $t = 0$.
(2) If $u, v \in V$, then $u = (s, 3s, -2s)$ and $v = (t, 3t, -2t)$ for some real numbers $s$ and $t$. But then
$$u + v = (s + t,\ 3s + 3t,\ -2s - 2t) = \big(s + t,\ 3(s + t),\ -2(s + t)\big) = (t', 3t', -2t') \in V, \quad \text{where } t' = s + t \in \mathbb{R}.$$
(3) If $u \in V$, then $u = (t, 3t, -2t)$ for some $t \in \mathbb{R}$, so if $k \in \mathbb{R}$, then $ku = (kt, 3(kt), -2(kt)) = (t', 3t', -2t') \in V$, where $t' = kt \in \mathbb{R}$.

Example 0.2 The unit circle $S^1$ in $\mathbb{R}^2$ is not a subspace: it doesn't contain $0 = (0, 0)$, and, for example, $(1, 0)$ and $(0, 1)$ lie in $S^1$ but $(1, 0) + (0, 1) = (1, 1)$ does not. Similarly, $(1, 0)$ lies in $S^1$ but $2(1, 0) = (2, 0)$ does not.

A linear combination of vectors $v_1, \dots, v_k \in \mathbb{R}^n$ is the finite sum
$$a_1 v_1 + \cdots + a_k v_k \tag{0.1}$$
which is a vector in $\mathbb{R}^n$ (because $\mathbb{R}^n$ is a subspace of itself, right?). The $a_i \in \mathbb{R}$ are called the coefficients of the linear combination. If $a_1 = \cdots = a_k = 0$, then the linear combination is said to be trivial. In particular, considering the special case of the zero vector $0 \in \mathbb{R}^n$, we note that $0$ may always be represented as a linear combination of any vectors $u_1, \dots, u_k \in \mathbb{R}^n$:
$$0 u_1 + \cdots + 0 u_k = 0$$
This representation is called the trivial representation of $0$ by $u_1, \dots, u_k$. If, on the other hand, there are vectors $u_1, \dots, u_k \in \mathbb{R}^n$ and scalars $a_1, \dots, a_k \in \mathbb{R}$, with at least one $a_i \neq 0$, such that $a_1 u_1 + \cdots + a_k u_k = 0$, then that linear combination is called a nontrivial representation of $0$.
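Aside: membership in a span of the kind used in Example 0.1 can also be checked numerically. A vector $v$ is a linear combination of $u_1, \dots, u_k$ exactly when the system $[u_1 \cdots u_k]x = v$ is consistent, i.e. when its least-squares residual vanishes. A minimal NumPy sketch (the helper name `in_span` is ours, for illustration only):

```python
import numpy as np

def in_span(vectors, v, tol=1e-10):
    """Return True if v is a linear combination of the given vectors."""
    A = np.column_stack(vectors)               # columns u_1, ..., u_k
    x, *_ = np.linalg.lstsq(A, v, rcond=None)  # best-fit coefficients
    return np.linalg.norm(A @ x - v) < tol     # zero residual <=> v in span

u = np.array([1.0, 3.0, -2.0])                 # generator of V in Example 0.1
print(in_span([u], np.array([2.0, 6.0, -4.0])))  # True:  t = 2
print(in_span([u], np.array([1.0, 0.0, 0.0])))   # False: not on the line
```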

Using linear combinations we can generate subspaces, as follows. If $S$ is a nonempty subset of $\mathbb{R}^n$, then the span of $S$ is given by
$$\operatorname{span}(S) := \{v \in \mathbb{R}^n \mid v \text{ is a linear combination of vectors in } S\} \tag{0.2}$$
The span of the empty set, $\emptyset$, is by definition
$$\operatorname{span}(\emptyset) := \{0\} \tag{0.3}$$

Remark 0.3 We showed in class that $\operatorname{span}(S)$ is always a subspace of $\mathbb{R}^n$ (well, we showed this for $S$ a finite collection of vectors $S = \{u_1, \dots, u_k\}$, but you should check that it's true for any $S$).

Let $V := \operatorname{span}(S)$ be the subspace of $\mathbb{R}^n$ spanned by some $S \subseteq \mathbb{R}^n$. Then $S$ is said to generate or span $V$, and to be a generating or spanning set for $V$. If $V$ is already known to be a subspace, then finding a spanning set $S$ for $V$ can be useful, because it is often easier to work with the smaller spanning set than with the entire subspace $V$, for example if we are trying to understand the behavior of linear transformations on $V$.

Example 0.4 Let $S$ be the unit circle in $\mathbb{R}^3$ which lies in the $x$-$y$ plane. Then $\operatorname{span}(S)$ is the entire $x$-$y$ plane.

Example 0.5 Let $S = \{(x, y, z) \in \mathbb{R}^3 \mid x = y = 0,\ 1 < z < 3\}$. Then $\operatorname{span}(S)$ is the $z$-axis.

A nonempty subset $S$ of the vector space $\mathbb{R}^n$ is said to be linearly independent if, taking any finite number of distinct vectors $u_1, \dots, u_k \in S$, we have for all $a_1, \dots, a_k \in \mathbb{R}$ that
$$a_1 u_1 + a_2 u_2 + \cdots + a_k u_k = 0 \implies a_1 = \cdots = a_k = 0$$
That is, $S$ is linearly independent if the only representation of $0 \in \mathbb{R}^n$ by vectors in $S$ is the trivial one. In this case, the vectors $u_1, \dots, u_k$ themselves are also said to be linearly independent. Otherwise, if there is at least one nontrivial representation of $0$ by vectors in $S$, then $S$ is said to be linearly dependent.

Example 0.6 The vectors $u = (1, 2)$ and $v = (0, -1)$ in $\mathbb{R}^2$ are linearly independent, because if $au + bv = 0$, that is $a(1, 2) + b(0, -1) = (0, 0)$, then $(a, 2a - b) = (0, 0)$, which gives a system of equations:
$$a = 0, \qquad 2a - b = 0, \qquad \text{or} \qquad \begin{pmatrix} 1 & 0 \\ 2 & -1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
But the matrix $\begin{pmatrix} 1 & 0 \\ 2 & -1 \end{pmatrix}$ is invertible, in fact it is its own inverse, so left-multiplying both sides by it gives
$$\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 2 & -1 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
which means $a = b = 0$.
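Linear independence of finitely many vectors can be tested just as Example 0.6 argues: stack the vectors as columns of a matrix $A$ and ask whether $Ax = 0$ has only the trivial solution, i.e. whether the rank equals the number of columns. A sketch checking the example above (helper name ours):

```python
import numpy as np

def linearly_independent(vectors):
    """True iff the vectors, as columns of a matrix A, are linearly
    independent, i.e. rank(A) equals the number of columns."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

u, v = np.array([1, 2]), np.array([0, -1])          # Example 0.6
print(linearly_independent([u, v]))                 # True

M = np.array([[1, 0], [2, -1]])                     # coefficient matrix above
print(np.array_equal(M @ M, np.eye(2, dtype=int)))  # True: M is its own inverse
```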

Example 0.7 The vectors $(1, 2, 3), (4, 5, 6), (7, 8, 9) \in \mathbb{R}^3$ are not linearly independent, because
$$1(1, 2, 3) - 2(4, 5, 6) + 1(7, 8, 9) = (0, 0, 0)$$
That is, we have found $a = 1$, $b = -2$ and $c = 1$, not all of which are zero, such that $a(1, 2, 3) + b(4, 5, 6) + c(7, 8, 9) = (0, 0, 0)$.

Given $S \subseteq V$, a nonzero vector $v \in V$ is said to be an essentially unique linear combination of the vectors in $S$ if, up to order of terms, there is one and only one way to express $v$ as a linear combination of vectors in $S$. That is, if there are $a_1, \dots, a_k, b_1, \dots, b_l \in \mathbb{R} \setminus \{0\}$, distinct $u_1, \dots, u_k \in S$ and distinct $v_1, \dots, v_l \in S$ such that
$$v = a_1 u_1 + \cdots + a_k u_k = b_1 v_1 + \cdots + b_l v_l$$
then, re-indexing the $b_i$'s if necessary, $k = l$ and, for all $i = 1, \dots, k$, $a_i = b_i$ and $u_i = v_i$.

If $V$ is a subspace of $\mathbb{R}^n$, then a subset $\beta$ of $V$ is called a basis for $V$ if it is linearly independent and spans $V$. We also say that the vectors of $\beta$ form a basis for $V$. Equivalently, as explained in Theorem 0.11 below, $\beta$ is a basis if every nonzero vector $v \in V$ is an essentially unique linear combination of vectors in $\beta$.

Remark 0.8 In the context of inner product spaces $V$ of infinite dimension, there is a difference between a vector space basis, the Hamel basis of $V$, and an orthonormal basis for $V$, the Hilbert basis for $V$: though the two always exist, they are not always equal unless $\dim(V) < \infty$.

The dimension of a subspace $V$ of $\mathbb{R}^n$ is the cardinality of any basis $\beta$ for $V$, i.e. the number of elements in $\beta$ (which may in principle be infinite), and is denoted $\dim(V)$. This is a well defined concept, by Theorem 0.13 below, since all bases have the same size. $V$ is finite-dimensional if it is the zero vector space $\{0\}$ or if it has a basis of finite cardinality. Otherwise, if its basis has infinite cardinality, it is called infinite-dimensional. In the former case, $\dim(V) = k < \infty$ for some $k \in \mathbb{N}$, and $V$ is said to be $k$-dimensional, while in the latter case, $\dim(V) = \kappa$, where $\kappa$ is a cardinal number, and $V$ is said to be $\kappa$-dimensional.

Remark 0.9 Bases are not unique. For example, $\beta = \{e_1, e_2\}$ and $\gamma = \{(1, 1), (1, 0)\}$ are both bases for $\mathbb{R}^2$.

If $V$ is finite-dimensional, say of dimension $n$, then an ordered basis for $V$ is a finite sequence or $n$-tuple $(v_1, \dots, v_n)$ of linearly independent vectors $v_1, \dots, v_n \in V$ such that $\{v_1, \dots, v_n\}$ is a basis for $V$. If $V$ is infinite-dimensional but with a countable basis, then an ordered basis is a sequence $(v_n)_{n \in \mathbb{N}}$ such that the set $\{v_n \mid n \in \mathbb{N}\}$ is a basis for $V$.
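The dependence relation in Example 0.7 can be recovered mechanically: a nontrivial representation of $0$ is exactly a nonzero vector in the kernel of the column matrix, which the SVD exposes as the right-singular vector for the zero singular value. A sketch (the normalization so the last coefficient is $1$ is just for display):

```python
import numpy as np

A = np.column_stack([(1, 2, 3), (4, 5, 6), (7, 8, 9)])
# the right-singular vector for the smallest (zero) singular value spans ker A
_, s, Vt = np.linalg.svd(A)
c = Vt[-1]                      # kernel vector, proportional to (1, -2, 1)
c = c / c[-1]                   # scale so the last coefficient is 1
print(np.round(c, 6))           # [ 1. -2.  1.]
print(np.round(A @ c, 6))       # [0. 0. 0.]  -- a nontrivial representation of 0
```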

0.1.2 Properties of Bases

Theorem 0.10 Vectors $v_1, \dots, v_k \in \mathbb{R}^n$ are linearly independent iff no $v_i$ is a linear combination of the other $v_j$.

Proof: Let $v_1, \dots, v_k \in \mathbb{R}^n$ be linearly independent and suppose that $v_k = c_1 v_1 + \cdots + c_{k-1} v_{k-1}$ (we may suppose $v_k$ is a linear combination of the other $v_j$, else we can simply re-index so that this is the case). Then
$$c_1 v_1 + \cdots + c_{k-1} v_{k-1} + (-1) v_k = 0$$
But this contradicts linear independence, since $-1 \neq 0$. Hence $v_k$ cannot be a linear combination of the other $v_j$. By re-indexing the $v_i$ we can conclude this for all $v_i$.

Conversely, suppose $v_1, \dots, v_k$ are linearly dependent, i.e. there are scalars $c_1, \dots, c_k \in \mathbb{R}$, not all zero, such that $c_1 v_1 + \cdots + c_k v_k = 0$. Say $c_i \neq 0$. Then
$$v_i = \left(-\frac{c_1}{c_i}\right) v_1 + \cdots + \left(-\frac{c_{i-1}}{c_i}\right) v_{i-1} + \left(-\frac{c_{i+1}}{c_i}\right) v_{i+1} + \cdots + \left(-\frac{c_k}{c_i}\right) v_k$$
so that $v_i$ is a linear combination of the other $v_j$. This is the contrapositive of the equivalent statement: if no $v_i$ is a linear combination of the other $v_j$, then $v_1, \dots, v_k$ are linearly independent.

Theorem 0.11 Let $V$ be a subspace of $\mathbb{R}^n$. Then a collection $\beta = \{v_1, \dots, v_k\}$ is a basis for $V$ iff every vector $v \in V$ has an essentially unique expression as a linear combination of the basis vectors $v_i$.

Proof: Suppose $\beta$ is a basis and suppose that $v$ has two representations as a linear combination of the $v_i$:
$$v = c_1 v_1 + \cdots + c_k v_k = d_1 v_1 + \cdots + d_k v_k$$
Then,
$$0 = v - v = (c_1 - d_1) v_1 + \cdots + (c_k - d_k) v_k$$
so by linear independence we must have $c_1 - d_1 = \cdots = c_k - d_k = 0$, or $c_i = d_i$ for all $i$, and so $v$ has only one expression as a linear combination of basis vectors, up to order of the $v_i$.

Conversely, suppose every $v \in V$ has an essentially unique expression as a linear combination of the $v_i$. Then clearly $\beta$ is a spanning set for $V$, and moreover the $v_i$ are linearly independent: for note, since $0 v_1 + \cdots + 0 v_k = 0$, by uniqueness of representations we must have
$$c_1 v_1 + \cdots + c_k v_k = 0 \implies c_1 = \cdots = c_k = 0$$
Thus $\beta$ is a basis.
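The converse direction of Theorem 0.10 is constructive: from a nontrivial relation with $c_i \neq 0$ one can solve for $v_i = \sum_{j \neq i} (-c_j/c_i) v_j$. A quick numerical check of that formula, using the relation found in Example 0.7:

```python
import numpy as np

v1, v2, v3 = map(np.array, [(1, 2, 3), (4, 5, 6), (7, 8, 9)])
c = np.array([1.0, -2.0, 1.0])   # relation 1*v1 - 2*v2 + 1*v3 = 0, with c_2 != 0
# solve the relation for v2 = (-c_1/c_2) v1 + (-c_3/c_2) v3
v2_rebuilt = (-c[0] / c[1]) * v1 + (-c[2] / c[1]) * v3
print(v2_rebuilt)                # [4. 5. 6.]  -- agrees with v2
```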

Theorem 0.12 (Replacement Theorem) Let $V$ be a subspace of $\mathbb{R}^n$ and let $v_1, \dots, v_p$ and $w_1, \dots, w_q$ be vectors in $V$. If $v_1, \dots, v_p$ are linearly independent and $w_1, \dots, w_q$ span $V$, then $p \leq q$.

Proof: Let $A = [w_1 \cdots w_q] \in \mathbb{R}^{n \times q}$ be the matrix whose columns are the $w_j$ and let $B = [v_1 \cdots v_p] \in \mathbb{R}^{n \times p}$ be the matrix whose columns are the $v_i$. Then note that
$$\{v_1, \dots, v_p\} \subseteq V = \operatorname{span}(w_1, \dots, w_q) = \operatorname{im} A$$
Thus, there exist $u_1, \dots, u_p \in \mathbb{R}^q$ such that $A u_i = v_i$. Consequently,
$$B = [v_1 \cdots v_p] = [A u_1 \cdots A u_p] = A [u_1 \cdots u_p] = AC$$
where $C = [u_1 \cdots u_p] \in \mathbb{R}^{q \times p}$. Now, since $v_1, \dots, v_p$ are linearly independent, $c_1 v_1 + \cdots + c_p v_p = 0$ implies all $c_i = 0$, i.e. $Bc = 0$ implies $c = 0$, or $\ker B = \{0\}$. But you will notice that $\ker C \subseteq \ker B$, since if $x \in \ker C$, then the fact that $B = AC$ implies $Bx = (AC)x = A(Cx) = A0 = 0$, so $x \in \ker B$. Since $\ker B = \{0\}$, this means that $\ker C = \{0\}$ as well. But then $C$ must have at least as many rows as columns, i.e. $p \leq q$, because $\operatorname{rref}(C)$ must have the form $\begin{pmatrix} I_p \\ O \end{pmatrix}$, possibly with no $O$ submatrix, but at least with $I_p$ in the top portion.

Theorem 0.13 Let $V$ be a subspace of $\mathbb{R}^n$. Then all bases for $V$ have the same size.

Proof: By the previous theorem, two bases $\beta = \{v_1, \dots, v_p\}$ and $\gamma = \{w_1, \dots, w_q\}$ for $V$ both span $V$ and both are linearly independent, so we have $p \leq q$ and $q \leq p$. Therefore $p = q$.

Corollary 0.14 All bases for $\mathbb{R}^n$ have $n$ vectors.

Proof: Notice that $\rho = \{e_1, \dots, e_n\}$ forms a basis for $\mathbb{R}^n$: first, the elementary vectors $e_i$ span $\mathbb{R}^n$, since if $x = (a_1, \dots, a_n) \in \mathbb{R}^n$, then
$$x = (a_1, \dots, a_n) = a_1(1, 0, \dots, 0) + a_2(0, 1, 0, \dots, 0) + \cdots + a_n(0, \dots, 0, 1) = a_1 e_1 + a_2 e_2 + \cdots + a_n e_n \in \operatorname{span}(e_1, \dots, e_n)$$
Also, $e_1, \dots, e_n$ are linearly independent, for if
$$0 = c_1 e_1 + \cdots + c_n e_n = [e_1 \cdots e_n]\, c = I_n c = c$$
then $c = (c_1, \dots, c_n) \in \ker I_n = \{0\}$, so $c_1 = \cdots = c_n = 0$. Since $|\rho| = n$, all bases $\beta$ for $\mathbb{R}^n$ satisfy $|\beta| = n$ by the previous theorem.

Theorem 0.15 (Characterizations of Bases) If $V$ is a subspace of $\mathbb{R}^n$ and $\dim(V) = k$, then:

(1) There are at most $k$ linearly independent vectors in $V$. Consequently, a basis is a maximal linearly independent set in $V$.
(2) At least $k$ vectors are needed to span $V$. Thus a basis is a minimal spanning set.
(3) If $k$ vectors in $V$ are linearly independent, then they form a basis for $V$.
(4) If $k$ vectors span $V$, then they form a basis for $V$.

Proof: (1) If $v_1, \dots, v_p \in V$ are linearly independent and $w_1, \dots, w_k \in V$ form a basis for $V$, then $p \leq k$ by the Replacement Theorem.

(2) If $v_1, \dots, v_p \in V$ span $V$ and $w_1, \dots, w_k \in V$ form a basis for $V$, then again we must have $k \leq p$ by the Replacement Theorem.

(3) If $v_1, \dots, v_k \in V$ are linearly independent, we must show they also span $V$. Pick $v \in V$ and note that by (1) the vectors $v_1, \dots, v_k, v$ are linearly dependent, because there are $k + 1$ of them. So there are scalars $c_1, \dots, c_k, c$, not all zero, with $c_1 v_1 + \cdots + c_k v_k + cv = 0$. Here $c \neq 0$, since otherwise the linear independence of the $v_i$ would force all the $c_i$ to be zero as well. Hence $v = (-c_1/c) v_1 + \cdots + (-c_k/c) v_k \in \operatorname{span}(v_1, \dots, v_k)$, so the $v_i$ span $V$ and form a basis.

(4) If $v_1, \dots, v_k \in V$ span $V$ but are not linearly independent, then say $v_k \in \operatorname{span}(v_1, \dots, v_{k-1})$. But in this case $V = \operatorname{span}(v_1, \dots, v_{k-1})$, contradicting (2).

Theorem 0.16 If $A \in \mathbb{R}^{m \times n}$, then $\dim(\operatorname{im} A) = \operatorname{rank} A$.

Proof: This follows from Theorem 0.15 in Systems of Linear Equations, since if $B = \operatorname{rref}(A)$, then
$$\operatorname{rank} A = \operatorname{rank} B = \#\text{ of columns of the form } e_i \text{ in } B = \#\text{ of nonredundant vectors in } A$$

Theorem 0.17 (Rank-Nullity Theorem) If $A \in \mathbb{R}^{m \times n}$, then
$$\dim(\ker A) + \dim(\operatorname{im} A) = n \tag{0.4}$$
or
$$\operatorname{null} A + \operatorname{rank} A = n \tag{0.5}$$

Proof: If $B = \operatorname{rref}(A)$, then
$$\dim(\ker A) = n - \#\text{ of leading 1s} = n - \operatorname{rank} A$$
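The Rank-Nullity Theorem is easy to sanity-check numerically: the rank comes from `matrix_rank`, and the nullity is the number of columns minus the rank. A sketch on the singular matrix of Example 0.7 and on a random matrix:

```python
import numpy as np

def check_rank_nullity(A):
    """Print rank, nullity, and their sum, which should equal n = # columns."""
    n = A.shape[1]
    rank = np.linalg.matrix_rank(A)
    print(f"rank = {rank}, nullity = {n - rank}, sum = {rank + (n - rank)} = n = {n}")

check_rank_nullity(np.array([[1, 4, 7], [2, 5, 8], [3, 6, 9]]))  # rank 2, nullity 1
check_rank_nullity(np.random.rand(4, 6))                         # rank 4, nullity 2 (generically)
```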

0.2 Coordinate Representations of Vectors and Matrix Representations of Linear Transformations

0.2.1 Definitions

If $\beta = (v_1, \dots, v_k)$ is an ordered basis for a subspace $V$ of $\mathbb{R}^n$, then we know that for any vector $v \in V$ there are unique scalars $a_1, \dots, a_k \in \mathbb{R}$ such that $v = a_1 v_1 + \cdots + a_k v_k$. The coordinate vector of $v$ with respect to, or relative to, $\beta$ is defined to be the (column) vector in $\mathbb{R}^k$ consisting of the scalars $a_i$:
$$[v]_\beta := \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \end{pmatrix} \tag{0.6}$$
and the coordinate map, also called the standard representation of $V$ with respect to $\beta$, is given by
$$\varphi_\beta : V \to \mathbb{R}^k \tag{0.7}$$
$$\varphi_\beta(x) = [x]_\beta \tag{0.8}$$

Example 0.18 Let $v = (5, 7, 9) \in \mathbb{R}^3$ and let $\beta = (v_1, v_2)$ be the ordered basis for $V = \operatorname{span}(v_1, v_2)$, where $v_1 = (1, 1, 1)$ and $v_2 = (1, 2, 3)$. Can you express $v$ as a linear combination of $v_1$ and $v_2$? In other words, does $v$ lie in $V$? If so, find $[v]_\beta$.

Solution: To find out whether $v$ lies in $V$, we must see if there are scalars $a, b \in \mathbb{R}$ such that $v = a v_1 + b v_2$. Note that if we treat $v$ and the $v_i$ as column vectors, we get a matrix equation:
$$v = a v_1 + b v_2 = [v_1\ v_2] \begin{pmatrix} a \\ b \end{pmatrix}$$
This is a system of equations, $Ax = b$, with
$$A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \end{pmatrix}, \qquad x = \begin{pmatrix} a \\ b \end{pmatrix}, \qquad b = \begin{pmatrix} 5 \\ 7 \\ 9 \end{pmatrix}$$
Well, the augmented matrix $[A \mid b]$ reduces as follows:
$$\operatorname{rref}[A \mid b] = \begin{pmatrix} 1 & 0 & 3 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}$$
This means that $a = 3$ and $b = 2$, so that $v = 3 v_1 + 2 v_2$ and $v$ lies in $V$, and moreover
$$[v]_\beta = \begin{pmatrix} 3 \\ 2 \end{pmatrix}$$
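Example 0.18 is a plain linear solve. Since $B = [v_1\ v_2]$ is $3 \times 2$, one can use least squares and then confirm the residual vanishes, which is precisely the statement that $v \in V$. A sketch reproducing $[v]_\beta = (3, 2)$:

```python
import numpy as np

B = np.column_stack([(1, 1, 1), (1, 2, 3)])   # columns v_1, v_2
v = np.array([5, 7, 9])

coords, *_ = np.linalg.lstsq(B, v, rcond=None)
print(np.round(coords, 6))                    # [3. 2.]  =  [v]_beta
print(np.allclose(B @ coords, v))             # True, so v lies in V
```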

In general, if $\beta = (v_1, \dots, v_k)$ is a basis for a subspace $V$ of $\mathbb{R}^n$ and $v \in V$, then the coordinate map will give us a matrix equation if we treat all the vectors $v, v_1, \dots, v_k$ as column vectors:
$$v = a_1 v_1 + \cdots + a_k v_k = [v_1 \cdots v_k]\, [v]_\beta = B [v]_\beta$$
where $B = [v_1 \cdots v_k]$, the $n \times k$ matrix with columns $v_j$. If $V = \mathbb{R}^n$, then $B$ will be an $n \times n$ matrix whose columns are linearly independent. Therefore $\operatorname{im} B = \mathbb{R}^n$, so that by the Rank-Nullity Theorem $\ker B = \{0\}$, which means $B$ represents an injective and surjective linear transformation, and is therefore invertible. In this case, we can solve for $[v]_\beta$ rather easily:
$$[v]_\beta = B^{-1} v \tag{0.9}$$

Let $V$ and $W$ be finite dimensional subspaces of $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively, with ordered bases $\beta = (v_1, \dots, v_k)$ and $\gamma = (w_1, \dots, w_l)$, respectively. If there exist (and there do exist) unique scalars $a_{ij} \in \mathbb{R}$ such that
$$T(v_j) = \sum_{i=1}^{l} a_{ij} w_i \quad \text{for } j = 1, \dots, k \tag{0.10}$$
then the matrix representation of a linear transformation $T \in L(V, W)$ in the ordered bases $\beta$ and $\gamma$ is the $l \times k$ matrix $A$ defined by $A_{ij} = a_{ij}$:
$$[T]^\gamma_\beta := \big[\, [T(v_1)]_\gamma\ [T(v_2)]_\gamma\ \cdots\ [T(v_k)]_\gamma \,\big] = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1k} \\ a_{21} & a_{22} & \cdots & a_{2k} \\ \vdots & \vdots & & \vdots \\ a_{l1} & a_{l2} & \cdots & a_{lk} \end{pmatrix}$$
Note that $[T(v)]_\gamma = \varphi_\gamma(T(v)) = (\varphi_\gamma \circ T)(v)$.

Notation 0.19 If $V = W$ and $\beta = \gamma$, we write $[T]_\beta$ instead of $[T]^\beta_\beta$.

Example 0.20 Let $T \in L(\mathbb{R}^2, \mathbb{R}^3)$ be given by $T(x, y) = (x + y,\ 2x - y,\ 3x + 5y)$. In terms of matrices and column vectors, $T$ behaves as follows:
$$T \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 2 & -1 \\ 3 & 5 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$$
But this matrix, call it $A$, is actually the representation of $T$ with respect to the standard ordered bases $\rho_2 = (e_1, e_2)$ and $\rho_3 = (e_1, e_2, e_3)$, that is $A = [T]^{\rho_3}_{\rho_2}$. What if we were to choose different bases for $\mathbb{R}^2$ and $\mathbb{R}^3$? Say,
$$\beta = \big( (1, 1),\ (0, 1) \big), \qquad \gamma = \big( (1, 1, 1),\ (1, 0, 1),\ (0, 0, 1) \big)$$

How would $T$ look with respect to these bases? Let us first find the coordinate representations of $T(1, 1)$ and $T(0, 1)$ with respect to $\gamma$. Note, $T(1, 1) = (2, 1, 8)$ and $T(0, 1) = (1, -1, 5)$, and to find $[(2, 1, 8)]_\gamma$ and $[(1, -1, 5)]_\gamma$ we have to solve the equations
$$\begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 2 \\ 1 \\ 8 \end{pmatrix} \qquad \text{and} \qquad \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\ 5 \end{pmatrix}$$
If $B$ is the matrix $\begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \end{pmatrix}$, then $B^{-1} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & -1 & 0 \\ -1 & 0 & 1 \end{pmatrix}$, so
$$[T(1, 1)]_\gamma = B^{-1} \begin{pmatrix} 2 \\ 1 \\ 8 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 6 \end{pmatrix} \qquad \text{and} \qquad [T(0, 1)]_\gamma = B^{-1} \begin{pmatrix} 1 \\ -1 \\ 5 \end{pmatrix} = \begin{pmatrix} -1 \\ 2 \\ 4 \end{pmatrix}$$
Let us verify this:
$$1(1, 1, 1) + 1(1, 0, 1) + 6(0, 0, 1) = (2, 1, 8)$$
and
$$-1(1, 1, 1) + 2(1, 0, 1) + 4(0, 0, 1) = (1, -1, 5)$$
so indeed we have found $[T(1, 1)]_\gamma$ and $[T(0, 1)]_\gamma$, and therefore
$$[T]^\gamma_\beta = \big[\, [T(1, 1)]_\gamma\ [T(0, 1)]_\gamma \,\big] = \begin{pmatrix} 1 & -1 \\ 1 & 2 \\ 6 & 4 \end{pmatrix}$$

0.2.2 Properties of Coordinate and Matrix Representations

Theorem 0.21 (Linearity of Coordinates) Let $\beta$ be a basis for a subspace $V$ of $\mathbb{R}^n$. Then, for all $x, y \in V$ and all $k \in \mathbb{R}$ we have

(1) $[x + y]_\beta = [x]_\beta + [y]_\beta$
(2) $[kx]_\beta = k[x]_\beta$

Proof: (1) On the one hand, $x + y = B[x + y]_\beta$, and on the other $x = B[x]_\beta$ and $y = B[y]_\beta$, so $x + y = B[x]_\beta + B[y]_\beta$. Thus,
$$B[x + y]_\beta = x + y = B[x]_\beta + B[y]_\beta = B\big([x]_\beta + [y]_\beta\big)$$
so that, subtracting the right hand side from both sides, we get $B\big([x + y]_\beta - ([x]_\beta + [y]_\beta)\big) = 0$. Now, $B$'s columns are basis vectors, so they are linearly independent, which means $Bx = 0$ has only the trivial solution: for if $\beta = (v_1, \dots, v_k)$ and $x = (x_1, \dots, x_k)$, then $0 = Bx = x_1 v_1 + \cdots + x_k v_k$ implies $x_1 = \cdots = x_k = 0$, or $x = 0$. But this means the kernel of $B$ is $\{0\}$, so that $[x + y]_\beta - ([x]_\beta + [y]_\beta) = 0$, or $[x + y]_\beta = [x]_\beta + [y]_\beta$.

The proof of (2) follows even more straightforwardly: first, note that if $[x]_\beta = (a_1, \dots, a_k)^T$, then $x = a_1 v_1 + \cdots + a_k v_k$, so that $kx = k a_1 v_1 + \cdots + k a_k v_k$, and therefore
$$[kx]_\beta = (k a_1, \dots, k a_k)^T = k (a_1, \dots, a_k)^T = k [x]_\beta$$
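The computation in Example 0.20 can be packaged as a single matrix identity: if $A$ is the standard matrix of $T$, $B$ holds the $\beta$ vectors as columns and $C$ holds the $\gamma$ vectors as columns, then $[T]^\gamma_\beta = C^{-1} A B$ (this is relation (0.12) below, solved for $D$). A sketch reproducing the matrix just found:

```python
import numpy as np

A = np.array([[1, 1], [2, -1], [3, 5]])                  # standard matrix of T
B = np.column_stack([(1, 1), (0, 1)])                    # beta  = ((1,1), (0,1))
C = np.column_stack([(1, 1, 1), (1, 0, 1), (0, 0, 1)])   # gamma

D = np.linalg.inv(C) @ A @ B                             # [T]^gamma_beta
print(np.round(D).astype(int))
# [[ 1 -1]
#  [ 1  2]
#  [ 6  4]]
```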

Corollary 0.22 The coordinate maps $\varphi_\beta$ are linear, i.e. $\varphi_\beta \in L(V, \mathbb{R}^k)$, and further they are isomorphisms, that is they are invertible, and so $\varphi_\beta \in GL(V, \mathbb{R}^k)$.

Proof: Linearity was shown in the previous theorem. To see that $\varphi_\beta$ is an isomorphism, note first that $\varphi_\beta$ takes bases to bases: if $\beta = (v_1, \dots, v_k)$ is a basis for $V$, then $\varphi_\beta(v_i) = e_i$, since $v_i = 0 v_1 + \cdots + 1 v_i + \cdots + 0 v_k$. Thus, it takes $\beta$ to the standard basis $\rho_k = (e_1, \dots, e_k)$ for $\mathbb{R}^k$. Consequently, it is surjective, because if $x = (x_1, \dots, x_k) \in \mathbb{R}^k$, then
$$x = x_1 e_1 + \cdots + x_k e_k = x_1 \varphi_\beta(v_1) + \cdots + x_k \varphi_\beta(v_k) = \varphi_\beta(x_1 v_1 + \cdots + x_k v_k)$$
If we let $v = x_1 v_1 + \cdots + x_k v_k$, then we see that $v \in V$ satisfies $\varphi_\beta(v) = x$. But $\varphi_\beta$ is also injective: if $v \in \ker \varphi_\beta$, then $v = a_1 v_1 + \cdots + a_k v_k$ for some $a_i$, so that
$$0 = \varphi_\beta(v) = \varphi_\beta(a_1 v_1 + \cdots + a_k v_k) = a_1 \varphi_\beta(v_1) + \cdots + a_k \varphi_\beta(v_k) = a_1 e_1 + \cdots + a_k e_k$$
By the linear independence of the $e_i$ we must have $a_1 = \cdots = a_k = 0$, and so $v = 0$. Thus, $\varphi_\beta$ is also injective.

Theorem 0.23 Let $V$ and $W$ be finite-dimensional subspaces of $\mathbb{R}^n$ and $\mathbb{R}^m$, having ordered bases $\beta = (v_1, \dots, v_k)$ and $\gamma = (w_1, \dots, w_l)$, respectively, and let $T \in L(V, W)$. Then for all $v \in V$ we have
$$[T(v)]_\gamma = [T]^\gamma_\beta [v]_\beta \tag{0.11}$$
In other words, if $D = [T]^\gamma_\beta$ is the matrix representation of $T$ in $\beta$ and $\gamma$ coordinates, with $T_D \in L(\mathbb{R}^k, \mathbb{R}^l)$ the corresponding matrix multiplication map, if $A$ is the matrix representation of $T$ in standard coordinates, with corresponding matrix multiplication map $T_A$, and if $\varphi_\beta \in GL(V, \mathbb{R}^k)$ and $\varphi_\gamma \in GL(W, \mathbb{R}^l)$ are the respective coordinate maps, with matrix representations $B^{-1}$ and $C^{-1}$, respectively, where $B = [v_1 \cdots v_k]$ and $C = [w_1 \cdots w_l]$, then
$$\varphi_\gamma \circ T = T_D \circ \varphi_\beta \qquad \text{or, in terms of matrices,} \qquad C^{-1} A = D B^{-1} \tag{0.12}$$
and the following diagram commutes:
$$\begin{array}{ccc} V & \xrightarrow{\ T\ } & W \\ \varphi_\beta \downarrow & & \downarrow \varphi_\gamma \\ \mathbb{R}^k & \xrightarrow{\ T_D\ } & \mathbb{R}^l \end{array} \qquad \text{or, in terms of matrices,} \qquad \begin{array}{ccc} V & \xrightarrow{\ A\ } & W \\ B^{-1} \downarrow & & \downarrow C^{-1} \\ \mathbb{R}^k & \xrightarrow{\ D\ } & \mathbb{R}^l \end{array}$$

Proof: If $\beta = (v_1, \dots, v_k)$ is an ordered basis for $V$ and $\gamma = (w_1, \dots, w_l)$ is an ordered basis for $W$, then let
$$[T]^\gamma_\beta = \big[\, [T(v_1)]_\gamma \cdots [T(v_k)]_\gamma \,\big] = \begin{pmatrix} a_{11} & \cdots & a_{1k} \\ \vdots & & \vdots \\ a_{l1} & \cdots & a_{lk} \end{pmatrix}$$
Now, for all $u \in V$ there are unique $b_1, \dots, b_k \in \mathbb{R}$ such that $u = b_1 v_1 + \cdots + b_k v_k$. Therefore,
$$T(u) = T(b_1 v_1 + \cdots + b_k v_k) = b_1 T(v_1) + \cdots + b_k T(v_k)$$

so that by the linearity of $\varphi_\gamma$,
$$[T(u)]_\gamma = \varphi_\gamma\big(T(u)\big) = \varphi_\gamma\big(b_1 T(v_1) + \cdots + b_k T(v_k)\big) = b_1 \varphi_\gamma\big(T(v_1)\big) + \cdots + b_k \varphi_\gamma\big(T(v_k)\big)$$
$$= b_1 [T(v_1)]_\gamma + \cdots + b_k [T(v_k)]_\gamma = \big[\, [T(v_1)]_\gamma \cdots [T(v_k)]_\gamma \,\big] \begin{pmatrix} b_1 \\ \vdots \\ b_k \end{pmatrix} = [T]^\gamma_\beta [u]_\beta$$
This shows that $\varphi_\gamma \circ T = T_D \circ \varphi_\beta$, since $[T(u)]_\gamma = (\varphi_\gamma \circ T)(u)$ and $[T]^\gamma_\beta [u]_\beta = (T_D \circ \varphi_\beta)(u)$. Finally, since $\varphi_\beta(x) = B^{-1} x$ and $\varphi_\gamma(y) = C^{-1} y$, we have the equivalent statement $C^{-1} A = D B^{-1}$.

Remark 0.24 Two square matrices $A, B \in \mathbb{R}^{n \times n}$ are said to be similar, denoted $A \sim B$, if there exists an invertible matrix $P \in GL(n, \mathbb{R})$ such that $A = P B P^{-1}$. Similarity is an equivalence relation (it is reflexive, symmetric and transitive):

(1) (reflexivity) $A \sim A$, because we can take $P = I_n$, so that $A = I_n A I_n^{-1}$.
(2) (symmetry) If $A \sim B$, then there is an invertible $P$ such that $A = P B P^{-1}$. But then left-multiplying by $P^{-1}$ and right-multiplying by $P$ gives $P^{-1} A P = B$, so since $P^{-1}$ is invertible, we have $B \sim A$.
(3) (transitivity) If $A \sim B$ and $B \sim C$, then there are invertible matrices $P$ and $Q$ such that $A = P B P^{-1}$ and $B = Q C Q^{-1}$. Plugging the second into the first gives
$$A = P B P^{-1} = P (Q C Q^{-1}) P^{-1} = (PQ) C (Q^{-1} P^{-1}) = (PQ) C (PQ)^{-1}$$
so since $PQ$ is invertible, with inverse $Q^{-1} P^{-1}$, we have $A \sim C$.

In the previous theorem, if we take $W = V$ and $\gamma = \beta$, we'll have $B = C$, so that $B^{-1} A = D B^{-1}$, or $A = B D B^{-1}$, which shows that $A \sim D$, i.e. $[T]_\rho \sim [T]_\beta$. Since similarity is an equivalence relation, if $\beta$ and $\gamma$ are any two bases for $V$, then
$$[T]_\beta \sim [T]_\rho \text{ and } [T]_\rho \sim [T]_\gamma \implies [T]_\beta \sim [T]_\gamma$$
This demonstrates that if we represent a linear transformation $T \in L(V, V)$ with respect to two different bases, then the corresponding matrices are similar. Indeed, the invertible matrix $P$ is $B^{-1} C$, because $[T]_\beta = B^{-1} [T]_\rho B$ and $[T]_\rho = C [T]_\gamma C^{-1}$, so that
$$[T]_\beta = B^{-1} [T]_\rho B = B^{-1} C [T]_\gamma C^{-1} B = (B^{-1} C) [T]_\gamma (B^{-1} C)^{-1}$$
We will show below that the converse is also true: if two matrices are similar, then they represent the same linear transformation, possibly with respect to different bases!
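The similarity claim can be tested concretely for an operator on $\mathbb{R}^2$: with basis matrices $B$ and $C$, we have $[T]_\beta = B^{-1}[T]_\rho B$ and $[T]_\gamma = C^{-1}[T]_\rho C$, and $P = B^{-1}C$ should conjugate one into the other. A sketch using an arbitrary operator (our choice) and the two bases of Remark 0.9:

```python
import numpy as np

T_rho = np.array([[2.0, 1.0], [0.0, 3.0]])            # some operator, standard basis
B = np.eye(2)                                          # beta  = (e_1, e_2)
C = np.column_stack([(1, 1), (1, 0)]).astype(float)    # gamma = ((1,1), (1,0))

T_beta = np.linalg.inv(B) @ T_rho @ B
T_gamma = np.linalg.inv(C) @ T_rho @ C
P = np.linalg.inv(B) @ C                               # conjugating matrix
print(np.allclose(T_beta, P @ T_gamma @ np.linalg.inv(P)))   # True
```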

Theorem 0.25 If $V$, $W$, $Z$ are finite-dimensional subspaces of $\mathbb{R}^n$, $\mathbb{R}^m$ and $\mathbb{R}^p$, respectively, with ordered bases $\alpha$, $\beta$ and $\gamma$, respectively, and if $T \in L(V, W)$ and $U \in L(W, Z)$, then
$$[U \circ T]^\gamma_\alpha = [U]^\gamma_\beta\, [T]^\beta_\alpha \tag{0.13}$$

Proof: By the previous two theorems we have
$$[U \circ T]^\gamma_\alpha = \big[\, [(U \circ T)(v_1)]_\gamma \cdots [(U \circ T)(v_k)]_\gamma \,\big] = \big[\, [U(T(v_1))]_\gamma \cdots [U(T(v_k))]_\gamma \,\big]$$
$$= \big[\, [U]^\gamma_\beta [T(v_1)]_\beta \cdots [U]^\gamma_\beta [T(v_k)]_\beta \,\big] = [U]^\gamma_\beta \big[\, [T(v_1)]_\beta \cdots [T(v_k)]_\beta \,\big] = [U]^\gamma_\beta\, [T]^\beta_\alpha$$
which completes the proof.

Corollary 0.26 If $V$ is a $k$-dimensional subspace of $\mathbb{R}^n$ with an ordered basis $\beta$, and if $I \in L(V, V)$ is the identity operator, then $[I]_\beta = I_k \in \mathbb{R}^{k \times k}$.

Proof: Given any $T \in L(V)$, we have $T \circ I = T$, so that $[T]_\beta [I]_\beta = [T \circ I]_\beta = [T]_\beta$, and similarly $[I]_\beta [T]_\beta = [T]_\beta$. Hence, taking $T = I$ we will have $[I]_\beta^2 = [I]_\beta$. Note also that $[I]_\beta^{-1} = [I]_\beta$, because $I^{-1} = I$. But then $[I]_\beta = I_k$, because any $A \in \mathbb{R}^{k \times k}$ that is invertible and satisfies $A^2 = A$ will satisfy $A = I_k$:
$$A = A I_k = A (A A^{-1}) = (A A) A^{-1} = A^2 A^{-1} = A A^{-1} = I_k$$
An alternative proof follows directly from the definition, since $I(v_i) = v_i$, so that $[I(v_i)]_\beta = e_i$, whence
$$[I]_\beta = \big[\, [I(v_1)]_\beta \cdots [I(v_k)]_\beta \,\big] = [e_1 \cdots e_k] = I_k$$

Theorem 0.27 If $V$ and $W$ are subspaces of $\mathbb{R}^n$ with ordered bases $\beta = (b_1, \dots, b_k)$ and $\gamma = (c_1, \dots, c_l)$, respectively, then the function
$$\Phi : L(V, W) \to \mathbb{R}^{l \times k} \tag{0.14}$$
$$\Phi(T) = [T]^\gamma_\beta \tag{0.15}$$
is an isomorphism, that is $\Phi \in GL\big(L(V, W), \mathbb{R}^{l \times k}\big)$, and consequently the space of linear transformations is isomorphic to the space of all $l \times k$ matrices:
$$L(V, W) \cong \mathbb{R}^{l \times k} \tag{0.16}$$

Proof: First, $\Phi$ is linear: there exist scalars $r_{ij}, s_{ij} \in \mathbb{R}$, for $i = 1, \dots, l$ and $j = 1, \dots, k$, such that for any $T, U \in L(V, W)$ we have
$$T(b_j) = r_{1j} c_1 + \cdots + r_{lj} c_l \qquad \text{and} \qquad U(b_j) = s_{1j} c_1 + \cdots + s_{lj} c_l$$
Hence, for all $s, t \in \mathbb{R}$ and $j = 1, \dots, k$ we have
$$(sT + tU)(b_j) = s T(b_j) + t U(b_j) = s \sum_{i=1}^{l} r_{ij} c_i + t \sum_{i=1}^{l} s_{ij} c_i = \sum_{i=1}^{l} (s r_{ij} + t s_{ij}) c_i$$

As a consequence of this and the rules of matrix addition and scalar multiplication we have
$$\Phi(sT + tU) = [sT + tU]^\gamma_\beta = \begin{pmatrix} s r_{11} + t s_{11} & \cdots & s r_{1k} + t s_{1k} \\ \vdots & & \vdots \\ s r_{l1} + t s_{l1} & \cdots & s r_{lk} + t s_{lk} \end{pmatrix} = s \begin{pmatrix} r_{11} & \cdots & r_{1k} \\ \vdots & & \vdots \\ r_{l1} & \cdots & r_{lk} \end{pmatrix} + t \begin{pmatrix} s_{11} & \cdots & s_{1k} \\ \vdots & & \vdots \\ s_{l1} & \cdots & s_{lk} \end{pmatrix} = s [T]^\gamma_\beta + t [U]^\gamma_\beta = s \Phi(T) + t \Phi(U)$$
Moreover, $\Phi$ is bijective, since for all $A \in \mathbb{R}^{l \times k}$ there is a unique linear transformation $T \in L(V, W)$ such that $\Phi(T) = A$: namely, there exists a unique $T \in L(V, W)$ such that $T(b_j) = A_{1j} c_1 + \cdots + A_{lj} c_l$ for $j = 1, \dots, k$. This makes $\Phi$ onto, and also 1-1, because $\ker(\Phi) = \{T_0\}$, the zero operator: for $O \in \mathbb{R}^{l \times k}$ the only $T \in L(V, W)$ satisfying $\Phi(T) = O$ is given by $T(b_j) = 0 c_1 + \cdots + 0 c_l = 0$ for $j = 1, \dots, k$, which defines a unique transformation, $T_0$.

Theorem 0.28 Let $V$ and $W$ be subspaces of $\mathbb{R}^n$ of the same dimension $k$, with ordered bases $\beta = (b_1, \dots, b_k)$ and $\gamma = (c_1, \dots, c_k)$, respectively, and let $T \in L(V, W)$. Then $T$ is an isomorphism iff $[T]^\gamma_\beta$ is invertible, that is $T \in GL(V, W)$ iff $[T]^\gamma_\beta \in GL(k, \mathbb{R})$. In this case
$$[T^{-1}]^\beta_\gamma = \big([T]^\gamma_\beta\big)^{-1} \tag{0.17}$$

Proof: If $T \in GL(V, W)$ and $\dim(V) = \dim(W) = k$, then $T^{-1} \in L(W, V)$, and $T \circ T^{-1} = I_W$ and $T^{-1} \circ T = I_V$, so that by Theorem 0.25 and Corollary 0.26 we have
$$[T]^\gamma_\beta\, [T^{-1}]^\beta_\gamma = [T \circ T^{-1}]_\gamma = [I_W]_\gamma = I_k = [I_V]_\beta = [T^{-1} \circ T]_\beta = [T^{-1}]^\beta_\gamma\, [T]^\gamma_\beta$$
so that $[T]^\gamma_\beta$ is invertible with inverse $[T^{-1}]^\beta_\gamma$, and by the uniqueness of the multiplicative inverse in $\mathbb{R}^{k \times k}$ we have $[T^{-1}]^\beta_\gamma = \big([T]^\gamma_\beta\big)^{-1}$.

Conversely, if $A = [T]^\gamma_\beta$ is invertible, there is a $k \times k$ matrix $B$ such that $AB = BA = I_k$. Define $U \in L(W, V)$ on the basis elements as follows, $U(c_j) = \sum_{i=1}^{k} B_{ij} b_i$, and extend $U$ by linearity. Then $B = [U]^\beta_\gamma$. To show that $U = T^{-1}$, note that
$$[U \circ T]_\beta = [U]^\beta_\gamma\, [T]^\gamma_\beta = BA = I_k = [I_V]_\beta \qquad \text{and} \qquad [T \circ U]_\gamma = [T]^\gamma_\beta\, [U]^\beta_\gamma = AB = I_k = [I_W]_\gamma$$
But since $\Phi \in GL\big(L(V, W), \mathbb{R}^{k \times k}\big)$ is an isomorphism, and therefore 1-1, we must have that $U \circ T = I_V$ and $T \circ U = I_W$. By the uniqueness of the inverse, however, $U = T^{-1}$, and $T$ is an isomorphism.
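Relation (0.17) says that inverting a transformation and inverting its matrix representation commute. A quick numerical sketch for $V = W = \mathbb{R}^2$, with the operator and bases chosen by us for illustration, using the identity $[T]^\gamma_\beta = C^{-1} A_{\mathrm{std}} B$ from (0.12):

```python
import numpy as np

A_std = np.array([[2.0, 1.0], [1.0, 1.0]])             # an invertible T on R^2
B = np.column_stack([(1, 1), (0, 1)]).astype(float)    # basis beta
C = np.column_stack([(1, 0), (1, 1)]).astype(float)    # basis gamma

T_bg = np.linalg.inv(C) @ A_std @ B                    # [T]^gamma_beta
Tinv_gb = np.linalg.inv(B) @ np.linalg.inv(A_std) @ C  # [T^-1]^beta_gamma
print(np.allclose(Tinv_gb, np.linalg.inv(T_bg)))       # True: (0.17) holds
```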

0.3 Change of Coordinates

0.3.1 Definitions

We now define the change of coordinates, or change of basis, operator. If $V$ is a $k$-dimensional subspace of $\mathbb{R}^n$ and $\beta = (b_1, \dots, b_k)$ and $\gamma = (c_1, \dots, c_k)$ are two ordered bases for $V$, then the coordinate maps $\varphi_\beta, \varphi_\gamma \in GL(V, \mathbb{R}^k)$, which are isomorphisms by Corollary 0.22 above, may be used to define a change of coordinates operator $\varphi_{\beta,\gamma} \in GL(\mathbb{R}^k, \mathbb{R}^k)$ changing $\beta$ coordinates into $\gamma$ coordinates, that is having the property
$$\varphi_{\beta,\gamma}\big([v]_\beta\big) = [v]_\gamma \tag{0.18}$$
We define the operator as follows:
$$\varphi_{\beta,\gamma} := \varphi_\gamma \circ \varphi_\beta^{-1} \tag{0.19}$$
The relationship between these three functions is illustrated in the following commutative diagram:
$$\begin{array}{ccc} \mathbb{R}^k & \xrightarrow{\ \varphi_{\beta,\gamma}\ =\ \varphi_\gamma \circ \varphi_\beta^{-1}\ } & \mathbb{R}^k \\ \varphi_\beta \nwarrow & & \nearrow \varphi_\gamma \\ & V & \end{array}$$
As we will see below, the change of coordinates operator has a matrix representation
$$M_{\beta,\gamma} = [\varphi_{\beta,\gamma}]_\rho = [\varphi_\gamma \circ \varphi_\beta^{-1}]_\rho = \big[\, [b_1]_\gamma\ [b_2]_\gamma\ \cdots\ [b_k]_\gamma \,\big] \tag{0.20}$$
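Formula (0.20) gives a direct recipe: the columns of $M_{\beta,\gamma}$ are the $\gamma$-coordinates of the $\beta$ vectors, so when $V = \mathbb{R}^k$ and the basis matrices $B$ and $C$ are square and invertible, $M_{\beta,\gamma} = C^{-1} B$. A sketch that also checks property (0.18), with bases chosen by us:

```python
import numpy as np

B = np.column_stack([(1, 1), (0, 1)]).astype(float)   # beta
C = np.column_stack([(1, 0), (1, 1)]).astype(float)   # gamma

M = np.linalg.inv(C) @ B          # columns are [b_1]_gamma, [b_2]_gamma
v = np.array([3.0, 5.0])          # any vector in R^2
v_beta = np.linalg.solve(B, v)    # [v]_beta
v_gamma = np.linalg.solve(C, v)   # [v]_gamma
print(np.allclose(M @ v_beta, v_gamma))   # True: [v]_gamma = M [v]_beta
```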

0.3.2 Properties of Change-of-Coordinate Maps and Matrices

Theorem 0.29 (Change of Coordinate Matrix) Let $V$ be a $k$-dimensional subspace of $\mathbb{R}^n$ and let $\beta = (b_1, \dots, b_k)$ and $\gamma = (c_1, \dots, c_k)$ be two ordered bases for $V$. Since $\varphi_\beta$ and $\varphi_\gamma$ are isomorphisms, the diagram above commutes, and the change of basis operator $\varphi_{\beta,\gamma} := \varphi_\gamma \circ \varphi_\beta^{-1} \in L(\mathbb{R}^k, \mathbb{R}^k)$, changing $\beta$ coordinates into $\gamma$ coordinates, is an isomorphism. Its matrix representation
$$M_{\beta,\gamma} = [\varphi_{\beta,\gamma}]_\rho \in \mathbb{R}^{k \times k} \tag{0.21}$$
where $\rho = (e_1, \dots, e_k)$ is the standard ordered basis for $\mathbb{R}^k$, is called the change of coordinate matrix, and it satisfies the following conditions:

1. $M_{\beta,\gamma} = [\varphi_{\beta,\gamma}]_\rho = [\varphi_\gamma \circ \varphi_\beta^{-1}]_\rho = \big[\, [b_1]_\gamma\ [b_2]_\gamma\ \cdots\ [b_k]_\gamma \,\big]$
2. $[v]_\gamma = M_{\beta,\gamma} [v]_\beta$ for all $v \in V$
3. $M_{\beta,\gamma}$ is invertible and $M_{\beta,\gamma}^{-1} = M_{\gamma,\beta}$

Proof: The first point is shown as follows:
$$M_{\beta,\gamma} = [\varphi_{\beta,\gamma}]_\rho = \big[ \varphi_{\beta,\gamma}(e_1)\ \varphi_{\beta,\gamma}(e_2)\ \cdots\ \varphi_{\beta,\gamma}(e_k) \big] = \big[ (\varphi_\gamma \circ \varphi_\beta^{-1})(e_1)\ \cdots\ (\varphi_\gamma \circ \varphi_\beta^{-1})(e_k) \big]$$
$$= \big[ \varphi_\gamma(b_1)\ \varphi_\gamma(b_2)\ \cdots\ \varphi_\gamma(b_k) \big] = \big[\, [b_1]_\gamma\ [b_2]_\gamma\ \cdots\ [b_k]_\gamma \,\big]$$
since $\varphi_\beta^{-1}(e_i) = b_i$. The second point follows from Theorem 0.23, since
$$[v]_\gamma = \varphi_\gamma(v) = \big(\varphi_\gamma \circ (\varphi_\beta^{-1} \circ \varphi_\beta)\big)(v) = \big((\varphi_\gamma \circ \varphi_\beta^{-1}) \circ \varphi_\beta\big)(v) = \varphi_{\beta,\gamma}\big([v]_\beta\big) = M_{\beta,\gamma} [v]_\beta$$
And the last point follows from the fact that $\varphi_\beta$ and $\varphi_\gamma$ are isomorphisms, so that $\varphi_{\beta,\gamma}$ is an isomorphism, and hence $\varphi_{\beta,\gamma}^{-1} \in L(\mathbb{R}^k)$ is an isomorphism; and because the diagram above commutes we must have
$$\varphi_{\beta,\gamma}^{-1} = (\varphi_\gamma \circ \varphi_\beta^{-1})^{-1} = \varphi_\beta \circ \varphi_\gamma^{-1} = \varphi_{\gamma,\beta}$$
so that by (1), or alternatively by Theorem 0.28,
$$M_{\beta,\gamma}^{-1} = \big([\varphi_{\beta,\gamma}]_\rho\big)^{-1} = [\varphi_{\beta,\gamma}^{-1}]_\rho = [\varphi_{\gamma,\beta}]_\rho = M_{\gamma,\beta}$$

Corollary 0.30 (Change of Basis) Let $V$ and $W$ be subspaces of $\mathbb{R}^n$, and let $(\beta, \gamma)$ and $(\beta', \gamma')$ be pairs of ordered bases for $V$ and $W$, respectively. If $T \in L(V, W)$, then
$$[T]^{\gamma'}_{\beta'} = M_{\gamma,\gamma'}\, [T]^\gamma_\beta\, M_{\beta',\beta} \tag{0.22}$$
$$= M_{\gamma,\gamma'}\, [T]^\gamma_\beta\, M_{\beta,\beta'}^{-1} \tag{0.23}$$
where $M_{\gamma,\gamma'}$ and $M_{\beta',\beta} = M_{\beta,\beta'}^{-1}$ are change of coordinate matrices. That is, the following diagram commutes, where $k = \dim(V)$ and $l = \dim(W)$:
$$\begin{array}{ccc} \mathbb{R}^k & \xrightarrow{\ [T]^\gamma_\beta\ } & \mathbb{R}^l \\ \varphi_\beta \uparrow & & \uparrow \varphi_\gamma \\ V & \xrightarrow{\ T\ } & W \\ \varphi_{\beta'} \downarrow & & \downarrow \varphi_{\gamma'} \\ \mathbb{R}^k & \xrightarrow{\ [T]^{\gamma'}_{\beta'}\ } & \mathbb{R}^l \end{array}$$
with $\varphi_{\beta,\beta'}$ carrying the top-left $\mathbb{R}^k$ to the bottom-left $\mathbb{R}^k$ and $\varphi_{\gamma,\gamma'}$ carrying the top-right $\mathbb{R}^l$ to the bottom-right $\mathbb{R}^l$.

Proof: This follows from the fact that if $\beta = (b_1, \dots, b_k)$, $\beta' = (b'_1, \dots, b'_k)$, $\gamma = (c_1, \dots, c_l)$ and $\gamma' = (c'_1, \dots, c'_l)$, then for each $v \in V$ we have, by Theorem 0.23 and Theorem 0.29,
$$[T]^{\gamma'}_{\beta'} [v]_{\beta'} = [T(v)]_{\gamma'} = M_{\gamma,\gamma'} [T(v)]_\gamma = M_{\gamma,\gamma'}\, [T]^\gamma_\beta\, [v]_\beta = M_{\gamma,\gamma'}\, [T]^\gamma_\beta\, M_{\beta',\beta} [v]_{\beta'}$$
Since this holds for all $v \in V$, and since $M_{\beta',\beta} = M_{\beta,\beta'}^{-1}$ by Theorem 0.29, we get
$$[T]^{\gamma'}_{\beta'} = M_{\gamma,\gamma'}\, [T]^\gamma_\beta\, M_{\beta',\beta} = M_{\gamma,\gamma'}\, [T]^\gamma_\beta\, M_{\beta,\beta'}^{-1}$$
which completes the proof.

Corollary 0.31 (Change of Basis for a Linear Operator) If $V$ is a subspace of $\mathbb{R}^n$ with ordered bases $\beta$ and $\gamma$, and $T \in L(V)$, then
$$[T]_\gamma = M_{\beta,\gamma}\, [T]_\beta\, M_{\beta,\gamma}^{-1} \tag{0.24}$$
where $M_{\beta,\gamma}$ is the change of coordinates matrix.

Corollary 0.32 If we are given any two of the following:

(1) an invertible matrix $A \in \mathbb{R}^{n \times n}$
(2) an ordered basis $\beta$ for $\mathbb{R}^n$
(3) an ordered basis $\gamma$ for $\mathbb{R}^n$

then the third is uniquely determined by the equation $A = M_{\beta,\gamma}$, where $M_{\beta,\gamma}$ is the change of coordinates matrix of the previous theorem.

Proof: If we have $A = M_{\beta,\gamma} = \big[\, [b_1]_\gamma\ [b_2]_\gamma\ \cdots\ [b_n]_\gamma \,\big]$, suppose we know $A$ and $\gamma$. Then $[b_i]_\gamma$ is given by the $i$th column of $A$, so $b_i = A_{1i} c_1 + \cdots + A_{ni} c_n$, and $\beta$ is uniquely determined. If $\beta$ and $\gamma$ are given, then by the previous theorem $M_{\beta,\gamma}$ is given by $M_{\beta,\gamma} = \big[\, [b_1]_\gamma\ [b_2]_\gamma\ \cdots\ [b_n]_\gamma \,\big]$. Lastly, if $A$ and $\beta$ are given, then $\gamma$ is given by the first case applied to $A^{-1} = M_{\gamma,\beta}$.
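Corollary 0.31 is the familiar conjugation rule for operators. A final numerical sketch verifying (0.24), with an operator and bases of our choosing:

```python
import numpy as np

T_std = np.array([[0.0, -1.0], [1.0, 0.0]])            # rotation by 90 degrees
B = np.column_stack([(1, 1), (0, 1)]).astype(float)    # beta
C = np.column_stack([(1, 0), (1, 1)]).astype(float)    # gamma

T_beta = np.linalg.inv(B) @ T_std @ B                  # [T]_beta
T_gamma = np.linalg.inv(C) @ T_std @ C                 # [T]_gamma
M = np.linalg.inv(C) @ B                               # M_{beta,gamma}
print(np.allclose(T_gamma, M @ T_beta @ np.linalg.inv(M)))   # True: (0.24) holds
```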
