LINEAR ALGEBRA W W L CHEN


(c) W W L Chen, 1997, 2008.

This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied, with or without permission from the author. However, this document may not be kept on any information storage and retrieval system without permission from the author, unless such system is not accessible to any individuals other than its owners.

Chapter 8
LINEAR TRANSFORMATIONS

8.1. Euclidean Linear Transformations

By a transformation from R^n into R^m, we mean a function of the type T : R^n → R^m, with domain R^n and codomain R^m. For every vector x ∈ R^n, the vector T(x) ∈ R^m is called the image of x under the transformation T, and the set

R(T) = {T(x) : x ∈ R^n}

of all images under T is called the range of the transformation T.

Remark. For our convenience later, we have chosen to use R(T) instead of the usual T(R^n) to denote the range of the transformation T.

For every x = (x_1, ..., x_n) ∈ R^n, we can write

T(x) = T(x_1, ..., x_n) = (y_1, ..., y_m).

Here, for every i = 1, ..., m, we have

y_i = T_i(x_1, ..., x_n),    (1)

where T_i : R^n → R is a real valued function.

Definition. A transformation T : R^n → R^m is called a linear transformation if there exists a real matrix

A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}
such that for every x = (x_1, ..., x_n) ∈ R^n, we have T(x_1, ..., x_n) = (y_1, ..., y_m), where

y_1 = a_{11}x_1 + ... + a_{1n}x_n,
...
y_m = a_{m1}x_1 + ... + a_{mn}x_n,

or, in matrix notation,

\begin{pmatrix} y_1 \\ \vdots \\ y_m \end{pmatrix} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}.    (2)

The matrix A is called the standard matrix for the linear transformation T.

Remarks. (1) In other words, a transformation T : R^n → R^m is linear if the equation (1) for every i = 1, ..., m is linear.
(2) If we write x ∈ R^n and y ∈ R^m as column matrices, then (2) can be written in the form y = Ax, and so the linear transformation T can be interpreted as multiplication of x ∈ R^n by the standard matrix A.

Definition. A linear transformation T : R^n → R^m is said to be a linear operator if n = m. In this case, we say that T is a linear operator on R^n.

Example 8.1.1. The linear transformation T : R^5 → R^3, defined by the equations

y_1 = 2x_1 + 3x_2 + 5x_3 + 7x_4 - 9x_5,
y_2 = 3x_2 + 4x_3 + 2x_5,
y_3 = x_1 + 3x_3 - 2x_4,

can be expressed in matrix form as

\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} 2 & 3 & 5 & 7 & -9 \\ 0 & 3 & 4 & 0 & 2 \\ 1 & 0 & 3 & -2 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{pmatrix}.

If (x_1, x_2, x_3, x_4, x_5) = (1, 0, 1, 0, 1), then

\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = \begin{pmatrix} 2 & 3 & 5 & 7 & -9 \\ 0 & 3 & 4 & 0 & 2 \\ 1 & 0 & 3 & -2 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} -2 \\ 6 \\ 4 \end{pmatrix},

so that T(1, 0, 1, 0, 1) = (-2, 6, 4).

Example 8.1.2. Suppose that A is the zero m x n matrix. The linear transformation T : R^n → R^m, where T(x) = Ax for every x ∈ R^n, is the zero transformation from R^n into R^m. Clearly T(x) = 0 for every x ∈ R^n.

Example 8.1.3. Suppose that I is the identity n x n matrix. The linear operator T : R^n → R^n, where T(x) = Ix for every x ∈ R^n, is the identity operator on R^n. Clearly T(x) = x for every x ∈ R^n.
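The computation in Example 8.1.1 can be checked numerically. The following sketch (Python with numpy, added for illustration and not part of the original notes) builds the standard matrix and multiplies it by the given vector.

```python
import numpy as np

# Standard matrix of the linear transformation T : R^5 -> R^3 from Example 8.1.1
A = np.array([
    [2, 3, 5, 7, -9],
    [0, 3, 4, 0,  2],
    [1, 0, 3, -2, 0],
])

x = np.array([1, 0, 1, 0, 1])   # the vector (1, 0, 1, 0, 1)

# T(x) is simply the matrix-vector product Ax
print(A @ x)                    # expected: [-2  6  4]
```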
PROPOSITION 8A. Suppose that T : R^n → R^m is a linear transformation, and that {e_1, ..., e_n} is the standard basis for R^n. Then the standard matrix for T is given by

A = ( T(e_1) ... T(e_n) ),

where T(e_j) is a column matrix for every j = 1, ..., n.

Proof. This follows immediately from (2).

8.2. Linear Operators on R^2

In this section, we consider the special case when n = m = 2, and study linear operators on R^2. For every x ∈ R^2, we shall write x = (x_1, x_2).

Example 8.2.1. Consider reflection across the x_2-axis, so that T(x_1, x_2) = (-x_1, x_2). Clearly we have

T(e_1) = \begin{pmatrix} -1 \\ 0 \end{pmatrix} and T(e_2) = \begin{pmatrix} 0 \\ 1 \end{pmatrix},

and so it follows from Proposition 8A that the standard matrix is given by

A = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}.

It is not difficult to see that the standard matrices for reflection across the x_1-axis and across the line x_1 = x_2 are given respectively by

A = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} and A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.

Also, the standard matrix for reflection across the origin is given by

A = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}.

We give a summary in the table below:

Linear operator                    Equations                   Standard matrix
Reflection across x_2-axis         y_1 = -x_1, y_2 = x_2       [-1 0; 0 1]
Reflection across x_1-axis         y_1 = x_1,  y_2 = -x_2      [1 0; 0 -1]
Reflection across x_1 = x_2        y_1 = x_2,  y_2 = x_1       [0 1; 1 0]
Reflection across origin           y_1 = -x_1, y_2 = -x_2      [-1 0; 0 -1]

Example 8.2.2. For orthogonal projection onto the x_1-axis, we have T(x_1, x_2) = (x_1, 0), with standard matrix

A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.
Similarly, the standard matrix for orthogonal projection onto the x_2-axis is given by

A = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.

We give a summary in the table below:

Linear operator                             Equations                Standard matrix
Orthogonal projection onto x_1-axis         y_1 = x_1, y_2 = 0       [1 0; 0 0]
Orthogonal projection onto x_2-axis         y_1 = 0,   y_2 = x_2     [0 0; 0 1]

Example 8.2.3. For anticlockwise rotation by an angle θ, we have T(x_1, x_2) = (y_1, y_2), where

y_1 + i y_2 = (x_1 + i x_2)(cos θ + i sin θ),

and so

\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.

It follows that the standard matrix is given by

A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.

We give a summary in the table below:

Linear operator                        Equations                                                    Standard matrix
Anticlockwise rotation by angle θ      y_1 = x_1 cos θ - x_2 sin θ, y_2 = x_1 sin θ + x_2 cos θ     [cos θ -sin θ; sin θ cos θ]

Example 8.2.4. For contraction or dilation by a non-negative scalar k, we have T(x_1, x_2) = (kx_1, kx_2), with standard matrix

A = \begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix}.

The operator is called a contraction if 0 < k < 1 and a dilation if k > 1, and can be extended to negative values of k by noting that for k < 0, we have

\begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} -k & 0 \\ 0 & -k \end{pmatrix}.

This describes contraction or dilation by the non-negative scalar -k followed by reflection across the origin. We give a summary in the table below:

Linear operator                           Equations                  Standard matrix
Contraction or dilation by factor k       y_1 = kx_1, y_2 = kx_2     [k 0; 0 k]
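Proposition 8A gives a concrete recipe for the standard matrix: its columns are the images of the standard basis vectors. The following sketch (Python with numpy, an illustration added to the text) builds the rotation matrix of Example 8.2.3 this way and checks it against a direct rotation of a point.

```python
import numpy as np

def rotate(x, theta):
    """Anticlockwise rotation of a point in R^2 by angle theta (Example 8.2.3)."""
    x1, x2 = x
    return np.array([x1 * np.cos(theta) - x2 * np.sin(theta),
                     x1 * np.sin(theta) + x2 * np.cos(theta)])

theta = np.pi / 6

# Proposition 8A: the columns of the standard matrix are T(e1) and T(e2)
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.column_stack([rotate(e1, theta), rotate(e2, theta)])

x = np.array([2.0, -1.0])
print(np.allclose(A @ x, rotate(x, theta)))   # True: Ax agrees with T(x)
```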
Example 8.2.5. For expansion or compression in the x_1-direction by a positive factor k, we have T(x_1, x_2) = (kx_1, x_2), with standard matrix

A = \begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix}.

This can be extended to negative values of k by noting that for k < 0, we have

\begin{pmatrix} k & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} -k & 0 \\ 0 & 1 \end{pmatrix}.

This describes expansion or compression in the x_1-direction by the positive factor -k followed by reflection across the x_2-axis. Similarly, for expansion or compression in the x_2-direction by a non-zero factor k, we have the standard matrix

A = \begin{pmatrix} 1 & 0 \\ 0 & k \end{pmatrix}.

We give a summary in the table below:

Linear operator                                   Equations                  Standard matrix
Expansion or compression in x_1-direction         y_1 = kx_1, y_2 = x_2      [k 0; 0 1]
Expansion or compression in x_2-direction         y_1 = x_1,  y_2 = kx_2     [1 0; 0 k]

Example 8.2.6. For shears in the x_1-direction with factor k, we have T(x_1, x_2) = (x_1 + kx_2, x_2), with standard matrix

A = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}.

For the case k = 1, we have the following picture. (Figure omitted: the unit square and its image under the shear with k = 1.)

For the case k = -1, we have the following picture. (Figure omitted: the unit square and its image under the shear with k = -1.)
Similarly, for shears in the x_2-direction with factor k, we have the standard matrix

A = \begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix}.

We give a summary in the table below:

Linear operator             Equations                        Standard matrix
Shear in x_1-direction      y_1 = x_1 + kx_2, y_2 = x_2      [1 k; 0 1]
Shear in x_2-direction      y_1 = x_1, y_2 = kx_1 + x_2      [1 0; k 1]

Example 8.2.7. Consider a linear operator T : R^2 → R^2 which consists of a reflection across the x_2-axis, followed by a shear in the x_1-direction with factor 3 and then reflection across the x_1-axis. To find the standard matrix, consider the effect of T on the standard basis {e_1, e_2} of R^2. Note that

e_1 → (-1, 0) → (-1, 0) → (-1, 0) = T(e_1),
e_2 → (0, 1) → (3, 1) → (3, -1) = T(e_2),

so it follows from Proposition 8A that the standard matrix for T is

A = \begin{pmatrix} -1 & 3 \\ 0 & -1 \end{pmatrix}.

Let us summarize the above and consider a few special cases. We have the following table of invertible linear operators with k ≠ 0. Clearly, if A is the standard matrix for an invertible linear operator T, then the inverse matrix A^{-1} is the standard matrix for the inverse linear operator T^{-1}.

Linear operator T                              Standard matrix A    Inverse matrix A^{-1}    Linear operator T^{-1}
Reflection across line x_1 = x_2               [0 1; 1 0]           [0 1; 1 0]               Reflection across line x_1 = x_2
Expansion or compression in x_1-direction      [k 0; 0 1]           [1/k 0; 0 1]             Expansion or compression in x_1-direction
Expansion or compression in x_2-direction      [1 0; 0 k]           [1 0; 0 1/k]             Expansion or compression in x_2-direction
Shear in x_1-direction                         [1 k; 0 1]           [1 -k; 0 1]              Shear in x_1-direction
Shear in x_2-direction                         [1 0; k 1]           [1 0; -k 1]              Shear in x_2-direction

Next, let us consider the question of elementary row operations on 2 x 2 matrices. It is not difficult to see that an elementary row operation performed on a 2 x 2 matrix A has the effect of multiplying
the matrix A by some elementary matrix E to give the product EA. We have the following table:

Elementary row operation                        Elementary matrix E
Interchanging the two rows                      [0 1; 1 0]
Multiplying row 1 by non-zero factor k          [k 0; 0 1]
Multiplying row 2 by non-zero factor k          [1 0; 0 k]
Adding k times row 2 to row 1                   [1 k; 0 1]
Adding k times row 1 to row 2                   [1 0; k 1]

Now, we know that any invertible matrix A can be reduced to the identity matrix by a finite number of elementary row operations. In other words, there exist a finite number of elementary matrices E_1, ..., E_s of the types above with various non-zero values of k such that

E_s ... E_1 A = I,

so that

A = E_1^{-1} ... E_s^{-1}.

We have proved the following result.

PROPOSITION 8B. Suppose that the linear operator T : R^2 → R^2 has standard matrix A, where A is invertible. Then T is the product of a succession of finitely many reflections, expansions, compressions and shears.

In fact, we can prove the following result concerning images of straight lines.

PROPOSITION 8C. Suppose that the linear operator T : R^2 → R^2 has standard matrix A, where A is invertible. Then
(a) the image under T of a straight line is a straight line;
(b) the image under T of a straight line through the origin is a straight line through the origin; and
(c) the images under T of parallel straight lines are parallel straight lines.

Proof. Suppose that T(x_1, x_2) = (y_1, y_2). Since A is invertible, we have x = A^{-1}y, where

x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} and y = \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.

The equation of a straight line is given by αx_1 + βx_2 = γ or, in matrix form, by

( α  β ) \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = ( γ ).

Hence

( α  β ) A^{-1} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = ( γ ).
Let

( α'  β' ) = ( α  β ) A^{-1}.

Then

( α'  β' ) \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = ( γ ).

In other words, the image under T of the straight line αx_1 + βx_2 = γ is α'y_1 + β'y_2 = γ, clearly another straight line. This proves (a). To prove (b), note that straight lines through the origin correspond to γ = 0. To prove (c), note that parallel straight lines correspond to different values of γ for the same values of α and β.

8.3. Elementary Properties of Euclidean Linear Transformations

In this section, we establish a number of simple properties of euclidean linear transformations.

PROPOSITION 8D. Suppose that T_1 : R^n → R^m and T_2 : R^m → R^k are linear transformations. Then T = T_2 ∘ T_1 : R^n → R^k is also a linear transformation.

Proof. Since T_1 and T_2 are linear transformations, they have standard matrices A_1 and A_2 respectively. In other words, we have T_1(x) = A_1 x for every x ∈ R^n and T_2(y) = A_2 y for every y ∈ R^m. It follows that T(x) = T_2(T_1(x)) = A_2 A_1 x for every x ∈ R^n, so that T has standard matrix A_2 A_1.

Example 8.3.1. Suppose that T_1 : R^2 → R^2 is anticlockwise rotation by π/2 and T_2 : R^2 → R^2 is orthogonal projection onto the x_1-axis. Then the respective standard matrices are

A_1 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} and A_2 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}.

It follows that the standard matrices for T_2 ∘ T_1 and T_1 ∘ T_2 are respectively

A_2 A_1 = \begin{pmatrix} 0 & -1 \\ 0 & 0 \end{pmatrix} and A_1 A_2 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.

Hence T_2 ∘ T_1 and T_1 ∘ T_2 are not equal.

Example 8.3.2. Suppose that T_1 : R^2 → R^2 is anticlockwise rotation by θ and T_2 : R^2 → R^2 is anticlockwise rotation by φ. Then the respective standard matrices are

A_1 = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} and A_2 = \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix}.

It follows that the standard matrix for T_2 ∘ T_1 is

A_2 A_1 = \begin{pmatrix} \cos\phi\cos\theta - \sin\phi\sin\theta & -\cos\phi\sin\theta - \sin\phi\cos\theta \\ \sin\phi\cos\theta + \cos\phi\sin\theta & \cos\phi\cos\theta - \sin\phi\sin\theta \end{pmatrix} = \begin{pmatrix} \cos(\phi+\theta) & -\sin(\phi+\theta) \\ \sin(\phi+\theta) & \cos(\phi+\theta) \end{pmatrix}.

Hence T_2 ∘ T_1 is anticlockwise rotation by φ + θ.

Example 8.3.3. The reader should check that in R^2, reflection across the x_1-axis followed by reflection across the x_2-axis gives reflection across the origin.
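Both composition examples above are easy to confirm numerically. The following sketch (Python with numpy, added for illustration) checks that the two compositions in Example 8.3.1 differ, and that composing two rotations adds the angles as in Example 8.3.2.

```python
import numpy as np

def rotation(theta):
    """Standard matrix of anticlockwise rotation by theta (Example 8.2.3)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Example 8.3.1: rotation by pi/2 and projection onto the x1-axis do not commute
A1 = rotation(np.pi / 2)
A2 = np.array([[1.0, 0.0],
               [0.0, 0.0]])
print(np.allclose(A2 @ A1, A1 @ A2))          # False: the compositions differ

# Example 8.3.2: rotation by theta followed by rotation by phi is rotation by phi + theta
theta, phi = 0.4, 1.1
print(np.allclose(rotation(phi) @ rotation(theta), rotation(phi + theta)))   # True
```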
Linear transformations that map distinct vectors to distinct vectors are of special importance.

Definition. A linear transformation T : R^n → R^m is said to be one-to-one if for every x', x'' ∈ R^n, we have x' = x'' whenever T(x') = T(x'').

Example 8.3.4. If we consider linear operators T : R^2 → R^2, then T is one-to-one precisely when the standard matrix A is invertible. To see this, suppose first of all that A is invertible. If T(x') = T(x''), then Ax' = Ax''. Multiplying on the left by A^{-1}, we obtain x' = x''. Suppose next that A is not invertible. Then there exists x ∈ R^2 such that x ≠ 0 and Ax = 0. On the other hand, we clearly have A0 = 0. It follows that T(x) = T(0), so that T is not one-to-one.

PROPOSITION 8E. Suppose that the linear operator T : R^n → R^n has standard matrix A. Then the following statements are equivalent:
(a) The matrix A is invertible.
(b) The linear operator T is one-to-one.
(c) The range of T is R^n; in other words, R(T) = R^n.

Proof. ((a) ⟹ (b)) Suppose that T(x') = T(x''). Then Ax' = Ax''. Multiplying on the left by A^{-1} gives x' = x''.
((b) ⟹ (a)) Suppose that T is one-to-one. Then the system Ax = 0 has unique solution x = 0 in R^n. It follows that A can be reduced by elementary row operations to the identity matrix I, and is therefore invertible.
((a) ⟹ (c)) For any y ∈ R^n, clearly x = A^{-1}y satisfies Ax = y, so that T(x) = y.
((c) ⟹ (a)) Suppose that {e_1, ..., e_n} is the standard basis for R^n. Let x_1, ..., x_n ∈ R^n be chosen to satisfy T(x_j) = e_j, so that Ax_j = e_j, for every j = 1, ..., n. Write

C = ( x_1 ... x_n ).

Then AC = I, so that A is invertible.

Definition. Suppose that the linear operator T : R^n → R^n has standard matrix A, where A is invertible. Then the linear operator T^{-1} : R^n → R^n, defined by T^{-1}(x) = A^{-1}x for every x ∈ R^n, is called the inverse of the linear operator T.

Remark. Clearly T^{-1}(T(x)) = x and T(T^{-1}(x)) = x for every x ∈ R^n.

Example 8.3.5. Consider the linear operator T : R^2 → R^2, defined by T(x) = Ax for every x ∈ R^2, where

A = \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix}.

Clearly A is invertible, and

A^{-1} = \begin{pmatrix} 2 & -1 \\ -1 & 1 \end{pmatrix}.

Hence the inverse linear operator is T^{-1} : R^2 → R^2, defined by T^{-1}(x) = A^{-1}x for every x ∈ R^2.

Example 8.3.6. Suppose that T : R^2 → R^2 is anticlockwise rotation by angle θ. The reader should check that T^{-1} : R^2 → R^2 is anticlockwise rotation by angle 2π - θ.
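The inverse operator of Example 8.3.5 and the rotation inverse of Example 8.3.6 can both be checked with a few lines of numpy (an illustrative sketch, not part of the original notes).

```python
import numpy as np

# Example 8.3.5: T(x) = Ax with A invertible, so T^{-1}(x) = A^{-1} x
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
A_inv = np.linalg.inv(A)
print(np.allclose(A_inv, [[2, -1], [-1, 1]]))      # True
print(np.allclose(A_inv @ A, np.eye(2)))           # True: T^{-1}(T(x)) = x

# Example 8.3.6: the inverse of rotation by theta is rotation by 2*pi - theta
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
R_back = np.array([[np.cos(2*np.pi - theta), -np.sin(2*np.pi - theta)],
                   [np.sin(2*np.pi - theta),  np.cos(2*np.pi - theta)]])
print(np.allclose(np.linalg.inv(R), R_back))       # True
```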
Next, we study linearity properties of euclidean linear transformations which we shall use later to discuss linear transformations in arbitrary real vector spaces.

PROPOSITION 8F. A transformation T : R^n → R^m is linear if and only if the following two conditions are satisfied:
(a) For every u, v ∈ R^n, we have T(u + v) = T(u) + T(v).
(b) For every u ∈ R^n and c ∈ R, we have T(cu) = cT(u).

Proof. Suppose first of all that T : R^n → R^m is a linear transformation. Let A be the standard matrix for T. Then for every u, v ∈ R^n and c ∈ R, we have

T(u + v) = A(u + v) = Au + Av = T(u) + T(v)

and

T(cu) = A(cu) = c(Au) = cT(u).

Suppose now that (a) and (b) hold. To show that T is linear, we need to find a matrix A such that T(x) = Ax for every x ∈ R^n. Suppose that {e_1, ..., e_n} is the standard basis for R^n. As suggested by Proposition 8A, we write

A = ( T(e_1) ... T(e_n) ),

where T(e_j) is a column matrix for every j = 1, ..., n. For any vector

x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}

in R^n, we have

Ax = ( T(e_1) ... T(e_n) ) \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = x_1 T(e_1) + ... + x_n T(e_n).

Using (b) on each summand and then using (a) inductively, we obtain

Ax = T(x_1 e_1) + ... + T(x_n e_n) = T(x_1 e_1 + ... + x_n e_n) = T(x),

as required.

To conclude our study of euclidean linear transformations, we briefly mention the problem of eigenvalues and eigenvectors of euclidean linear operators.

Definition. Suppose that T : R^n → R^n is a linear operator. Then any real number λ ∈ R is called an eigenvalue of T if there exists a non-zero vector x ∈ R^n such that T(x) = λx. This non-zero vector x ∈ R^n is called an eigenvector of T corresponding to the eigenvalue λ.

Remark. Note that the equation T(x) = λx is equivalent to the equation Ax = λx. It follows that there is no distinction between eigenvalues and eigenvectors of T and those of the standard matrix A. We therefore do not need to discuss this problem any further.

8.4. General Linear Transformations

Suppose that V and W are real vector spaces. To define a linear transformation from V into W, we are motivated by Proposition 8F which describes the linearity properties of euclidean linear transformations.
By a transformation from V into W, we mean a function of the type T : V → W, with domain V and codomain W. For every vector u ∈ V, the vector T(u) ∈ W is called the image of u under the transformation T.

Definition. A transformation T : V → W from a real vector space V into a real vector space W is called a linear transformation if the following two conditions are satisfied:
(LT1) For every u, v ∈ V, we have T(u + v) = T(u) + T(v).
(LT2) For every u ∈ V and c ∈ R, we have T(cu) = cT(u).

Definition. A linear transformation T : V → V from a real vector space V into itself is called a linear operator on V.

Example 8.4.1. Suppose that V and W are two real vector spaces. The transformation T : V → W, where T(u) = 0 for every u ∈ V, is clearly linear, and is called the zero transformation from V to W.

Example 8.4.2. Suppose that V is a real vector space. The transformation I : V → V, where I(u) = u for every u ∈ V, is clearly linear, and is called the identity operator on V.

Example 8.4.3. Suppose that V is a real vector space, and that k ∈ R is fixed. The transformation T : V → V, where T(u) = ku for every u ∈ V, is clearly linear. This operator is called a dilation if k > 1 and a contraction if 0 < k < 1.

Example 8.4.4. Suppose that V is a finite-dimensional vector space, with basis {w_1, ..., w_n}. Define a transformation T : V → R^n as follows. For every u ∈ V, there exists a unique vector (β_1, ..., β_n) ∈ R^n such that u = β_1 w_1 + ... + β_n w_n. We let T(u) = (β_1, ..., β_n). In other words, the transformation T gives the coordinates of any vector u ∈ V with respect to the given basis {w_1, ..., w_n}. Suppose now that v = γ_1 w_1 + ... + γ_n w_n is another vector in V. Then u + v = (β_1 + γ_1)w_1 + ... + (β_n + γ_n)w_n, so that

T(u + v) = (β_1 + γ_1, ..., β_n + γ_n) = (β_1, ..., β_n) + (γ_1, ..., γ_n) = T(u) + T(v).

Also, if c ∈ R, then cu = cβ_1 w_1 + ... + cβ_n w_n, so that

T(cu) = (cβ_1, ..., cβ_n) = c(β_1, ..., β_n) = cT(u).

Hence T is a linear transformation. We shall return to this in greater detail in the next section.

Example 8.4.5. Suppose that P_n denotes the vector space of all polynomials with real coefficients and degree at most n. Define a transformation T : P_n → P_n as follows. For every polynomial

p = p_0 + p_1 x + ... + p_n x^n

in P_n, we let

T(p) = p_n + p_{n-1} x + ... + p_0 x^n.

Suppose now that q = q_0 + q_1 x + ... + q_n x^n is another polynomial in P_n. Then

p + q = (p_0 + q_0) + (p_1 + q_1)x + ... + (p_n + q_n)x^n,

so that

T(p + q) = (p_n + q_n) + (p_{n-1} + q_{n-1})x + ... + (p_0 + q_0)x^n
         = (p_n + p_{n-1}x + ... + p_0 x^n) + (q_n + q_{n-1}x + ... + q_0 x^n) = T(p) + T(q).
Also, for any c ∈ R, we have cp = cp_0 + cp_1 x + ... + cp_n x^n, so that

T(cp) = cp_n + cp_{n-1}x + ... + cp_0 x^n = c(p_n + p_{n-1}x + ... + p_0 x^n) = cT(p).

Hence T is a linear transformation.

Example 8.4.6. Let V denote the vector space of all real valued functions differentiable everywhere in R, and let W denote the vector space of all real valued functions defined on R. Consider the transformation T : V → W, where T(f) = f' for every f ∈ V. It is easy to check from properties of derivatives that T is a linear transformation.

Example 8.4.7. Let V denote the vector space of all real valued functions that are Riemann integrable over the interval [0, 1]. Consider the transformation T : V → R, where

T(f) = \int_0^1 f(x) \, dx

for every f ∈ V. It is easy to check from properties of the Riemann integral that T is a linear transformation.

Consider a linear transformation T : V → W from a finite-dimensional real vector space V into a real vector space W. Suppose that {v_1, ..., v_n} is a basis of V. Then every u ∈ V can be written uniquely in the form u = β_1 v_1 + ... + β_n v_n, where β_1, ..., β_n ∈ R. It follows that

T(u) = T(β_1 v_1 + ... + β_n v_n) = T(β_1 v_1) + ... + T(β_n v_n) = β_1 T(v_1) + ... + β_n T(v_n).

We have therefore proved the following generalization of Proposition 8A.

PROPOSITION 8G. Suppose that T : V → W is a linear transformation from a finite-dimensional real vector space V into a real vector space W. Suppose further that {v_1, ..., v_n} is a basis of V. Then T is completely determined by T(v_1), ..., T(v_n).

Example 8.4.8. Consider a linear transformation T : P_2 → R, where T(1) = 1, T(x) = 2 and T(x^2) = 3. Since {1, x, x^2} is a basis of P_2, this linear transformation is completely determined. In particular, we have, for example,

T(5 - 3x + 2x^2) = 5T(1) - 3T(x) + 2T(x^2) = 5.

Example 8.4.9. Consider a linear transformation T : R^4 → R, where T(1, 0, 0, 0) = 1, T(1, 1, 0, 0) = 2, T(1, 1, 1, 0) = 3 and T(1, 1, 1, 1) = 4. Since {(1, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1)} is a basis of R^4, this linear transformation is completely determined. In particular, we have, for example,

T(6, 4, 3, 1) = T(2(1, 0, 0, 0) + (1, 1, 0, 0) + 2(1, 1, 1, 0) + (1, 1, 1, 1))
             = 2T(1, 0, 0, 0) + T(1, 1, 0, 0) + 2T(1, 1, 1, 0) + T(1, 1, 1, 1) = 14.

We also have the following generalization of Proposition 8D.

PROPOSITION 8H. Suppose that V, W, U are real vector spaces. Suppose further that T_1 : V → W and T_2 : W → U are linear transformations. Then T = T_2 ∘ T_1 : V → U is also a linear transformation.

Proof. Suppose that u, v ∈ V. Then

T(u + v) = T_2(T_1(u + v)) = T_2(T_1(u) + T_1(v)) = T_2(T_1(u)) + T_2(T_1(v)) = T(u) + T(v).

Also, if c ∈ R, then

T(cu) = T_2(T_1(cu)) = T_2(cT_1(u)) = cT_2(T_1(u)) = cT(u).

Hence T is a linear transformation.
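Proposition 8G says that knowing T on a basis determines T everywhere: expand a vector in the basis and apply linearity. The following sketch (Python with numpy, illustrative only) carries out Example 8.4.9 by solving for the coordinates of (6, 4, 3, 1) in the given basis.

```python
import numpy as np

# Example 8.4.9: a basis of R^4 and the prescribed values of T on that basis
basis = np.array([[1, 0, 0, 0],
                  [1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [1, 1, 1, 1]], dtype=float)
values = np.array([1, 2, 3, 4], dtype=float)

u = np.array([6, 4, 3, 1], dtype=float)

# Coordinates of u with respect to the basis: solve basis^T c = u
coords = np.linalg.solve(basis.T, u)
print(coords)                    # [2. 1. 2. 1.]

# By linearity, T(u) is the same combination of the prescribed values
print(coords @ values)           # 14.0
```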
8.5. Change of Basis

Suppose that V is a real vector space, with basis B = {u_1, ..., u_n}. Then every vector u ∈ V can be written uniquely as a linear combination

u = β_1 u_1 + ... + β_n u_n, where β_1, ..., β_n ∈ R.    (3)

It follows that the vector u can be identified with the vector (β_1, ..., β_n) ∈ R^n.

Definition. Suppose that u ∈ V and (3) holds. Then the matrix

[u]_B = \begin{pmatrix} β_1 \\ \vdots \\ β_n \end{pmatrix}

is called the coordinate matrix of u relative to the basis B = {u_1, ..., u_n}.

Example 8.5.1. The vectors

u_1 = (1, 2, 1, 0), u_2 = (3, 3, 3, 0), u_3 = (2, -10, 0, 0), u_4 = (-2, 1, -6, 2)

are linearly independent in R^4, and so B = {u_1, u_2, u_3, u_4} is a basis of R^4. It follows that for any u = (x, y, z, w) ∈ R^4, we can write

u = β_1 u_1 + β_2 u_2 + β_3 u_3 + β_4 u_4.

In matrix notation, this becomes

\begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix} = \begin{pmatrix} 1 & 3 & 2 & -2 \\ 2 & 3 & -10 & 1 \\ 1 & 3 & 0 & -6 \\ 0 & 0 & 0 & 2 \end{pmatrix} \begin{pmatrix} β_1 \\ β_2 \\ β_3 \\ β_4 \end{pmatrix},

so that

[u]_B = \begin{pmatrix} β_1 \\ β_2 \\ β_3 \\ β_4 \end{pmatrix} = \begin{pmatrix} 1 & 3 & 2 & -2 \\ 2 & 3 & -10 & 1 \\ 1 & 3 & 0 & -6 \\ 0 & 0 & 0 & 2 \end{pmatrix}^{-1} \begin{pmatrix} x \\ y \\ z \\ w \end{pmatrix}.

Remark. Consider a function φ : V → R^n, where φ(u) = [u]_B for every u ∈ V. It is not difficult to see that this function gives rise to a one-to-one correspondence between the elements of V and the elements of R^n. Furthermore, note that

[u + v]_B = [u]_B + [v]_B and [cu]_B = c[u]_B,

so that φ(u + v) = φ(u) + φ(v) and φ(cu) = cφ(u) for every u, v ∈ V and c ∈ R. Thus φ is a linear transformation, and preserves much of the structure of V. We also say that V is isomorphic to R^n. In practice, once we have made this identification between vectors and their coordinate matrices, we can basically forget about the basis B and imagine that we are working in R^n with the standard basis.

Clearly, if we change from one basis B = {u_1, ..., u_n} to another basis C = {v_1, ..., v_n} of V, then we also need to find a way of calculating [u]_C in terms of [u]_B for every vector u ∈ V. To do this, note that each of the vectors v_1, ..., v_n can be written uniquely as a linear combination of the vectors u_1, ..., u_n. Suppose that for i = 1, ..., n, we have

v_i = a_{1i} u_1 + ... + a_{ni} u_n, where a_{1i}, ..., a_{ni} ∈ R,
so that

[v_i]_B = \begin{pmatrix} a_{1i} \\ \vdots \\ a_{ni} \end{pmatrix}.

For every u ∈ V, we can write

u = β_1 u_1 + ... + β_n u_n = γ_1 v_1 + ... + γ_n v_n, where β_1, ..., β_n, γ_1, ..., γ_n ∈ R,

so that

[u]_B = \begin{pmatrix} β_1 \\ \vdots \\ β_n \end{pmatrix} and [u]_C = \begin{pmatrix} γ_1 \\ \vdots \\ γ_n \end{pmatrix}.

Clearly

u = γ_1 v_1 + ... + γ_n v_n
  = γ_1(a_{11} u_1 + ... + a_{n1} u_n) + ... + γ_n(a_{1n} u_1 + ... + a_{nn} u_n)
  = (γ_1 a_{11} + ... + γ_n a_{1n}) u_1 + ... + (γ_1 a_{n1} + ... + γ_n a_{nn}) u_n
  = β_1 u_1 + ... + β_n u_n.

Hence

β_1 = γ_1 a_{11} + ... + γ_n a_{1n},
...
β_n = γ_1 a_{n1} + ... + γ_n a_{nn}.

Written in matrix notation, we have

\begin{pmatrix} β_1 \\ \vdots \\ β_n \end{pmatrix} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} γ_1 \\ \vdots \\ γ_n \end{pmatrix}.

We have proved the following result.

PROPOSITION 8J. Suppose that B = {u_1, ..., u_n} and C = {v_1, ..., v_n} are two bases of a real vector space V. Then for every u ∈ V, we have

[u]_B = P[u]_C,

where the columns of the matrix

P = ( [v_1]_B ... [v_n]_B )

are precisely the coordinate matrices of the elements of C relative to the basis B.

Remark. Strictly speaking, Proposition 8J gives [u]_B in terms of [u]_C. However, note that the matrix P is invertible (why?), so that [u]_C = P^{-1}[u]_B.

Definition. The matrix P in Proposition 8J is sometimes called the transition matrix from the basis C to the basis B.
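The following sketch (Python with numpy, added for illustration; the two bases are made up for this example and are not taken from the text) builds the transition matrix P of Proposition 8J for a pair of bases of R^2 and verifies that [u]_B = P[u]_C.

```python
import numpy as np

# Two bases of R^2 (chosen just for this illustration), stored as columns of U and V
U = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # B = {u1, u2}
V = np.array([[2.0, 0.0],
              [1.0, 3.0]])          # C = {v1, v2}

# Column j of P is [v_j]_B, i.e. the solution of U * column = v_j  (Proposition 8J)
P = np.linalg.solve(U, V)
print(P)                             # [[ 1. -3.] [ 1.  3.]]

# Check [u]_B = P [u]_C for a vector with coordinates (2, 1) relative to C
u_C = np.array([2.0, 1.0])
u = V @ u_C                          # the actual vector u
u_B = np.linalg.solve(U, u)          # its coordinates relative to B
print(np.allclose(u_B, P @ u_C))     # True
```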
Example 8.5.2. We know that with

u_1 = (1, 2, 1, 0), u_2 = (3, 3, 3, 0), u_3 = (2, -10, 0, 0), u_4 = (-2, 1, -6, 2),

and with

v_1 = (1, 2, 1, 0), v_2 = (1, -1, 1, 0), v_3 = (1, 0, -1, 0), v_4 = (0, 0, 0, 2),

both B = {u_1, u_2, u_3, u_4} and C = {v_1, v_2, v_3, v_4} are bases of R^4. It is easy to check that

v_1 = u_1, v_2 = -2u_1 + u_2, v_3 = 11u_1 - 4u_2 + u_3, v_4 = -27u_1 + 11u_2 - 2u_3 + u_4,

so that

P = ( [v_1]_B [v_2]_B [v_3]_B [v_4]_B ) = \begin{pmatrix} 1 & -2 & 11 & -27 \\ 0 & 1 & -4 & 11 \\ 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

Hence [u]_B = P[u]_C for every u ∈ R^4. It is also easy to check that

u_1 = v_1, u_2 = 2v_1 + v_2, u_3 = -3v_1 + 4v_2 + v_3, u_4 = -v_1 - 3v_2 + 2v_3 + v_4,

so that

Q = ( [u_1]_C [u_2]_C [u_3]_C [u_4]_C ) = \begin{pmatrix} 1 & 2 & -3 & -1 \\ 0 & 1 & 4 & -3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

Hence [u]_C = Q[u]_B for every u ∈ R^4. Note that PQ = I. Now let u = (6, -1, 2, 2). We can check that u = v_1 + 3v_2 + 2v_3 + v_4, so that

[u]_C = \begin{pmatrix} 1 \\ 3 \\ 2 \\ 1 \end{pmatrix}.

Then

[u]_B = P[u]_C = \begin{pmatrix} 1 & -2 & 11 & -27 \\ 0 & 1 & -4 & 11 \\ 0 & 0 & 1 & -2 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 3 \\ 2 \\ 1 \end{pmatrix} = \begin{pmatrix} -10 \\ 6 \\ 0 \\ 1 \end{pmatrix}.

Check that u = -10u_1 + 6u_2 + u_4.
Example 8.5.3. Consider the vector space P_2. It is not too difficult to check that

u_1 = 1 + x, u_2 = 1 + x^2, u_3 = x + x^2

form a basis of P_2. Let u = 1 + 4x - x^2. Then u = β_1 u_1 + β_2 u_2 + β_3 u_3, where

1 + 4x - x^2 = β_1(1 + x) + β_2(1 + x^2) + β_3(x + x^2) = (β_1 + β_2) + (β_1 + β_3)x + (β_2 + β_3)x^2,

so that β_1 + β_2 = 1, β_1 + β_3 = 4 and β_2 + β_3 = -1. Hence (β_1, β_2, β_3) = (3, -2, 1). If we write B = {u_1, u_2, u_3}, then

[u]_B = \begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix}.

On the other hand, it is also not too difficult to check that

v_1 = 1, v_2 = 1 + x, v_3 = 1 + x + x^2

form a basis of P_2. Also u = γ_1 v_1 + γ_2 v_2 + γ_3 v_3, where

1 + 4x - x^2 = γ_1 + γ_2(1 + x) + γ_3(1 + x + x^2) = (γ_1 + γ_2 + γ_3) + (γ_2 + γ_3)x + γ_3 x^2,

so that γ_1 + γ_2 + γ_3 = 1, γ_2 + γ_3 = 4 and γ_3 = -1. Hence (γ_1, γ_2, γ_3) = (-3, 5, -1). If we write C = {v_1, v_2, v_3}, then

[u]_C = \begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix}.

Next, note that

v_1 = (1/2)u_1 + (1/2)u_2 - (1/2)u_3, v_2 = u_1, v_3 = (1/2)u_1 + (1/2)u_2 + (1/2)u_3.

Hence

P = ( [v_1]_B [v_2]_B [v_3]_B ) = \begin{pmatrix} 1/2 & 1 & 1/2 \\ 1/2 & 0 & 1/2 \\ -1/2 & 0 & 1/2 \end{pmatrix}.

To verify that [u]_B = P[u]_C, note that

\begin{pmatrix} 1/2 & 1 & 1/2 \\ 1/2 & 0 & 1/2 \\ -1/2 & 0 & 1/2 \end{pmatrix} \begin{pmatrix} -3 \\ 5 \\ -1 \end{pmatrix} = \begin{pmatrix} 3 \\ -2 \\ 1 \end{pmatrix}.

8.6. Kernel and Range

Consider first of all a euclidean linear transformation T : R^n → R^m. Suppose that A is the standard matrix for T. Then the range of the transformation T is given by

R(T) = {T(x) : x ∈ R^n} = {Ax : x ∈ R^n}.
It follows that R(T) is the set of all linear combinations of the columns of the matrix A, and is therefore the column space of A. On the other hand, the set

{x ∈ R^n : Ax = 0}

is the nullspace of A.

Recall that the sum of the dimension of the nullspace of A and the dimension of the column space of A is equal to the number of columns of A. This is known as the Rank-nullity theorem. The purpose of this section is to extend this result to the setting of linear transformations. To do this, we need the following generalization of the ideas of the nullspace and the column space.

Definition. Suppose that T : V → W is a linear transformation from a real vector space V into a real vector space W. Then the set

ker(T) = {u ∈ V : T(u) = 0}

is called the kernel of T, and the set

R(T) = {T(u) : u ∈ V}

is called the range of T.

Example 8.6.1. For a euclidean linear transformation T with standard matrix A, we have shown that ker(T) is the nullspace of A, while R(T) is the column space of A.

Example 8.6.2. Suppose that T : V → W is the zero transformation. Clearly we have ker(T) = V and R(T) = {0}.

Example 8.6.3. Suppose that T : V → V is the identity operator on V. Clearly we have ker(T) = {0} and R(T) = V.

Example 8.6.4. Suppose that T : R^2 → R^2 is orthogonal projection onto the x_1-axis. Then ker(T) is the x_2-axis, while R(T) is the x_1-axis.

Example 8.6.5. Suppose that T : R^n → R^n is one-to-one. Then ker(T) = {0} and R(T) = R^n, in view of Proposition 8E.

Example 8.6.6. Consider the linear transformation T : V → W, where V denotes the vector space of all real valued functions differentiable everywhere in R, where W denotes the vector space of all real valued functions defined in R, and where T(f) = f' for every f ∈ V. Then ker(T) is the set of all differentiable functions with derivative 0, and so is the set of all constant functions in R.

Example 8.6.7. Consider the linear transformation T : V → R, where V denotes the vector space of all real valued functions Riemann integrable over the interval [0, 1], and where

T(f) = \int_0^1 f(x) \, dx

for every f ∈ V. Then ker(T) is the set of all Riemann integrable functions in [0, 1] with zero mean, while R(T) = R.

PROPOSITION 8K. Suppose that T : V → W is a linear transformation from a real vector space V into a real vector space W. Then ker(T) is a subspace of V, while R(T) is a subspace of W.
Proof. Since T(0) = 0, it follows that 0 ∈ ker(T) ⊆ V and 0 ∈ R(T) ⊆ W. For any u, v ∈ ker(T), we have

T(u + v) = T(u) + T(v) = 0 + 0 = 0,

so that u + v ∈ ker(T). Suppose further that c ∈ R. Then

T(cu) = cT(u) = c0 = 0,

so that cu ∈ ker(T). Hence ker(T) is a subspace of V. Suppose next that w, z ∈ R(T). Then there exist u, v ∈ V such that T(u) = w and T(v) = z. Hence

T(u + v) = T(u) + T(v) = w + z,

so that w + z ∈ R(T). Suppose further that c ∈ R. Then

T(cu) = cT(u) = cw,

so that cw ∈ R(T). Hence R(T) is a subspace of W.

To complete this section, we prove the following generalization of the Rank-nullity theorem.

PROPOSITION 8L. Suppose that T : V → W is a linear transformation from an n-dimensional real vector space V into a real vector space W. Then

dim ker(T) + dim R(T) = n.

Proof. Suppose first of all that dim ker(T) = n. Then ker(T) = V, and so R(T) = {0}, and the result follows immediately. Suppose next that dim ker(T) = 0, so that ker(T) = {0}. If {v_1, ..., v_n} is a basis of V, then it follows that T(v_1), ..., T(v_n) are linearly independent in W, for otherwise there exist c_1, ..., c_n ∈ R, not all zero, such that

c_1 T(v_1) + ... + c_n T(v_n) = 0,

so that T(c_1 v_1 + ... + c_n v_n) = 0, a contradiction since c_1 v_1 + ... + c_n v_n ≠ 0. On the other hand, the elements of R(T) are linear combinations of T(v_1), ..., T(v_n). Hence dim R(T) = n, and the result again follows immediately. We may therefore assume that dim ker(T) = r, where 1 ≤ r < n. Let {v_1, ..., v_r} be a basis of ker(T). This basis can be extended to a basis {v_1, ..., v_r, v_{r+1}, ..., v_n} of V. It suffices to show that

{T(v_{r+1}), ..., T(v_n)}    (4)

is a basis of R(T). Suppose that u ∈ V. Then there exist β_1, ..., β_n ∈ R such that

u = β_1 v_1 + ... + β_r v_r + β_{r+1} v_{r+1} + ... + β_n v_n,

so that

T(u) = β_1 T(v_1) + ... + β_r T(v_r) + β_{r+1} T(v_{r+1}) + ... + β_n T(v_n) = β_{r+1} T(v_{r+1}) + ... + β_n T(v_n).

It follows that (4) spans R(T). It remains to prove that its elements are linearly independent. Suppose that c_{r+1}, ..., c_n ∈ R and

c_{r+1} T(v_{r+1}) + ... + c_n T(v_n) = 0.    (5)
We need to show that

c_{r+1} = ... = c_n = 0.    (6)

By linearity, it follows from (5) that T(c_{r+1} v_{r+1} + ... + c_n v_n) = 0, so that

c_{r+1} v_{r+1} + ... + c_n v_n ∈ ker(T).

Hence there exist c_1, ..., c_r ∈ R such that

c_{r+1} v_{r+1} + ... + c_n v_n = c_1 v_1 + ... + c_r v_r,

so that

c_1 v_1 + ... + c_r v_r - c_{r+1} v_{r+1} - ... - c_n v_n = 0.

Since {v_1, ..., v_n} is a basis of V, it follows that c_1 = ... = c_r = c_{r+1} = ... = c_n = 0, so that (6) holds. This completes the proof.

Remark. We sometimes say that dim R(T) and dim ker(T) are respectively the rank and the nullity of the linear transformation T.

8.7. Inverse Linear Transformations

In this section, we generalize some of the ideas first discussed in Section 8.3.

Definition. A linear transformation T : V → W from a real vector space V into a real vector space W is said to be one-to-one if for every u', u'' ∈ V, we have u' = u'' whenever T(u') = T(u'').

The result below follows immediately from our definition.

PROPOSITION 8M. Suppose that T : V → W is a linear transformation from a real vector space V into a real vector space W. Then T is one-to-one if and only if ker(T) = {0}.

Proof. (⟹) Clearly 0 ∈ ker(T). Suppose that ker(T) ≠ {0}. Then there exists a non-zero v ∈ ker(T). It follows that T(v) = T(0), and so T is not one-to-one.
(⟸) Suppose that ker(T) = {0}. Given any u', u'' ∈ V, we have T(u') - T(u'') = T(u' - u'') = 0 if and only if u' - u'' = 0; in other words, if and only if u' = u''.

We have the following generalization of Proposition 8E.

PROPOSITION 8N. Suppose that T : V → V is a linear operator on a finite-dimensional real vector space V. Then the following statements are equivalent:
(a) The linear operator T is one-to-one.
(b) We have ker(T) = {0}.
(c) The range of T is V; in other words, R(T) = V.

Proof. The equivalence of (a) and (b) is established by Proposition 8M. The equivalence of (b) and (c) follows from Proposition 8L.
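For a euclidean linear transformation, the rank and nullity of Proposition 8L are simply the rank of the standard matrix and the dimension of its nullspace, so the theorem can be observed numerically. The following sketch (Python with numpy; the matrix is an arbitrary illustration, not taken from the text) checks that rank plus nullity equals the number of columns.

```python
import numpy as np

# An arbitrary 3 x 5 standard matrix, so T : R^5 -> R^3 (illustrative example only)
A = np.array([[2, 3, 5, 7, -9],
              [0, 3, 4, 0,  2],
              [1, 0, 3, -2, 0]], dtype=float)

m, n = A.shape
rank = np.linalg.matrix_rank(A)              # dim R(T): dimension of the column space of A

# A basis of ker(T), i.e. the nullspace of A: right singular vectors beyond the rank
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:]                       # rows spanning the nullspace
nullity = null_basis.shape[0]                # dim ker(T)

print(np.allclose(A @ null_basis.T, 0))      # True: these vectors are mapped to 0
print(rank + nullity == n)                   # True: the Rank-nullity theorem
```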
Suppose that T : V → W is a one-to-one linear transformation from a real vector space V into a real vector space W. Then for every w ∈ R(T), there exists exactly one u ∈ V such that T(u) = w. We can therefore define a transformation T^{-1} : R(T) → V by writing T^{-1}(w) = u, where u ∈ V is the unique vector satisfying T(u) = w.

PROPOSITION 8P. Suppose that T : V → W is a one-to-one linear transformation from a real vector space V into a real vector space W. Then T^{-1} : R(T) → V is a linear transformation.

Proof. Suppose that w, z ∈ R(T). Then there exist u, v ∈ V such that T^{-1}(w) = u and T^{-1}(z) = v. It follows that T(u) = w and T(v) = z, so that T(u + v) = T(u) + T(v) = w + z, whence

T^{-1}(w + z) = u + v = T^{-1}(w) + T^{-1}(z).

Suppose further that c ∈ R. Then T(cu) = cw, so that

T^{-1}(cw) = cu = cT^{-1}(w).

This completes the proof.

We also have the following result concerning compositions of linear transformations, which requires no further proof, in view of our knowledge concerning inverse functions.

PROPOSITION 8Q. Suppose that V, W, U are real vector spaces. Suppose further that T_1 : V → W and T_2 : W → U are one-to-one linear transformations. Then
(a) the linear transformation T_2 ∘ T_1 : V → U is one-to-one; and
(b) (T_2 ∘ T_1)^{-1} = T_1^{-1} ∘ T_2^{-1}.

8.8. Matrices of General Linear Transformations

Suppose that T : V → W is a linear transformation from a real vector space V to a real vector space W. Suppose further that the vector spaces V and W are finite-dimensional, with dim V = n and dim W = m. We shall show that if we make use of a basis B of V and a basis C of W, then it is possible to describe T indirectly in terms of some matrix A. The main idea is to make use of coordinate matrices relative to the bases B and C.

Let us recall some discussion in Section 8.5. Suppose that B = {v_1, ..., v_n} is a basis of V. Then every vector v ∈ V can be written uniquely as a linear combination

v = β_1 v_1 + ... + β_n v_n, where β_1, ..., β_n ∈ R.    (7)

The matrix

[v]_B = \begin{pmatrix} β_1 \\ \vdots \\ β_n \end{pmatrix}    (8)

is the coordinate matrix of v relative to the basis B.

Consider now a transformation φ : V → R^n, where φ(v) = [v]_B for every v ∈ V. The proof of the following result is straightforward.

PROPOSITION 8R. Suppose that the real vector space V has basis B = {v_1, ..., v_n}. Then the transformation φ : V → R^n, where φ(v) = [v]_B satisfies (7) and (8) for every v ∈ V, is a one-to-one linear transformation, with range R(φ) = R^n. Furthermore, the inverse linear transformation φ^{-1} : R^n → V is also one-to-one, with range R(φ^{-1}) = V.
Suppose next that C = {w_1, ..., w_m} is a basis of W. Then we can define a linear transformation ψ : W → R^m, where ψ(w) = [w]_C for every w ∈ W, in a similar way. We now have the following diagram of linear transformations:

    V ----T----> W
    φ ↓↑ φ^{-1}     ψ ↓↑ ψ^{-1}
    R^n            R^m

Clearly the composition

S = ψ ∘ T ∘ φ^{-1} : R^n → R^m

is a euclidean linear transformation, and can therefore be described in terms of a standard matrix A. Our task is to determine this matrix A in terms of T and the bases B and C. We know from Proposition 8A that

A = ( S(e_1) ... S(e_n) ),

where {e_1, ..., e_n} is the standard basis for R^n. For every j = 1, ..., n, we have

S(e_j) = (ψ ∘ T ∘ φ^{-1})(e_j) = ψ(T(φ^{-1}(e_j))) = ψ(T(v_j)) = [T(v_j)]_C.

It follows that

A = ( [T(v_1)]_C ... [T(v_n)]_C ).    (9)

Definition. The matrix A given by (9) is called the matrix for the linear transformation T with respect to the bases B and C.

We now have the following diagram of linear transformations:

    V ----T----> W
    φ ↓↑ φ^{-1}     ψ ↓↑ ψ^{-1}
    R^n ---S---> R^m

Hence we can write T as the composition

T = ψ^{-1} ∘ S ∘ φ : V → W.

For every v ∈ V, we have the following:

v --φ--> [v]_B --S--> A[v]_B --ψ^{-1}--> ψ^{-1}(A[v]_B).
More precisely, if v = β_1 v_1 + ... + β_n v_n, then

[v]_B = \begin{pmatrix} β_1 \\ \vdots \\ β_n \end{pmatrix} and A[v]_B = A \begin{pmatrix} β_1 \\ \vdots \\ β_n \end{pmatrix} = \begin{pmatrix} γ_1 \\ \vdots \\ γ_m \end{pmatrix},

say, and so T(v) = ψ^{-1}(A[v]_B) = γ_1 w_1 + ... + γ_m w_m. We have proved the following result.

PROPOSITION 8S. Suppose that T : V → W is a linear transformation from a real vector space V into a real vector space W. Suppose further that V and W are finite-dimensional, with bases B and C respectively, and that A is the matrix for the linear transformation T with respect to the bases B and C. Then for every v ∈ V, we have T(v) = w, where w ∈ W is the unique vector satisfying [w]_C = A[v]_B.

Remark. In the special case when V = W, the linear transformation T : V → W is a linear operator on V. Of course, we may choose a basis B for the domain V of T and a basis C for the codomain V of T. In the case when T is the identity linear operator, we often choose B ≠ C, since this represents a change of basis. In the case when T is not the identity operator, we often choose B = C for the sake of convenience; we then say that A is the matrix for the linear operator T with respect to the basis B.

Example 8.8.1. Consider the operator T : P_3 → P_3 on the real vector space P_3 of all polynomials with real coefficients and degree at most 3, where for every polynomial p(x) in P_3, we have T(p(x)) = xp'(x), the product of x with the formal derivative p'(x) of p(x). The reader is invited to check that T is a linear operator. Now consider the basis B = {1, x, x^2, x^3} of P_3. The matrix for T with respect to B is given by

A = ( [T(1)]_B [T(x)]_B [T(x^2)]_B [T(x^3)]_B ) = ( [0]_B [x]_B [2x^2]_B [3x^3]_B ) = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.

Suppose that p(x) = 1 + 2x + 4x^2 + 3x^3. Then

[p(x)]_B = \begin{pmatrix} 1 \\ 2 \\ 4 \\ 3 \end{pmatrix} and A[p(x)]_B = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ 4 \\ 3 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \\ 8 \\ 9 \end{pmatrix},

so that T(p(x)) = 2x + 8x^2 + 9x^3. This can be easily verified by noting that

T(p(x)) = xp'(x) = x(2 + 8x + 9x^2) = 2x + 8x^2 + 9x^3.

In general, if p(x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3, then

[p(x)]_B = \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} and A[p(x)]_B = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} 0 \\ p_1 \\ 2p_2 \\ 3p_3 \end{pmatrix},

so that T(p(x)) = p_1 x + 2p_2 x^2 + 3p_3 x^3. Observe that

T(p(x)) = xp'(x) = x(p_1 + 2p_2 x + 3p_3 x^2) = p_1 x + 2p_2 x^2 + 3p_3 x^3,

verifying our result.
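The recipe in (9) translates directly into code: apply T to each basis vector, take coordinates, and stack the results as columns. The following sketch (Python with numpy, an illustration added to the text) does this for the operator T(p(x)) = xp'(x) of Example 8.8.1, representing a polynomial by its coefficient vector relative to B = {1, x, x^2, x^3}.

```python
import numpy as np

def T(coeffs):
    """T(p(x)) = x * p'(x) on P_3, acting on coefficient vectors (p0, p1, p2, p3)."""
    p0, p1, p2, p3 = coeffs
    # p'(x) = p1 + 2*p2*x + 3*p3*x^2, so x*p'(x) = p1*x + 2*p2*x^2 + 3*p3*x^3
    return np.array([0.0, p1, 2 * p2, 3 * p3])

# Matrix for T with respect to B = {1, x, x^2, x^3}: columns are [T(v_j)]_B, as in (9)
basis = np.eye(4)
A = np.column_stack([T(basis[:, j]) for j in range(4)])
print(A)                                    # diag(0, 1, 2, 3)

p = np.array([1.0, 2.0, 4.0, 3.0])          # p(x) = 1 + 2x + 4x^2 + 3x^3
print(A @ p)                                # [0. 2. 8. 9.]  ->  2x + 8x^2 + 9x^3
```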
Example 8.8.2. Consider the linear operator T : R^2 → R^2, given by T(x_1, x_2) = (2x_1 + x_2, x_1 + 3x_2) for every (x_1, x_2) ∈ R^2. Consider also the basis B = {(1, 0), (1, 1)} of R^2. Then the matrix for T with respect to B is given by

A = ( [T(1, 0)]_B [T(1, 1)]_B ) = ( [(2, 1)]_B [(3, 4)]_B ) = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix}.

Suppose that (x_1, x_2) = (3, 2). Then

[(3, 2)]_B = \begin{pmatrix} 1 \\ 2 \end{pmatrix} and A[(3, 2)]_B = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} -1 \\ 9 \end{pmatrix},

so that T(3, 2) = -(1, 0) + 9(1, 1) = (8, 9). This can be easily verified directly. In general, we have

[(x_1, x_2)]_B = \begin{pmatrix} x_1 - x_2 \\ x_2 \end{pmatrix} and A[(x_1, x_2)]_B = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} x_1 - x_2 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 - 2x_2 \\ x_1 + 3x_2 \end{pmatrix},

so that T(x_1, x_2) = (x_1 - 2x_2)(1, 0) + (x_1 + 3x_2)(1, 1) = (2x_1 + x_2, x_1 + 3x_2).

Example 8.8.3. Suppose that T : R^n → R^m is a linear transformation. Suppose further that B and C are the standard bases for R^n and R^m respectively. Then the matrix for T with respect to B and C is given by

A = ( [T(e_1)]_C ... [T(e_n)]_C ) = ( T(e_1) ... T(e_n) ),

so it follows from Proposition 8A that A is simply the standard matrix for T.

Suppose now that T_1 : V → W and T_2 : W → U are linear transformations, where the real vector spaces V, W, U are finite-dimensional, with respective bases B = {v_1, ..., v_n}, C = {w_1, ..., w_m} and D = {u_1, ..., u_k}. We then have the following diagram of linear transformations:

    V ---T_1---> W ---T_2---> U
    φ ↓↑ φ^{-1}    ψ ↓↑ ψ^{-1}    η ↓↑ η^{-1}
    R^n --S_1--> R^m --S_2--> R^k

Here η : U → R^k, where η(u) = [u]_D for every u ∈ U, is a linear transformation, and

S_1 = ψ ∘ T_1 ∘ φ^{-1} : R^n → R^m and S_2 = η ∘ T_2 ∘ ψ^{-1} : R^m → R^k

are euclidean linear transformations. Suppose that A_1 and A_2 are respectively the standard matrices for S_1 and S_2, so that they are respectively the matrix for T_1 with respect to the bases B and C and the matrix for T_2 with respect to the bases C and D. Clearly

S_2 ∘ S_1 = η ∘ T_2 ∘ T_1 ∘ φ^{-1} : R^n → R^k.

It follows that A_2 A_1 is the standard matrix for S_2 ∘ S_1, and so is the matrix for T_2 ∘ T_1 with respect to the bases B and D. To summarize, we have the following result.
PROPOSITION 8T. Suppose that T_1 : V → W and T_2 : W → U are linear transformations, where the real vector spaces V, W, U are finite-dimensional, with bases B, C, D respectively. Suppose further that A_1 is the matrix for the linear transformation T_1 with respect to the bases B and C, and that A_2 is the matrix for the linear transformation T_2 with respect to the bases C and D. Then A_2 A_1 is the matrix for the linear transformation T_2 ∘ T_1 with respect to the bases B and D.

Example 8.8.4. Consider the linear operator T_1 : P_3 → P_3, where for every polynomial p(x) in P_3, we have T_1(p(x)) = xp'(x). We have already shown that the matrix for T_1 with respect to the basis B = {1, x, x^2, x^3} of P_3 is given by

A_1 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}.

Consider next the linear operator T_2 : P_3 → P_3, where for every polynomial q(x) = q_0 + q_1 x + q_2 x^2 + q_3 x^3 in P_3, we have

T_2(q(x)) = q(1 + x) = q_0 + q_1(1 + x) + q_2(1 + x)^2 + q_3(1 + x)^3.

We have T_2(1) = 1, T_2(x) = 1 + x, T_2(x^2) = 1 + 2x + x^2 and T_2(x^3) = 1 + 3x + 3x^2 + x^3, so that the matrix for T_2 with respect to B is given by

A_2 = ( [T_2(1)]_B [T_2(x)]_B [T_2(x^2)]_B [T_2(x^3)]_B ) = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

Consider now the composition T = T_2 ∘ T_1 : P_3 → P_3. Let A denote the matrix for T with respect to B. By Proposition 8T, we have

A = A_2 A_1 = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 2 & 3 \\ 0 & 1 & 4 & 9 \\ 0 & 0 & 2 & 9 \\ 0 & 0 & 0 & 3 \end{pmatrix}.

Suppose that p(x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3. Then

[p(x)]_B = \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} and A[p(x)]_B = \begin{pmatrix} 0 & 1 & 2 & 3 \\ 0 & 1 & 4 & 9 \\ 0 & 0 & 2 & 9 \\ 0 & 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} p_1 + 2p_2 + 3p_3 \\ p_1 + 4p_2 + 9p_3 \\ 2p_2 + 9p_3 \\ 3p_3 \end{pmatrix},

so that T(p(x)) = (p_1 + 2p_2 + 3p_3) + (p_1 + 4p_2 + 9p_3)x + (2p_2 + 9p_3)x^2 + 3p_3 x^3. We can check this directly by noting that

T(p(x)) = T_2(T_1(p(x))) = T_2(p_1 x + 2p_2 x^2 + 3p_3 x^3) = p_1(1 + x) + 2p_2(1 + x)^2 + 3p_3(1 + x)^3
        = (p_1 + 2p_2 + 3p_3) + (p_1 + 4p_2 + 9p_3)x + (2p_2 + 9p_3)x^2 + 3p_3 x^3.
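Proposition 8T is easy to sanity-check numerically for Example 8.8.4: build A_1 and A_2 column by column and compare their product with the matrix obtained by applying T_2 ∘ T_1 directly to each basis polynomial. The following sketch (Python with numpy, illustrative) does exactly that, again working with coefficient vectors relative to B = {1, x, x^2, x^3}.

```python
import numpy as np

def T1(p):
    """T1(p(x)) = x * p'(x) on coefficient vectors (p0, p1, p2, p3)."""
    return np.array([0.0, p[1], 2 * p[2], 3 * p[3]])

def T2(q):
    """T2(q(x)) = q(1 + x), expanded back into coefficients of 1, x, x^2, x^3."""
    q0, q1, q2, q3 = q
    return np.array([q0 + q1 + q2 + q3,      # constant term
                     q1 + 2 * q2 + 3 * q3,   # coefficient of x
                     q2 + 3 * q3,            # coefficient of x^2
                     q3])                    # coefficient of x^3

basis = np.eye(4)
A1 = np.column_stack([T1(basis[:, j]) for j in range(4)])
A2 = np.column_stack([T2(basis[:, j]) for j in range(4)])
A_direct = np.column_stack([T2(T1(basis[:, j])) for j in range(4)])

print(np.allclose(A2 @ A1, A_direct))   # True: the matrix of T2 o T1 is A2 A1
print(A2 @ A1)                          # [[0 1 2 3] [0 1 4 9] [0 0 2 9] [0 0 0 3]]
```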
Example 8.8.5. Consider the linear operator T : R^2 → R^2, given by T(x_1, x_2) = (2x_1 + x_2, x_1 + 3x_2) for every (x_1, x_2) ∈ R^2. We have already shown that the matrix for T with respect to the basis B = {(1, 0), (1, 1)} of R^2 is given by

A = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix}.

Consider the linear operator T^2 : R^2 → R^2. By Proposition 8T, the matrix for T^2 with respect to B is given by

A^2 = \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} 1 & -1 \\ 1 & 4 \end{pmatrix} = \begin{pmatrix} 0 & -5 \\ 5 & 15 \end{pmatrix}.

Suppose that (x_1, x_2) ∈ R^2. Then

[(x_1, x_2)]_B = \begin{pmatrix} x_1 - x_2 \\ x_2 \end{pmatrix} and A^2[(x_1, x_2)]_B = \begin{pmatrix} 0 & -5 \\ 5 & 15 \end{pmatrix} \begin{pmatrix} x_1 - x_2 \\ x_2 \end{pmatrix} = \begin{pmatrix} -5x_2 \\ 5x_1 + 10x_2 \end{pmatrix},

so that T^2(x_1, x_2) = -5x_2(1, 0) + (5x_1 + 10x_2)(1, 1) = (5x_1 + 5x_2, 5x_1 + 10x_2). The reader is invited to check this directly.

A simple consequence of Propositions 8N and 8T is the following result concerning inverse linear transformations.

PROPOSITION 8U. Suppose that T : V → V is a linear operator on a finite-dimensional real vector space V with basis B. Suppose further that A is the matrix for the linear operator T with respect to the basis B. Then T is one-to-one if and only if A is invertible. Furthermore, if T is one-to-one, then A^{-1} is the matrix for the inverse linear operator T^{-1} : V → V with respect to the basis B.

Proof. Simply note that T is one-to-one if and only if the system Ax = 0 has only the trivial solution x = 0. The last assertion follows easily from Proposition 8T, since if A' denotes the matrix for the inverse linear operator T^{-1} with respect to B, then we must have A'A = I, the matrix for the identity operator T^{-1} ∘ T with respect to B.

Example 8.8.6. Consider the linear operator T : P_3 → P_3, where for every q(x) = q_0 + q_1 x + q_2 x^2 + q_3 x^3 in P_3, we have

T(q(x)) = q(1 + x) = q_0 + q_1(1 + x) + q_2(1 + x)^2 + q_3(1 + x)^3.

We have already shown that the matrix for T with respect to the basis B = {1, x, x^2, x^3} is given by

A = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

This matrix is invertible, so it follows that T is one-to-one. Furthermore, it can be checked that

A^{-1} = \begin{pmatrix} 1 & -1 & 1 & -1 \\ 0 & 1 & -2 & 3 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

Suppose that p(x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3. Then

[p(x)]_B = \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} and A^{-1}[p(x)]_B = \begin{pmatrix} 1 & -1 & 1 & -1 \\ 0 & 1 & -2 & 3 \\ 0 & 0 & 1 & -3 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{pmatrix} = \begin{pmatrix} p_0 - p_1 + p_2 - p_3 \\ p_1 - 2p_2 + 3p_3 \\ p_2 - 3p_3 \\ p_3 \end{pmatrix},

so that

T^{-1}(p(x)) = (p_0 - p_1 + p_2 - p_3) + (p_1 - 2p_2 + 3p_3)x + (p_2 - 3p_3)x^2 + p_3 x^3
             = p_0 + p_1(x - 1) + p_2(x^2 - 2x + 1) + p_3(x^3 - 3x^2 + 3x - 1)
             = p_0 + p_1(x - 1) + p_2(x - 1)^2 + p_3(x - 1)^3 = p(x - 1).
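A quick numerical check of Proposition 8U for Example 8.8.6: invert the matrix A and confirm that, on coefficient vectors, it performs the substitution x → x - 1. The following sketch (Python with numpy, illustrative; the test polynomial is arbitrary) compares A^{-1}[p]_B with the coefficients of p(x - 1) computed independently.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Matrix of T(q(x)) = q(1 + x) with respect to B = {1, x, x^2, x^3} (Example 8.8.6)
A = np.array([[1, 1, 1, 1],
              [0, 1, 2, 3],
              [0, 0, 1, 3],
              [0, 0, 0, 1]], dtype=float)
A_inv = np.linalg.inv(A)

p = np.array([2.0, -1.0, 3.0, 5.0])          # p(x) = 2 - x + 3x^2 + 5x^3 (arbitrary test case)

# Coefficients of p(x - 1): substitute x -> x - 1 and expand term by term
shifted = np.zeros(4)
for k in range(4):
    c = P.polypow([-1.0, 1.0], k)            # coefficients of (x - 1)^k
    shifted[:len(c)] += p[k] * c

print(np.allclose(A_inv @ p, shifted))        # True: T^{-1}(p(x)) = p(x - 1)
```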
8.9. Change of Basis

Suppose that V is a finite-dimensional real vector space, with one basis B = {v_1, ..., v_n} and another basis B' = {u_1, ..., u_n}. Suppose that T : V → V is a linear operator on V. Let A denote the matrix for T with respect to the basis B, and let A' denote the matrix for T with respect to the basis B'. If v ∈ V and T(v) = w, then

[w]_B = A[v]_B    (10)

and

[w]_{B'} = A'[v]_{B'}.    (11)

We wish to find the relationship between A and A'. Recall Proposition 8J, that if

P = ( [u_1]_B ... [u_n]_B )

denotes the transition matrix from the basis B' to the basis B, then

[v]_B = P[v]_{B'} and [w]_B = P[w]_{B'}.    (12)

Note that the matrix P can also be interpreted as the matrix for the identity operator I : V → V with respect to the bases B' and B. It is easy to see that the matrix P is invertible, and

P^{-1} = ( [v_1]_{B'} ... [v_n]_{B'} )

denotes the transition matrix from the basis B to the basis B', and can also be interpreted as the matrix for the identity operator I : V → V with respect to the bases B and B'. Combining (10) and (12), we conclude that

[w]_{B'} = P^{-1}[w]_B = P^{-1}A[v]_B = P^{-1}AP[v]_{B'}.

Comparing this with (11), we conclude that

P^{-1}AP = A'.    (13)

This implies that

A = PA'P^{-1}.    (14)

Remark. We can use the notation A = [T]_B and A' = [T]_{B'} to denote that A and A' are the matrices for T with respect to the basis B and with respect to the basis B' respectively. We can also write P = [I]_{B',B} to denote that P is the transition matrix from the basis B' to the basis B, so that P^{-1} = [I]_{B,B'}.
Then (13) and (14) become respectively

[I]_{B,B'}[T]_B[I]_{B',B} = [T]_{B'} and [I]_{B',B}[T]_{B'}[I]_{B,B'} = [T]_B.

We have proved the following result.

PROPOSITION 8V. Suppose that T : V → V is a linear operator on a finite-dimensional real vector space V, with bases B = {v_1, ..., v_n} and B' = {u_1, ..., u_n}. Suppose further that A and A' are the matrices for T with respect to the basis B and with respect to the basis B' respectively. Then

P^{-1}AP = A' and A = PA'P^{-1},

where P = ( [u_1]_B ... [u_n]_B ) denotes the transition matrix from the basis B' to the basis B.

Remarks. (1) We have the following picture:

            T
    v  ----------->  w
            A
  [v]_B ----------> [w]_B
   P ↑↓ P^{-1}        P ↑↓ P^{-1}
            A'
  [v]_{B'} --------> [w]_{B'}

(2) The idea can be extended to the case of linear transformations T : V → W from a finite-dimensional real vector space into another, with a change of basis in V and a change of basis in W.

Example 8.9.1. Consider the vector space P_3 of all polynomials with real coefficients and degree at most 3, with bases B = {1, x, x^2, x^3} and B' = {1, 1 + x, 1 + x + x^2, 1 + x + x^2 + x^3}. Consider also the linear operator T : P_3 → P_3, where for every polynomial p(x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3, we have

T(p(x)) = (p_0 + p_1) + (p_1 + p_2)x + (p_2 + p_3)x^2 + (p_0 + p_3)x^3.

Let A denote the matrix for T with respect to the basis B. Then T(1) = 1 + x^3, T(x) = 1 + x, T(x^2) = x + x^2 and T(x^3) = x^2 + x^3, and so

A = ( [T(1)]_B [T(x)]_B [T(x^2)]_B [T(x^3)]_B ) = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \end{pmatrix}.

Next, note that the transition matrix from the basis B' to the basis B is given by

P = ( [1]_B [1 + x]_B [1 + x + x^2]_B [1 + x + x^2 + x^3]_B ) = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
It can be checked that

P^{-1} = \begin{pmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{pmatrix},

and so

A' = P^{-1}AP = \begin{pmatrix} 1 & -1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ -1 & -1 & 0 & 0 \\ 1 & 1 & 1 & 2 \end{pmatrix}

is the matrix for T with respect to the basis B'. It follows that

T(1) = 1 - (1 + x + x^2) + (1 + x + x^2 + x^3) = 1 + x^3,
T(1 + x) = 1 + (1 + x) - (1 + x + x^2) + (1 + x + x^2 + x^3) = 2 + x + x^3,
T(1 + x + x^2) = (1 + x) + (1 + x + x^2 + x^3) = 2 + 2x + x^2 + x^3,
T(1 + x + x^2 + x^3) = 2(1 + x + x^2 + x^3) = 2 + 2x + 2x^2 + 2x^3.

These can be verified directly.

8.10. Eigenvalues and Eigenvectors

Definition. Suppose that T : V → V is a linear operator on a finite-dimensional real vector space V. Then any real number λ ∈ R is called an eigenvalue of T if there exists a non-zero vector v ∈ V such that T(v) = λv. This non-zero vector v ∈ V is called an eigenvector of T corresponding to the eigenvalue λ.

The purpose of this section is to show that the problem of eigenvalues and eigenvectors of the linear operator T can be reduced to the problem of eigenvalues and eigenvectors of the matrix for T with respect to any basis B of V. The starting point of our argument is the following theorem, the proof of which is left as an exercise.

PROPOSITION 8W. Suppose that T : V → V is a linear operator on a finite-dimensional real vector space V, with bases B and B'. Suppose further that A and A' are the matrices for T with respect to the basis B and with respect to the basis B' respectively. Then
(a) det A = det A';
(b) A and A' have the same rank;
(c) A and A' have the same characteristic polynomial;
(d) A and A' have the same eigenvalues; and
(e) the dimension of the eigenspace of A corresponding to an eigenvalue λ is equal to the dimension of the eigenspace of A' corresponding to λ.

We also state without proof the following result.

PROPOSITION 8X. Suppose that T : V → V is a linear operator on a finite-dimensional real vector space V. Suppose further that A is the matrix for T with respect to a basis B of V. Then
(a) the eigenvalues of T are precisely the eigenvalues of A; and
(b) a vector u ∈ V is an eigenvector of T corresponding to an eigenvalue λ if and only if the coordinate matrix [u]_B is an eigenvector of A corresponding to the eigenvalue λ.
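Propositions 8V and 8W can be illustrated with the matrices of Example 8.9.1: the similar matrices A and A' = P^{-1}AP share their determinant, rank and characteristic polynomial, and hence their eigenvalues. The following sketch (Python with numpy, an illustration added to the text) verifies this numerically.

```python
import numpy as np

# Example 8.9.1: matrix of T with respect to B = {1, x, x^2, x^3} ...
A = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1]], dtype=float)

# ... and the transition matrix from B' = {1, 1+x, 1+x+x^2, 1+x+x^2+x^3} to B
P = np.array([[1, 1, 1, 1],
              [0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=float)

A_prime = np.linalg.inv(P) @ A @ P           # Proposition 8V: the matrix of T with respect to B'

# Proposition 8W: similar matrices share determinant, rank and characteristic polynomial
print(np.isclose(np.linalg.det(A), np.linalg.det(A_prime)))            # True
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A_prime))      # True
print(np.allclose(np.poly(A), np.poly(A_prime)))                       # True: same eigenvalues
```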
More informationSection 6.1  Inner Products and Norms
Section 6.1  Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,
More informationCITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION
No: CITY UNIVERSITY LONDON BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION ENGINEERING MATHEMATICS 2 (resit) EX2005 Date: August
More informationSolutions to Math 51 First Exam January 29, 2015
Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not
More information3. INNER PRODUCT SPACES
. INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.
More informationUniversity of Lille I PC first year list of exercises n 7. Review
University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients
More informationOrthogonal Diagonalization of Symmetric Matrices
MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding
More informationMAT188H1S Lec0101 Burbulla
Winter 206 Linear Transformations A linear transformation T : R m R n is a function that takes vectors in R m to vectors in R n such that and T (u + v) T (u) + T (v) T (k v) k T (v), for all vectors u
More informationMatrix Representations of Linear Transformations and Changes of Coordinates
Matrix Representations of Linear Transformations and Changes of Coordinates 01 Subspaces and Bases 011 Definitions A subspace V of R n is a subset of R n that contains the zero element and is closed under
More information160 CHAPTER 4. VECTOR SPACES
160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +
More informationMath 215 HW #6 Solutions
Math 5 HW #6 Solutions Problem 34 Show that x y is orthogonal to x + y if and only if x = y Proof First, suppose x y is orthogonal to x + y Then since x, y = y, x In other words, = x y, x + y = (x y) T
More informationMATH 551  APPLIED MATRIX THEORY
MATH 55  APPLIED MATRIX THEORY FINAL TEST: SAMPLE with SOLUTIONS (25 points NAME: PROBLEM (3 points A web of 5 pages is described by a directed graph whose matrix is given by A Do the following ( points
More informationLinear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University
Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone
More informationLinear Algebra Done Wrong. Sergei Treil. Department of Mathematics, Brown University
Linear Algebra Done Wrong Sergei Treil Department of Mathematics, Brown University Copyright c Sergei Treil, 2004, 2009, 2011, 2014 Preface The title of the book sounds a bit mysterious. Why should anyone
More informationx1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.
Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability
More information1 Introduction to Matrices
1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns
More informationMATH10212 Linear Algebra. Systems of Linear Equations. Definition. An ndimensional vector is a row or a column of n numbers (or letters): a 1.
MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0534405967. Systems of Linear Equations Definition. An ndimensional vector is a row or a column
More informationLinear Algebra Notes for Marsden and Tromba Vector Calculus
Linear Algebra Notes for Marsden and Tromba Vector Calculus ndimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of
More informationMATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.
MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar
More informationSection 1.1. Introduction to R n
The Calculus of Functions of Several Variables Section. Introduction to R n Calculus is the study of functional relationships and how related quantities change with each other. In your first exposure to
More informationLinear Algebra Review. Vectors
Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka kosecka@cs.gmu.edu http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa Cogsci 8F Linear Algebra review UCSD Vectors The length
More information3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices.
Exercise 1 1. Let A be an n n orthogonal matrix. Then prove that (a) the rows of A form an orthonormal basis of R n. (b) the columns of A form an orthonormal basis of R n. (c) for any two vectors x,y R
More informationis in plane V. However, it may be more convenient to introduce a plane coordinate system in V.
.4 COORDINATES EXAMPLE Let V be the plane in R with equation x +2x 2 +x 0, a twodimensional subspace of R. We can describe a vector in this plane by its spatial (D)coordinates; for example, vector x 5
More informationLecture 14: Section 3.3
Lecture 14: Section 3.3 Shuanglin Shao October 23, 2013 Definition. Two nonzero vectors u and v in R n are said to be orthogonal (or perpendicular) if u v = 0. We will also agree that the zero vector in
More informationSolving Systems of Linear Equations
LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationThese axioms must hold for all vectors ū, v, and w in V and all scalars c and d.
DEFINITION: A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, called addition and multiplication by scalars (real numbers), subject to the following axioms
More informationMath 115A HW4 Solutions University of California, Los Angeles. 5 2i 6 + 4i. (5 2i)7i (6 + 4i)( 3 + i) = 35i + 14 ( 22 6i) = 36 + 41i.
Math 5A HW4 Solutions September 5, 202 University of California, Los Angeles Problem 4..3b Calculate the determinant, 5 2i 6 + 4i 3 + i 7i Solution: The textbook s instructions give us, (5 2i)7i (6 + 4i)(
More informationCS3220 Lecture Notes: QR factorization and orthogonal transformations
CS3220 Lecture Notes: QR factorization and orthogonal transformations Steve Marschner Cornell University 11 March 2009 In this lecture I ll talk about orthogonal matrices and their properties, discuss
More informationNotes on Orthogonal and Symmetric Matrices MENU, Winter 2013
Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,
More informationr (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + 1 = 1 1 θ(t) 1 9.4.4 Write the given system in matrix form x = Ax + f ( ) sin(t) x y 1 0 5 z = dy cos(t)
Solutions HW 9.4.2 Write the given system in matrix form x = Ax + f r (t) = 2r(t) + sin t θ (t) = r(t) θ(t) + We write this as ( ) r (t) θ (t) = ( ) ( ) 2 r(t) θ(t) + ( ) sin(t) 9.4.4 Write the given system
More informationFIRST YEAR CALCULUS. Chapter 7 CONTINUITY. It is a parabola, and we can draw this parabola without lifting our pencil from the paper.
FIRST YEAR CALCULUS WWLCHENW L c WWWL W L Chen, 1982, 2008. 2006. This chapter originates from material used by the author at Imperial College, University of London, between 1981 and 1990. It It is is
More informationNumerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number
More informationx + y + z = 1 2x + 3y + 4z = 0 5x + 6y + 7z = 3
Math 24 FINAL EXAM (2/9/9  SOLUTIONS ( Find the general solution to the system of equations 2 4 5 6 7 ( r 2 2r r 2 r 5r r x + y + z 2x + y + 4z 5x + 6y + 7z 2 2 2 2 So x z + y 2z 2 and z is free. ( r
More informationLS.6 Solution Matrices
LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions
More informationVector and Matrix Norms
Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a nonempty
More informationSimilar matrices and Jordan form
Similar matrices and Jordan form We ve nearly covered the entire heart of linear algebra once we ve finished singular value decompositions we ll have seen all the most central topics. A T A is positive
More informationThe Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression
The Singular Value Decomposition in Symmetric (Löwdin) Orthogonalization and Data Compression The SVD is the most generally applicable of the orthogonaldiagonalorthogonal type matrix decompositions Every
More informationSection 1.7 22 Continued
Section 1.5 23 A homogeneous equation is always consistent. TRUE  The trivial solution is always a solution. The equation Ax = 0 gives an explicit descriptions of its solution set. FALSE  The equation
More informationLinear Algebra I. Ronald van Luijk, 2012
Linear Algebra I Ronald van Luijk, 2012 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents 1. Vector spaces 3 1.1. Examples 3 1.2. Fields 4 1.3. The field of complex numbers. 6 1.4.
More informationEigenvalues and Eigenvectors
Chapter 6 Eigenvalues and Eigenvectors 6. Introduction to Eigenvalues Linear equations Ax D b come from steady state problems. Eigenvalues have their greatest importance in dynamic problems. The solution
More informationMath 4310 Handout  Quotient Vector Spaces
Math 4310 Handout  Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable
More informationNumerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,
More informationNotes on Determinant
ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 918/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without
More informationProblem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday.
Math 312, Fall 2012 Jerry L. Kazdan Problem Set 5 Due: In class Thursday, Oct. 18 Late papers will be accepted until 1:00 PM Friday. In addition to the problems below, you should also know how to solve
More informationTHREE DIMENSIONAL GEOMETRY
Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,
More information26. Determinants I. 1. Prehistory
26. Determinants I 26.1 Prehistory 26.2 Definitions 26.3 Uniqueness and other properties 26.4 Existence Both as a careful review of a more pedestrian viewpoint, and as a transition to a coordinateindependent
More informationGROUP ALGEBRAS. ANDREI YAFAEV
GROUP ALGEBRAS. ANDREI YAFAEV We will associate a certain algebra to a finite group and prove that it is semisimple. Then we will apply Wedderburn s theory to its study. Definition 0.1. Let G be a finite
More informationHow To Prove The Dirichlet Unit Theorem
Chapter 6 The Dirichlet Unit Theorem As usual, we will be working in the ring B of algebraic integers of a number field L. Two factorizations of an element of B are regarded as essentially the same if
More information9 Multiplication of Vectors: The Scalar or Dot Product
Arkansas Tech University MATH 934: Calculus III Dr. Marcel B Finan 9 Multiplication of Vectors: The Scalar or Dot Product Up to this point we have defined what vectors are and discussed basic notation
More informationDecember 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS
December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in twodimensional space (1) 2x y = 3 describes a line in twodimensional space The coefficients of x and y in the equation
More informationEigenvalues, Eigenvectors, Matrix Factoring, and Principal Components
Eigenvalues, Eigenvectors, Matrix Factoring, and Principal Components The eigenvalues and eigenvectors of a square matrix play a key role in some important operations in statistics. In particular, they
More informationAdding vectors We can do arithmetic with vectors. We ll start with vector addition and related operations. Suppose you have two vectors
1 Chapter 13. VECTORS IN THREE DIMENSIONAL SPACE Let s begin with some names and notation for things: R is the set (collection) of real numbers. We write x R to mean that x is a real number. A real number
More information13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.
3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in threespace, we write a vector in terms
More informationCONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation
Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in
More informationContinued Fractions and the Euclidean Algorithm
Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction
More informationInner products on R n, and more
Inner products on R n, and more Peyam Ryan Tabrizian Friday, April 12th, 2013 1 Introduction You might be wondering: Are there inner products on R n that are not the usual dot product x y = x 1 y 1 + +
More informationThe cover SU(2) SO(3) and related topics
The cover SU(2) SO(3) and related topics Iordan Ganev December 2011 Abstract The subgroup U of unit quaternions is isomorphic to SU(2) and is a double cover of SO(3). This allows a simple computation of
More informationFactorization Theorems
Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization
More informationLecture 3: Finding integer solutions to systems of linear equations
Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture
More information1 0 5 3 3 A = 0 0 0 1 3 0 0 0 0 0 0 0 0 0 0
Solutions: Assignment 4.. Find the redundant column vectors of the given matrix A by inspection. Then find a basis of the image of A and a basis of the kernel of A. 5 A The second and third columns are
More informationOctober 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix
Linear Algebra & Properties of the Covariance Matrix October 3rd, 2012 Estimation of r and C Let rn 1, rn, t..., rn T be the historical return rates on the n th asset. rn 1 rṇ 2 r n =. r T n n = 1, 2,...,
More information