Solutions to Assignment, Math 27, Fall 22.

5.4.8. Define T : R³ → R³ by T(x) = Ax, where A is a matrix with two eigenvalues, one of which is −2. Does there exist a basis B for R³ such that the B-matrix for T is a diagonal matrix?

We know that if C is the B-matrix for T relative to the basis B, then A is similar to C. So this question can be restated as follows: is A similar to a diagonal matrix? We know that this can happen if and only if A has n linearly independent eigenvectors (the Diagonalization Theorem); here n = 3. We also know that eigenvectors corresponding to different eigenvalues are linearly independent (Theorem 2). The characteristic equation of this matrix has degree 3, so one of the two given eigenvalues must occur with multiplicity two (as an aside, complex roots come in conjugate pairs, so it cannot be the case that the remaining root of the characteristic equation is complex). We conclude that A will be diagonalizable if and only if the eigenspace corresponding to the eigenvalue of multiplicity two has dimension two.

5.4.2. Show that if A is similar to B, then A² is similar to B² (where A and B are square).

Because A is similar to B, there is an invertible P such that A = PBP⁻¹. Squaring both sides of this equation, we find that

A² = (PBP⁻¹)² = (PBP⁻¹)(PBP⁻¹) = PB(P⁻¹P)BP⁻¹ = PBBP⁻¹ = PB²P⁻¹,

so A² is similar to B².

5.4.24. Show that if A and B are similar, then they have the same rank.

The proof is not difficult, but the chain of reasoning one has to follow is somewhat long. We will need to prove the following lemma:

Lemma. If D and C are n×n matrices such that C is invertible, then rank(CD) = rank(D).

Before giving the proof of this lemma, let's see how we will use it.
Because A is similar to B, there is an invertible matrix P such that A = PBP⁻¹, and thus AP = PB. The lemma will show us that rank(B) = rank(PB). Of course, rank(PB) = rank(AP), and by the Rank Theorem, rank(AP) = rank((AP)^T). The lemma will also show that rank((AP)^T) = rank(P^T A^T) = rank(A^T) (recall that by the IMT, P^T is invertible if and only if P is invertible). Again applying the Rank Theorem, we conclude that

rank(B) = rank(PB) = rank(AP) = rank((AP)^T) = rank(P^T A^T) = rank(A^T) = rank(A).

So all we have to do is prove the lemma. Let C and D be as given in the statement of the lemma. The Rank Theorem also tells us that

rank(D) + dim(Nul(D)) = n = rank(CD) + dim(Nul(CD)),

so it is enough to show that dim(Nul(D)) = dim(Nul(CD)). To do this we will show that Nul(D) = Nul(CD). If x ∈ Nul(D), then Dx = 0, and thus CDx = 0, so x ∈ Nul(CD). On the other hand, if x ∈ Nul(CD), then CDx = 0, and because C is invertible, C⁻¹CDx = C⁻¹0, or Dx = 0. Thus Nul(D) = Nul(CD), so dim(Nul(D)) = dim(Nul(CD)), and we conclude that rank(D) = rank(CD) as required.

6.1. Find a unit vector in the direction of u = (−6, 4, −3).

The unit vector in the direction of u is u/||u||. Here ||u|| = √((−6)² + 4² + (−3)²) = √61, and thus u/||u|| = (−6/√61, 4/√61, −3/√61).

6.1.24. Verify the parallelogram law for vectors u and v in Rⁿ: ||u + v||² + ||u − v||² = 2||u||² + 2||v||².

We know that the dot product distributes over addition, so

||u + v||² + ||u − v||² = (u + v)·(u + v) + (u − v)·(u − v) = u·u + 2u·v + v·v + u·u − 2u·v + v·v = 2||u||² + 2||v||².
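As a quick numerical sanity check of the rank argument above (not part of the original assignment), we can verify with NumPy that similarity preserves rank. The particular matrices P and B below are assumptions chosen for illustration only.

```python
import numpy as np

# Check of 5.4.24: if A = P B P^{-1} with P invertible, then
# rank(A) = rank(B).  P and B are example matrices, not from the text.
P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])          # det(P) = 3, so P is invertible
B = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],           # second row = 2 * first row,
              [0.0, 0.0, 5.0]])          # so rank(B) = 2
A = P @ B @ np.linalg.inv(P)             # A is similar to B

rank_A = int(np.linalg.matrix_rank(A))
rank_B = int(np.linalg.matrix_rank(B))
print(rank_A, rank_B)                    # 2 2

# The lemma itself: left-multiplying by an invertible C preserves rank.
rank_PB = int(np.linalg.matrix_rank(P @ B))
print(rank_PB == rank_B)                 # True
```

The same check run with other invertible P gives the same ranks, which is exactly what the lemma predicts.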
6.1.26. Let u be the given vector in R³, and let W be the set of all x in R³ such that u·x = 0. What theorem in Chapter 4 can be used to show that W is a subspace of R³? Describe W in geometric language.

Note that x ∈ W if and only if u·x = 0, or rather, if u^T x = 0. Thus W is the null space of the 1×3 matrix u^T. We know from Theorem 2, page 227, that a null space is a vector space. In geometric language, W consists of all vectors perpendicular to u; that is, W is the plane through the origin which is perpendicular to u.

6.2.4. Let y = (2, 6) and u = (7, 1). Write y as the sum of a vector in Span{u} and a vector orthogonal to u.

First we calculate

proj_u(y) = ŷ = ((y·u)/(u·u)) u = (20/50) u = (2/5)(7, 1) = (14/5, 2/5).

Let z = y − ŷ = (2, 6) − (14/5, 2/5) = (−4/5, 28/5). Now by construction it is clear that y = z + ŷ, that ŷ ∈ Span{u}, and, as was argued on page 86, that z is orthogonal to u.

6.2.2. Let {v1, v2} be an orthogonal set of nonzero vectors, and let c1, c2 be any nonzero scalars. Show that {c1v1, c2v2} is also an orthogonal set.

We have that v1·v2 = 0, and thus (c1v1)·(c2v2) = c1c2(v1·v2) = c1c2 · 0 = 0. We conclude that c1v1 and c2v2 are orthogonal.

6.2.4. Given u ≠ 0 in Rⁿ, let L = Span{u}. For y in Rⁿ, the reflection of y in L is the point refl_L(y) defined by refl_L(y) = 2 proj_L(y) − y. Show that the mapping y ↦ refl_L(y) is a linear transformation.

There are some things we have to check. Note first that proj_L(y) = ((y·u)/(u·u)) u, so proj_L itself respects addition and scalar multiplication. Writing T(y) = refl_L(y), for all x, y ∈ Rⁿ we have

T(x + y) = refl_L(x + y) = 2 proj_L(x + y) − x − y = 2 proj_L(x) + 2 proj_L(y) − x − y = (2 proj_L(x) − x) + (2 proj_L(y) − y) = refl_L(x) + refl_L(y) = T(x) + T(y).

For all c ∈ R,

T(cx) = refl_L(cx) = 2 proj_L(cx) − cx = 2c proj_L(x) − cx = c(2 proj_L(x) − x) = c refl_L(x) = cT(x).

We conclude that T is a linear transformation.

6.3.6. Let y, v1, and v2 be the given vectors in R⁴. Find the distance from y to the subspace of R⁴ spanned by v1 and v2.

Call W the subspace spanned by v1 and v2. Then we are asked to calculate

||y − proj_W(y)|| = ||y − ((y·v1)/(v1·v1)) v1 − ((y·v2)/(v2·v2)) v2||.

Carrying out the arithmetic with the given vectors, each entry of y − proj_W(y) is ±4, so the distance is √(4² + 4² + 4² + 4²) = √64 = 8. Done.

6.3.2. Let u1, u2, and u4 be the given vectors in R³. It can be shown that u4 is not in the subspace W spanned by u1 and u2. Use this fact to construct a nonzero vector v in R³ that is orthogonal to u1 and u2.

We know by Theorem 8 that v = u4 − û4 = u4 − proj_W(u4) is orthogonal to every vector in W. It is also true that u4·u2 = 0, so

v = u4 − û4 = u4 − ((u4·u1)/(u1·u1)) u1 − ((u4·u2)/(u2·u2)) u2 = u4 − ((u4·u1)/(u1·u1)) u1,

and carrying out the arithmetic with the given vectors produces the desired nonzero v.
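The linearity of the reflection map can also be illustrated numerically. Here is a minimal NumPy sketch of the reflection exercise above; the vectors u, x, y and the scalar c are examples chosen for this sketch, not values from the text.

```python
import numpy as np

# Illustration of the reflection exercise: refl_L(y) = 2*proj_L(y) - y
# is linear.  u, x, y, c below are example values.
u = np.array([7.0, 1.0])

def proj_L(y):
    # Orthogonal projection of y onto L = Span{u}.
    return (y @ u) / (u @ u) * u

def refl_L(y):
    # Reflection of y in the line L.
    return 2.0 * proj_L(y) - y

x = np.array([2.0, 6.0])
y = np.array([-1.0, 3.0])
c = 4.0

additive = np.allclose(refl_L(x + y), refl_L(x) + refl_L(y))
homogeneous = np.allclose(refl_L(c * x), c * refl_L(x))
print(additive, homogeneous)  # True True

# A reflection also preserves length: ||refl_L(x)|| = ||x||.
print(np.isclose(np.linalg.norm(refl_L(x)), np.linalg.norm(x)))  # True
```

The additivity check mirrors the T(x + y) computation in the proof, and the homogeneity check mirrors the T(cx) computation; both rely on proj_L itself being linear.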
6.3.24. Let W be a subspace of Rⁿ with an orthogonal basis {w1, ..., wp}, and let {v1, ..., vq} be an orthogonal basis for W⊥.

(a) Explain why {w1, ..., wp, v1, ..., vq} is an orthogonal set.

We know that w_i·w_j = 0 for all i ≠ j with 1 ≤ i, j ≤ p, and that v_i·v_j = 0 for all i ≠ j with 1 ≤ i, j ≤ q. So it remains to check that w_i·v_j = 0 for all 1 ≤ i ≤ p and 1 ≤ j ≤ q. We know this is true, however, because v_j ∈ W⊥, which is by definition the set of all vectors whose dot product with every element of W is zero.

(b) Explain why the set in part (a) spans Rⁿ.

By the Orthogonal Decomposition Theorem, any y ∈ Rⁿ can be written as y = ŷ + z where ŷ ∈ W and z ∈ W⊥. This means we can write any vector in Rⁿ using the vectors from the set {w1, ..., wp, v1, ..., vq}, and thus this set spans Rⁿ.

(c) Show that dim(W) + dim(W⊥) = n.

We know that {w1, ..., wp, v1, ..., vq} spans Rⁿ, so if we can show that this set is linearly independent, then n = p + q = dim(W) + dim(W⊥) as required (I am using here the fact that a linearly independent spanning set is a basis). Suppose that we have c_i, d_j ∈ R for i = 1, ..., p and j = 1, ..., q such that c1w1 + ... + cpwp + d1v1 + ... + dqvq = 0 and not all the c_i, d_j are zero. Because the w_i are linearly independent, it cannot be the case that all the d_j are zero (otherwise we could write c1w1 + ... + cpwp = 0 where not all the c_i are zero, a contradiction). A similar argument for the v_j shows that not all the c_i are zero. Thus we can write u = c1w1 + ... + cpwp = −(d1v1 + ... + dqvq), and thus there is a nonzero element u of W which is also in W⊥ (u cannot be zero because not all the c_i are zero and the w_i are linearly independent). Because u ∈ W⊥, we know that the dot product of u with anything in W is zero. But u is also in W, so u·u = 0, and hence u = 0. This is a contradiction (u was supposed to be nonzero), and thus completes the proof.
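The dimension count in part (c) can be illustrated numerically. Below is a minimal NumPy sketch: w1 and w2 are an example orthogonal basis of a subspace W of R⁴ (chosen for this sketch), and the trailing rows of the full SVD of the matrix with rows w1, w2 give a basis of W⊥.

```python
import numpy as np

# Illustration of 6.3.24(c): dim(W) + dim(W_perp) = n, here with n = 4.
# w1, w2 are an example orthogonal basis of W = Span{w1, w2}.
w1 = np.array([1.0, 1.0, 0.0, 0.0])
w2 = np.array([1.0, -1.0, 2.0, 0.0])
print(w1 @ w2)  # 0.0, so the basis is orthogonal

M = np.vstack([w1, w2])   # rows span W
n = M.shape[1]            # n = 4

# In a full SVD of M, the rows of Vt beyond rank(M) form an orthonormal
# basis of the null space of M, i.e. of W_perp.
_, s, Vt = np.linalg.svd(M)
dim_W = int((s > 1e-10).sum())
perp_basis = Vt[dim_W:]
dim_W_perp = perp_basis.shape[0]

print(np.allclose(perp_basis @ M.T, 0.0))  # True: orthogonal to w1 and w2
print(dim_W + dim_W_perp)                  # 4, i.e. n
```

Concatenating the rows of M with perp_basis gives an orthogonal set of p + q = n vectors, exactly as in parts (a)-(c).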