Solutions to Assignment 10



Math 27, Fall 22

5.4.8 Define T : R^3 -> R^3 by T(x) = Ax, where A is a 3 x 3 matrix with two given real eigenvalues, one of which is -2. Does there exist a basis B for R^3 such that the B-matrix for T is a diagonal matrix?

We know that if C is the B-matrix for T, then A is similar to C. So this question can be restated as follows: is A similar to a diagonal matrix? We know that this can happen if and only if A has n linearly independent eigenvectors (the Diagonalization Theorem). We also know that eigenvectors corresponding to different eigenvalues are linearly independent (Theorem 2 of Section 5.1). The characteristic equation of this matrix has degree 3, so one of the two given eigenvalues must occur with multiplicity two (as an aside, complex roots come in conjugate pairs, so it cannot be the case that the remaining root of the characteristic equation is complex). We conclude that A will be diagonalizable if and only if the eigenspace corresponding to the eigenvalue with multiplicity two has dimension two.

5.4.2 Show that if A is similar to B, then A^2 is similar to B^2 (where A and B are square).

Because A is similar to B, there is an invertible P such that A = PBP^-1. Squaring both sides of this equation, we find that

A^2 = (PBP^-1)^2 = (PBP^-1)(PBP^-1) = PB(P^-1 P)BP^-1 = PBBP^-1 = PB^2 P^-1,

so A^2 is similar to B^2.

5.4.24 Show that if A and B are similar, then they have the same rank.

The proof is not difficult, but the chain of reasoning one has to follow is somewhat long. We will need to prove the following lemma:

Lemma. If D and C are n x n matrices such that C is invertible, then rank(CD) = rank(D).

Before giving the proof of this lemma, let's see how we will use it.

Because A is similar to B, there is an invertible matrix P such that A = PBP^-1, and thus AP = PB. The lemma will show us that rank(B) = rank(PB). Of course, rank(PB) = rank(AP), and by the Rank Theorem rank(AP) = rank((AP)^T). Then the lemma will also show that rank((AP)^T) = rank(P^T A^T) = rank(A^T) (recall that by the Invertible Matrix Theorem, P^T is invertible if and only if P is invertible). Again applying the Rank Theorem, we will conclude that

rank(B) = rank(PB) = rank(AP) = rank((AP)^T) = rank(P^T A^T) = rank(A^T) = rank(A).

So all we have to do is to prove the lemma. Let C and D be as given in the statement of the lemma. Then the Rank Theorem also tells us that

rank(D) + dim(Nul(D)) = n = rank(CD) + dim(Nul(CD)),

so it is enough to show that dim(Nul(D)) = dim(Nul(CD)). To do this we will show that Nul(D) = Nul(CD). If x is in Nul(D), then Dx = 0, and thus CDx = 0, so x is in Nul(CD). On the other hand, if x is in Nul(CD), then CDx = 0, and because C is invertible, C^-1 CDx = C^-1 0, or Dx = 0. Thus Nul(D) = Nul(CD), so dim(Nul(D)) = dim(Nul(CD)), and we conclude that rank(D) = rank(CD) as required.

6.1. Find a unit vector in the direction of u = (-6, 4, -3).

The unit vector in the direction of u is u/||u||. Here ||u|| = sqrt((-6)^2 + 4^2 + (-3)^2) = sqrt(61), and thus

u/||u|| = (-6/sqrt(61), 4/sqrt(61), -3/sqrt(61)).

6.1.24 Verify the parallelogram law for vectors u and v in R^n: ||u + v||^2 + ||u - v||^2 = 2||u||^2 + 2||v||^2.

We know that the dot product distributes over addition, so

||u + v||^2 + ||u - v||^2 = (u + v)·(u + v) + (u - v)·(u - v) = u·u + u·v + v·u + v·v + u·u - u·v - v·u + v·v = 2||u||^2 + 2||v||^2.
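The facts above are easy to spot-check numerically. Here is a minimal sketch, assuming NumPy is available; the matrices B, P, and D and the vector v are made-up examples for illustration, not data from the assignment (u is the vector from the unit-vector exercise):

```python
import numpy as np

rng = np.random.default_rng(0)

# Similar matrices A = P B P^-1 (example matrices, not from the assignment).
B = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 3.0]])
P = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # comfortably invertible
Pinv = np.linalg.inv(P)
A = P @ B @ Pinv

# A^2 = P B^2 P^-1, so A^2 is similar to B^2 via the same P.
assert np.allclose(A @ A, P @ B @ B @ Pinv)

# Similar matrices have the same rank.
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)

# The lemma: rank(CD) = rank(D) when C is invertible (here C = P).
D = rng.standard_normal((3, 3))
D[2] = D[0] - D[1]                                 # force rank 2
assert np.linalg.matrix_rank(P @ D) == np.linalg.matrix_rank(D) == 2

# Unit vector in the direction of u = (-6, 4, -3): ||u|| = sqrt(61).
u = np.array([-6.0, 4.0, -3.0])
assert np.isclose(np.linalg.norm(u), np.sqrt(61))
unit = u / np.linalg.norm(u)
assert np.isclose(np.linalg.norm(unit), 1.0)

# Parallelogram law: ||u+v||^2 + ||u-v||^2 = 2||u||^2 + 2||v||^2.
v = rng.standard_normal(3)
lhs = np.linalg.norm(u + v) ** 2 + np.linalg.norm(u - v) ** 2
rhs = 2 * np.linalg.norm(u) ** 2 + 2 * np.linalg.norm(v) ** 2
assert np.isclose(lhs, rhs)
```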

6.1.26 Let u be the given vector in R^3, and let W be the set of all x in R^3 such that u·x = 0. What theorem in Chapter 4 can be used to show that W is a subspace of R^3? Describe W in geometric language.

Note that x is in W if and only if u·x = 0 or, equivalently, u^T x = 0. Thus W is the null space of the 1 x 3 matrix u^T. We know from Theorem 2, page 227, that a null space is always a subspace. In geometric language, W consists of all vectors perpendicular to u; that is, W is the plane going through the origin which is perpendicular to u.

6.2.14 Let y = (2, 6) and u = (7, 1). Write y as the sum of a vector in Span{u} and a vector orthogonal to u.

First we calculate proj_u(y) = ŷ = ((y·u)/(u·u)) u = (20/50)(7, 1) = (14/5, 2/5). Let z = y - ŷ = (2, 6) - (14/5, 2/5) = (-4/5, 28/5). Now by construction it is clear that y = z + ŷ, that ŷ is in Span{u}, and, as was argued in the text, that z is orthogonal to u.

6.2.2 Let {v_1, v_2} be an orthogonal set of nonzero vectors, and let c_1, c_2 be any nonzero scalars. Show that {c_1 v_1, c_2 v_2} is also an orthogonal set.

We have that v_1·v_2 = 0, and thus (c_1 v_1)·(c_2 v_2) = c_1 c_2 (v_1·v_2) = c_1 c_2 · 0 = 0. We conclude that c_1 v_1 and c_2 v_2 are orthogonal.

6.2.34 Given u ≠ 0 in R^n, let L = Span{u}. For y in R^n, the reflection of y in L is the point refl_L(y) defined by refl_L(y) = 2 proj_L(y) - y. Show that the mapping y -> refl_L(y) is a linear transformation.

Well, there are some things we have to check. First, for all x, y in R^n, note that

T(x + y) = refl_L(x + y) = 2 proj_L(x + y) - x - y = 2 (((x + y)·u)/(u·u)) u - x - y = 2 ((x·u)/(u·u)) u + 2 ((y·u)/(u·u)) u - x - y =

(2 proj_L(x) - x) + (2 proj_L(y) - y) = refl_L(x) + refl_L(y) = T(x) + T(y).

For all c in R,

T(cx) = refl_L(cx) = 2 proj_L(cx) - cx = 2 ((cx·u)/(u·u)) u - cx = 2c ((x·u)/(u·u)) u - cx = c (2 ((x·u)/(u·u)) u - x) = c (2 proj_L(x) - x) = c refl_L(x) = c T(x).

We conclude that T is a linear transformation.

6.3.6 Let y, v_1, and v_2 be the given vectors in R^4. Find the distance from y to the subspace of R^4 spanned by v_1 and v_2.

Call W the subspace spanned by v_1 and v_2. Then they are asking us to calculate

||y - proj_W(y)|| = ||y - ((y·v_1)/(v_1·v_1)) v_1 - ((y·v_2)/(v_2·v_2)) v_2|| = sqrt(4^2 + 4^2 + 4^2 + 4^2) = 8.

Done.

6.3.2 Let u_1, u_2, and u_4 be the given vectors. It can be shown that u_4 is not in the subspace W spanned by u_1 and u_2. Use this fact to construct a nonzero vector v in R^3 that is orthogonal to u_1 and u_2.

We know by Theorem 8 (the Orthogonal Decomposition Theorem) that v = u_4 - û_4 = u_4 - proj_W(u_4) is orthogonal to every vector in W. It is also true that u_4·u_2 = 0, so

v = u_4 - û_4 = u_4 - ((u_4·u_1)/(u_1·u_1)) u_1 - ((u_4·u_2)/(u_2·u_2)) u_2 = u_4 - ((u_4·u_1)/(u_1·u_1)) u_1.
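The projection and reflection formulas above translate directly into code. A minimal sketch, assuming NumPy; the helper names `proj` and `refl` are introduced here for illustration, and the vectors y = (2, 6), u = (7, 1) are the ones recovered from the projection exercise:

```python
import numpy as np

def proj(y, u):
    # Orthogonal projection of y onto the line Span{u}: ((y.u)/(u.u)) u.
    return (y @ u) / (u @ u) * u

def refl(y, u):
    # Reflection of y in the line L = Span{u}: refl_L(y) = 2 proj_L(y) - y.
    return 2 * proj(y, u) - y

y = np.array([2.0, 6.0])
u = np.array([7.0, 1.0])

y_hat = proj(y, u)                 # should be (14/5, 2/5)
z = y - y_hat                      # should be (-4/5, 28/5)
assert np.allclose(y_hat, [14 / 5, 2 / 5])
assert np.allclose(z, [-4 / 5, 28 / 5])
assert np.isclose(z @ u, 0.0)      # z is orthogonal to u, so y = y_hat + z works

# Reflection is linear: additivity and homogeneity.
x = np.array([1.0, -3.0])
c = 2.5
assert np.allclose(refl(x + y, u), refl(x, u) + refl(y, u))
assert np.allclose(refl(c * x, u), c * refl(x, u))
```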

6.3.24 Let W be a subspace of R^n with an orthogonal basis {w_1, ..., w_p}, and let {v_1, ..., v_q} be an orthogonal basis for W^⊥.

(a) Explain why {w_1, ..., w_p, v_1, ..., v_q} is an orthogonal set.

We know that w_i·w_j = 0 for all i ≠ j with 1 <= i, j <= p, and that v_i·v_j = 0 for all i ≠ j with 1 <= i, j <= q. So it remains to check that w_i·v_j = 0 for all 1 <= i <= p and 1 <= j <= q. We know this is true, however, because v_j is in W^⊥, which is by definition the set of all vectors whose dot product with every element of W is zero.

(b) Explain why the set in part (a) spans R^n.

By the Orthogonal Decomposition Theorem, any y in R^n can be written as y = ŷ + z, where ŷ is in W and z is in W^⊥. This means we can write any vector in R^n using the vectors from the set {w_1, ..., w_p, v_1, ..., v_q}, and thus this set spans R^n.

(c) Show that dim(W) + dim(W^⊥) = n.

We know that {w_1, ..., w_p, v_1, ..., v_q} spans R^n, so if we can show that this set is linearly independent, then n = p + q = dim(W) + dim(W^⊥), as required (I am using here the fact that a linearly independent spanning set is a basis). Suppose that we have c_i, d_j in R for i = 1, ..., p and j = 1, ..., q such that c_1 w_1 + ... + c_p w_p + d_1 v_1 + ... + d_q v_q = 0 and not all of the c_i, d_j are zero. Because the w_i are linearly independent, it cannot be the case that all of the d_j are zero (otherwise we could write c_1 w_1 + ... + c_p w_p = 0 where not all of the c_i are zero, a contradiction). A similar argument for the v_j shows that not all of the c_i are zero. Thus we can write u = c_1 w_1 + ... + c_p w_p = -(d_1 v_1 + ... + d_q v_q), and thus there is a nonzero element u of W which is also in W^⊥ (the sum c_1 w_1 + ... + c_p w_p cannot be the zero vector, because not all of the c_i are zero and the w_i are linearly independent). Because u is in W^⊥, we know that the dot product of u with anything in W is zero. But u is also in W, so u·u = 0, and hence u = 0. This is a contradiction (u was supposed to be nonzero), so the set is linearly independent, and this completes the proof.
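Part (c) can also be illustrated numerically: an orthonormal basis for W together with one for W^⊥ gives exactly n mutually orthogonal vectors. A sketch assuming NumPy; the subspace W is a random example, and QR and SVD are used here merely as one convenient way to produce the two bases, not the method of the text:

```python
import numpy as np

rng = np.random.default_rng(1)

# Take W = column space of a random n x p matrix (full column rank a.s.).
n, p = 6, 2
M = rng.standard_normal((n, p))

# Orthonormal basis of W from the reduced QR factorization.
Q, _ = np.linalg.qr(M)                 # Q is n x p; its columns span W

# Orthonormal basis of W-perp from the full SVD: since rank(M) = p, the last
# n - p left singular vectors are orthogonal to the column space of M.
U, s, Vt = np.linalg.svd(M)
Q_perp = U[:, p:]                      # n x (n - p)

# Every basis vector of W-perp is orthogonal to every basis vector of W.
assert np.allclose(Q.T @ Q_perp, 0.0)

# dim(W) + dim(W-perp) = n.
assert Q.shape[1] + Q_perp.shape[1] == n

# Together the two bases form an orthonormal basis of R^n.
basis = np.hstack([Q, Q_perp])
assert np.allclose(basis.T @ basis, np.eye(n))
```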