INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS B. KOLMAN, D. R. HILL


SOLUTIONS OF THEORETICAL EXERCISES selected from INTRODUCTORY LINEAR ALGEBRA WITH APPLICATIONS, B. KOLMAN, D. R. HILL, Eighth Edition, Prentice Hall. Dr. Grigore CĂLUGĂREANU, Department of Mathematics and Computer Sciences, The Algebra Group, Kuwait University, 2006


Contents

Preface v
List of symbols vii
1 Matrices 1
3 Determinants 29
4 n-Vectors 37
5 Lines and Planes 41
6 Vector Spaces 45
8 Diagonalization 55
References 59


Preface

Back in 1997, somebody asked in the Mathematics Department: why are the results in the course (Linear Algebra) so bad? The solution was to cancel some sections of the 6 chapters selected for this one-semester course. The solutions of some of the so-called theoretical Exercises were to be covered in the lectures. But this takes time, and less time remains for covering the long material in these 6 chapters. Our collection of solutions is intended to help the students in the course, and it provides the lecturers precious additional time in order to cover carefully the large number of notions, results, examples and procedures to be taught in the lectures. Moreover, this collection contains all the solutions of the Chapter Tests and, as a bonus, some special Exercises to be solved by the students as homework. Because these Exercises are often required in Midterms and the Final Exam, the students are warmly encouraged to prepare these solutions carefully and, if some of them are not understood, to use the Office Hours of their teachers for supplementary explanations.

The author


List of symbols

N — the set of all positive integers
Z — the set of all integers
Q — the set of all rational numbers
R — the set of all real numbers

For R any of the above numerical sets:

R* — the set R with zero removed
R^n — the set of all n-vectors with entries in R
M_{m×n}(R) — the set of all m × n matrices with entries in R
M_n(R) — the set of all (square) n × n matrices
S_n — the set of all permutations of n elements
P(M) — the set of all subsets of M
R[X] — the set of all polynomials in the indeterminate X with coefficients in R


Second Edition (updated to the eighth Edition)

All rights reserved to the Department of Mathematics and Computer Sciences, Faculty of Science, Kuwait University


Chapter 1

Matrices

Page 20. T.5. A square matrix A = [a_{ij}] is called upper triangular if a_{ij} = 0 for i > j. It is called lower triangular if a_{ij} = 0 for i < j.

\[
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}
\]

Upper triangular matrix (the entries below the main diagonal are zero).

\[
\begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}
\]

Lower triangular matrix (the entries above the main diagonal are zero).

(a) Show that the sum and difference of two upper triangular matrices is upper triangular.
(b) Show that the sum and difference of two lower triangular matrices is lower triangular.
(c) Show that if a matrix is both upper and lower triangular, then it is a diagonal matrix.

Solution. (a) With A as above, let B = [b_{ij}] also be an upper triangular matrix, S = A + B = [s_{ij}] the sum and D = A − B = [d_{ij}] the difference of these matrices. Then, for every i > j, we have s_{ij} = a_{ij} + b_{ij} = 0 + 0 = 0 and, respectively, d_{ij} = a_{ij} − b_{ij} = 0 − 0 = 0. Thus the sum and difference of two upper triangular matrices are upper triangular.

Example (the numerical entries of the text's example are not recoverable; any instance will do):
\[
\begin{bmatrix} 1 & 2 \\ 0 & 3 \end{bmatrix} + \begin{bmatrix} 4 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 5 & 2 \\ 0 & 4 \end{bmatrix},
\]
a sum of two upper triangular matrices which is again upper triangular;
\[
\begin{bmatrix} 1 & 0 \\ 2 & 3 \end{bmatrix} - \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 1 & 2 \end{bmatrix},
\]
a difference of two lower triangular matrices which is again lower triangular.

(b) Similar.
(c) If a matrix is both upper and lower triangular, then the entries above the main diagonal and the entries below the main diagonal are zero. Hence all the entries off the main diagonal are zero and the matrix is diagonal.

T.6. (a) Show that if A is an upper triangular matrix, then A^T is lower triangular.
(b) Show that if A is a lower triangular matrix, then A^T is upper triangular.

Solution. (a) By the definition of the transpose, if A^T = [a^T_{ij}] and A is an upper triangular matrix, then a_{ij} = 0 for every i > j, and so a^T_{ji} = a_{ij} = 0.

Hence A^T is lower triangular.
(b) Similar.

T.4. Show that the product of two diagonal matrices is a diagonal matrix.

Solution. Just verify that
\[
\begin{bmatrix} a_{11} & & 0 \\ & \ddots & \\ 0 & & a_{nn} \end{bmatrix}
\begin{bmatrix} b_{11} & & 0 \\ & \ddots & \\ 0 & & b_{nn} \end{bmatrix}
=
\begin{bmatrix} a_{11}b_{11} & & 0 \\ & \ddots & \\ 0 & & a_{nn}b_{nn} \end{bmatrix}.
\]

T.5. Show that the product of two scalar matrices is a scalar matrix.

Solution. Just verify that
\[
\begin{bmatrix} a & & 0 \\ & \ddots & \\ 0 & & a \end{bmatrix}
\begin{bmatrix} b & & 0 \\ & \ddots & \\ 0 & & b \end{bmatrix}
=
\begin{bmatrix} ab & & 0 \\ & \ddots & \\ 0 & & ab \end{bmatrix}.
\]

Short solution. Notice that any scalar matrix has the form aI_n. Then, obviously, (aI_n)(bI_n) = (ab)I_n shows that products of scalar matrices are scalar matrices.

T.6. (a) Show that the product of two upper triangular matrices is an upper triangular matrix.

(b) Show that the product of two lower triangular matrices is a lower triangular matrix.

Sketched solution. (a) A direct computation shows that the product of two upper triangular matrices
\[
\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}
\begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ 0 & b_{22} & \cdots & b_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & b_{nn} \end{bmatrix}
=
\begin{bmatrix} a_{11}b_{11} & a_{11}b_{12}+a_{12}b_{22} & \cdots & a_{11}b_{1n}+a_{12}b_{2n}+\cdots+a_{1n}b_{nn} \\ 0 & a_{22}b_{22} & \cdots & a_{22}b_{2n}+a_{23}b_{3n}+\cdots+a_{2n}b_{nn} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn}b_{nn} \end{bmatrix}
\]
is upper triangular too.

Complete solution. Let P = [p_{ij}] = AB be the product of two upper triangular matrices. If i > j, then
\[
p_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} = \sum_{k=1}^{i-1} a_{ik}b_{kj} + \sum_{k=i}^{n} a_{ik}b_{kj} = (a_{i1}b_{1j} + \cdots + a_{i,i-1}b_{i-1,j}) + (a_{ii}b_{ij} + \cdots + a_{in}b_{nj}).
\]
Since both matrices A and B are upper triangular, in the first sum the a's are zero and in the second sum the b's are zero. Hence p_{ij} = 0 and P is upper triangular too.
(b) Analogous.

T9. (a) Show that the j-th column of the matrix product AB is equal to the matrix product A·col_j(B).
(b) Show that the i-th row of the matrix product AB is equal to the matrix product row_i(A)·B.

Solution. (a) With the usual notations, consider A = [a_{ij}] an m × n matrix, B = [b_{ij}] an n × p matrix and C = AB = [c_{ij}] the corresponding matrix product, an m × p matrix.

As already seen in the Lectures, an arbitrary (i, j)-entry c_{ij} in the product is given by the dot product
\[
\mathrm{row}_i(A) \cdot \mathrm{col}_j(B) = \sum_{k=1}^{n} a_{ik}b_{kj} = \begin{bmatrix} a_{i1} & a_{i2} & \cdots & a_{in} \end{bmatrix} \begin{bmatrix} b_{1j} \\ b_{2j} \\ \vdots \\ b_{nj} \end{bmatrix}.
\]
Hence the j-th column of the product (the matrix C) is the following:
\[
\begin{bmatrix} c_{1j} \\ c_{2j} \\ \vdots \\ c_{mj} \end{bmatrix} = \begin{bmatrix} \mathrm{row}_1(A) \cdot \mathrm{col}_j(B) \\ \mathrm{row}_2(A) \cdot \mathrm{col}_j(B) \\ \vdots \\ \mathrm{row}_m(A) \cdot \mathrm{col}_j(B) \end{bmatrix} = A\,\mathrm{col}_j(B),
\]
using the product (just do the computation!) of the initial m × n matrix A and the n × 1 matrix col_j(B).
(b) Analogous.

Page 51. T.9. Find a 2 × 2 matrix B ≠ O_2 and B ≠ I_2 such that AB = BA, where
\[
A = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}.
\]

Solution. Obviously B = A satisfies the required conditions (indeed, AA = AA, A ≠ O_2 and A ≠ I_2).

Solution for a better statement: find all the matrices B with this property. We search for B as an unknown matrix
\[
B = \begin{bmatrix} a & b \\ c & d \end{bmatrix}.
\]

Thus
\[
\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix}
\]
and so, using the definition of matrix equality,
a + 2c = a, b + 2d = 2a + b, c = c, d = 2c + d.
These equalities are equivalent to c = 0 and a = d. Therefore every matrix B of the form
\[
\begin{bmatrix} a & b \\ 0 & a \end{bmatrix}
\]
with arbitrary (real) numbers a, b verifies AB = BA. Example:
\[
\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.
\]

T.3. Show that (−1)A = −A.

Solution. In the proof of Theorem 1.1 (properties of the matrix addition) we took D = (−1)A (the scalar multiplication) and we verified that A + D = D + A = O, that is, −A = (−1)A.

T.23. Let A and B be symmetric matrices.
(a) Show that A + B is symmetric.
(b) Show that AB is symmetric if and only if AB = BA.

Solution. (a) This follows at once from (A + B)^T = A^T + B^T = A + B (by Theorem 1.4(b)), A and B being symmetric.
(b) First observe that (AB)^T = B^T A^T = BA (by Theorem 1.4(c)) holds for arbitrary symmetric matrices A, B. Now, if AB is symmetric, then (AB)^T = AB and thus (using the previous equality) AB = BA. Conversely, if AB = BA, then (AB)^T = BA = AB, that is, AB is symmetric.

T.26. If A is an n × n matrix, show that AA^T and A^T A are symmetric.

Solution. For instance, (AA^T)^T = (A^T)^T A^T = AA^T (by Theorem 1.4(c) and Theorem 1.4(a)), so that AA^T is symmetric. Likewise A^T A is symmetric.
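The statements in T.23 and T.26 are easy to test numerically. The following Python sketch is our own illustration (not part of the textbook); the helper names `transpose`, `matmul` and `add` are ours:

```python
# Numerical check of T.23(a) and T.26: for symmetric A, B the sum A + B is
# symmetric, and for an arbitrary C both C·Cᵀ and Cᵀ·C are symmetric.

def transpose(M):
    return [list(row) for row in zip(*M)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 2, 3], [2, 5, 4], [3, 4, 6]]   # symmetric
B = [[0, 1, 7], [1, 2, 0], [7, 0, 3]]   # symmetric

assert add(A, B) == transpose(add(A, B))   # A + B is symmetric

C = [[1, 2], [3, 4]]                        # an arbitrary (non-symmetric) matrix
assert matmul(C, transpose(C)) == transpose(matmul(C, transpose(C)))
assert matmul(transpose(C), C) == transpose(matmul(transpose(C), C))
```

A check on examples is of course no proof, but it is a quick way to catch a wrong sign when reproducing the transpose identities.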

We recall here a definition given in Exercise T24: a matrix A = [a_{ij}] is called skew-symmetric if A^T = −A.

T.27. If A is an n × n matrix,
(a) Show that A + A^T is symmetric.
(b) Show that A − A^T is skew-symmetric.

Solution. (a) We have only to observe that (A + A^T)^T = A^T + (A^T)^T = A^T + A = A + A^T (by Theorems 1.4(b), 1.4(a) and 1.1(a)).
(b) Analogously, (A − A^T)^T = A^T − (A^T)^T = A^T − A = −(A − A^T) (by Theorems 1.4(b), 1.4(a) and 1.1(a)).

T.28. Show that if A is an n × n matrix, then A can be written uniquely as A = S + K, where S is symmetric and K is skew-symmetric.

Solution. Suppose such a decomposition exists. Then A^T = S^T + K^T = S − K, so that A + A^T = 2S and A − A^T = 2K. Now take S = (1/2)(A + A^T) and K = (1/2)(A − A^T). One verifies A = S + K, S = S^T and K^T = −K similarly to the previous Exercise.

T.32. Show that if Ax = b is a linear system that has more than one solution, then it has infinitely many solutions.

Solution. Suppose u_1 ≠ u_2 are two different solutions of the given linear system. For an arbitrary real number r such that 0 < r < 1, consider w_r = ru_1 + (1 − r)u_2. This is also a solution of the given system:
Aw_r = A(ru_1 + (1 − r)u_2) = r(Au_1) + (1 − r)(Au_2) = rb + (1 − r)b = b.
First observe that w_r ∉ {u_1, u_2}. Indeed, w_r = u_1 implies u_1 = ru_1 + (1 − r)u_2, hence (1 − r)(u_1 − u_2) = 0 and so u_1 = u_2, a contradiction. Similarly, w_r ≠ u_2. Next, observe that for 0 < r, s < 1 with r ≠ s the corresponding solutions are different: indeed, w_r = w_s implies ru_1 + (1 − r)u_2 = su_1 + (1 − s)u_2 and so (r − s)(u_1 − u_2) = 0, a contradiction.

Page 89. T.1. Let u and v be solutions to the homogeneous linear system Ax = 0.

(a) Show that u + v is a solution.
(b) For any scalar r, show that ru is a solution.
(c) Show that u − v is a solution.
(d) For any scalars r and s, show that ru + sv is a solution.

Remark. We have interchanged (b) and (c) from the book, on purpose.

Solution. We use the properties of matrix operations.
(a) A(u + v) = Au + Av = 0 + 0 = 0 (by Theorem 1.2(b)).
(b) A(ru) = r(Au) = r·0 = 0 (by Theorem 1.3(d)).
(c) We can use our previous (b) and (a): by (b), for r = −1, (−1)v = −v is a solution; by (a), u + (−v) = u − v is a solution.
(d) Using (b) twice, ru and sv are solutions and, by (a), ru + sv is a solution.

T.2. Show that if u and v are solutions to the linear system Ax = b, then u − v is a solution to the associated homogeneous system Ax = 0.

Solution. Our hypothesis assures that Au = b and Av = b. Hence A(u − v) = Au − Av = b − b = 0 (by Theorem 1.2(b)) and so u − v is a solution to the associated homogeneous system Ax = 0.

Page 86 (Bonus). Find all values of a for which the resulting linear system has (a) no solution, (b) a unique solution, and (c) infinitely many solutions.

x + y − z = 2
x + 2y + z = 3
x + y + (a² − 5)z = a.

Solution. We use the Gauss–Jordan method. The augmented matrix is
\[
\left[\begin{array}{ccc|c} 1 & 1 & -1 & 2 \\ 1 & 2 & 1 & 3 \\ 1 & 1 & a^2-5 & a \end{array}\right].
\]
The first elementary operations (−R_1 + R_2, −R_1 + R_3) give
\[
\left[\begin{array}{ccc|c} 1 & 1 & -1 & 2 \\ 0 & 1 & 2 & 1 \\ 0 & 0 & a^2-4 & a-2 \end{array}\right].
\]

19 Case. a 2 4 = 0, that is a {±2}. (i) a = 2. Then the last matrix gives (final Step after Step 8; in what follows we refer to the steps in the Procedure p ) the following reduced row echelon form R 2+R { x = + 3z The corresponding system is, which has (z is an arbi- y = 2z trary real number), infinitely many solutions. (ii) a = 2. The last matrix is and so our last equation is 0 x + 0 y + 0 z = 4. Hence this is an inconsistent system (it has no solutions). Case 2. a (i.e., a / {±2}). In the 4 submatrix (which remains neglecting the first and second rows), we must only use Step 4 (multiply by = a 2 4 (a 2)(a+2) RREF): a+2 ) and after this, the final Step (from REF to a a a a+2. a+2 a+2 Finally the solution (depending on a) in this case is x = + 3, y = a+2 2 and z =. a+2 a x + y + z = 2 2x + 3y + 2z = 3 2x + 3y + (a 2 )z = a +.

Solution. The augmented matrix is
\[
\left[\begin{array}{ccc|c} 1 & 1 & 1 & 2 \\ 2 & 3 & 2 & 3 \\ 2 & 3 & a^2-1 & a-1 \end{array}\right]
\]
and the first elementary operations (−2R_1 + R_2, −2R_1 + R_3, then −R_2 + R_3) give
\[
\left[\begin{array}{ccc|c} 1 & 1 & 1 & 2 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & a^2-3 & a-4 \end{array}\right].
\]

Case 1: a² − 3 ≠ 0, that is, a ∉ {±√3}. Then a² − 3 is our third pivot and we continue by Step 4 (multiply the third row by 1/(a² − 3)):
\[
\left[\begin{array}{ccc|c} 1 & 1 & 1 & 2 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & \frac{a-4}{a^2-3} \end{array}\right].
\]
This is a consistent system with the unique solution
x = 3 − (a − 4)/(a² − 3), y = −1, z = (a − 4)/(a² − 3).

Case 2: a² − 3 = 0, that is, a ∈ {±√3}. Then a − 4 ≠ 0 and the last equation is 0·x + 0·y + 0·z = a − 4 ≠ 0: an inconsistent system (no solutions).

Pages (Bonus). 6. Find all values of a for which the inverse of the given 3 × 3 matrix A (whose entries involve the parameter a) exists. What is A⁻¹?

Solution. We use the practical procedure for computing the inverse (see p. 95 of the textbook): row reduce the partitioned matrix [A | I₃] to the form [C | D].

Case 1. If a = 0, then C has a zero row (so C ≠ I₃). Hence A is singular (that is, it has no inverse).
Case 2. If a ≠ 0, we use Step 4 (multiply the third row by 1/a) and reach [I₃ | D], so that A is nonsingular and A⁻¹ = D (the entries of D are expressions in a).

7. If A and B are nonsingular, are A + B, A − B and −A nonsingular? Explain.

Solution. If A and B are nonsingular, it is generally not true that A + B or A − B is nonsingular. It suffices to give suitable counterexamples: for A = I_n and B = −I_n we have A + B = O_n, which is not invertible (see the definition in the textbook), and, for A = B, the difference A − B = O_n, again a non-invertible matrix.

Finally, if A is nonsingular, we easily check that −A is also invertible and (−A)⁻¹ = −A⁻¹.

Remark. More can be proven (see Exercise 20(b)): for every c ≠ 0, if A is nonsingular, then cA is also nonsingular and (cA)⁻¹ = (1/c)A⁻¹. Indeed, one verifies
(cA)((1/c)A⁻¹) = (c·(1/c))(AA⁻¹) = I_n (by Theorem 1.3(d)), and similarly ((1/c)A⁻¹)(cA) = I_n.
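The identity (cA)⁻¹ = (1/c)A⁻¹ of the Remark can be confirmed on a concrete example. A minimal Python sketch of our own (the helpers `inv2` and `scale` are ours; exact `Fraction` arithmetic avoids rounding):

```python
# Check of the Remark: for c ≠ 0 and nonsingular A, (cA)⁻¹ = (1/c)·A⁻¹.
# inv2 is the 2×2 adjugate inverse formula.
from fractions import Fraction

def inv2(M):
    (a, b), (c, d) = M
    det = Fraction(a * d - b * c)
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def scale(c, M):
    return [[c * x for x in row] for row in M]

A = [[Fraction(2), Fraction(1)], [Fraction(5), Fraction(3)]]   # det = 1
c = Fraction(7)

assert inv2(scale(c, A)) == scale(1 / c, inv2(A))
```

The same check with floating-point entries would need approximate comparison, which is why fractions are used here.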

Chapter Test 1.

1. Find all solutions to the linear system
x₁ + x₂ + x₃ − 2x₄ = 3
2x₁ + x₂ + 3x₃ + 2x₄ = 5
−x₂ + x₃ + 6x₄ = 3.

Solution. The Gauss–Jordan method is used:
\[
\left[\begin{array}{cccc|c} 1 & 1 & 1 & -2 & 3 \\ 2 & 1 & 3 & 2 & 5 \\ 0 & -1 & 1 & 6 & 3 \end{array}\right]
\xrightarrow{-2R_1+R_2}
\left[\begin{array}{cccc|c} 1 & 1 & 1 & -2 & 3 \\ 0 & -1 & 1 & 6 & -1 \\ 0 & -1 & 1 & 6 & 3 \end{array}\right]
\xrightarrow{-R_2+R_3}
\left[\begin{array}{cccc|c} 1 & 1 & 1 & -2 & 3 \\ 0 & -1 & 1 & 6 & -1 \\ 0 & 0 & 0 & 0 & 4 \end{array}\right].
\]
The corresponding equivalent system has as third equation 0·x₁ + 0·x₂ + 0·x₃ + 0·x₄ = 4, which has no solution.

2. Find all values of a for which the resulting linear system has (a) no solution, (b) a unique solution, and (c) infinitely many solutions.
x + z = 4
2x + y + 3z = 5
−3x − 3y + (a² − 5a)z = a − 8.

Solution. The Gauss–Jordan method is used:
\[
\left[\begin{array}{ccc|c} 1 & 0 & 1 & 4 \\ 2 & 1 & 3 & 5 \\ -3 & -3 & a^2-5a & a-8 \end{array}\right]
\xrightarrow{(-2)R_1+R_2,\; 3R_1+R_3}
\left[\begin{array}{ccc|c} 1 & 0 & 1 & 4 \\ 0 & 1 & 1 & -3 \\ 0 & -3 & a^2-5a+3 & a+4 \end{array}\right]
\xrightarrow{3R_2+R_3}
\]

\[
\left[\begin{array}{ccc|c} 1 & 0 & 1 & 4 \\ 0 & 1 & 1 & -3 \\ 0 & 0 & a^2-5a+6 & a-5 \end{array}\right].
\]
As usual we distinguish two cases (notice that a² − 5a + 6 = (a − 2)(a − 3) = 0 ⟺ a ∈ {2, 3}):

Case 1: a = 2 or a = 3. In both cases a − 5 ≠ 0 and so the third equation of the corresponding system is 0·x + 0·y + 0·z = a − 5, with no solution.

Case 2: If a ∉ {2, 3}, then a² − 5a + 6 ≠ 0 and the procedure continues with Step 4 (multiply the third row by 1/(a² − 5a + 6)), followed by −R₃ + R₂ and −R₃ + R₁. The corresponding equivalent system has the unique solution
x = 4 − (a − 5)/(a² − 5a + 6), y = −3 − (a − 5)/(a² − 5a + 6), z = (a − 5)/(a² − 5a + 6).

3. If possible, find the inverse of the given 3 × 3 matrix.

Solution. We use the practical procedure (see p. 95 of the textbook): row reduce [A | I₃] by elementary row operations to the form [C | D].

Since C (the left block of the final matrix) has no zero rows — in fact C = I₃ — A⁻¹ exists (that is, A is nonsingular) and A⁻¹ = D, the right block of the final matrix.

4. If
\[
A = \begin{bmatrix} -1 & -2 \\ -2 & 2 \end{bmatrix},
\]
find all values of λ for which the homogeneous system (λI₂ − A)x = 0 has a nontrivial solution.

Solution. The homogeneous system has a nontrivial solution if and only if
\[
\lambda I_2 - A = \begin{bmatrix} \lambda+1 & 2 \\ 2 & \lambda-2 \end{bmatrix}
\]
is singular (see Theorem 1.13, p. 99). Use the practical procedure for finding the inverse.

Case 1. If λ + 1 ≠ 0, then λ + 1 is the first pivot in
\[
\left[\begin{array}{cc|cc} \lambda+1 & 2 & 1 & 0 \\ 2 & \lambda-2 & 0 & 1 \end{array}\right].
\]
Multiplying the first row by 1/(λ + 1) and then applying −2R₁ + R₂ gives
\[
\left[\begin{array}{cc|cc} 1 & \frac{2}{\lambda+1} & \frac{1}{\lambda+1} & 0 \\ 0 & \frac{\lambda^2-\lambda-6}{\lambda+1} & \frac{-2}{\lambda+1} & 1 \end{array}\right] = [C \mid D].
\]
Now, if λ² − λ − 6 = (λ + 2)(λ − 3) = 0, that is, λ ∈ {−2, 3}, then C has a zero row and λI₂ − A is singular (as required).

Case 2. If λ + 1 = 0, then the initial matrix is
\[
\begin{bmatrix} 0 & 2 \\ 2 & -3 \end{bmatrix},
\]
so that, using Step 3 (interchange the rows), we obtain

\[
\begin{bmatrix} 2 & -3 \\ 0 & 2 \end{bmatrix}.
\]
Hence the coefficient matrix of the system is a nonsingular matrix (with inverse
\[
\begin{bmatrix} 3/4 & 1/2 \\ 1/2 & 0 \end{bmatrix})
\]
and therefore, for λ = −1, the homogeneous system has a unique (trivial) solution. In conclusion, (λI₂ − A)x = 0 has a nontrivial solution exactly for λ ∈ {−2, 3}.

6. (a) Given A⁻¹ and B⁻¹, compute (AB)⁻¹.
(b) Solve Ax = b for x, given A⁻¹ and b.

Solution. (a) By Theorem 1.10(b), (AB)⁻¹ = B⁻¹A⁻¹, the product of the two given matrices taken in reverse order.
(b) If A⁻¹ exists (that is, A is nonsingular), then (see p. 98) x = A⁻¹b.

7. Answer each of the following as true or false.
(a) If A and B are n × n matrices, then (A + B)(A + B) = A² + 2AB + B².
(b) If u₁ and u₂ are solutions to the linear system Ax = b, then w = (1/4)u₁ + (3/4)u₂ is also a solution to Ax = b.
(c) If A is a nonsingular matrix, then the homogeneous system Ax = 0 has a nontrivial solution.
(d) A homogeneous system of three equations in four unknowns has a nontrivial solution.

(e) If A, B and C are n × n nonsingular matrices, then (ABC)⁻¹ = C⁻¹A⁻¹B⁻¹.

Solution. (a) False, since generally AB = BA fails. Indeed, take
\[
A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}. \quad \text{Then} \quad AB = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \neq \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = BA.
\]
As a matter of fact, (A + B)² = (A + B)(A + B) = (A + B)A + (A + B)B = A² + BA + AB + B² (by Theorems 1.2(b) and 1.2(c)).
(b) True: indeed, if Au₁ = b and Au₂ = b, then Aw = A((1/4)u₁ + (3/4)u₂) = (1/4)Au₁ + (3/4)Au₂ = (1/4)b + (3/4)b = b (by Theorems 1.2(b) and 1.3(d)).
(c) False: see Theorem 1.13 (p. 99).
(d) True: special case of Theorem 1.8 (p. 77) for m = 3 < 4 = n.
(e) False: a special case of Corollary 1.2 (p. 94) gives (ABC)⁻¹ = C⁻¹B⁻¹A⁻¹. It suffices to give an example for which B⁻¹A⁻¹ ≠ A⁻¹B⁻¹.
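Item 7(b), like Exercise T.32 on p. 89, rests on the fact that any combination ru₁ + (1 − r)u₂ of two solutions with coefficients summing to 1 is again a solution. A small Python check of our own (the matrix and vectors below are our illustrative data, not the textbook's):

```python
# Check of 7(b): if A·u1 = b and A·u2 = b, then w = (1/4)u1 + (3/4)u2
# also satisfies A·w = b. A singular A is chosen so Ax = b has many solutions.
from fractions import Fraction

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A  = [[1, 1], [2, 2]]                     # singular coefficient matrix
b  = [Fraction(3), Fraction(6)]
u1 = [Fraction(1), Fraction(2)]
u2 = [Fraction(0), Fraction(3)]
assert matvec(A, u1) == b and matvec(A, u2) == b

r = Fraction(1, 4)
w = [r * u1[i] + (1 - r) * u2[i] for i in range(2)]
assert matvec(A, w) == b                  # w is again a solution
```

Replacing r = 1/4 by any other value in (0, 1) produces yet another solution, which is exactly the argument of T.32.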


Chapter 3

Determinants

T3. Show that if c is a real number and A is an n × n matrix, then det(cA) = cⁿ det(A).

Solution. You only have to use Theorem 3.5 repeatedly (n times): the scalar multiplication of a matrix by c consists of the multiplication of the first row, of the second row, ..., and of the n-th row by the same real number c. Hence det(cA) = c(c(⋯(c·det(A))⋯)) = cⁿ det(A).

T5. Show that if det(AB) = 0, then det(A) = 0 or det(B) = 0.

Solution. Indeed, by Theorem 3.8, det(AB) = det(A)det(B) = 0, and so det(A) = 0 or det(B) = 0 (a product of two real numbers is zero only when one of the factors is zero).

T6. Is det(AB) = det(BA)? Justify your answer.

Solution. Yes. Generally AB ≠ BA, but because of Theorem 3.8, det(AB) = det(A)det(B) = det(B)det(A) = det(BA) (determinants are real numbers, and real multiplication is commutative).

T8. Show that if AB = I_n, then det(A) ≠ 0 and det(B) ≠ 0.

Solution. Using Theorem 3.8, 1 = det(I_n) = det(AB) = det(A)det(B), and so det(A) ≠ 0 and det(B) ≠ 0 (a product of two real numbers is nonzero only when both factors are nonzero).
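T3 and the multiplicativity of the determinant used in T5–T8 can both be tested on small matrices. A self-contained Python sketch of our own (the recursive `det` implements Laplace expansion along the first row; it is an illustration, not the textbook's procedure):

```python
# Check of T3 (det(cA) = cⁿ·det A) and Theorem 3.8 (det(AB) = det A · det B)
# on 3×3 integer examples.

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [0, 4, 5], [1, 0, 6]]
B = [[2, 0, 1], [1, 1, 0], [0, 3, 2]]
c, n = 5, 3

assert det([[c * x for x in row] for row in A]) == c ** n * det(A)
assert det(matmul(A, B)) == det(A) * det(B)
assert det(matmul(A, B)) == det(matmul(B, A))   # T6 as well
```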

A⁻¹ (which exists, A being nonsingular), we obtain A⁻¹(AB) = A⁻¹I_n and B = A⁻¹. Hence BA = A⁻¹A = I_n.
(b) Similarly.

2) Exercise T3. Show that if A is symmetric, then adj A is also symmetric.

Solution. Take an arbitrary cofactor A_{ij} = (−1)^{i+j} det(M_{ij}), where M_{ij} is the submatrix of A obtained by deleting the i-th row and j-th column. Observe that the following procedure also gives M_{ij}: (1) consider the transpose A^T; (2) in A^T delete the j-th row and i-th column; (3) transpose the resulting submatrix. Further, if A is symmetric (i.e., A = A^T), the above procedure shows that M_{ij} = (M_{ji})^T. Therefore
A_{ji} = (−1)^{j+i} det(M_{ji}) = (−1)^{i+j} det((M_{ji})^T) = (−1)^{i+j} det(M_{ij}) = A_{ij},
and so adj A is symmetric.

Chapter Test 3.

1. Evaluate the given 4 × 4 determinant.

Solution. Use the expansion of det(A) along the second row (the fourth row, the first column and the third column are also good choices, each having two zero entries). Only two cofactors survive in this expansion, and each reduces to a 3 × 3 determinant evaluated in the same way.

2. Let A be 3 × 3 and suppose that |A| = 2. Compute (a) |3A|, (b) |3A⁻¹|, (c) |(3A)⁻¹|.

Solution. Notice that |A| is here (and above) only an alternative notation for det(A). Thus:
(a) |3A| = 3³|A| = 27 · 2 = 54.
(b) |3A⁻¹| = 3³|A⁻¹| = 27 · (1/2) = 27/2.
(c) |(3A)⁻¹| = 1/|3A| = 1/54.

3. For what value of a does the given sum of two determinants equal 4?

Solution. Evaluating the sum of the (two) determinants, we obtain a simple linear equation in a, with the solution a = 5.

4. Find all values of a for which the given matrix (with parameter a) is singular.

Solution. Its determinant equals a³ − 9a = a(a − 3)(a + 3), which is 0 if and only if a ∈ {−3, 0, 3}. By Theorem 3.12 (p. 203) these are the required values.

5. Not requested (Cramer's rule).

6. Answer each of the following as true or false.
(a) det(AAᵀ) = det(A²).
(b) det(−A) = −det(A).
(c) If Aᵀ = A⁻¹, then det(A) = 1.
(d) If det(A) = 0, then A = O.
(e) If det(A) = 7, then Ax = 0 has only the trivial solution.

(f) The sign of the term a₁₅a₂₃a₃₁a₄₂a₅₄ in the expansion of the determinant of a 5 × 5 matrix is +.
(g) If det(A) = 0, then det(adj A) = 0.
(h) If B = P⁻¹AP and P is nonsingular, then det(B) = det(A).
(i) If A⁴ = I_n, then det(A) = 1.
(j) If A² = A and A ≠ I_n, then det(A) = 0.

Solution. (a) True: det(AAᵀ) = det(A)det(Aᵀ) = det(A)det(A) = det(A²) (by Theorems 3.8 and 3.1).
(b) False: det(−A) = det((−1)A) = (−1)ⁿ det(A) (by Theorem 3.5), which equals −det(A) only if n is odd. For instance, if A = I₂, then
det(−A) = det \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} = 1 = det(A).
(c) False: indeed, Aᵀ = A⁻¹ implies det(Aᵀ) = det(A⁻¹), hence det(A) = 1/det(A), which implies det(A) ∈ {±1} — and not necessarily det(A) = 1. For instance, for
A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},
Aᵀ = A⁻¹ holds, but det(A) = −1.
(d) False: obviously
A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} ≠ O, but det(A) = 0.
(e) True: indeed, 7 ≠ 0 and one applies Corollary 3.4 (p. 203).
(f) True: indeed, the permutation 53124 has 4 + 2 = 6 inversions and so is even.
(g) True: use Exercise T.8, which tells us that det(adj A) = [det(A)]^{n−1}.
(h) True: using Theorem 3.8 and Corollary 3.2 (notice that P is nonsingular), we have det(B) = det(P⁻¹AP) = det(P⁻¹)det(A)det(P) = (1/det(P))·det(A)·det(P) = det(A).
(i) False: for instance, if n = 2 and
A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},
the equality A⁴ = I₂ holds and det(A) = −1.
(j) True: by way of contradiction. If det(A) ≠ 0, then A is nonsingular (see Theorem 3.12, p. 203) and thus the inverse A⁻¹ exists. By

left multiplication of A² = A with this inverse A⁻¹ we obtain at once A⁻¹A² = A⁻¹A, that is, A = I_n — a contradiction.
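Items 6(b) and 6(j) can be illustrated concretely. A short Python sketch of our own (the 2 × 2 determinant helper and the projection example are our choices, not the textbook's):

```python
# Check of 6(b): det(−A) = (−1)ⁿ·det(A), so for even n the sign does NOT flip;
# and of 6(j): an idempotent matrix A² = A with A ≠ I has determinant 0.

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [0, 0]]                      # projection onto the x-axis: A² = A, A ≠ I₂
assert matmul(A, A) == A and det2(A) == 0

B = [[3, 1], [4, 2]]
negB = [[-x for x in row] for row in B]
assert det2(negB) == (-1) ** 2 * det2(B)  # n = 2 is even: det(−B) = det(B)
```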


Chapter 4

n-Vectors

Page 246. In the following Exercises we denote the n-vectors considered by u = (u₁, u₂, ..., u_n), v = (v₁, v₂, ..., v_n) and w = (w₁, w₂, ..., w_n).

T7. Show that u · (v + w) = u · v + u · w.

Solution. Easy computation:
u · (v + w) = u₁(v₁ + w₁) + u₂(v₂ + w₂) + ⋯ + u_n(v_n + w_n)
= u₁v₁ + u₁w₁ + u₂v₂ + u₂w₂ + ⋯ + u_nv_n + u_nw_n
= (u₁v₁ + u₂v₂ + ⋯ + u_nv_n) + (u₁w₁ + u₂w₂ + ⋯ + u_nw_n)
= u · v + u · w.

T8. Show that if u · v = u · w for all u, then v = w.

Solution. First take u = e₁ = (1, 0, ..., 0). Then e₁ · v = e₁ · w gives v₁ = w₁. Secondly take u = e₂ = (0, 1, 0, ..., 0). Then e₂ · v = e₂ · w implies v₂ = w₂, and so on. Finally, take u = e_n = (0, ..., 0, 1). Then e_n · v = e_n · w gives v_n = w_n. Hence v = (v₁, v₂, ..., v_n) = (w₁, w₂, ..., w_n) = w.

T9. Show that if c is a scalar, then ‖cu‖ = |c| ‖u‖, where |c| is the absolute value of c.

Solution. Indeed, by the definition of the norm (length),
\[
\|cu\| = \sqrt{(cu_1)^2 + (cu_2)^2 + \cdots + (cu_n)^2} = \sqrt{c^2(u_1^2 + u_2^2 + \cdots + u_n^2)} = |c| \sqrt{u_1^2 + u_2^2 + \cdots + u_n^2} = |c|\,\|u\|.
\]

T10. (Pythagorean Theorem in Rⁿ) Show that ‖u + v‖² = ‖u‖² + ‖v‖² if and only if u · v = 0.

Solution. First notice that for arbitrary n-vectors we have ‖u + v‖² = (u + v) · (u + v) = ‖u‖² + 2u·v + ‖v‖² (using Theorem 4.3, p. 236, and the previous Exercise T7). Now, if the equality in the statement holds, then 2u·v = 0 and u·v = 0. Conversely, u·v = 0 implies 2u·v = 0 and ‖u + v‖² = ‖u‖² + ‖v‖².

Chapter Test 4.

1. Find the cosine of the angle between the two given 4-vectors.

Solution. Use the formula for the angle (p. 237): cos θ = (u · v)/(‖u‖ ‖v‖), then substitute the given vectors.

2. Find the unit vector in the direction of (2, −1, −1, 3).

Solution. Use the remark after the definition (p. 239): if x is a nonzero vector, then u = x/‖x‖ is a unit vector in the direction of x. Therefore ‖(2, −1, −1, 3)‖ = √(4 + 1 + 1 + 9) = √15, and the required vector is (1/√15)(2, −1, −1, 3).

3. Is the vector (1, 2, 3) a linear combination of the vectors (1, 3, 2), (2, 2, −1) and (3, 7, 0)?

Solution. Yes: searching for a linear combination x(1, 3, 2) + y(2, 2, −1) +

z(3, 7, 0) = (1, 2, 3) yields the linear system
x + 2y + 3z = 1
3x + 2y + 7z = 2
2x − y = 3.
The coefficient matrix
\[
\begin{bmatrix} 1 & 2 & 3 \\ 3 & 2 & 7 \\ 2 & -1 & 0 \end{bmatrix}
\]
is nonsingular because (compute!) its determinant is 14 ≠ 0. Hence the system has a (unique) solution.

4. Not requested (linear transformations).
5. Not requested (linear transformations).
6. Answer each of the following as true or false.
(a) In Rⁿ, if u · v = 0, then u = 0 or v = 0.
(b) In Rⁿ, if u · v = u · w, then v = w.
(c) In Rⁿ, if cu = 0, then c = 0 or u = 0.
(d) In Rⁿ, ‖cu‖ = c‖u‖.
(e) In Rⁿ, ‖u + v‖ = ‖u‖ + ‖v‖.
(f) Not requested (linear transformations).
(g) The vectors (1, 0, 1) and (1, −1, 0) are orthogonal.
(h) In Rⁿ, if ‖u‖ = 0, then u = 0.
(i) In Rⁿ, if u is orthogonal to v and w, then u is orthogonal to 2v + 3w.
(j) Not requested (linear transformations).

Solution. (a) False: for instance, u = e₁ = (1, 0, ..., 0) ≠ 0 and v = e₂ = (0, 1, 0, ..., 0) ≠ 0, but u · v = e₁ · e₂ = 0.
(b) False: for u, v as above, take w = 0; then e₁ · e₂ = e₁ · 0 = 0 but e₂ ≠ 0.
(c) True: indeed, if cu = 0 and c ≠ 0, then (cu₁, cu₂, ..., cu_n) = (0, 0, ..., 0), i.e. cu₁ = cu₂ = ⋯ = cu_n = 0, which, together with c ≠ 0, implies u₁ = u₂ = ⋯ = u_n = 0, that is, u = 0.
(d) False: compare with the correct formula ‖cu‖ = |c|‖u‖ (p. 246, Theoretical Exercise T9); for instance, if c = −1 and u ≠ 0, then ‖−u‖ = ‖u‖ ≠ −‖u‖.

(e) False: compare with the correct Theorem 4.5 (p. 238), the triangle inequality; if u ≠ 0 and v = −u, then ‖u + v‖ = ‖0‖ = 0 ≠ 2‖u‖ = ‖u‖ + ‖v‖.
(g) False: the dot product u · v = 1·1 + 0·(−1) + 1·0 = 1 ≠ 0, so the vectors are not orthogonal (see the Definition, p. 238).
(h) True: indeed, ‖u‖ = √(u₁² + u₂² + ⋯ + u_n²) = 0 implies u₁² + u₂² + ⋯ + u_n² = 0 and (for real numbers) u₁ = u₂ = ⋯ = u_n = 0, that is, u = 0.
(i) True: indeed, compute (once again by the Definition, p. 238) u · (2v + 3w) = 2(u · v) + 3(u · w) = 2·0 + 3·0 = 0 (using T7, p. 246).
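The norm identities of this chapter (T9, T10 and the failure of item (e)) can be verified on concrete 4-vectors. A minimal Python sketch of our own (the vectors are our examples; `math.isclose` handles floating-point square roots):

```python
# Check of T9 (‖cu‖ = |c|·‖u‖), T10 (Pythagorean theorem for orthogonal u, v)
# and Test 4 item (e) (the triangle inequality is strict here).
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

u, v = [1, 2, -1, 4], [2, 1, 4, 0]
assert dot(u, v) == 0                                  # orthogonal pair

s = [x + y for x, y in zip(u, v)]
assert math.isclose(norm(s) ** 2, norm(u) ** 2 + norm(v) ** 2)   # T10

c = -3
assert math.isclose(norm([c * x for x in u]), abs(c) * norm(u))  # T9

w = [1, 0, 0, 0]                                        # not parallel to u
assert norm([x + y for x, y in zip(u, w)]) < norm(u) + norm(w)   # (e) fails
```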

Chapter 5

Lines and Planes

Page 263.

T2. Show that (u × v) · w = u · (v × w).

Solution. Indeed,
(u × v) · w = (u₂v₃ − u₃v₂, u₃v₁ − u₁v₃, u₁v₂ − u₂v₁) · (w₁, w₂, w₃)
= (u₂v₃ − u₃v₂)w₁ + (u₃v₁ − u₁v₃)w₂ + (u₁v₂ − u₂v₁)w₃
= u₁(v₂w₃ − v₃w₂) + u₂(v₃w₁ − v₁w₃) + u₃(v₁w₂ − v₂w₁)
= (u₁, u₂, u₃) · (v₂w₃ − v₃w₂, v₃w₁ − v₁w₃, v₁w₂ − v₂w₁)
= u · (v × w).

T4. Show that
\[
(u \times v) \cdot w = \begin{vmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{vmatrix}.
\]

Solution. In the previous Exercise we already obtained (u × v) · w = u₁(v₂w₃ − v₃w₂) + u₂(v₃w₁ − v₁w₃) + u₃(v₁w₂ − v₂w₁). But this computation can be continued as follows:
\[
= u_1 \begin{vmatrix} v_2 & v_3 \\ w_2 & w_3 \end{vmatrix} - u_2 \begin{vmatrix} v_1 & v_3 \\ w_1 & w_3 \end{vmatrix} + u_3 \begin{vmatrix} v_1 & v_2 \\ w_1 & w_2 \end{vmatrix} = \begin{vmatrix} u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ w_1 & w_2 & w_3 \end{vmatrix},
\]

using the expansion of the determinant along the first row.

T5. Show that u and v are parallel if and only if u × v = 0.

Solution. In Section 4.2 (p. 238) the following definition is given: two nonzero vectors u and v are parallel if |u · v| = ‖u‖ ‖v‖ (that is, cos θ = ±1, or, equivalently, sin θ = 0, θ denoting the angle between u and v). Notice that ‖u‖ ≠ 0 ≠ ‖v‖ for nonzero vectors, and that ‖u × v‖ = 0 ⟺ u × v = 0. Using the length formula ‖u × v‖ = ‖u‖ ‖v‖ sin θ, we obtain sin θ = 0 if and only if u × v = 0, the required result.

Page 271. T5. Show that an equation of the plane through the noncollinear points P₁(x₁, y₁, z₁), P₂(x₂, y₂, z₂) and P₃(x₃, y₃, z₃) is
\[
\begin{vmatrix} x & y & z & 1 \\ x_1 & y_1 & z_1 & 1 \\ x_2 & y_2 & z_2 & 1 \\ x_3 & y_3 & z_3 & 1 \end{vmatrix} = 0.
\]

Solution. Any three noncollinear points P₁(x₁, y₁, z₁), P₂(x₂, y₂, z₂) and P₃(x₃, y₃, z₃) determine a plane whose equation has the form ax + by + cz + d = 0, where a, b, c and d are real numbers, and a, b, c are not all zero. Since P₁, P₂ and P₃ lie on the plane, their coordinates satisfy this equation:
ax₁ + by₁ + cz₁ + d = 0
ax₂ + by₂ + cz₂ + d = 0
ax₃ + by₃ + cz₃ + d = 0.

We can write these relations, together with the equation of the plane itself, as a homogeneous linear system in the unknowns a, b, c and d:
ax + by + cz + d = 0
ax₁ + by₁ + cz₁ + d = 0
ax₂ + by₂ + cz₂ + d = 0
ax₃ + by₃ + cz₃ + d = 0,
which must have a nontrivial solution. This happens if and only if (see Corollary 3.4, p. 203) the determinant of the coefficient matrix is zero, that is, if and only if
\[
\begin{vmatrix} x & y & z & 1 \\ x_1 & y_1 & z_1 & 1 \\ x_2 & y_2 & z_2 & 1 \\ x_3 & y_3 & z_3 & 1 \end{vmatrix} = 0.
\]

Chapter Test 5.

3. Find parametric equations of the line through the point (5, 2, 1) that is parallel to the vector u = (3, −2, 5).

Solution. x = 5 + 3t, y = 2 − 2t, z = 1 + 5t, −∞ < t < ∞ (see p. 266).

4. Find an equation of the plane passing through the points (1, 2, −1), (3, 4, 5), (0, 1, 1).

Solution.
\[
\begin{vmatrix} x & y & z & 1 \\ 1 & 2 & -1 & 1 \\ 3 & 4 & 5 & 1 \\ 0 & 1 & 1 & 1 \end{vmatrix} = 0
\]
(see the previous Theoretical Exercise T5, p. 271). Expanding the determinant and simplifying gives x − y + 1 = 0.
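The determinant equation of T5 can be sanity-checked numerically: each defining point, substituted for (x, y, z), must make the 4 × 4 determinant vanish. A Python sketch of our own, assuming the Test 5.4 points are (1, 2, −1), (3, 4, 5) and (0, 1, 1) (an assumption — they are partly illegible in this transcription):

```python
# Check of T5 (p. 271): the plane through P1, P2, P3 is the zero set of the
# 4×4 determinant with rows (x, y, z, 1), (P1, 1), (P2, 1), (P3, 1).

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

P1, P2, P3 = (1, 2, -1), (3, 4, 5), (0, 1, 1)   # assumed data of Test 5.4

def plane_eq(x, y, z):
    return det([[x, y, z, 1],
                [*P1, 1],
                [*P2, 1],
                [*P3, 1]])

# Each defining point lies on the plane (determinant with two equal rows = 0).
assert plane_eq(*P1) == 0 and plane_eq(*P2) == 0 and plane_eq(*P3) == 0
# A point satisfying x − y + 1 = 0, such as (2, 3, 7), lies on the plane too.
assert plane_eq(2, 3, 7) == 0
```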

5. Answer each of the following as true or false.
(b) If u · v = 0 and u · w = 0, then u · (v + w) = 0.
(c) If v = −3u, then u × v = 0.
(d) The point (2, 3, 4) lies in the plane 2x − 3y + z = 5.
(e) The planes 2x − 3y + 3z = 2 and 2x + y − z = 4 are perpendicular.

Solution. (b) True, using Theorem 5.1 (p. 260), properties (c) and (a): u · (v + w) = u · v + u · w = 0 + 0 = 0.
(c) True, using Theoretical Exercise T.5 (p. 263), or directly: if u = (u₁, u₂, u₃), then for v = (−3u₁, −3u₂, −3u₃) the cross product
\[
\begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ u_1 & u_2 & u_3 \\ -3u_1 & -3u_2 & -3u_3 \end{vmatrix} = 0,
\]
because (factoring out −3) it has two equal rows.
(d) False. Verification: 2·2 − 3·3 + 4 = −1 ≠ 5.
(e) False. The planes are perpendicular if and only if the corresponding normal vectors are perpendicular, or, if and only if these vectors have dot product zero. But (2, −3, 3) · (2, 1, −1) = 4 − 3 − 3 = −2 ≠ 0.
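The triple-product identities T2 and T4, and item (c) above, are easy to confirm on sample vectors. A self-contained Python sketch of our own (the vectors are arbitrary examples):

```python
# Check of T2/T4: (u×v)·w = u·(v×w) = det of the matrix with rows u, v, w;
# and of Test 5.5(c): v = −3u forces u×v = 0.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def det3(M):
    # first-row expansion, written via the triple product
    return dot(M[0], cross(M[1], M[2]))

u, v, w = (1, 2, 3), (-1, 0, 4), (2, 5, -2)
assert dot(cross(u, v), w) == dot(u, cross(v, w)) == det3([u, v, w])

assert cross(u, tuple(-3 * x for x in u)) == (0, 0, 0)   # parallel vectors
```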

Chapter 6

Vector Spaces

Page 302.

T2. Let S₁ and S₂ be finite subsets of Rⁿ and let S₁ be a subset of S₂. Show that:
(a) If S₁ is linearly dependent, so is S₂.
(b) If S₂ is linearly independent, so is S₁.

Solution. Since S₁ is a subset of S₂, denote the vectors in the finite sets as follows: S₁ = {v₁, v₂, ..., v_k} and S₂ = {v₁, v₂, ..., v_k, v_{k+1}, ..., v_m}.
(a) If S₁ is linearly dependent, there are scalars c₁, c₂, ..., c_k, not all zero, such that c₁v₁ + c₂v₂ + ⋯ + c_kv_k = 0. Hence c₁v₁ + c₂v₂ + ⋯ + c_kv_k + 0·v_{k+1} + ⋯ + 0·v_m = 0, where not all the scalars are zero, and so S₂ is linearly dependent too.
(b) If S₁ is not linearly independent, it is (by the Definition) linearly dependent. By (a), S₂ is also linearly dependent — a contradiction. Hence S₁ is linearly independent.

T4. Suppose that S = {v₁, v₂, v₃} is a linearly independent set of vectors in Rⁿ. Show that T = {w₁, w₂, w₃} is also linearly independent, where w₁ = v₁ + v₂ + v₃, w₂ = v₂ + v₃ and w₃ = v₃.

Solution. Take c₁w₁ + c₂w₂ + c₃w₃ = 0, that is, c₁(v₁ + v₂ + v₃) +

c₂(v₂ + v₃) + c₃v₃ = 0. Hence c₁v₁ + (c₁ + c₂)v₂ + (c₁ + c₂ + c₃)v₃ = 0 and, by the linear independence of S, c₁ = c₁ + c₂ = c₁ + c₂ + c₃ = 0. But this homogeneous system obviously has only the zero solution. Thus T is also linearly independent.

T6. Suppose that S = {v₁, v₂, v₃} is a linearly independent set of vectors in Rⁿ. Is T = {w₁, w₂, w₃}, where w₁ = v₁, w₂ = v₁ + v₂ and w₃ = v₁ + v₂ + v₃, linearly dependent or linearly independent? Justify your answer.

Solution. T is linearly independent. We prove this in a way similar to the previous Exercise T4.

(Bonus). Find all values of a for which {(a², 0, 1), (0, a, 2), (1, 0, 1)} is a basis for R³.

Solution 1. Use the procedure given in Section 6.3 (p. 294) in order to determine the values of a for which the vectors are linearly independent; this amounts to finding the reduced row echelon form of the matrix (we have reversed the order of the vectors to simplify the computation)
\[
\begin{bmatrix} 1 & 0 & 1 \\ 0 & a & 2 \\ a^2 & 0 & 1 \end{bmatrix}
\xrightarrow{-a^2 R_1 + R_3}
\begin{bmatrix} 1 & 0 & 1 \\ 0 & a & 2 \\ 0 & 0 & 1-a^2 \end{bmatrix}.
\]
If a = 0 or a ∈ {±1}, the row echelon form has a zero row. For a ∉ {−1, 0, 1} the three vectors are linearly independent, and so they form a basis of R³ (by Theorem 6.9(a), p. 312).

Solution 2. Use Corollary 6.4 from Section 6.6 (p. 335): it is sufficient to find the values of a for which the determinant
\[
\begin{vmatrix} a^2 & 0 & 1 \\ 0 & a & 2 \\ 1 & 0 & 1 \end{vmatrix} = a(a^2 - 1) \neq 0.
\]

These are a ∈ R − {−1, 0, 1}.

T9. Show that if {v₁, v₂, ..., v_n} is a basis for a vector space V and c ≠ 0, then {cv₁, v₂, ..., v_n} is also a basis for V.

Solution. By definition: if k₁(cv₁) + k₂v₂ + ⋯ + k_nv_n = 0, then (the system {v₁, v₂, ..., v_n} being linearly independent) k₁c = k₂ = ⋯ = k_n = 0 and (since c ≠ 0) k₁ = k₂ = ⋯ = k_n = 0. The rest is covered by Theorem 6.9(a).

T10. Let S = {v₁, v₂, v₃} be a basis for a vector space V. Show that T = {w₁, w₂, w₃}, where w₁ = v₁ + v₂ + v₃, w₂ = v₂ + v₃ and w₃ = v₃, is also a basis for V.

Solution. Using Theorem 6.9(a), it is sufficient to show that T = {w₁, w₂, w₃} is linearly independent. If c₁w₁ + c₂w₂ + c₃w₃ = 0, then c₁(v₁ + v₂ + v₃) + c₂(v₂ + v₃) + c₃v₃ = c₁v₁ + (c₁ + c₂)v₂ + (c₁ + c₂ + c₃)v₃ = 0 and, {v₁, v₂, v₃} being linearly independent, c₁ = c₁ + c₂ = c₁ + c₂ + c₃ = 0. Hence (by elementary computations) c₁ = c₂ = c₃ = 0.

T12. Suppose that {v₁, v₂, ..., v_n} is a basis for Rⁿ. Show that if A is an n × n nonsingular matrix, then {Av₁, Av₂, ..., Av_n} is also a basis for Rⁿ.

Solution. First we give a solution for Exercise T10, Section 6.3: Suppose that {v₁, v₂, ..., v_n} is a linearly independent set of vectors in Rⁿ. Show that if A is an n × n nonsingular matrix, then {Av₁, Av₂, ..., Av_n} is linearly independent.

Solution using Section 6.6. Let M = [v₁ v₂ ... v_n] be the matrix which has the given vectors as columns (col_i(M) = v_i, 1 ≤ i ≤ n). Thus the product AM = [Av₁ Av₂ ... Av_n], that is, col_i(AM) = Av_i, 1 ≤ i ≤ n. The given vectors being linearly independent, det(M) ≠ 0. The matrix A being nonsingular, det(A) ≠ 0. Hence det(AM) = det(A)det(M) ≠ 0, so that {Av₁, Av₂, ..., Av_n} is a linearly independent set of vectors, by Corollary 6.4 in Section 6.6.

Finally, since {Av₁, Av₂, ..., Avₙ} is linearly independent and has n vectors (notice that dim(Rⁿ) = n), it only remains to use Theorem 6.9 (a), p. 312.

T13. Suppose that {v₁, v₂, ..., vₙ} is a linearly independent set of vectors in Rⁿ and let A be a singular matrix. Prove or disprove that {Av₁, Av₂, ..., Avₙ} is linearly independent.

Solution. Disprove (see also the previous Exercise). Indeed, {Av₁, Av₂, ..., Avₙ} can be linearly dependent: take for instance A = 0; a set of vectors which contains the zero vector is not linearly independent.

Page (6 Bonus Exercises). 3. Let S = {v₁, v₂, v₃, v₄, v₅} be the given set of vectors in R⁴. Find a basis for the subspace V = span S of R⁴.

Solution. We use the procedure given after the proof of Theorem 6.6, p. 308: transform the matrix A having the given vectors as columns into reduced row echelon form; the columns containing the leading 1s show that v₁ and v₂ form a basis.
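The span–basis procedure can be mimicked with a rank test; since the exercise's vectors are not reproduced above, the vectors below are our own placeholders:

```python
import numpy as np

vs = [np.array([1., 2., 0., 1.]),
      np.array([2., 4., 0., 2.]),   # = 2*v1, contributes nothing new
      np.array([0., 1., 1., 0.]),
      np.array([1., 3., 1., 1.]),   # = v1 + v3, contributes nothing new
      np.array([0., 0., 0., 1.])]

# Keep each vector that raises the rank of the set collected so far;
# the kept vectors form a basis for span(vs).
basis = []
for v in vs:
    if np.linalg.matrix_rank(np.column_stack(basis + [v])) == len(basis) + 1:
        basis.append(v)
```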

Compute the row and column ranks of A, verifying Theorem 6.11.

Solution. We transform A into row echelon form; the number of nonzero rows shows that the column rank is 3. Further, for the row rank, we transform the transpose Aᵀ into row echelon form; the row rank is (also) 3.

8. If A is a 3 × 4 matrix, what is the largest possible value for rank(A)?

Solution. The matrix A has three rows, so the row rank is at most 3, and four

columns, so the column rank is at most 4. By Theorem 6.11 (row rank = column rank), the rank of A is at most 3.

If A is a 5 × 3 matrix, show that the rows are linearly dependent.

Solution. The matrix having only 3 columns, the column rank is at most 3. Hence the row rank must also be at most 3, and so the 5 rows are necessarily linearly dependent (otherwise the row rank would be 5).

25. Determine whether the given matrix A is singular or nonsingular using Theorem 6.13.

Solution. We transform the matrix into row echelon form in order to compute its rank. The rank (the number of nonzero rows) is 4, and so the matrix is nonsingular.

29. Is the given set S of three vectors in R³ a linearly independent set of vectors?

Solution. The matrix whose columns are the vectors in S has a zero determinant (verify!). Hence, by Corollary 6.4, the set S of vectors is linearly dependent.

T7. Let A be an m × n matrix. Show that the linear system Ax = b has a solution for every m × 1 matrix b if and only if rank(A) = m.

Solution. Case 1: m ≤ n. If the linear system Ax = b has a solution for every m × 1 matrix b, then every m × 1 matrix b belongs to the column space of A. Hence this column space must be all of Rᵐ and so has dimension m. Therefore rank(A) = column rank(A) = dim(column space(A)) = m. Conversely, if rank(A) = m, the equality rank(A) = rank[A b] follows at once because, in general, rank(A) ≤ rank[A b] and rank[A b] ≤ m (because [A b] has only m rows). Finally we use Theorem 6.14.

Case 2: m > n. We observe that in this case rank(A) = m is impossible, because A has only n < m columns and so rank(A) ≤ n. Correspondingly, the linear system Ax = b does not have a solution for every m × 1 matrix b: rank(A) ≤ n < m means the column space of A is a proper subspace of Rᵐ, so some b lies outside it, and for such a b the system Ax = b is inconsistent.

Chapter Test 6.

1. Consider the set W of all vectors in R³ of the form (a, b, c), where a + b + c = 0. Is W a subspace of R³?

Solution. Yes. First, (0, 0, 0) ∈ W, so that W ≠ ∅. For (a, b, c), (a′, b′, c′) ∈ W also (a + a′, b + b′, c + c′) ∈ W because (a + a′) + (b + b′) + (c + c′) = (a + b + c) + (a′ + b′ + c′) = 0 + 0 = 0. Finally, for every k ∈ R and (a, b, c) ∈ W also k(a, b, c) ∈ W because ka + kb + kc = k(a + b + c) = 0.
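The rank criterion used in the solution of T7 (Ax = b is solvable iff rank(A) = rank[A b]) can be sketched as follows; the matrices are our own illustration, with rank(A) = 1 < m = 2:

```python
import numpy as np

def consistent(A, b):
    # Ax = b is solvable iff appending b does not raise the rank.
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.hstack([A, b]))

A = np.array([[1., 0., 2.],
              [2., 0., 4.]])       # rank 1 < m = 2
b_in  = np.array([[1.], [2.]])     # lies in the column space of A: solvable
b_out = np.array([[1.], [0.]])     # outside the column space: inconsistent
```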

2. Find a basis for the solution space of the homogeneous system

x₁ + 3x₂ + 3x₃ − x₄ + 2x₅ = 0
x₁ + 2x₂ + 2x₃ − 2x₄ + 2x₅ = 0
x₁ + x₂ + x₃ − 3x₄ + 2x₅ = 0.

Solution. We transform the coefficient matrix into row echelon form; the corresponding system is then

x₁ = 4x₄ − 2x₅
x₂ = −x₃ − x₄.

Hence, taking x₃ = s, x₄ = t, x₅ = u,

(x₁, x₂, x₃, x₄, x₅) = s(0, −1, 1, 0, 0) + t(4, −1, 0, 1, 0) + u(−2, 0, 0, 0, 1),

so that {(0, −1, 1, 0, 0), (4, −1, 0, 1, 0), (−2, 0, 0, 0, 1)} is the required basis.

3. Does the set of vectors {(1, 1, 1), (1, 3, 1), (1, 2, 2)} form a basis for R³?

Solution. If

    [ 1  1  1 ]
A = [ 1  3  2 ]
    [ 1  1  2 ]

is the matrix having the given vectors as columns, then det(A) = 2 ≠ 0, so that the vectors are linearly independent. Using Theorem 6.9 (a), this is a basis for R³.

4. For what value(s) of λ is the set of vectors {(λ² − 5, 1, 0), (2, −2, 3), (2, 3, −3)} linearly dependent?

Solution. If

    [ λ² − 5   2    2 ]
A = [   1     −2    3 ]
    [   0      3   −3 ]

is the matrix having the given vectors as columns, then det A = −3(λ² − 5) + 12 = 27 − 3λ². Thus det A = 0 if and only if λ ∈ {±3}.

5. Not required.

6. Answer each of the following as true or false.

(a) All vectors of the form (a, 0, a) form a subspace of R³.
(b) In Rⁿ, ‖cx‖ = c‖x‖.
(c) Every set of vectors in R³ containing two vectors is linearly independent.
(d) The solution space of the homogeneous system Ax = 0 is spanned by the columns of A.
(e) If the columns of an n × n matrix form a basis for Rⁿ, so do the rows.
(f) If A is an 8 × 8 matrix such that the homogeneous system Ax = 0 has only the trivial solution, then rank(A) < 8.
(g) Not required.
(h) Every linearly independent set of vectors in R³ contains three vectors.
(i) If A is an n × n symmetric matrix, then rank(A) = n.
(j) Every set of vectors spanning R³ contains at least three vectors.

Solution. (a) True. Indeed, these vectors form exactly

span{(1, 0, 1)}, which is a subspace (by Theorem 6.3, p. 285).

(b) False. Just take c = −1 and x ≠ 0; this contradicts the fact that the length (norm) of any vector is ≥ 0. (The correct identity is ‖cx‖ = |c| ‖x‖.)

(c) False. x = (1, 1, 1) and y = (2, 2, 2) are linearly dependent in R³ because 2x − y = 0.

(d) False. The solution space of the homogeneous system Ax = 0 is spanned by the basic solutions, which correspond to the columns of the reduced row echelon form that do not contain leading ones.

(e) True. In this case the column rank of A is n. But then the row rank is also n, and so the rows form a basis.

(f) False. Just look at Corollary 6.5, p. 335, for n = 8.

(h) False. For instance, each nonzero vector alone in R³ forms a linearly independent set of vectors.

(i) False. For example, the zero n × n matrix is symmetric but does not have rank n (it has zero determinant).

(j) True. The dimension of the subspace of R³ spanned by one or two vectors is at most 2, but dim R³ = 3.
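The determinant computation of Question 4 above can be cross-checked numerically (a sketch; `det_for` is our own helper name):

```python
import numpy as np

def det_for(lam):
    # The given vectors as columns; they are dependent exactly
    # when the determinant 27 - 3*lam^2 vanishes, i.e. lam = +/- 3.
    M = np.array([[lam**2 - 5, 2.0,  2.0],
                  [1.0,       -2.0,  3.0],
                  [0.0,        3.0, -3.0]])
    return np.linalg.det(M)
```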

Chapter 8

Diagonalization

Page 412.

T1. Let λⱼ be a particular eigenvalue of A. Show that the set W of all the eigenvectors of A associated with λⱼ, together with the zero vector, is a subspace of Rⁿ (called the eigenspace associated with λⱼ).

Solution. First, 0 ∈ W and so W ≠ ∅. Secondly, if x, y ∈ W then Ax = λⱼx, Ay = λⱼy and consequently A(x + y) = Ax + Ay = λⱼx + λⱼy = λⱼ(x + y). Hence x + y ∈ W. Finally, if x ∈ W then Ax = λⱼx and so A(cx) = c(Ax) = c(λⱼx) = λⱼ(cx). Hence cx ∈ W.

T3. Show that if A is an upper (lower) triangular matrix, then the eigenvalues of A are the elements on the main diagonal of A.

Solution. The corresponding matrix λIₙ − A is also upper (lower) triangular and, by Theorem 3.7 (Section 3.1, p. 188), the characteristic polynomial f(λ) is given by

| λ − a₁₁    −a₁₂    ...    −a₁ₙ   |
|    0     λ − a₂₂   ...    −a₂ₙ   |
|   ...      ...     ...    ...    |  = (λ − a₁₁)(λ − a₂₂) ··· (λ − aₙₙ)
|    0        0      ...  λ − aₙₙ  |

(expanding successively along the first column). The corresponding characteristic equation has the solutions a₁₁, a₂₂, ..., aₙₙ.

T4. Show that A and Aᵀ have the same eigenvalues.

Solution. Indeed, these two matrices have the same characteristic polynomial: det(λIₙ − Aᵀ) = det((λIₙ)ᵀ − Aᵀ) = det((λIₙ − A)ᵀ) = det(λIₙ − A), the last equality because det(Bᵀ) = det(B) (Theorem 3.1).

T7. Let A be an n × n matrix. (a) Show that det A is the product of all the roots of the characteristic polynomial of A. (b) Show that A is singular if and only if 0 is an eigenvalue of A.

Solution. (a) The characteristic polynomial

f(λ) = det(λIₙ − A) =

| λ − a₁₁    −a₁₂    ...    −a₁ₙ   |
|  −a₂₁    λ − a₂₂   ...    −a₂ₙ   |
|   ...      ...     ...    ...    |
|  −aₙ₁     −aₙ₂     ...  λ − aₙₙ  |

is a degree n polynomial in λ which has the form λⁿ + c₁λⁿ⁻¹ + c₂λⁿ⁻² + ... + cₙ₋₁λ + cₙ (one uses the definition of the determinant in order to check that the leading coefficient actually is 1). If in the equality f(λ) = det(λIₙ − A) = λⁿ + c₁λⁿ⁻¹ + ... + cₙ₋₁λ + cₙ we let λ = 0, we obtain cₙ = det(−A) = (−1)ⁿ det A. On the other hand, if f(λ) = (λ − λ₁)(λ − λ₂) ··· (λ − λₙ) is the decomposition using the (real) eigenvalues of A, then (in the same way, with λ = 0) we obtain f(0) = cₙ = (−1)ⁿ λ₁λ₂···λₙ. Hence λ₁λ₂···λₙ = det A.

(b) By Theorem 3.12 (p. 203), we know that a matrix A is singular if and only if det A = 0. Hence, using (a), A is singular if and only if λ₁λ₂···λₙ = 0, that is, if and only if A has 0 as an eigenvalue.
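The statements of T3, T4 and T7(a) are easy to spot-check numerically (a sketch with hypothetical matrices, not from the book):

```python
import numpy as np

# T3: for a triangular matrix, the eigenvalues are the diagonal entries.
T = np.array([[3., 1., 4.],
              [0., 2., 5.],
              [0., 0., 7.]])
triangular_ok = np.allclose(np.sort(np.linalg.eigvals(T).real),
                            np.sort(np.diag(T)))

# T4 and T7(a): A and A^T share the characteristic polynomial, and the
# product of the eigenvalues equals det(A), here 25.
A = np.array([[2., 1., 0.],
              [0., 3., 1.],
              [1., 0., 4.]])
same_char = np.allclose(np.poly(A), np.poly(A.T))
prod_eigs = np.prod(np.linalg.eigvals(A))
```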

Chapter Test 8.

1. If possible, find a nonsingular matrix P and a diagonal matrix D so that A is similar to D, where A is the given 3 × 3 (lower triangular) matrix.

Solution. The matrix A is just tested for diagonalizability. Since the matrix λI₃ − A is lower triangular, using Theorem 3.7 (Section 3.1, p. 188) the characteristic polynomial is det(λI₃ − A) = (λ − 1)(λ − 2)², so we have a simple eigenvalue λ₁ = 1 and a double eigenvalue λ₂ = 2. For λ₁ = 1 an eigenvector x₁ is given by the system (I₃ − A)x = 0 (solve it!). For the double eigenvalue λ₂ = 2, the system (2I₃ − A)x = 0 yields only one linearly independent eigenvector x₂ (solve it!). Hence A has at most two linearly independent eigenvectors, and therefore A is not diagonalizable.

2 and 3: not required.

4. Answer each of the following as true or false.

(a) Not required.
(b) If A is diagonalizable, then each of its eigenvalues has multiplicity one.
(c) If none of the eigenvalues of A are zero, then det(A) ≠ 0.
(d) If A and B are similar, then det(A) = det(B).
(e) If x and y are eigenvectors of A associated with the distinct eigenvalues λ₁ and λ₂, respectively, then x + y is an eigenvector of A associated with the eigenvalue λ₁ + λ₂.

Solution. (b) False: see Example 6.

(c) True: according to Exercise T7, p. 412 (Section 8.1), det(A) is the product of all roots of the characteristic polynomial. But these are the eigenvalues of A (see Theorem 8.2, p. 413). If none of these is zero, neither is their product.

(d) True: indeed, if B = P⁻¹AP then det(B) = det(P⁻¹AP) = det(P⁻¹) det(A) det(P) = det(P⁻¹) det(P) det(A) = det(A), since all these factors are real numbers and det(P⁻¹) det(P) = det(P⁻¹P) = 1.

(e) False: the hypotheses imply Ax = λ₁x and Ay = λ₂y. By addition, A(x + y) = Ax + Ay = λ₁x + λ₂y, which is not, in general, equal to (λ₁ + λ₂)(x + y). An easy counterexample: any 2 × 2 matrix having two different nonzero eigenvalues. The sum λ₁ + λ₂ is then a real number different from both, and a 2 × 2 matrix cannot have 3 eigenvalues (for a specific example, see Example 3, p. 424).
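Item (d) can be confirmed numerically on a small hypothetical pair of similar matrices:

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])             # det(A) = 6
P = np.array([[1., 2.],
              [1., 3.]])             # det(P) = 1, so P is invertible
B = np.linalg.inv(P) @ A @ P         # B is similar to A

# det(B) = det(P^-1) det(A) det(P) = det(A) = 6
```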

Bibliography

[1] Kolman B., Hill D.R., Introductory Linear Algebra with Applications, 8th Edition, Prentice Hall Inc., New Jersey, 2005.


Using determinants, it is possible to express the solution to a system of equations whose coefficient matrix is invertible:

Cramer s Rule and the Adjugate Using determinants, it is possible to express the solution to a system of equations whose coefficient matrix is invertible: Theorem [Cramer s Rule] If A is an invertible

Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

ISOMETRIES OF R n KEITH CONRAD

ISOMETRIES OF R n KEITH CONRAD 1. Introduction An isometry of R n is a function h: R n R n that preserves the distance between vectors: h(v) h(w) = v w for all v and w in R n, where (x 1,..., x n ) = x

Math 54. Selected Solutions for Week Is u in the plane in R 3 spanned by the columns

Math 5. Selected Solutions for Week 2 Section. (Page 2). Let u = and A = 5 2 6. Is u in the plane in R spanned by the columns of A? (See the figure omitted].) Why or why not? First of all, the plane in

7 - Linear Transformations

7 - Linear Transformations Mathematics has as its objects of study sets with various structures. These sets include sets of numbers (such as the integers, rationals, reals, and complexes) whose structure

Calculus and linear algebra for biomedical engineering Week 4: Inverse matrices and determinants

Calculus and linear algebra for biomedical engineering Week 4: Inverse matrices and determinants Hartmut Führ fuehr@matha.rwth-aachen.de Lehrstuhl A für Mathematik, RWTH Aachen October 30, 2008 Overview

4.5 Linear Dependence and Linear Independence

4.5 Linear Dependence and Linear Independence 267 32. {v 1, v 2 }, where v 1, v 2 are collinear vectors in R 3. 33. Prove that if S and S are subsets of a vector space V such that S is a subset of S, then

B such that AB = I and BA = I. (We say B is an inverse of A.) Definition A square matrix A is invertible (or nonsingular) if matrix

Matrix inverses Recall... Definition A square matrix A is invertible (or nonsingular) if matrix B such that AB = and BA =. (We say B is an inverse of A.) Remark Not all square matrices are invertible.

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors. Jordan canonical form (continued).

MATH 423 Linear Algebra II Lecture 38: Generalized eigenvectors Jordan canonical form (continued) Jordan canonical form A Jordan block is a square matrix of the form λ 1 0 0 0 0 λ 1 0 0 0 0 λ 0 0 J = 0