1. (16 pts) Find all matrices that commute with A = [1 1; 0 1].

Write B = [a b; c d]. Setting AB = BA and equating the four components of the product matrices, we see that c = 0 and a = d, so all matrices of the form B = [a b; 0 a], where a and b are any real numbers (and only matrices of this form), will commute with A.
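Aside: The computation can be checked symbolically. Below is a minimal sketch, assuming SymPy is available; solving AB - BA = 0 recovers exactly the constraints c = 0 and a = d.

```python
from sympy import Matrix, symbols, solve

a, b, c, d = symbols('a b c d')
A = Matrix([[1, 1], [0, 1]])
B = Matrix([[a, b], [c, d]])

# AB - BA = 0 gives four linear equations in a, b, c, d.
constraints = solve(A * B - B * A, [a, b, c, d], dict=True)
print(constraints)  # expect c = 0 and a = d, with b free
```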

2. (16 pts) Let A be the given 3 x 3 matrix and let B = {v_1, v_2, v_3} be the given basis of R^3. Find the B-matrix of the linear transformation T(x) = Ax.

The easiest method is probably to compute this by columns: the first column of the B-matrix is [T(v_1)]_B = [A v_1]_B, and doing the same thing with the other basis elements gives [T(v_2)]_B and [T(v_3)]_B, so the B-matrix of T is the matrix whose columns are these three coordinate vectors.

Using the method B = S^{-1} A S, where S = [v_1 v_2 v_3], to compute this is also not too ugly in this case, since the inverse of S is fairly nice. If you did it this way, you should have ended up with the same B-matrix.
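Aside: The column-by-column recipe and the S^{-1}AS recipe are easy to compare by machine. The matrix and basis below are illustrative stand-ins (the exam's values are not reproduced here), assuming SymPy.

```python
from sympy import Matrix

# Illustrative stand-ins, not the exam's data:
A = Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 3]])
v1, v2, v3 = Matrix([1, 0, 0]), Matrix([1, 1, 0]), Matrix([0, 0, 1])

S = Matrix.hstack(v1, v2, v3)   # change-of-basis matrix: columns are the basis
B_matrix = S.inv() * A * S      # the B-matrix of T(x) = Ax

# Column-by-column check: column i of the B-matrix is [T(v_i)]_B = S^{-1}(A v_i).
assert B_matrix[:, 0] == S.inv() * (A * v1)
print(B_matrix)
```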

3. (16 pts) Let A be an n x m matrix and V be a subspace of R^n. Define W = { x in R^m : Ax in V }. Show that W is a subspace of R^m.

We need to check the three properties in the definition of a subspace of R^m:

(a) Note that A0 = 0 and 0 is in V, since V is a subspace of R^n and all subspaces contain 0. Thus, 0 is in W.

(b) Let x, y be in W. That is, Ax, Ay are in V. Then A(x + y) = Ax + Ay is in V, since subspaces are closed under addition. Thus, x + y is in W.

(c) Let x be in W and k in R. That is, Ax is in V. Then A(kx) = kAx is in V, since subspaces are closed under scalar multiplication. Thus, kx is in W.

The three properties hold, so W is a subspace of R^m.

Aside: We can use the above to show that the solutions to (for example) ax + by + cz = dx + ey + fz = gx + hy + jz form a subspace of R^3. Note that the solutions to these equalities satisfy Ax = k[1; 1; 1] for some k in R, where A = [a b c; d e f; g h j], so if we let V = span([1; 1; 1]), then the set of solutions is precisely the set W that we proved was a subspace above. A few special cases of the above: 1) If im(A) is contained in V, then W = R^m. 2) If V = {0}, then W = ker(A).
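Aside: W can also be computed explicitly. If P is the orthogonal projection onto V, then Ax lies in V exactly when (I - P)Ax = 0, so W = ker((I - P)A). A sketch with hypothetical data, assuming SymPy:

```python
from sympy import Matrix, eye

# Hypothetical data for illustration: A is 3x3, V = span([1, 1, 1]) as in the aside.
A = Matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
v = Matrix([1, 1, 1])

# Ax lies in V exactly when the component of Ax orthogonal to V vanishes,
# so W = ker((I - P) A), where P is the orthogonal projection onto V.
P = (v * v.T) / v.dot(v)
W_basis = ((eye(3) - P) * A).nullspace()
print(W_basis)  # a basis of W; one can check that A*w is a multiple of v for each w
```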

4. Consider the matrix

A = [2 2 3 3 5; 4 4 3 2 1; 3 3 7 3 15; 2 2 8 2 20] with rref(A) = [1 1 0 0 -2; 0 0 1 0 3; 0 0 0 1 0; 0 0 0 0 0].

(a) (8 pts) Find a basis of im(A). What is the dimension of im(A)?

We know that the columns of A corresponding to the columns of rref(A) with leading 1s will form a basis of im(A), so {[2; 4; 3; 2], [3; 3; 7; 8], [3; 2; 3; 2]} is a basis of im(A), and dim(im(A)) = 3.

(b) (8 pts) Find a basis of ker(A). What is the dimension of ker(A)?

From rref(A)x = 0, we see that the solutions to Ax = 0 are vectors of the form

x = [-s + 2t; s; -3t; 0; t] = s[-1; 1; 0; 0; 0] + t[2; 0; -3; 0; 1],

so {[-1; 1; 0; 0; 0], [2; 0; -3; 0; 1]} is a basis of ker(A), and dim(ker(A)) = 2.

Note that dim(im(A)) + dim(ker(A)) = 5, as guaranteed by the Rank-Nullity Theorem.
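Aside: A quick machine check of both parts, assuming SymPy; it re-derives the rref, the pivot columns, and the kernel basis.

```python
from sympy import Matrix

A = Matrix([
    [2, 2, 3, 3,  5],
    [4, 4, 3, 2,  1],
    [3, 3, 7, 3, 15],
    [2, 2, 8, 2, 20],
])

R, pivots = A.rref()
print(R)       # the rref shown above
print(pivots)  # (0, 2, 3): columns 1, 3, 4 are the pivot columns

im_basis  = [A[:, j] for j in pivots]  # pivot columns of A span im(A)
ker_basis = A.nullspace()              # free variables give ker(A)
assert len(im_basis) + len(ker_basis) == A.cols  # Rank-Nullity: 3 + 2 = 5
```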

5. (12 pts) Let A be an n x m matrix. Let v_1, ..., v_k be a basis of ker(A) and w_1, ..., w_l be a basis of im(A). For i = 1, ..., l, let x_i in R^m be a vector such that Ax_i = w_i. Show that B = {v_1, ..., v_k, x_1, ..., x_l} is a basis of R^m. (Hint: It may be useful to determine what k + l equals, in terms of n and m.)

By Rank-Nullity, k + l = dim(ker(A)) + dim(im(A)) = m. We know that any m linearly independent vectors in R^m will form a basis of R^m (and similarly for m vectors that span R^m), so it is enough to show that the vectors of B either are linearly independent or span R^m. Showing linear independence is easier, but I'll show how to do it either way below. Note that you only need to do one of the two below (unless you didn't figure out that k + l = m, in which case you'd need to show both).

Let T(x) = Ax. (Technically, you don't need to do this: you can use A below everywhere that I use T. I just want to view things in terms of linear transformations instead of matrices for a reason that I'll explain at the end. Note im(T) = im(A) and ker(T) = ker(A) by definition.)

Linear Independence: Consider the relation

c_1 v_1 + ... + c_k v_k + d_1 x_1 + ... + d_l x_l = 0.   (1)

We want to show that only the trivial relation exists; that is, that c_1 = ... = c_k = d_1 = ... = d_l = 0. Let's apply T to both sides of (1). Since we can split on addition and scalar multiplication (since T is a linear transformation), we get:

c_1 T(v_1) + ... + c_k T(v_k) + d_1 T(x_1) + ... + d_l T(x_l) = T(0).

Note that T(v_i) = 0 since v_i is in ker(T) for i = 1, ..., k, and we chose x_i so that T(x_i) = w_i for i = 1, ..., l. Also, T(0) = 0. So the above equation becomes

d_1 w_1 + ... + d_l w_l = 0.

But we know that w_1, ..., w_l are linearly independent, so they satisfy only the trivial relation, so d_1 = ... = d_l = 0. Plugging these values into (1), we're left with

c_1 v_1 + ... + c_k v_k = 0.

Since we also know that v_1, ..., v_k are linearly independent, they also satisfy only the trivial relation, so c_1 = ... = c_k = 0. Thus, (1) is the trivial relation, so the vectors of B are linearly independent.

Spanning Set: Let x be in R^m. We will show that x is in span(v_1, ..., v_k, x_1, ..., x_l) by explicitly computing it as a linear combination of these vectors. First, note that T(x) is in im(T), so there exist unique scalars d_1, ..., d_l such that T(x) = d_1 w_1 + ... + d_l w_l, since the vectors w_1, ..., w_l form a basis of im(T). Let u = d_1 x_1 + ... + d_l x_l for the same scalars d_i. Then T(u) = d_1 w_1 + ... + d_l w_l = T(x) (again, since we can split on addition and scalar multiplication). Then T(x - u) = T(x) - T(u) = 0, so x - u is in ker(T). Thus, x - u = c_1 v_1 + ... + c_k v_k for some scalars c_1, ..., c_k, since v_1, ..., v_k form a basis of ker(T). Thus,

x = c_1 v_1 + ... + c_k v_k + u = c_1 v_1 + ... + c_k v_k + d_1 x_1 + ... + d_l x_l,

so x is in span(v_1, ..., v_k, x_1, ..., x_l). Since x in R^m was arbitrary, this shows that the vectors of B span R^m.
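Aside: The construction in this problem is easy to test numerically. The sketch below, assuming NumPy, builds a random instance: a kernel basis and an image basis via the SVD, preimages x_i by least squares (exact here, since each w_i lies in im(A)), and then checks that the combined list of k + l vectors has rank m.

```python
import numpy as np

# Illustrative random instance of the construction in Problem 5 (not exam data).
rng = np.random.default_rng(0)
n, m = 4, 6
A = rng.integers(-3, 4, size=(n, m)).astype(float)

# Basis of ker(A) via the SVD: right singular vectors for zero singular values.
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))   # rank l = dim(im(A))
v_basis = Vt[r:].T           # m x k, a basis of ker(A)
w_basis = U[:, :r]           # n x l, an (orthonormal) basis of im(A)

# Preimages x_i with A x_i = w_i; least squares finds an exact solution
# because each w_i lies in im(A).
x_basis = np.linalg.lstsq(A, w_basis, rcond=None)[0]

B = np.hstack([v_basis, x_basis])     # k + l = m vectors in R^m
assert np.linalg.matrix_rank(B) == m  # B is a basis of R^m
```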

Aside: Note that this provides a second proof of the Rank-Nullity Theorem. If we didn't know that k + l = m, we could prove both linear independence and spanning as above, which would show that B is a basis of R^m. Since B contains k + l vectors and we know that every basis of R^m has m vectors, we would then see that dim(im(A)) + dim(ker(A)) = k + l = m.

Why did I use T above instead of A? Because this generalizes to abstract vector spaces. In Section 4.2, we'll define linear transformations, images, and kernels in abstract vector spaces. Then, following the same argument above exactly will prove that for any linear transformation T from a finite-dimensional vector space V to a vector space W, dim(im(T)) + dim(ker(T)) = dim(V). Cool. But where did that finite-dimensional bit come from, I hear you ask. Good question. Above, we've assumed that im(T) and ker(T) are finite-dimensional, which will clearly be the case if V is finite-dimensional, but may not be the case if V is infinite-dimensional.

Aside 2: Some people tried to use one linear relation for the vectors v_1, ..., v_k and one linear relation for x_1, ..., x_l instead of combining them into one relation like we did above in (1). This doesn't work. For example, the vectors [1; 0; 0], [1; 2; 0] are linearly independent and the vectors [0; 1; 0], [0; 0; 1] are linearly independent, but the four vectors [1; 0; 0], [1; 2; 0], [0; 1; 0], [0; 0; 1] taken together are not linearly independent (since [1; 2; 0] = [1; 0; 0] + 2[0; 1; 0] is redundant). Linear independence is a property of a set of vectors, not of the individual vectors themselves, so you need to check them all together.
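A short numerical restatement of Aside 2, assuming NumPy: each pair of vectors has rank 2, but stacking all four gives rank 3, not 4.

```python
import numpy as np

u = np.array([[1, 1], [0, 2], [0, 0]], dtype=float)  # two independent vectors (columns)
w = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)  # two independent vectors (columns)
print(np.linalg.matrix_rank(u), np.linalg.matrix_rank(w))  # 2 and 2
print(np.linalg.matrix_rank(np.hstack([u, w])))            # 3 < 4: dependent together
```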

6. (4 pts each) Decide whether the following statements are true or false. You do not need to show work.

Let A be an n x n matrix. If A is similar to I_n, then A = I_n.
True. If A is similar to I_n, then there is an invertible matrix S such that A = S^{-1} I_n S = S^{-1} S = I_n.

Let A be an invertible n x n matrix such that A^2 = A. Then A = I_n.
True. Since A is invertible, we can multiply both sides by A^{-1}: A^{-1} A^2 = A^{-1} A, so A = I_n. Note that we saw on the review that this is not necessarily the case if we don't know that A is invertible.

Let A be an n x n matrix such that im(A) = ker(A). Then n must be even.
True. By Rank-Nullity, dim(im(A)) = dim(ker(A)) = n/2. If n is odd, this will not be an integer, which is impossible since the dimension must be an integer.

Let A be a 2 x 2 matrix that represents a reflection across a line through the origin. Then A is similar to [1 0; 0 -1].
True. Recall that matrices will be similar if they represent the same transformation, but in different bases. Since we know how to construct the B-matrix of a transformation column-by-column, this question is asking if we can find a basis B = {v, w} of R^2 such that Av = v and Aw = -w. One way to do this is to let v be a unit vector on the line and w be a unit vector perpendicular to the line. From the geometry, it's clear that Av = v and Aw = -w, and since we know that perpendicular unit vectors are linearly independent, B really is a basis. Algebraically, this tells us that if we let S = [v w], then S is invertible and [1 0; 0 -1] = S^{-1} A S.

Let A be an n x n matrix such that the columns of A are linearly independent. Then Ax = b has a unique solution for all b in R^n.
True. This is one of the conditions in Summary 3.3.10.

Let A be an n x m matrix. Then dim(im(A)) + dim(ker(A)) = n.
False. By the Rank-Nullity Theorem, dim(im(A)) + dim(ker(A)) = m, not n.
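Aside: The reflection statement can be verified concretely, assuming SymPy. The line below (spanned by [3, 4]) is an arbitrary choice; any line through the origin works the same way.

```python
from sympy import Matrix

# Reflection across the line spanned by v = [3, 4] in R^2 (a hypothetical example).
v = Matrix([3, 4]) / 5           # unit vector on the line
w = Matrix([-4, 3]) / 5          # unit vector perpendicular to it
A = 2 * v * v.T - Matrix.eye(2)  # reflection matrix across span(v)

S = Matrix.hstack(v, w)
print(S.inv() * A * S)           # [[1, 0], [0, -1]], as claimed
```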