LECTURE I

I. Column space, nullspace, solutions to the Basic Problem

Let A be an m × n matrix, and y a vector in R^m. Recall the fact:

Theorem: A x = y has a solution if and only if y is in the column space R(A) of A.

Now let's prove a new theorem which talks about when such a solution is unique. Suppose x, x' are two solutions to A x = y. Then

    A(x' - x) = A x' - A x = y - y = 0.

So x' - x is in the nullspace N(A) of A. In fact, we can say more.

Theorem: Let x_0 be a solution to A x = y. Then the set of all solutions is the set of vectors which can be written x_0 + n for some n in N(A).

Proof omitted; it's not much more than the above argument. We also get the corollary:

Theorem: Suppose y is in the column space of A. Then A x = y has a unique solution if and only if N(A) = {0}.

Ex: Take the averaging matrix

    V = [ 1/3  1/3  1/3 ]
        [ 1/3  1/3  1/3 ]
        [ 1/3  1/3  1/3 ]

Now:

N(V) is the set {x : x_1 + x_2 + x_3 = 0};
R(V) is the set span{(1/3, 1/3, 1/3)}.

In particular, the equation V x = y, if it has a solution, will never have a unique one.

One minute contemplation: in view of the above theorem, how do we think of the set of solutions of A x = y?
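
Here is a quick numerical check of the solution-set theorem, using the averaging matrix from the example. This is a sketch in Python/numpy; the particular right-hand side y, the solution x0, and the nullspace vector n are my own choices for illustration, not from the lecture.

    import numpy as np

    # The averaging matrix from the example: every entry is 1/3.
    V = np.full((3, 3), 1/3)

    # A right-hand side in the column space R(V): a multiple of (1/3, 1/3, 1/3).
    y = np.array([2.0, 2.0, 2.0])

    # One particular solution x0: the entries of x0 average to 2.
    x0 = np.array([6.0, 0.0, 0.0])

    # Any n with n_1 + n_2 + n_3 = 0 lies in N(V).
    n = np.array([1.0, -4.0, 3.0])

    # x0 + n is again a solution, exactly as the theorem predicts.
    print(V @ x0)        # [2. 2. 2.]
    print(V @ (x0 + n))  # [2. 2. 2.]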

By contrast, take the diagonal matrix

    D = [ 2  0  0 ]
        [ 0  3  0 ]
        [ 0  0  5 ]

Then ask what people think the nullspace and column space are. N(D) is the zero space; R(D) = R^3. So, for every y, the equation D x = y has a unique solution (which we already knew, because D is invertible).

II. Rank

Let's understand the averaging example a little more closely. If we try to solve

    [ 1/3  1/3  1/3 ] [ x_1 ]   [ y_1 ]
    [ 1/3  1/3  1/3 ] [ x_2 ] = [ y_2 ]
    [ 1/3  1/3  1/3 ] [ x_3 ]   [ y_3 ]

we get, after elimination,

    [ 1/3  1/3  1/3 ] [ x_1 ]   [ y_1       ]
    [  0    0    0  ] [ x_2 ] = [ y_2 - y_1 ]
    [  0    0    0  ] [ x_3 ]   [ y_3 - y_1 ]

Provisional definition: The rank of a matrix A is the number of nonzero pivots in A after elimination.

So the rank of V is 1, while the rank of D is 3. (Elimination is already complete there!) Let Z be the zero matrix. What is the rank of Z?

Now make a little table:

                    V       D      Z
    rank            1       3      0
    nullspace       plane   0      R^3
    column space    line    R^3    0

This may give us the idea that the rank tells us something about the dimension of the column space, which in turn seems to have something to do with the dimension of the nullspace. This will turn out to be exactly correct, once we work out the proper definition of the terms involved!
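
The little table is easy to spot-check numerically. The sketch below, in Python/numpy, uses numpy.linalg.matrix_rank, which counts nonzero singular values; for these matrices that agrees with the pivot count from elimination.

    import numpy as np

    V = np.full((3, 3), 1/3)   # averaging matrix: rank 1
    D = np.diag([2, 3, 5])     # invertible diagonal matrix: rank 3
    Z = np.zeros((3, 3))       # zero matrix: rank 0

    for name, M in [("V", V), ("D", D), ("Z", Z)]:
        print(name, np.linalg.matrix_rank(M))
    # V 1
    # D 3
    # Z 0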

Note that we can now prove:

Theorem (Invertibility theorem III): Suppose A is an n × n matrix such that N(A) = {0} and R(A) = R^n. Then A is invertible.

Proof. The equation A x = y has a solution for every y, because every y is in the column space of A. This solution is always unique, because N(A) = {0}. So A x = y always has a unique solution. It now follows from invertibility theorem II that A is invertible.

LECTURE II

I. Linear independence and basis

Lots of important basic definitions today. Remember our discussion from the first week: Two vectors v_1, v_2 span a plane unless one is a scalar multiple of the other. Today we'll talk about the proper generalization of the notion above to larger sets of vectors.

Definition: A set of vectors v_1, ..., v_n is linearly independent if and only if the only solution to

    x_1 v_1 + ... + x_n v_n = 0

is x_1 = ... = x_n = 0.

Alternative 1: v_1, ..., v_n is linearly independent if and only if the matrix equation

    [ v_1 ... v_n ] x = 0

(where the v_i are the columns of the matrix and x = (x_1, ..., x_n)) has only the solution x = 0.

Alternative 2, most compact of all: v_1, ..., v_n is linearly independent if and only if

    N([ v_1 ... v_n ]) = {0}.
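
Alternative 2 suggests a mechanical test for linear independence: stack the vectors as columns and check that the nullspace of the resulting matrix is trivial, equivalently that its rank equals the number of vectors. A minimal sketch in Python/numpy; the example vectors and the helper name linearly_independent are my own.

    import numpy as np

    def linearly_independent(vectors):
        # The vectors are independent iff N([v_1 ... v_n]) = {0},
        # i.e. iff the column matrix has rank n.
        A = np.column_stack(vectors)
        return np.linalg.matrix_rank(A) == len(vectors)

    v1 = np.array([1.0, 0.0, 1.0])
    v2 = np.array([0.0, 1.0, 1.0])
    print(linearly_independent([v1, v2]))            # True
    print(linearly_independent([v1, v2, v1 + v2]))   # False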

So the notions of linear independence and nullspace are closely related.

Example: Two vectors v_1, v_2. Suppose they are not linearly independent. Then there is an expression x_1 v_1 + x_2 v_2 = 0 such that x_1 and x_2 are not both 0. In other words, one of v_1, v_2 is a scalar multiple of the other. So we can rephrase our fact from week 1: Two vectors v_1, v_2 span a plane as long as they are linearly independent.

Now, a crucial definition.

Definition. Let V be a subspace of R^m. A basis for V is a set of vectors v_1, ..., v_n which:

1. are linearly independent;
2. span V.

For instance, let's start our work by looking at the subspace of R^2:

    V = { (x_1, x_2) : x_1 + x_2 = 0 }

Ask: What are some vectors in this space? Can we produce a basis for it? Is it the only one?

Now (if time permits) turn to partners, and address the following question: can we think of a basis for R^2? How many can we think of?

LECTURE III

Note: I won't define row space and left nullspace this week.

At last we can define dimension.

Theorem. Let V be a subspace of R^m. Then any two bases of V contain the same number of vectors. The number of vectors in a basis for V is called the dimension of V.

This is a wonderful theorem; if only I had time to prove it at the board. But I think the intuitive desire to believe the above fact is strong enough that I can get away without a formal proof.
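
For the subspace V = {x in R^2 : x_1 + x_2 = 0} from Lecture II, one candidate basis is the single vector (1, -1). The sketch below checks this computationally, viewing V as the nullspace of the 1 × 2 matrix [1 1]; it uses scipy.linalg.null_space, and the setup is my own illustration.

    import numpy as np
    from scipy.linalg import null_space

    # V = {x in R^2 : x_1 + x_2 = 0} is the nullspace of [1 1].
    A = np.array([[1.0, 1.0]])

    basis = null_space(A)   # columns form an orthonormal basis of N(A)
    print(basis.shape[1])   # 1: V is one-dimensional
    print(A @ basis)        # ~0: the basis vector really lies in V

    # Since dim V = 1, any single nonzero vector of V, such as (1, -1),
    # is a basis; so the basis is far from unique.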

Our goal for today will be to prove the conjectures from last time. Namely:

Theorem (Rank Theorem): The rank of A is the dimension of R(A). The dimension of R(A) plus the dimension of N(A) is the number of columns of A.

Before we set off to prove this theorem, let me record some of the corollaries. (Here A is m × n.)

Corollary. The following are equivalent:

1. rank(A) = m;
2. R(A) = R^m;
3. A x = y has at least one solution for every y.

Corollary. The following are equivalent:

1. rank(A) = n;
2. N(A) = {0};
3. A x = y has at most one solution for every y.

Corollary. The following are equivalent:

1. rank(A) = m = n;
2. A x = y has exactly one solution for every y;
3. A is invertible.

In other words, we have shown that an invertible matrix must be square!
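
The second statement of the Rank Theorem (rank plus nullity equals the number of columns) can be spot-checked numerically before we prove it. A sketch in Python; the 3 × 5 test matrix, built to have a dependent row, is my own.

    import numpy as np
    from scipy.linalg import null_space

    rng = np.random.default_rng(0)

    # A 3 x 5 matrix with a dependent third row, so the rank is 2.
    rows = rng.standard_normal((2, 5))
    A = np.vstack([rows, rows[0] + rows[1]])

    rank = np.linalg.matrix_rank(A)       # dim R(A)
    nullity = null_space(A).shape[1]      # dim N(A)
    print(rank, nullity, rank + nullity)  # 2 3 5, and 5 = number of columns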

So: now that we've eaten our dessert, let us turn to the vegetables, which in my opinion are actually quite tasty. We want to prove the theorem above.

FACT: Let A be an m × n matrix, and let B be an invertible m × m matrix. Then:

1. N(BA) = N(A).
2. R(BA) and R(A) have the same dimension.

Proof: Suppose v is in N(BA). Then BA v = 0. Multiplying by B^{-1}, we see that A v = 0. The other direction is obvious. This proves 1.

For part 2, let v_1, ..., v_d be a basis for R(A). So for every x in R^n, we have

    A x = k_1 v_1 + ... + k_d v_d

for some scalars k_1, ..., k_d. Thus, we can write

    BA x = k_1 B v_1 + ... + k_d B v_d.

So B v_1, ..., B v_d span R(BA). Moreover, B v_1, ..., B v_d are linearly independent, because if

    k_1 B v_1 + ... + k_d B v_d = 0,

we can just multiply by B^{-1} and see that all the k_i must be 0. So R(BA) has a basis with d elements, so it has dimension d.

In particular, if A is any matrix, and U is the matrix resulting from Gaussian elimination, then

    U = (E_1 E_2 E_3 ...) A,

and we can take B = (E_1 E_2 E_3 ...), which is invertible. So by the FACT above, we see that

    N(A) = N(U)
    dim R(A) = dim R(U).

Thus, to prove the Rank Theorem, it suffices to prove it in the case where Gaussian elimination is already finished, since Gaussian elimination, as we have just seen, does not affect the numbers we are interested in.

There presumably won't be time to say much, if anything, more.
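
The FACT can also be checked numerically. An LU factorization writes A = P L U, so U = L^{-1} P^T A = B A with B invertible, which is exactly the situation above. A sketch in Python/scipy; the test matrix, chosen to have a dependent row, is mine.

    import numpy as np
    from scipy.linalg import lu, null_space

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0],   # twice the first row
                  [1.0, 0.0, 1.0]])

    P, L, U = lu(A)   # A = P @ L @ U, with P and L invertible

    # dim R(A) = dim R(U):
    print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(U))  # True

    # N(A) = N(U): each matrix kills the other's nullspace basis.
    print(np.allclose(A @ null_space(U), 0))  # True
    print(np.allclose(U @ null_space(A), 0))  # True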