LECTURE 7. Subspaces of $\mathbb{R}^n$




1. Subspaces

Definition 7.1. A subset $W$ of $\mathbb{R}^n$ is said to be closed under vector addition if for all $u, v \in W$, $u + v$ is also in $W$. If $rv$ is in $W$ for all vectors $v \in W$ and all scalars $r \in \mathbb{R}$, then we say that $W$ is closed under scalar multiplication. A non-empty subset $W$ of $\mathbb{R}^n$ that is closed under both vector addition and scalar multiplication is called a subspace of $\mathbb{R}^n$.

Example 7.2. Let $u = (1,1)$ and $v = (1,2)$ be vectors in $\mathbb{R}^2$. We can construct a subset that is closed under vector addition as follows:
$$W = \{ w \in \mathbb{R}^2 \mid w = ju + kv;\ j, k \text{ positive integers} \}$$
To see that this set is closed under vector addition, let $w, w' \in W$. Then
$$w = ju + kv, \qquad w' = j'u + k'v$$
for some positive integers $j$, $k$, $j'$, and $k'$. But then
$$w + w' = (ju + kv) + (j'u + k'v) = (j + j')u + (k + k')v \in W$$
because both $j + j'$ and $k + k'$ are positive integers whenever $j$, $k$, $j'$, and $k'$ are.

The set $W$ is not a subspace, however, because it is not closed under scalar multiplication. To see this, note that the vector
$$\tfrac{1}{2}u = \left(\tfrac{1}{2}, \tfrac{1}{2}\right)$$
cannot be represented as a sum of $u$ and $v$ with positive integer coefficients: any vector $ju + kv$ with $j, k \geq 1$ has first coordinate $j + k \geq 2$.

Example 7.3. The preceding example, however, does provide a clue as to one way of constructing a subspace. Let $u = (1,1)$ and $v = (1,2)$ be vectors in $\mathbb{R}^2$, and consider the set
$$W = \{ w \in \mathbb{R}^2 \mid w = ju + kv;\ j, k \in \mathbb{R} \}$$
This set is closed under vector addition because if $w, w' \in W$, then there are real numbers $r$, $s$, $r'$, and $s'$ such that
$$w = ru + sv, \qquad w' = r'u + s'v$$
But then
$$w + w' = (r + r')u + (s + s')v \in W$$
since $r + r' \in \mathbb{R}$ and $s + s' \in \mathbb{R}$. And for any real number $t$,
$$tw = (tr)u + (ts)v \in W$$
since $tr \in \mathbb{R}$ and $ts \in \mathbb{R}$. The following theorem generalizes this last example.
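The closure properties above can be probed numerically. The sketch below (our own illustration, not part of the lecture; the helper `in_W` and the search bound are ours) checks that the positive-integer-combination set of Example 7.2 is closed under addition but not under scalar multiplication:

```python
import numpy as np

u = np.array([1.0, 1.0])
v = np.array([1.0, 2.0])

def in_W(w, max_coeff=50):
    """Check whether w = j*u + k*v for some positive integers j, k (bounded search)."""
    for j in range(1, max_coeff + 1):
        for k in range(1, max_coeff + 1):
            if np.allclose(j * u + k * v, w):
                return True
    return False

# Closed under addition: the sum of two members is again a member.
w1, w2 = 2 * u + 3 * v, 1 * u + 1 * v
assert in_W(w1 + w2)          # (2+1)u + (3+1)v

# Not closed under scalar multiplication: (1/2)u has first coordinate 1/2,
# but every j*u + k*v with positive integers j, k has first coordinate j + k >= 2.
assert not in_W(0.5 * u)
```

The bounded search suffices here because membership failure for $\tfrac{1}{2}u$ is forced by the first coordinate alone, independent of the bound.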

Theorem 7.4. Let $\{v_1, \ldots, v_k\} \subset \mathbb{R}^n$ be a set of vectors in $\mathbb{R}^n$. Then the span of $\{v_1, \ldots, v_k\}$ is a subspace of $\mathbb{R}^n$.

Proof. Recall that the span of a set of vectors is the set of all possible linear combinations of those vectors. Set
$$W = \mathrm{span}(v_1, \ldots, v_k) = \{ w \in \mathbb{R}^n \mid w = c_1 v_1 + c_2 v_2 + \cdots + c_k v_k;\ c_1, c_2, \ldots, c_k \in \mathbb{R} \}$$
Then for any vectors $w, w' \in W$ we have
$$w = c_1 v_1 + c_2 v_2 + \cdots + c_k v_k, \qquad w' = c_1' v_1 + c_2' v_2 + \cdots + c_k' v_k$$
for some choice of real numbers $c_1, \ldots, c_k$ and $c_1', \ldots, c_k'$. But then
$$w + w' = (c_1 + c_1') v_1 + (c_2 + c_2') v_2 + \cdots + (c_k + c_k') v_k \in W$$
and if $t$ is any real number,
$$tw = (t c_1) v_1 + (t c_2) v_2 + \cdots + (t c_k) v_k \in W$$

Remark 7.5. We shall often refer to the span of a set $\{v_1, \ldots, v_k\}$ of vectors in $\mathbb{R}^n$ as the subspace generated by $\{v_1, \ldots, v_k\}$.

2. Solutions of Homogeneous Systems

We now come to another fundamental way of realizing a subspace of $\mathbb{R}^n$.

Definition 7.6. A linear system of the form $Ax = 0$ is called homogeneous.

A homogeneous linear system is always solvable, since $x = 0$ is always a solution. As such, this solution is not very interesting; we call it the trivial solution. A homogeneous linear system may possess other non-trivial solutions (i.e., solutions $x \neq 0$), and this is where we shall focus our attention today.

Lemma 7.7. Suppose $x_1$ and $x_2$ are solutions of a homogeneous system $Ax = 0$. Then so is any linear combination $r x_1 + s x_2$ of $x_1$ and $x_2$.

Proof. Since $x_1$ and $x_2$ are solutions of $Ax = 0$, we have $A x_1 = 0 = A x_2$. But then
$$A(r x_1 + s x_2) = A(r x_1) + A(s x_2) = r(A x_1) + s(A x_2) = r0 + s0 = 0$$
so $r x_1 + s x_2$ is also a solution.

Theorem 7.8. The solution space of a homogeneous linear system in $n$ unknowns is a subspace of $\mathbb{R}^n$.

Proof. The preceding lemma demonstrates that the solution space of a homogeneous linear system is closed under both vector addition (take $r = 1$ and $s = 1$ in the proof of the preceding lemma) and scalar multiplication (let $r$ be any real number and take $s = 0$ in the proof of the lemma). Therefore, it is a subspace of $\mathbb{R}^n$.
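The lemma above is easy to confirm numerically. In this sketch (the matrix and the two solutions are our own hand-picked illustration), linear combinations of solutions of $Ax = 0$ are again solutions:

```python
import numpy as np

A = np.array([[1.0, 2.0, -1.0, 3.0],
              [2.0, 4.0, -2.0, 6.0]])   # rank 1, so the null space is 3-dimensional

# Two hand-picked nontrivial solutions of A x = 0.
x1 = np.array([-2.0, 1.0, 0.0, 0.0])
x2 = np.array([1.0, 0.0, 1.0, 0.0])
assert np.allclose(A @ x1, 0) and np.allclose(A @ x2, 0)

# r*x1 + s*x2 solves A x = 0 for every choice of scalars r, s.
for r, s in [(2.0, -3.0), (0.5, 4.0), (-1.0, 1.0)]:
    assert np.allclose(A @ (r * x1 + s * x2), 0)
```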

3. Subspaces Associated with Matrices

Definition 7.9. The row space of an $m \times n$ matrix $A$ is the span of the row vectors of $A$.

Since the row vectors of an $m \times n$ matrix are $n$-dimensional vectors, the row space of an $m \times n$ matrix is a subspace of $\mathbb{R}^n$.

Definition 7.10. The column space of an $m \times n$ matrix $A$ is the span of the column vectors of $A$.

Since the column vectors of an $m \times n$ matrix are $m$-dimensional vectors, the column space of an $m \times n$ matrix is a subspace of $\mathbb{R}^m$.

Definition 7.11. The null space of an $m \times n$ matrix $A$ is the solution set of the homogeneous linear system $Ax = 0$.

By the theorem of the preceding section, the null space of an $m \times n$ matrix $A$ will be a subspace of $\mathbb{R}^n$.

Consider now a non-homogeneous linear system $Ax = b$. The left hand side of such an equation is
$$\begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n \\ \vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n \end{pmatrix} = x_1 \begin{pmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{pmatrix} + x_2 \begin{pmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{pmatrix} + \cdots + x_n \begin{pmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{pmatrix}$$
The final expression on the right hand side is evidently a linear combination of the column vectors of $A$. The consistency of the equation $Ax = b$ then requires the column vector $b$ to lie within the span of the column vectors of $A$. Thus we have

Theorem 7.12. A linear system $Ax = b$ is consistent if and only if $b$ lies in the column space of $A$.

4. Bases

Consider the subspace of $\mathbb{R}^3$ generated by the following three vectors:
$$v_1 = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \qquad v_2 = \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}, \qquad v_3 = \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix}$$
It turns out that this is the same as the subspace generated by just $v_1$ and $v_2$. To see this, note that
$$v_3 = \begin{pmatrix} 0 \\ -1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} - \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = v_1 + (-1) v_2$$
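Both facts at play here, the column-space consistency criterion and the redundancy of a dependent vector, can be cross-checked with NumPy. This is our own sketch, taking $v_1 = (1,0,1)$, $v_2 = (1,1,0)$, $v_3 = (0,-1,1)$ explicitly:

```python
import numpy as np

v1, v2, v3 = np.array([1.0, 0, 1]), np.array([1.0, 1, 0]), np.array([0.0, -1, 1])

# v3 is a linear combination of v1 and v2 ...
assert np.allclose(v3, v1 - v2)

# ... so appending v3 does not enlarge the span: both matrices have rank 2,
# i.e. span(v1, v2, v3) = span(v1, v2).
assert np.linalg.matrix_rank(np.column_stack([v1, v2])) == 2
assert np.linalg.matrix_rank(np.column_stack([v1, v2, v3])) == 2

# Consistency criterion: A x = b is solvable exactly when appending b to the
# columns of A leaves the rank unchanged (b lies in the column space).
B = np.column_stack([v1, v2])
b_in = np.array([2.0, 1, 1])         # v1 + v2, so it lies in the column space
b_out = np.array([1.0, 0, 0])        # outside the plane spanned by v1, v2
assert np.linalg.matrix_rank(np.column_stack([B, b_in])) == np.linalg.matrix_rank(B)
assert np.linalg.matrix_rank(np.column_stack([B, b_out])) > np.linalg.matrix_rank(B)
```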

But any vector $w$ in $\mathrm{span}(v_1, v_2, v_3)$ is expressible in the form
$$w = c_1 v_1 + c_2 v_2 + c_3 v_3 = c_1 v_1 + c_2 v_2 + c_3 (v_1 - v_2) = (c_1 + c_3) v_1 + (c_2 - c_3) v_2 \in \mathrm{span}(v_1, v_2)$$
For reasons of efficiency alone, it is natural to try to find the minimum number of vectors needed to specify every vector in a subspace $W$. Such a set will be called a basis for $W$.

There is also another reason to be interested in basis vectors. Consider the vector
$$w = \begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix}$$
In terms of the vectors $v_1$, $v_2$, and $v_3$, we can write this as either
$$w = 3 v_1 - v_2 - 2 v_3$$
or
$$w = -2 v_1 + 4 v_2 + 3 v_3$$
or in many other ways. However, there is only one way to represent $w$ as a linear combination of the vectors $v_1$ and $v_2$. For the condition
$$w = c_1 v_1 + c_2 v_2$$
requires
$$\begin{pmatrix} 2 \\ 1 \\ 1 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} c_1 + c_2 \\ c_2 \\ c_1 \end{pmatrix}$$
which is equivalent to the following linear system:
$$c_1 + c_2 = 2, \qquad c_2 = 1, \qquad c_1 = 1$$
which obviously has $c_1 = 1$ and $c_2 = 1$ as its only solution. This motivates the following definition.

Definition 7.13. Let $W$ be a subspace of $\mathbb{R}^n$. A subset $\{w_1, w_2, \ldots, w_k\}$ of $W$ is called a basis for $W$ if every vector in $W$ can be uniquely expressed as a linear combination of the vectors $w_1, w_2, \ldots, w_k$.

Theorem 7.14. A set of vectors $\{w_1, w_2, \ldots, w_k\}$ is a basis for the subspace $W$ generated by $\{w_1, w_2, \ldots, w_k\}$ if and only if
$$r_1 w_1 + r_2 w_2 + \cdots + r_k w_k = 0 \quad \text{implies} \quad 0 = r_1 = r_2 = \cdots = r_k$$

Proof. Suppose $\{w_1, w_2, \ldots, w_k\}$ is a basis for $W = \mathrm{span}(w_1, w_2, \ldots, w_k)$. Then every vector in $W$ can be uniquely specified as a vector of the form
$$v = r_1 w_1 + r_2 w_2 + \cdots + r_k w_k; \qquad r_1, r_2, \ldots, r_k \in \mathbb{R}$$
In particular, the zero vector
$$(7.1) \qquad 0 = (0) w_1 + (0) w_2 + \cdots + (0) w_k$$
lies in $W$.

Because $\{w_1, w_2, \ldots, w_k\}$ is assumed to be a basis, the linear combination on the right hand side of (7.1) must be the unique linear combination of the vectors $w_1, \ldots, w_k$ that is equal to $0$. Hence,
$$r_1 w_1 + r_2 w_2 + \cdots + r_k w_k = 0 \quad \text{implies} \quad 0 = r_1 = r_2 = \cdots = r_k$$
Conversely, suppose $r_1 w_1 + r_2 w_2 + \cdots + r_k w_k = 0$ implies $0 = r_1 = r_2 = \cdots = r_k$. We want to show that $\{w_1, w_2, \ldots, w_k\}$ is a basis for $W = \mathrm{span}(w_1, w_2, \ldots, w_k)$. In other words, we need to show that there is only one choice of coefficients $r_1, \ldots, r_k$ such that a vector $v \in W$ can be expressed in the form
$$v = r_1 w_1 + r_2 w_2 + \cdots + r_k w_k$$
Suppose there were in fact two distinct ways of representing $v$:
$$(7.2) \qquad v = r_1 w_1 + r_2 w_2 + \cdots + r_k w_k$$
$$(7.3) \qquad v = s_1 w_1 + s_2 w_2 + \cdots + s_k w_k$$
Subtracting the second equation from the first yields
$$0 = (r_1 - s_1) w_1 + (r_2 - s_2) w_2 + \cdots + (r_k - s_k) w_k$$
Our hypothesis now implies
$$0 = r_1 - s_1 = r_2 - s_2 = \cdots = r_k - s_k$$
In other words,
$$r_1 = s_1, \quad r_2 = s_2, \quad \ldots, \quad r_k = s_k$$
and so the two linear combinations on the right hand sides of (7.2) and (7.3) must be identical.

Theorem 7.15. Let $A$ be an $n \times n$ matrix. Then the following statements are equivalent.
1. The linear system $Ax = b$ has a unique solution for each vector $b \in \mathbb{R}^n$.
2. The matrix $A$ is row equivalent to the identity matrix.
3. The matrix $A$ is invertible.
4. The column vectors of $A$ form a basis for $\mathbb{R}^n$.

Proof. We have already demonstrated the equivalence of statements 2, 3, and 4. It therefore suffices to show that statement 4 is equivalent to statement 1.

To see that statement 4 implies statement 1, suppose that the column vectors $c_1, c_2, \ldots, c_n$ of $A$ form a basis for $\mathbb{R}^n$. Then by Theorem 7.12, the linear system $Ax = b$ is consistent for all vectors $b \in \mathrm{span}(c_1, \ldots, c_n) = \mathbb{R}^n$. But a direct calculation reveals
$$b = Ax = x_1 c_1 + x_2 c_2 + \cdots + x_n c_n$$
Because the vectors $c_1, \ldots, c_n$ form a basis, the choice of coefficients $x_1, x_2, \ldots, x_n$ must be unique. Therefore, the linear system $Ax = b$ has a unique solution for each vector $b \in \mathbb{R}^n$.

On the other hand, suppose the linear system $Ax = b$ has a unique solution for each vector $b \in \mathbb{R}^n$. In particular, this must be true for $b = 0$.
Therefore, there is only one choice of coefficients $x_1, x_2, \ldots, x_n$ such that
$$0 = x_1 c_1 + x_2 c_2 + \cdots + x_n c_n = Ax$$
By the preceding theorem, we can conclude that the column vectors of $A$ form a basis for $\mathbb{R}^n$.

Example 7.16. Show that the vectors $v_1 = (1, 1, 3)$, $v_2 = (3, 0, 4)$, and $v_3 = (1, 4, 1)$ form a basis for $\mathbb{R}^3$.

By the preceding theorem, it suffices to show that the matrix
$$A = \begin{pmatrix} 1 & 3 & 1 \\ 1 & 0 & 4 \\ 3 & 4 & 1 \end{pmatrix}$$
is invertible. Row reducing $A$ yields
$$\begin{pmatrix} 1 & 3 & 1 \\ 1 & 0 & 4 \\ 3 & 4 & 1 \end{pmatrix} \xrightarrow[R_3 \to R_3 - 3R_1]{R_2 \to R_2 - R_1} \begin{pmatrix} 1 & 3 & 1 \\ 0 & -3 & 3 \\ 0 & -5 & -2 \end{pmatrix} \xrightarrow{R_3 \to R_3 - \frac{5}{3} R_2} \begin{pmatrix} 1 & 3 & 1 \\ 0 & -3 & 3 \\ 0 & 0 & -7 \end{pmatrix}$$
The matrix on the far right is upper triangular with non-zero diagonal entries, so it is obviously invertible. Hence $A$ is invertible; hence the column vectors of $A$ form a basis for $\mathbb{R}^3$; hence the vectors $v_1$, $v_2$, and $v_3$ form a basis for $\mathbb{R}^3$.

The preceding theorem is applicable only to square matrices $A$ and linear systems of $n$ equations in $n$ unknowns. It can be extended to more general matrices and linear systems in the following manner.

Theorem 7.17. Let $A$ be an $m \times n$ matrix. Then the following are equivalent.
1. Each consistent system $Ax = b$ has a unique solution.
2. The reduced row echelon form of $A$ consists of the $n \times n$ identity matrix followed by $m - n$ rows containing only zeros.
3. The column vectors of $A$ form a basis for the column space of $A$.

Proof. $1 \Leftrightarrow 2$: From Theorem 5.8 of Lecture 5 (Theorem 7 in the text), we know that a consistent linear system $Ax = b$ has a unique solution if and only if $A$ is row equivalent to a matrix $A'$ in row echelon form such that every column of $A'$ has a pivot. Since $A'$, and hence $A$, has $n$ columns, we conclude that the solution of every consistent linear system $Ax = b$ is unique if and only if $A'$ has $n$ pivots. Since the pivots occur in distinct rows, in order to have $n$ pivots the number $m$ of rows must be at least $n$. When $m = n$ there will be one pivot for each row, and the pivots will all reside along the diagonal, like so:
$$A' = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{pmatrix}$$
If $A'$ is further reduced to a matrix $A''$ in reduced row echelon form, then all the pivots are re-scaled to $1$ and all the entries above the pivots become $0$. Thus, $A'' = I$. If $m > n$, we still require $n$ pivots, so the only way we can consistently add rows to the picture above is by adding rows without pivots, i.e., rows containing only zeros.

$1 \Leftrightarrow 3$: Suppose $Ax = b$ has a unique solution for each $b$ in the column space $C(A)$ of $A$. (Recall that $b$ must lie in the column space of $A$ in order for the linear system to be consistent.) Then, if we denote the column vectors of $A$ by $c_1, c_2, \ldots, c_n$, we have
$$b = Ax = x_1 c_1 + x_2 c_2 + \cdots + x_n c_n \quad \text{for all } b \in C(A) = \mathrm{span}(c_1, c_2, \ldots, c_n)$$
If the solution $x$ is unique, then there is only one such linear combination of the column vectors $c_i$ for each vector $b \in C(A)$. Hence, the column vectors $c_i$ provide a basis for $C(A)$. On the other hand, if the column vectors were not a basis for $C(A) = \mathrm{span}(c_1, c_2, \ldots, c_n)$, then there would be vectors $b$ lying in $C(A)$ such that the expansion
$$b = x_1 c_1 + x_2 c_2 + \cdots + x_n c_n$$
in terms of the $c_i$ is not unique. Hence, a solution $x$ of $Ax = b$ would not be unique.

Theorem 7.18. Let $Ax = b$ be a non-homogeneous linear system, and let $p$ be any particular solution of this system. Then every solution of $Ax = b$ can be expressed in the form
$$x = p + h$$
where $h$ is a solution of the corresponding homogeneous system $Ax = 0$.

Proof. Suppose $p$ and $x$ are both solutions of $Ax = b$, and set $h = x - p$. Then $h$ satisfies
$$Ah = A(x - p) = Ax - Ap = b - b = 0$$
Hence $x = p + h$ with $h$ a solution of $Ax = 0$.
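The particular-plus-homogeneous structure of the last theorem is easy to see in a concrete system. In this sketch (the matrix and both solutions are our own illustration), the difference of two solutions of $Ax = b$ solves the homogeneous system:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])

p = np.array([0.0, 2.0, 1.0])        # one particular solution: A p = b
x = np.array([1.0, 1.0, 2.0])        # another solution: A x = b
assert np.allclose(A @ p, b) and np.allclose(A @ x, b)

h = x - p                            # their difference ...
assert np.allclose(A @ h, 0)         # ... solves the homogeneous system A h = 0
assert np.allclose(p + h, x)         # so x = p + h, as the theorem asserts
```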