Chapter 5  Vector Spaces: Theory and Practice

So far, we have worked with vectors of length n and performed basic operations on them like scaling and addition. Next, we looked at solving linear systems via Gaussian elimination and LU factorization. Already, we ran into the problem of what to do if a zero pivot is encountered. What if this cannot be fixed by swapping rows? Under what circumstances will a linear system not have a solution? When will it have more than one solution? How can we describe the set of all solutions? To answer these questions, we need to dive deeper into the theory of linear algebra.

5.1  Vector Spaces

The reader should be quite comfortable with the simplest of vector spaces: R, R^2, and R^3, which represent the points in one-dimensional, two-dimensional, and three-dimensional (real valued) space, respectively. A vector x ∈ R^n is represented by the column of n real numbers

    x = (χ0, χ1, ..., χ{n-1})^T,

which one can think of as the direction (vector) from the origin (the point 0) to the point x. However, notice that a direction is position independent: you can think of it as a direction anchored anywhere in R^n.

What makes the set of all vectors in R^n a space is the fact that there is a scaling operation (multiplication by a scalar) defined on it, as well as the addition of two vectors. The definition of a space is a set of elements (in this case vectors in R^n) together with addition and multiplication (scaling) operations such that (1) for any two elements in the set, the element that results from adding these elements is also in the set; (2) scaling an element in the set results in an element in the set; and (3) there is an element in the set, denoted by 0, such that adding it to another element results in that element and scaling by 0 results in the 0 element.

Example 5.1  Let x, y ∈ R^2 and α ∈ R. Then

  - z = x + y ∈ R^2;
  - αx ∈ R^2; and
  - 0 ∈ R^2 and x + 0 = x.

In this document we will talk about vector spaces because the spaces have vectors as their elements.

Example 5.2  Consider the set of all real valued m × n matrices, R^{m×n}. Together with matrix addition and multiplication by a scalar, this set is a vector space. Note that an easy way to visualize this is to take the matrix and view it as a vector of length m·n.

Example 5.3  Not all spaces are vector spaces. For example, the space of all functions defined from R to R has addition and multiplication by a scalar defined on it, but it is not a vector space. (It is a space of functions instead.)

Recall the concept of a subset, B, of a given set, A: all elements in B are elements in A. If A is a vector space, we can ask ourselves the question of when B is also a vector space. The answer is that B is a vector space if (1) x, y ∈ B implies that x + y ∈ B; (2) x ∈ B and α ∈ R implies αx ∈ B; and (3) 0 ∈ B (the zero vector). We call a subset of a vector space that is also a vector space a subspace.

Example 5.4  Reason that one does not need to explicitly say that the zero vector is in a (sub)space.

Definition 5.5  Let A be a vector space and let B be a subset of A. Then B is a subspace of A if (1) x, y ∈ B implies that x + y ∈ B; and (2) x ∈ B and α ∈ R implies that αx ∈ B.

One way to describe a subspace is that it is a subset that is closed under addition and scalar multiplication.

Example 5.6  The empty set is a subset of R^n. Is it a subspace? Why?

Exercise 5.7  What is the smallest subspace of R^n? (Smallest in terms of the number of elements in the subspace.)

5.2  Why Should We Care?

Example 5.8  Consider

    A = ( 3 -1  2 )         ( 2 )           (  5 )
        ( 1  2  0 ),  b0 = ( 1 ),  and b1 = ( -1 ).
        ( 4  1  2 )         ( 3 )           (  7 )

Does Ax = b0 have a solution? The answer is yes: x = (-1, 1, 3)^T. Does Ax = b1 have a solution? The answer is no. Does Ax = b0 have any other solutions? The answer is yes: x = (5/7, 1/7, 0)^T is also a solution.

The above example motivates the question of when a linear system has a solution, when it doesn't, and how many solutions it has. We will try to answer that question in this section.

Let A ∈ R^{m×n}, x ∈ R^n, b ∈ R^m, and Ax = b. Partition

    A → ( a0 | a1 | ... | a{n-1} )  and  x → (χ0, χ1, ..., χ{n-1})^T.

Then χ0 a0 + χ1 a1 + ... + χ{n-1} a{n-1} = b.

Definition 5.9  Let {a0, ..., a{n-1}} ⊂ R^m and {χ0, ..., χ{n-1}} ⊂ R. Then χ0 a0 + χ1 a1 + ... + χ{n-1} a{n-1} is said to be a linear combination of the vectors {a0, ..., a{n-1}}.

We note that Ax = b can be solved if and only if b equals a linear combination of the vectors that are the columns of A, by the definition of matrix-vector multiplication.
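The linear-combination view of matrix-vector multiplication is easy to check numerically. Below is a small sketch in plain Python (the helper name `matvec` is ours, chosen for illustration), using the matrix and right-hand side of the running example of this section:

```python
# A and b0 from the running example.
A = [[3, -1, 2],
     [1,  2, 0],
     [4,  1, 2]]
b0 = [2, 1, 3]

def matvec(A, x):
    """Compute Ax by accumulating chi_j times the j-th column of A."""
    m, n = len(A), len(A[0])
    result = [0] * m
    for j in range(n):                    # for each column a_j of A ...
        for i in range(m):
            result[i] += x[j] * A[i][j]   # ... add chi_j * a_j
    return result

# x = (-1, 1, 3)^T solves Ax = b0, so b0 is the linear combination
# -1*a_0 + 1*a_1 + 3*a_2 of the columns of A.
print(matvec(A, [-1, 1, 3]))   # [2, 1, 3]
```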

This observation answers the question "Given a matrix A, for what right-hand side vector, b, does Ax = b have a solution?" The answer is that there is a solution if and only if b is a linear combination of the columns (column vectors) of A.

Definition 5.10  The column space of A ∈ R^{m×n} is the set of all vectors b ∈ R^m for which there exists a vector x ∈ R^n such that Ax = b.

Theorem 5.11  The column space of A ∈ R^{m×n} is a subspace (of R^m).

Proof: We need to show that the column space of A is closed under addition and scalar multiplication:

  - Let b0, b1 ∈ R^m be in the column space of A. Then there exist x0, x1 ∈ R^n such that Ax0 = b0 and Ax1 = b1. But then A(x0 + x1) = Ax0 + Ax1 = b0 + b1 and thus b0 + b1 is in the column space of A.
  - Let b be in the column space of A and α ∈ R. Then there exists a vector x such that Ax = b and hence αAx = αb. Since A(αx) = αAx = αb, we conclude that αb is in the column space of A.

Hence the column space of A is a subspace (of R^m).

Example 5.12  Consider again

    A = ( 3 -1  2 )           ( 2 )
        ( 1  2  0 )  and b0 = ( 1 ).
        ( 4  1  2 )           ( 3 )

Set this up as two appended systems, one for solving Ax = b0 and the other for solving Ax = 0 (this will allow us to compare and contrast, which will lead to an interesting observation later on):

    ( 3 -1  2 | 2 )     ( 3 -1  2 | 0 )
    ( 1  2  0 | 1 )     ( 1  2  0 | 0 ).
    ( 4  1  2 | 3 )     ( 4  1  2 | 0 )

Now, apply Gauss-Jordan elimination. It becomes convenient to swap the first and second equation:

    ( 1  2  0 | 1 )     ( 1  2  0 | 0 )
    ( 3 -1  2 | 2 )     ( 3 -1  2 | 0 ).    (5.1)
    ( 4  1  2 | 3 )     ( 4  1  2 | 0 )

Use the first row to eliminate the coefficients in the first column below the diagonal:

    ( 1  2  0 |  1 )     ( 1  2  0 | 0 )
    ( 0 -7  2 | -1 )     ( 0 -7  2 | 0 ).
    ( 0 -7  2 | -1 )     ( 0 -7  2 | 0 )

Use the second row to eliminate the coefficients in the second column below the diagonal:

    ( 1  2  0 |  1 )     ( 1  2  0 | 0 )
    ( 0 -7  2 | -1 )     ( 0 -7  2 | 0 ).
    ( 0  0  0 |  0 )     ( 0  0  0 | 0 )

Divide the first and second row by the diagonal element:

    ( 1  2    0  | 1   )     ( 1  2    0  | 0 )
    ( 0  1 -2/7  | 1/7 )     ( 0  1 -2/7  | 0 ).
    ( 0  0    0  | 0   )     ( 0  0    0  | 0 )

Use the second row to eliminate the coefficients in the second column above the diagonal:

    ( 1  0  4/7  | 5/7 )     ( 1  0  4/7  | 0 )
    ( 0  1 -2/7  | 1/7 )     ( 0  1 -2/7  | 0 ).    (5.2)
    ( 0  0    0  | 0   )     ( 0  0    0  | 0 )

Now, what does this mean? For now, we will focus only on the results for the appended system ( A | b0 ) on the left. We notice that 0·χ2 = 0. So, there is no constraint on variable χ2. As a result, we will call χ2 a free variable. We see from the second row that χ1 - 2/7 χ2 = 1/7, or χ1 = 1/7 + 2/7 χ2. Thus, the value of χ1 is constrained by the value given to χ2. Finally, χ0 + 4/7 χ2 = 5/7, or χ0 = 5/7 - 4/7 χ2. Thus, the value of χ0 is also constrained by the value given to χ2. We conclude that any vector of the form

    ( 5/7 - 4/7 χ2 )
    ( 1/7 + 2/7 χ2 )
    (     χ2       )

solves the linear system. We can rewrite this as

    ( 5/7 )        ( -4/7 )
    ( 1/7 )  + χ2  (  2/7 ).    (5.3)
    (  0  )        (   1  )

So, for each choice of χ2, we get a solution to the linear system by plugging it into Equation (5.3).
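The Gauss-Jordan steps above can be automated. Below is a minimal sketch (the routine name `gauss_jordan` is ours, not a library function) in exact rational arithmetic via the standard `fractions` module, so results like 4/7 come out exactly; the row swap happens automatically through the pivot search:

```python
from fractions import Fraction

def gauss_jordan(M):
    """Reduce an appended system M = [A | b] to reduced row echelon form."""
    M = [[Fraction(e) for e in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols - 1):                # last column is the right-hand side
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                         # no pivot here: free variable
        M[r], M[pivot] = M[pivot], M[r]      # swap the pivot row into place
        M[r] = [e / M[r][c] for e in M[r]]   # scale the pivot to 1
        for i in range(rows):                # eliminate above and below
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * p for a, p in zip(M[i], M[r])]
        r += 1
    return M

# Appended system [A | b0] from the example above.
R = gauss_jordan([[3, -1, 2, 2],
                  [1,  2, 0, 1],
                  [4,  1, 2, 3]])
for row in R:
    print([str(e) for e in row])
# ['1', '0', '4/7', '5/7']
# ['0', '1', '-2/7', '1/7']
# ['0', '0', '0', '0']
```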

Example 5.13  We now give a slightly slicker way to view Example 5.12. Consider again Equation (5.2). This represents

    ( 1  0  4/7 ) ( χ0 )   ( 5/7 )
    ( 0  1 -2/7 ) ( χ1 ) = ( 1/7 ).
    ( 0  0   0  ) ( χ2 )   (  0  )

Using blocked matrix-vector multiplication, we find that

    ( χ0 )   (  4/7 )        ( 5/7 )
    ( χ1 ) + ( -2/7 ) χ2  =  ( 1/7 ),

and hence

    ( χ0 )   ( 5/7 )       (  4/7 )
    ( χ1 ) = ( 1/7 ) - χ2  ( -2/7 ),

which we can then turn into

    ( χ0 )   ( 5/7 )        ( -4/7 )
    ( χ1 ) = ( 1/7 ) + χ2   (  2/7 ).
    ( χ2 )   (  0  )        (   1  )

In the above example, we notice the following:

  - Let x_p = (5/7, 1/7, 0)^T. Then

        A x_p = ( 3 -1  2 ) ( 5/7 )   ( 2 )
                ( 1  2  0 ) ( 1/7 ) = ( 1 ).
                ( 4  1  2 ) (  0  )   ( 3 )

    In other words, x_p is a particular solution to Ax = b0. (Hence the "p" in x_p.)

  - Let x_n = (-4/7, 2/7, 1)^T. Then

        A x_n = ( 3 -1  2 ) ( -4/7 )   ( 0 )
                ( 1  2  0 ) (  2/7 ) = ( 0 ).
                ( 4  1  2 ) (   1  )   ( 0 )

    We will see that x_n is in the null space of A, to be defined later. (Hence the "n" in x_n.)

Now, notice that for any α, x_p + α x_n is a solution to Ax = b0:

    A(x_p + α x_n) = A x_p + A(α x_n) = A x_p + α A x_n = b0 + α·0 = b0.
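These observations can be verified directly: A x_n = 0 and A(x_p + α x_n) = b0 for any α. A quick check in exact arithmetic (a sketch; `matvec` is our helper name):

```python
from fractions import Fraction

A  = [[3, -1, 2], [1, 2, 0], [4, 1, 2]]
b0 = [2, 1, 3]
xp = [Fraction(5, 7), Fraction(1, 7), Fraction(0)]   # particular solution
xn = [Fraction(-4, 7), Fraction(2, 7), Fraction(1)]  # null-space vector

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

assert matvec(A, xn) == [0, 0, 0]            # A xn = 0
for alpha in [0, 1, -3, Fraction(7, 2)]:
    x = [p + alpha * n for p, n in zip(xp, xn)]
    assert matvec(A, x) == b0                # A(xp + alpha xn) = b0
print("all checks passed")
```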

So, the system Ax = b0 has many solutions (indeed, an infinite number of solutions). To characterize all solutions, it suffices to find one (nonunique) particular solution x_p that satisfies A x_p = b0. Now, for any vector x_n that has the property that A x_n = 0, we know that x_p + x_n is also a solution.

Definition 5.14  Let A ∈ R^{m×n}. Then the set of all vectors x ∈ R^n that have the property that Ax = 0 is called the null space of A and is denoted by N(A).

Theorem 5.15  The null space of A ∈ R^{m×n} is a subspace of R^n.

Proof: Clearly N(A) is a subset of R^n. Now, assume that x, y ∈ N(A) and α ∈ R. Then A(x + y) = Ax + Ay = 0 and therefore (x + y) ∈ N(A). Also A(αx) = αAx = α·0 = 0 and therefore αx ∈ N(A). Hence, N(A) is a subspace.

Notice that the zero vector (of appropriate length) is always in the null space of a matrix A.

Example 5.16  Let us use the last example, but with Ax = b1. Let us set this up as an appended system:

    ( 3 -1  2 |  5 )
    ( 1  2  0 | -1 ).
    ( 4  1  2 |  7 )

Now, apply Gauss-Jordan elimination. It becomes convenient to swap the first and second equation:

    ( 1  2  0 | -1 )
    ( 3 -1  2 |  5 ).
    ( 4  1  2 |  7 )

Use the first row to eliminate the coefficients in the first column below the diagonal:

    ( 1  2  0 | -1 )
    ( 0 -7  2 |  8 ).
    ( 0 -7  2 | 11 )

Use the second row to eliminate the coefficients in the second column below the diagonal:

    ( 1  2  0 | -1 )
    ( 0 -7  2 |  8 ).
    ( 0  0  0 |  3 )

Now, what does this mean? We notice that 0·χ2 = 3. This is a contradiction, and therefore this linear system has no solution!

Consider where we started: Ax = b1 represents

    3χ0 + (-1)χ1 + 2χ2 =  5
    1χ0 +   2χ1  + 0χ2 = -1
    4χ0 +   1χ1  + 2χ2 =  7.

Now, the left-hand side of the last equation is a linear combination of the first two: add the first equation to the second, and you get the third. Well, not quite: the last equation is actually inconsistent, because if you subtract the first two rows from the last, you don't get 0 = 0. As a result, there is no way of simultaneously satisfying these equations.

5.2.1  A systematic procedure (first try)

Let us analyze what it is that we did in Examples 5.12 and 5.13. We start with A ∈ R^{m×n}, x ∈ R^n, and b ∈ R^m, where Ax = b and x is to be computed. Append the system ( A | b ). Use the Gauss-Jordan method to transform this appended system into the form

    ( I_{k×k}        Ã_TR              | b_T )
    ( 0_{(m-k)×k}    0_{(m-k)×(n-k)}   | b_B ),    (5.4)

where I_{k×k} is the k × k identity matrix, Ã_TR ∈ R^{k×(n-k)}, b_T ∈ R^k, and b_B ∈ R^{m-k}. Now, if b_B ≠ 0, then there is no solution to the system and we are done. Notice that (5.4) means that

    ( I_{k×k}  Ã_TR ) ( x_T )   ( b_T )
    (    0      0   ) ( x_B ) = (  0  ).

Thus, x_T + Ã_TR x_B = b_T. This translates to x_T = b_T - Ã_TR x_B, or

    ( x_T )   ( b_T )   (      -Ã_TR      )
    ( x_B ) = (  0  ) + ( I_{(n-k)×(n-k)} ) x_B.

By taking x_B = 0, we find a particular solution

    x_p = ( b_T )
          (  0  ).

By taking, successively, x_B = e_i, i = 0, ..., n-k-1, we find vectors in the null space:

    x_{n_i} = ( -Ã_TR e_i )
              (    e_i    ).

The general solution is then given by

    x_p + α_0 x_{n_0} + ... + α_{n-k-1} x_{n_{n-k-1}}.

Example 5.17  We now show how to use these insights to systematically solve the problem in Example 5.12. As in that example, create the appended systems for solving Ax = b0 and Ax = 0 (Equation (5.1)):

    ( 3 -1  2 | 2 )     ( 3 -1  2 | 0 )
    ( 1  2  0 | 1 )     ( 1  2  0 | 0 ).    (5.5)
    ( 4  1  2 | 3 )     ( 4  1  2 | 0 )

We notice that for Ax = 0 (the appended system on the right), the right-hand side never changes: it is always equal to zero. So, we don't really need to carry out all the steps for it, because everything to the left of the | remains the same as it does for solving Ax = b0. Carrying through with the Gauss-Jordan method, we end up with Equation (5.2):

    ( 1  0  4/7  | 5/7 )
    ( 0  1 -2/7  | 1/7 ),
    ( 0  0    0  |  0  )

so that Ã_TR = (4/7, -2/7)^T and b_T = (5/7, 1/7)^T. Now, our procedure tells us that

    x_p = ( 5/7 )
          ( 1/7 )
          (  0  )

is a particular solution: it solves Ax = b0. Next, we notice that I_{(n-k)×(n-k)} = I_{1×1} = (1) (since there is only one free variable). So,

    x_n = ( -Ã_TR e_0 )   ( -4/7 )
          (    e_0    ) = (  2/7 ).
                          (   1  )

The general solution is then given by

    x = x_p + α x_n = ( 5/7 )     ( -4/7 )
                      ( 1/7 ) + α (  2/7 )
                      (  0  )     (   1  )

for any scalar α.
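The recipe x_p = (b_T; 0) and x_{n_i} = (-Ã_TR e_i; e_i) transcribes almost literally into code. A sketch that assumes, as the first-try procedure does, that the pivot variables come first (no column swaps are needed):

```python
from fractions import Fraction

# From the reduced form (5.2): [ I  A_tr | b_t ] with k = 2 pivots
# and n - k = 1 free variable.
A_tr = [[Fraction(4, 7)], [Fraction(-2, 7)]]   # k x (n-k) block
b_t  = [Fraction(5, 7), Fraction(1, 7)]        # k entries
k, nk = 2, 1

# Particular solution: x_B = 0, x_T = b_t.
x_p = b_t + [Fraction(0)] * nk

# Null-space vectors: x_B = e_i, x_T = -A_tr e_i.
null_vectors = []
for i in range(nk):
    e = [Fraction(1) if j == i else Fraction(0) for j in range(nk)]
    x_t = [-sum(A_tr[r][j] * e[j] for j in range(nk)) for r in range(k)]
    null_vectors.append(x_t + e)

print([str(v) for v in x_p])                        # ['5/7', '1/7', '0']
print([[str(v) for v in n] for n in null_vectors])  # [['-4/7', '2/7', '1']]
```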

5.2.2  A systematic procedure (second try)

Example 5.18  We now give an example where the procedure breaks down. (Note: this example is borrowed from the book.) Consider Ax = b, where

    A = (  1  3  3  2 )          (  2 )
        (  2  6  9  7 )  and b = ( 10 ).
        ( -1 -3  3  4 )          ( 10 )

Let us apply Gauss-Jordan to this. Append the system:

    (  1  3  3  2 |  2 )
    (  2  6  9  7 | 10 ).
    ( -1 -3  3  4 | 10 )

The 1 in the first row and first column is the pivot. Subtract 2/1 times the first row and (-1)/1 times the first row from the second and third row, respectively:

    ( 1  3  3  2 |  2 )
    ( 0  0  3  3 |  6 ).
    ( 0  0  6  6 | 12 )

The problem is that there is now a 0 in the second column. So, we skip it, and move on to the next column. The 3 in the second row and third column now becomes the pivot. Subtract 6/3 times the second row from the third row:

    ( 1  3  3  2 | 2 )
    ( 0  0  3  3 | 6 ).
    ( 0  0  0  0 | 0 )

Divide each (nonzero) row by the pivot in that row to obtain

    ( 1  3  3  2 | 2 )
    ( 0  0  1  1 | 2 ).
    ( 0  0  0  0 | 0 )

We can only eliminate elements in the matrix above pivots. So, take 3/1 times the second row and subtract from the first row:

    ( 1  3  0 -1 | -4 )
    ( 0  0  1  1 |  2 ).    (5.6)
    ( 0  0  0  0 |  0 )

This does not have the form advocated in Equation (5.4). So, we remind ourselves of the fact that Equation (5.6) stands for

    ( 1  3  0 -1 ) ( χ0 )   ( -4 )
    ( 0  0  1  1 ) ( χ1 ) = (  2 ).    (5.7)
    ( 0  0  0  0 ) ( χ2 )   (  0 )
                   ( χ3 )

Notice that we can swap the second and third column of the matrix as long as we also swap the second and third element of the solution vector:

    ( 1  0  3 -1 ) ( χ0 )   ( -4 )
    ( 0  1  0  1 ) ( χ2 ) = (  2 ).    (5.8)
    ( 0  0  0  0 ) ( χ1 )   (  0 )
                   ( χ3 )

Now, we notice that χ1 and χ3 are the free variables, and with those we can find equations for χ0 and χ2 as before. Also, we can now find vectors in the null space just as before. We just have to pay attention to the order of the unknowns (the order of the elements in the vector x). In other words, a specific solution is now given by

    x_p = ( χ0 )   ( -4 )
          ( χ1 ) = (  0 )
          ( χ2 )   (  2 )
          ( χ3 )   (  0 )

and two linearly independent vectors in the null space are given by the columns of

    ( -3  1 )
    (  1  0 )
    (  0 -1 ),
    (  0  1 )

giving us a general solution of

    ( χ0 )   ( -4 )     ( -3 )     (  1 )
    ( χ2 ) = (  2 ) + α (  0 ) + β ( -1 ).
    ( χ1 )   (  0 )     (  1 )     (  0 )
    ( χ3 )   (  0 )     (  0 )     (  1 )
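As a check on this example, the particular solution and the two null-space vectors (written in the original order χ0, χ1, χ2, χ3) can be verified directly against A and b:

```python
A = [[ 1,  3, 3, 2],
     [ 2,  6, 9, 7],
     [-1, -3, 3, 4]]
b = [2, 10, 10]

xp = [-4, 0, 2, 0]    # particular solution
n1 = [-3, 1, 0, 0]    # null-space vector (free variable chi_1 set to 1)
n2 = [ 1, 0, -1, 1]   # null-space vector (free variable chi_3 set to 1)

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

assert matvec(A, xp) == b
assert matvec(A, n1) == [0, 0, 0]
assert matvec(A, n2) == [0, 0, 0]

# Any combination xp + alpha*n1 + beta*n2 therefore also solves Ax = b:
alpha, beta = 5, -2
x = [p + alpha * u + beta * v for p, u, v in zip(xp, n1, n2)]
print(matvec(A, x))   # [2, 10, 10]
```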

But notice that the order of the elements in the vector must be fixed (permuted back):

    ( χ0 )   ( -4 )     ( -3 )     (  1 )
    ( χ1 ) = (  0 ) + α (  1 ) + β (  0 ).
    ( χ2 )   (  2 )     (  0 )     ( -1 )
    ( χ3 )   (  0 )     (  0 )     (  1 )

Exercise 5.19  Let A ∈ R^{m×n}, x ∈ R^n, and b ∈ R^m. Let P ∈ R^{n×n} be a permutation matrix. Show that AP^T Px = b. Argue how this relates to the transition from Equation (5.7) to Equation (5.8).

Exercise 5.20  Complete Example 5.18 by computing a particular solution and two vectors in the null space (one corresponding to χ1 = 1, χ3 = 0 and the other to χ1 = 0, χ3 = 1).

5.3  Linear Independence

Definition 5.21  Let {a0, ..., a{n-1}} ⊂ R^m. Then this set of vectors is said to be linearly independent if χ0 a0 + χ1 a1 + ... + χ{n-1} a{n-1} = 0 implies that χ0 = ... = χ{n-1} = 0. A set of vectors that is not linearly independent is said to be linearly dependent.

Notice that if

    χ0 a0 + χ1 a1 + ... + χ{n-1} a{n-1} = 0  and  χj ≠ 0,

then

    χj aj = -χ0 a0 - χ1 a1 - ... - χ{j-1} a{j-1} - χ{j+1} a{j+1} - ... - χ{n-1} a{n-1}

and therefore

    aj = -(χ0/χj) a0 - ... - (χ{j-1}/χj) a{j-1} - (χ{j+1}/χj) a{j+1} - ... - (χ{n-1}/χj) a{n-1}.

In other words, aj can be written as a linear combination of the other n-1 vectors. This motivates the term "linearly independent" in the definition: none of the vectors can be written as a linear combination of the other vectors.

Theorem 5.22  Let {a0, ..., a{n-1}} ⊂ R^m and let A = ( a0 | ... | a{n-1} ). Then the vectors {a0, ..., a{n-1}} are linearly independent if and only if N(A) = {0}.

Proof:

(⇒) Assume {a0, ..., a{n-1}} are linearly independent. We need to show that N(A) = {0}. Assume x ∈ N(A). Then Ax = 0 implies that

    0 = Ax = ( a0 | ... | a{n-1} ) (χ0, ..., χ{n-1})^T = χ0 a0 + χ1 a1 + ... + χ{n-1} a{n-1}

and hence χ0 = ... = χ{n-1} = 0. Hence x = (χ0, ..., χ{n-1})^T = 0.

(⇐) Notice that we are trying to prove Q ⇒ P, where Q represents "N(A) = {0}" and P represents "the vectors {a0, ..., a{n-1}} are linearly independent". It suffices to prove the contrapositive: ¬P ⇒ ¬Q. Assume that {a0, ..., a{n-1}} are not linearly independent. Then there exist {χ0, ..., χ{n-1}} with at least one χj ≠ 0 such that χ0 a0 + χ1 a1 + ... + χ{n-1} a{n-1} = 0. Let x = (χ0, ..., χ{n-1})^T. Then Ax = 0, which means x ∈ N(A) and hence N(A) ≠ {0}.

Example 5.23  The columns of an identity matrix I ∈ R^{n×n} form a linearly independent set of vectors.

Proof: Since I has an inverse (I itself), we know that N(I) = {0}. Thus, by Theorem 5.22, the columns of I are linearly independent.

Example 5.24  The columns of

    L = ( 1  0  0 )
        ( 2 -1  0 )
        ( 1  2  3 )

are linearly independent. If we consider

    ( 1  0  0 ) ( χ0 )   ( 0 )
    ( 2 -1  0 ) ( χ1 ) = ( 0 )
    ( 1  2  3 ) ( χ2 )   ( 0 )

and simply solve this, we find that χ0 = 0/1 = 0, χ1 = (0 - 2χ0)/(-1) = 0, and χ2 = (0 - χ0 - 2χ1)/3 = 0. Hence, N(L) = {0} (the zero vector) and we conclude, by Theorem 5.22, that the columns of L are linearly independent.

The last example motivates the following theorem:

Theorem 5.25  Let L ∈ R^{n×n} be a lower triangular matrix with nonzeroes on its diagonal. Then its columns are linearly independent.

Proof: Let L be as indicated and consider Lx = 0. If one solves this via whatever method one pleases, the solution x = 0 will emerge as the only solution. Thus N(L) = {0}, and by Theorem 5.22, the columns of L are linearly independent.

Exercise 5.26  Let U ∈ R^{n×n} be an upper triangular matrix with nonzeroes on its diagonal. Show that its columns are linearly independent.

Exercise 5.27  Let L ∈ R^{n×n} be a lower triangular matrix with nonzeroes on its diagonal. Show that its rows are linearly independent. (Hint: How do the rows of L relate to the columns of L^T?)

Example 5.28  Consider the matrix L from Example 5.24 with an additional row appended at the bottom. Its columns are again linearly independent: the appended row adds an equation to Lx = 0 but takes none away, so solving the equations corresponding to the original (triangular) rows still yields χ0 = 0/1 = 0, χ1 = (0 - 2χ0)/(-1) = 0, and χ2 = (0 - χ0 - 2χ1)/3 = 0 as the only candidate solution. Hence the null space of the taller matrix is {0} (the zero vector), and we conclude, by Theorem 5.22, that its columns are linearly independent.

This example motivates the following general observation:

Theorem 5.29  Let A ∈ R^{m×n} have linearly independent columns and let B ∈ R^{k×n}. Then the matrices

    ( A )       ( B )
    ( B )  and  ( A )

have linearly independent columns.

Proof: Proof by contradiction. Assume that

    ( A )
    ( B )

does not have linearly independent columns. Then, by Theorem 5.22, there exists x ∈ R^n such that x ≠ 0 and

    ( A ) x = ( Ax ) = ( 0 )
    ( B )     ( Bx )   ( 0 ).

But that means that Ax = 0, which contradicts the fact that the columns of A are linearly independent.

Corollary 5.30  Let A ∈ R^{m×n}. Then the matrix

    (    A    )
    ( I_{n×n} )

has linearly independent columns.

Next, we observe that if one has a set of more than m vectors in R^m, then they must be linearly dependent:

Theorem 5.31  Let {a0, a1, ..., a{n-1}} ⊂ R^m and n > m. Then these vectors are linearly dependent.

Proof: This proof is a bit more informal than I would like it to be. Consider the matrix A = ( a0 | ... | a{n-1} ). If one applies the Gauss-Jordan method to this, at most m columns with pivots will be encountered. The other n - m columns correspond to free variables, which allow us to construct nonzero vectors x so that Ax = 0.

5.4  Bases

Definition 5.32  Let {v0, v1, ..., v{n-1}} ⊂ R^m. Then the span of these vectors, Span({v0, v1, ..., v{n-1}}), is said to be the space of all vectors that are a linear combination of this set of vectors.

Notice that Span({v0, v1, ..., v{n-1}}) equals the column space of the matrix ( v0 | ... | v{n-1} ).

Definition 5.33  Let V be a subspace of R^m. Then the set {v0, v1, ..., v{n-1}} ⊂ R^m is said to be a spanning set for V if Span({v0, v1, ..., v{n-1}}) = V.

Definition 5.34  Let V be a subspace of R^m. Then the set {v0, v1, ..., v{n-1}} ⊂ R^m is said to be a basis for V if (1) {v0, v1, ..., v{n-1}} are linearly independent and (2) Span({v0, v1, ..., v{n-1}}) = V.

The first condition says that there aren't more vectors than necessary in the set. The second says there are enough to be able to generate V.

Example 5.35  The vectors {e0, ..., e{m-1}} ⊂ R^m, where ej equals the jth column of the identity, form a basis for R^m. Note: these vectors are linearly independent, and any vector x = (χ0, ..., χ{m-1})^T ∈ R^m can be written as the linear combination χ0 e0 + ... + χ{m-1} e{m-1}.

Example 5.36  Let {a0, ..., a{m-1}} ⊂ R^m and let A = ( a0 | ... | a{m-1} ) be invertible. Then {a0, ..., a{m-1}} form a basis for R^m. Note: the fact that A is invertible means there exists A^{-1} such that A^{-1}A = I. Since Ax = 0 means x = A^{-1}Ax = A^{-1}·0 = 0, the columns of A are linearly independent. Also, given any vector y ∈ R^m, there exists a vector x ∈ R^m such that Ax = y (namely x = A^{-1}y). Letting x = (χ0, ..., χ{m-1})^T, we find that y = χ0 a0 + ... + χ{m-1} a{m-1}, and hence every vector in R^m is a linear combination of the set {a0, ..., a{m-1}}.

Now here comes a very important insight:

Theorem 5.37  Let V be a subspace of R^m and let {v0, v1, ..., v{n-1}} ⊂ R^m and {w0, w1, ..., w{k-1}} ⊂ R^m both be bases for V. Then k = n. In other words, the number of vectors in a basis is unique.

Proof: Proof by contradiction. Without loss of generality, let us assume that k > n. (Otherwise, we can switch the roles of the two sets.) Let V = ( v0 | ... | v{n-1} ) and W = ( w0 | ... | w{k-1} ). Let xj have the property that wj = V xj. (We know such a vector xj exists because V spans V and wj ∈ V.) Then W = VX, where X = ( x0 | ... | x{k-1} ). Now, X ∈ R^{n×k}, and recall that k > n. This means that N(X) contains nonzero vectors (why?). Let y ∈ N(X). Then Wy = VXy = V(Xy) = V·0 = 0, which contradicts the fact

that {w0, w1, ..., w{k-1}} are linearly independent, and hence this set cannot be a basis for V.

Note: generally speaking, there are an infinite number of bases for a given subspace. (The exception is the subspace {0}.) However, the number of vectors in each of these bases is always the same. This allows us to make the following definition:

Definition 5.38  The dimension of a subspace V equals the number of vectors in a basis for that subspace.

A basis for a subspace V can be derived from a spanning set of a subspace V by, one by one, removing vectors from the set that are dependent on other remaining vectors until the remaining set of vectors is linearly independent, as a consequence of the following observation:

Definition 5.39  Let A ∈ R^{m×n}. The rank of A equals the number of vectors in a basis for the column space of A. We will let rank(A) denote that rank.

Theorem 5.40  Let {v0, v1, ..., v{n-1}} ⊂ R^m be a spanning set for subspace V and assume that vi equals a linear combination of the other vectors. Then {v0, v1, ..., v{i-1}, v{i+1}, ..., v{n-1}} is a spanning set of V.

Similarly, a set of linearly independent vectors that are in a subspace V can be built up to be a basis by successively adding vectors that are in V to the set, while maintaining that the vectors in the set remain linearly independent, until the resulting set is a basis for V:

Theorem 5.41  Let {v0, v1, ..., v{n-1}} ⊂ R^m be linearly independent and assume that {v0, v1, ..., v{n-1}} ⊂ V. Then this set of vectors is either a spanning set for V or there exists w ∈ V such that {v0, v1, ..., v{n-1}, w} are linearly independent.

5.5  Exercises

(Most of these exercises are borrowed from "Linear Algebra and Its Applications" by Gilbert Strang.)

1. Which of the following subsets of R^3 are actually subspaces?

   (a) The plane of vectors x = (χ0, χ1, χ2)^T ∈ R^3 such that the first component χ0 = 0. In other words, the set of all vectors

           (  0 )
           ( χ1 )
           ( χ2 )

       where χ1, χ2 ∈ R.

   (b) The plane of vectors x with χ0 = 1.

   (c) The vectors x with χ1 χ2 = 0 (this is a union of two subspaces: those vectors with χ1 = 0 and those vectors with χ2 = 0).

   (d) All combinations of two given vectors (1, 1, 0)^T and (2, 0, 1)^T.

   (e) The plane of vectors x = (χ0, χ1, χ2)^T that satisfy χ2 - χ1 + 3χ0 = 0.

2. Describe the column space and nullspace of the matrices

       A = ( 1 -1 ),  B = ( 0  0  3 ),  and  C = ( 0  0 ).
           ( 0  0 )       ( 1  2  3 )            ( 0  0 )

3. Let P ⊂ R^3 be the plane with equation x + 2y + z = 6. What is the equation of the plane P0 through the origin parallel to P? Are P and/or P0 subspaces of R^3?

4. Let P1 ⊂ R^3 be the plane with equation x + y - 2z = 4. Why is this not a subspace? Find two vectors, x and y, that are in P1 but with the property that x + y is not.

5. Find the echelon form U, the free variables, and the special (particular) solution of Ax = b for

   (a) A = ( 0  1  0  3 ),  b = ( β0 ).
           ( 0  2  0  6 )       ( β1 )

       When does Ax = b have a solution? (When β1 = ?) Give the complete solution.

   (b) A = ( 1  2 ),  b = ( β0 )
           ( 0  0 )       ( β1 ).
           ( 3  6 )       ( β2 )

       When does Ax = b have a solution? (When β1 = ? and β2 = ?) Give the complete solution.

6. Write the complete solution x = x_p + x_n to these systems:

       ( 1  2  2 ) ( u )   ( 1 )        ( 1  2  2 ) ( u )   ( 1 )
       ( 2  4  5 ) ( v ) = ( 4 )  and   ( 2  4  4 ) ( v ) = ( 4 ).
                   ( w )                            ( w )

7. Which of these statements is a correct definition of the rank of a given matrix A ∈ R^{m×n}? (Indicate all correct ones.)

   (a) The number of nonzero rows in the row reduced form of A.

   (b) The number of columns minus the number of rows, n - m.

   (c) The number of columns minus the number of free columns in the row reduced form of A. (Note: a free column is a column that does not contain a pivot.)

   (d) The number of 1s in the row reduced form of A.

8. Let A, B ∈ R^{m×n}, u ∈ R^m, and v ∈ R^n. The operation B := A + α u v^T is often called a rank-1 update. Why?

9. Find the complete solution of

        x + 3y +  z = -1
       2x + 6y + 9z =  5
       -x - 3y + 3z =  5.

10. Find the complete solution of

        ( 1  3  1  2 ) ( x )   ( 1 )
        ( 2  6  4  8 ) ( y ) = ( 3 ).
        ( 0  0  2  4 ) ( z )   ( 1 )
                       ( t )

5.6  The Answer to Life, The Universe, and Everything

We complete this chapter by showing how many questions about subspaces can be answered from the upper echelon form of the linear system. To do so, consider again

    ( 1  3  1  2 ) ( x )   ( 1 )
    ( 2  6  4  8 ) ( y ) = ( 3 )
    ( 0  0  2  4 ) ( z )   ( 1 )
                   ( t )

from Question 10 in the previous section. Reducing this to upper echelon format yields

    ( 1  3  1  2 | 1 )
    ( 0  0  2  4 | 1 ).
    ( 0  0  0  0 | 0 )

Here the pivots (the first nonzero entry in each row) are the 1 in the first column and the 2 in the third column. They identify the corresponding variables (x and z) as dependent variables, while the other variables (y and t) are free variables.

- Give the general solution to the problem.

  To find the general solution, you recognize that there are two free variables (y and t), and a general solution can thus be given by

      x_p + α x_{n_0} + β x_{n_1}.

  Here x_p is a particular (special) solution that solves the system. To obtain it, you set the free variables to zero and solve for the values of the dependent variables:

      x + z = 1
      2z    = 1,

  so that z = 1/2 and x = 1/2, yielding a particular solution

      x_p = ( 1/2 )
            (  0  )
            ( 1/2 ).
            (  0  )

  Next, we have to find the two vectors in the null space of the matrix. To obtain the first, we set the first free variable to one and the other(s) to zero (y = 1, t = 0):

      x + 3 + z = 0
      2z        = 0,

  so that z = 0 and x = -3, yielding the first vector in the null space

      x_{n_0} = ( -3 )
                (  1 )
                (  0 ).
                (  0 )

  To obtain the second, we set the second free variable to one and the other(s) to zero (y = 0, t = 1):

      x + z + 2 = 0
      2z + 4    = 0,

  so that z = -4/2 = -2 and x = -z - 2 = 0, yielding the second vector in the null space

      x_{n_1} = (  0 )
                (  0 )
                ( -2 ).
                (  1 )

  And thus the general solution is given by

      ( 1/2 )     ( -3 )     (  0 )
      (  0  ) + α (  1 ) + β (  0 ),
      ( 1/2 )     (  0 )     ( -2 )
      (  0  )     (  0 )     (  1 )

  where α and β are scalars.

- Find a specific (particular) solution to the problem.

  The above procedure yields the particular solution as part of the first step.
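The general solution just derived can be verified mechanically (a sketch in plain Python; exact fractions avoid roundoff):

```python
from fractions import Fraction

A = [[1, 3, 1, 2],
     [2, 6, 4, 8],
     [0, 0, 2, 4]]
b = [1, 3, 1]

xp = [Fraction(1, 2), 0, Fraction(1, 2), 0]   # particular solution
n0 = [-3, 1, 0, 0]                            # null-space vector (y free)
n1 = [0, 0, -2, 1]                            # null-space vector (t free)

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

assert matvec(A, xp) == b
assert matvec(A, n0) == [0, 0, 0]
assert matvec(A, n1) == [0, 0, 0]

# xp + alpha*n0 + beta*n1 solves the system for any scalars alpha, beta:
alpha, beta = 3, Fraction(-1, 2)
x = [p + alpha * u + beta * v for p, u, v in zip(xp, n0, n1)]
assert matvec(A, x) == b
print("general solution verified")
```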

- Find vectors in the null space.

  The first step is to figure out how many (linearly independent) vectors there are in the null space. This equals the number of free variables. The above procedure then gives you a step-by-step method for finding that many linearly independent vectors in the null space.

- Find linearly independent columns in the original matrix. (Note: this is equivalent to finding a basis for the column space of the matrix.)

  To find the linearly independent columns, you look at the upper echelon form of the matrix:

      ( 1  3  1  2 )
      ( 0  0  2  4 ),
      ( 0  0  0  0 )

  with the pivots highlighted. The columns that have pivots in them are linearly independent, and the corresponding columns in the original matrix are linearly independent. Thus, in our example, the answer is

      ( 1 )       ( 1 )
      ( 2 )  and  ( 4 )
      ( 0 )       ( 2 )

  (the first and third column of the original matrix).

- Dimension of the column space (rank of the matrix).

  The following are all equal:

    - the dimension of the column space;
    - the rank of the matrix;
    - the number of dependent variables;
    - the number of nonzero rows in the upper echelon form;
    - the number of columns in the matrix minus the number of free variables;
    - the number of columns in the matrix minus the dimension of the null space;
    - the number of linearly independent columns in the matrix;
    - the number of linearly independent rows in the matrix.
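Finding the pivot columns, and with them the rank, can be sketched in a few lines of exact-arithmetic elimination (the helper name `pivot_columns` is ours):

```python
from fractions import Fraction

def pivot_columns(A):
    """Indices of the columns that receive a pivot during elimination."""
    M = [[Fraction(e) for e in row] for row in A]
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if p is None:
            continue                  # no pivot here: a free column
        M[r], M[p] = M[p], M[r]       # swap the pivot row into place
        for i in range(r + 1, rows):  # eliminate below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [a - f * q for a, q in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return pivots

A = [[1, 3, 1, 2],
     [2, 6, 4, 8],
     [0, 0, 2, 4]]
cols = pivot_columns(A)
print(cols)        # [0, 2]: the first and third columns are independent
print(len(cols))   # 2: the rank, i.e. the dimension of the column space
```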

- Find a basis for the row space of the matrix.

  The row space (we will see in the next chapter) is the space spanned by the rows of the matrix (viewed as column vectors). Reducing a matrix to upper echelon form merely takes linear combinations of the rows of the matrix. What this means is that the space spanned by the rows of the original matrix is the same space as is spanned by the rows of the matrix in upper echelon form. Thus, all you need to do is list the nonzero rows of the matrix in upper echelon form, as column vectors. For our example, this means a basis for the row space of the matrix is given by

      ( 1 )       ( 0 )
      ( 3 )  and  ( 0 ).
      ( 1 )       ( 2 )
      ( 2 )       ( 4 )