Methods for Finding Bases




1 Bases for the subspaces of a matrix

Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space, and column space of a matrix A. To begin, we look at an example, the matrix A on the left below. If we row reduce A, the result is U on the right.

    A = [ 1  1  2  0 ]        U = [ 1  0  3 -2 ]
        [ 2  4  2  4 ]            [ 0  1 -1  2 ]        (1)
        [ 2  1  5 -2 ]            [ 0  0  0  0 ]

Let the rows of A be denoted by r_j, j = 1, 2, 3, and the columns of A by a_k, k = 1, 2, 3, 4. Similarly, ρ_j denotes the rows of U. (We will not need the columns of U.)

1.1 Row space

The row spaces of A and U are identical. This is because elementary row operations preserve the span of the rows and are themselves reversible operations. Let's see in detail how this works for A. The row operations we used to row reduce A are these.

    step 1:  R2 = R2 - 2R1 = r2 - 2r1
             R3 = R3 - 2R1 = r3 - 2r1
    step 2:  R3 = R3 + (1/2)R2 = 0
    step 3:  R2 = (1/2)R2 = (1/2)r2 - r1
    step 4:  R1 = R1 - R2 = 2r1 - (1/2)r2

Inspecting these row operations shows that the rows of U satisfy

    ρ1 = 2r1 - (1/2)r2,   ρ2 = (1/2)r2 - r1,   ρ3 = 0.

It's not hard to run the row operations backwards to get the rows of A in terms of those of U:

    r1 = ρ1 + ρ2,   r2 = 2ρ1 + 4ρ2,   r3 = 2ρ1 + ρ2.

Thus we see that the nonzero rows of U span the row space of A.
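A quick machine check of this reduction is easy to do. The sketch below (assuming the SymPy library is available; it is not part of the notes) row reduces the matrix A from (1) and confirms the relation r2 = 2ρ1 + 4ρ2 found above.

```python
from sympy import Matrix

# The matrix A from (1).
A = Matrix([[1, 1, 2, 0],
            [2, 4, 2, 4],
            [2, 1, 5, -2]])

# rref() returns the reduced row echelon form U and the pivot columns.
U, pivot_cols = A.rref()
print(U)           # rows (1, 0, 3, -2), (0, 1, -1, 2), (0, 0, 0, 0)
print(pivot_cols)  # (0, 1): the leading entries sit in columns 1 and 2

# Each row of A is a combination of the nonzero rows of U,
# e.g. r2 = 2*rho1 + 4*rho2.
rho1, rho2 = U.row(0), U.row(1)
print(A.row(1) == 2*rho1 + 4*rho2)  # True
```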

They are also linearly independent. To test this, we begin with the equation

    c1 ρ1 + c2 ρ2 = (0  0  0  0).

Inserting the rows in the last equation, we get

    (c1  c2  3c1 - c2  -2c1 + 2c2) = (0  0  0  0).

This gives us c1 = c2 = 0, so the rows are linearly independent. Since they also span the row space of A, they form a basis for the row space of A. This is a general fact:

Theorem 1.1 The nonzero rows in U, the reduced row echelon form of a matrix A, comprise a basis for the row space of A.

1.1.1 Rank

The rank of a matrix A is defined to be the dimension of the row space. Since the dimension of a space is the number of vectors in a basis, the rank of a matrix is just the number of nonzero rows in the reduced row echelon form U. That number also equals the number of leading entries in U, which in turn agrees with the number of leading variables in the corresponding homogeneous system.

Corollary 1.2 Let U be the reduced row echelon form of a matrix A. Then the number of nonzero rows in U, the number of leading entries in U, and the number of leading variables in the corresponding homogeneous system Ax = 0 all equal rank(A).

As an example, consider the matrices A and U in (1). U has two nonzero rows, so rank(A) = 2. This agrees with the number of leading entries, which are U11 = 1 and U22 = 1. Finally, the leading variables for the homogeneous system Ax = 0 are x1 and x2; again, there are two.

1.2 Null space

We recall that the null spaces of A and U are identical, because row operations don't change the solutions to the homogeneous equations involved. Let's look at an example where A and U are the matrices in (1). The
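The three counts in Corollary 1.2 can be checked in the same computational sketch (again assuming SymPy is available): the rank, the number of pivots, and the number of nonzero rows of U all agree.

```python
from sympy import Matrix

A = Matrix([[1, 1, 2, 0],
            [2, 4, 2, 4],
            [2, 1, 5, -2]])

U, pivots = A.rref()
# Count the rows of U that contain at least one nonzero entry.
nonzero_rows = sum(1 for i in range(U.rows) if any(U.row(i)))

print(A.rank())      # 2
print(len(pivots))   # 2, one per leading entry
print(nonzero_rows)  # 2
```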

equations we get from finding the null space of U, that is, solving Ux = 0, are

    x1 + 3x3 - 2x4 = 0
    x2 -  x3 + 2x4 = 0.

The leading variables correspond to the columns containing the leading entries, the (1,1) and (2,2) entries of U in (1); these are the variables x1 and x2. The remaining variables, x3 and x4, are free (nonleading) variables. To emphasize this, we assign them new labels, x3 = t1 and x4 = t2. (In class we frequently used α and β; that isn't convenient here.) Solving the system obtained above, we get

    x = [ x1 ]      [ -3 ]      [  2 ]
        [ x2 ] = t1 [  1 ] + t2 [ -2 ] = t1 n1 + t2 n2.        (2)
        [ x3 ]      [  1 ]      [  0 ]
        [ x4 ]      [  0 ]      [  1 ]

From this equation, it is easy to show that the vectors n1 and n2 form a basis for the null space. Notice that we can get these vectors by solving Ux = 0 first with t1 = 1, t2 = 0 and then with t1 = 0, t2 = 1. This works in the general case as well: the usual procedure for solving a homogeneous system Ax = 0 results in a basis for the null space. More precisely, to find a basis for the null space, begin by identifying the leading variables x_l1, x_l2, ..., x_lr, where r is the number of leading variables, and the free variables x_f1, x_f2, ..., x_f(n-r). For the free variables, let t_j = x_fj. Find the n - r solutions to Ux = 0 corresponding to the free-variable choices

    t1 = 1, t2 = 0, ..., t_(n-r) = 0;
    t1 = 0, t2 = 1, ..., t_(n-r) = 0;
    ...
    t1 = 0, t2 = 0, ..., t_(n-r) = 1.

Call these vectors n1, n2, ..., n_(n-r). The set {n1, n2, ..., n_(n-r)} is a basis for the null space of A (and, of course, of U).

1.2.1 Nullity

The nullity of a matrix A is defined to be the dimension of the null space, N(A). From our discussion above, nullity(A) is just the number of free variables. However, the number of free variables plus the number of leading variables is the total number of variables involved, which equals the number of columns of A. Consequently, we have proved the following result:

Theorem 1.3 Let A ∈ R^(m×n). Then, rank(A) + nullity(A) = n = # of columns of A.
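The free-variable procedure described above is exactly what SymPy's nullspace() implements: each basis vector comes from setting one free variable to 1 and the others to 0. A sketch (the library use is an addition, not part of the notes):

```python
from sympy import Matrix

A = Matrix([[1, 1, 2, 0],
            [2, 4, 2, 4],
            [2, 1, 5, -2]])

# One basis vector per free variable: set it to 1, the others to 0,
# then solve Ux = 0 for the leading variables x1, x2.
null_basis = A.nullspace()
for n in null_basis:
    print(n.T)                         # (-3, 1, 1, 0) and (2, -2, 0, 1)
    assert A * n == Matrix([0, 0, 0])  # each really lies in N(A)

# Theorem 1.3: rank(A) + nullity(A) = number of columns.
print(A.rank() + len(null_basis) == A.cols)  # True
```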

1.3 Column space

We now turn to finding a basis for the column space of a matrix A. To begin, consider A and U in (1). Equation (2) above gives vectors n1 and n2 that form a basis for N(A); they satisfy An1 = 0 and An2 = 0. Writing these two vector equations using the basic matrix trick gives us

    -3a1 + a2 + a3 = 0   and   2a1 - 2a2 + a4 = 0.

We can use these to solve for the free columns in terms of the leading columns,

    a3 = 3a1 - a2   and   a4 = -2a1 + 2a2.

Thus the column space is spanned by the set {a1, a2}, the first two columns of A in (1). This set is also linearly independent, because the equation

    0 = x1 a1 + x2 a2 = x1 a1 + x2 a2 + 0 a3 + 0 a4 = Ax,   where x = (x1  x2  0  0)^T,

implies that (x1  x2  0  0)^T is in the null space of A. Matching this vector with the general form of a vector in the null space shows that the corresponding t1 and t2 are 0, and therefore so are x1 and x2. It follows that {a1, a2} is linearly independent. Since it spans the columns as well, it is a basis for the column space of A. Note that these columns correspond to the leading variables in the problem, x1 and x2. This is no accident. The argument that we used can be employed to show that this is true in general:

Theorem 1.4 Let A ∈ R^(m×n). The columns of A that correspond to the leading variables in the associated homogeneous problem, Ux = 0, form a basis for the column space of A. In addition, the dimension of the column space of A is rank(A).

2 Another matrix example

Let's do another example. Consider the matrix A and the matrix U, its row reduced form, shown below.

    A = [ 1  1 -1  0  1 ]        U = [ 1  0 -1 -1 -6 ]
        [ 3  2 -3 -1 -4 ]            [ 0  1  0  1  7 ]
        [ 2  1 -2 -1 -5 ]            [ 0  0  0  0  0 ]
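Before working through this second example by hand, the claimed reduction can be double-checked by machine (a SymPy sketch, not part of the original notes):

```python
from sympy import Matrix

# The 3x5 matrix A of the second example.
A = Matrix([[1, 1, -1,  0,  1],
            [3, 2, -3, -1, -4],
            [2, 1, -2, -1, -5]])

U, pivots = A.rref()
print(U)       # rows (1, 0, -1, -1, -6), (0, 1, 0, 1, 7), (0, 0, 0, 0, 0)
print(pivots)  # (0, 1): the leading variables are x1 and x2
```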

From U, we can read off a basis for the row space,

    { (1  0  -1  -1  -6),  (0  1  0  1  7) }.

Again, from U we see that the leading variables are x1 and x2, so the leading columns in A are a1 and a2. Thus, a basis for the column space is the set

    { (1  3  2)^T,  (1  2  1)^T }.

To get a basis for the null space, note that the free variables are x3 through x5. Let t1 = x3, t2 = x4, and t3 = x5. The system corresponding to Ux = 0 then has the form

    x1 - t1 - t2 - 6t3 = 0
    x2      + t2 + 7t3 = 0.

To get n1, set t1 = 1, t2 = t3 = 0 and solve for x1 and x2. This gives us n1 = (1  0  1  0  0)^T. For n2, set t1 = 0, t2 = 1, t3 = 0 in the system above; the result is n2 = (1  -1  0  1  0)^T. Last, set t1 = 0, t2 = 0, t3 = 1 to get n3 = (6  -7  0  0  1)^T. The basis for the null space is thus

    n1 = (1  0  1  0  0)^T,   n2 = (1  -1  0  1  0)^T,   n3 = (6  -7  0  0  1)^T.

We want to make a few remarks on this example, concerning the dimensions of the spaces involved. The common dimension of both the row space and the column space is rank(A) = 2, which is also the number of leading variables. The dimension of the null space is the nullity of A. Here, nullity(A) = 3. Thus, in this case we have verified that rank(A) + nullity(A) = 2 + 3 = 5, the number of columns of A.
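The three bases just found can be confirmed the same way: columnspace() returns the pivot columns of A, and nullspace() runs the free-variable procedure of Section 1.2 (a SymPy sketch, with the matrix from this example):

```python
from sympy import Matrix

A = Matrix([[1, 1, -1,  0,  1],
            [3, 2, -3, -1, -4],
            [2, 1, -2, -1, -5]])

# The pivot columns a1, a2 form the column-space basis.
col_basis = A.columnspace()
print([c.T for c in col_basis])   # (1, 3, 2)^T and (1, 2, 1)^T

# Null-space basis: one vector per free variable x3, x4, x5.
null_basis = A.nullspace()
print([n.T for n in null_basis])  # (1,0,1,0,0), (1,-1,0,1,0), (6,-7,0,0,1)

# rank + nullity = 2 + 3 = 5, the number of columns.
print(A.rank(), len(null_basis), A.cols)
```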

3 Rank and Solutions to Systems of Equations

One of the most important applications of the rank of a matrix is determining whether a system of equations is inconsistent (no solutions) or consistent, and, if it is consistent, whether it has one solution or infinitely many solutions. Consider this system S of linear equations:

         a11 x1 + ... + a1n xn = b1
         a21 x1 + ... + a2n xn = b2
    S :            ...
         am1 x1 + ... + amn xn = bm

Put S into augmented matrix form [A b], where A is the m × n coefficient matrix for S, x is the n × 1 vector of unknowns, and b is the m × 1 vector of the bj's. Then we have the following theorem, which is a tool that will help in deciding whether S has none, one, or many solutions.

Theorem 3.1 Consider the system S with coefficient matrix A and augmented matrix [A b]. As above, the sizes of b, A, and [A b] are m × 1, m × n, and m × (n + 1), respectively; in addition, n is the number of unknowns. We have these possibilities:

1. S is inconsistent if and only if rank[A] < rank[A b].
2. S has a unique solution if and only if rank[A] = rank[A b] = n.
3. S has infinitely many solutions if and only if rank[A] = rank[A b] < n.

We will skip the proof, which essentially involves what we have already learned about row reduction and linear systems. Instead, we will do a few examples.

Example 3.2 Let

    A = [ 1  3 ]
        [ 1  2 ]   and   b = (1  3  4)^T.
        [ 1  1 ]

Determine whether the system Ax = b is consistent. Form the augmented matrix

    [A b] = [ 1  3  1 ]
            [ 1  2  3 ]
            [ 1  1  4 ]

and find its reduced row echelon form, U:

    U = [ 1  0  0 ]
        [ 0  1  0 ]
        [ 0  0  1 ]

Because of the way the row reduction process is done, the first two columns of U are the reduced row echelon form of A; that is,

    A = [ 1  3 ]       [ 1  0 ]
        [ 1  2 ]   ~   [ 0  1 ]
        [ 1  1 ]       [ 0  0 ]

From the equations above, we see that rank(A) = 2 < rank[A b] = 3. By part 1 of Theorem 3.1, the system is inconsistent.

Example 3.3 Let

    A = [ 1  1  1  2 ]   and   b = (3  7)^T.
        [ 2  2  3  3 ]

Determine whether the system Ax = b is consistent. The augmented form of the system and its reduced row echelon form are given below.

    [A b] = [ 1  1  1  2  3 ]        U = [ 1  1  0  3  2 ]
            [ 2  2  3  3  7 ]            [ 0  0  1 -1  1 ]

As before, the first four columns of U comprise the reduced row echelon form of A; that is,

    A = [ 1  1  1  2 ]   ~   [ 1  1  0  3 ]
        [ 2  2  3  3 ]       [ 0  0  1 -1 ]

Inspecting the matrices, we see that rank(A) = rank[A b] = 2 < 4. By part 3 of Theorem 3.1, the system is consistent and has infinitely many solutions.

We close by pointing out that for any system [A b] ∈ R^(m×(n+1)), the first n columns of the reduced echelon form of [A b] always comprise the reduced echelon form of A. As above, we can use this to easily find rank(A) and rank[A b], without any additional work.
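Theorem 3.1 turns directly into a small consistency test. The sketch below (SymPy-based; the helper name classify is an invention for illustration, not from the notes) applies it to both examples.

```python
from sympy import Matrix

def classify(A, b):
    """Apply Theorem 3.1: compare rank(A), rank([A b]), and n."""
    Ab = A.row_join(b)  # the augmented matrix [A b]
    r_A, r_Ab, n = A.rank(), Ab.rank(), A.cols
    if r_A < r_Ab:
        return "inconsistent"
    return "unique solution" if r_A == n else "infinitely many solutions"

# Example 3.2: rank(A) = 2 < rank[A b] = 3.
A1 = Matrix([[1, 3], [1, 2], [1, 1]])
b1 = Matrix([1, 3, 4])
print(classify(A1, b1))  # inconsistent

# Example 3.3: rank(A) = rank[A b] = 2 < 4 unknowns.
A2 = Matrix([[1, 1, 1, 2], [2, 2, 3, 3]])
b2 = Matrix([3, 7])
print(classify(A2, b2))  # infinitely many solutions
```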