160 CHAPTER 4. VECTOR SPACES

4.7 Rank and Nullity

In this section, we look at relationships between the row space, column space, and null space of a matrix and of its transpose. We will derive fundamental results which in turn will give us deeper insight into solving linear systems.

4.7.1 Rank and Nullity

The first important result, one which follows immediately from the previous section, is that the row space and the column space of a matrix have the same dimension. It is easy to see why. For the row space, we use the corresponding matrix in row-echelon form. The number of leading ones is the number of row vectors in the basis of the row space, hence its dimension. But the same number of leading ones also gives us the number of vectors in the basis of the column space, hence also its dimension. We give this result as a theorem.

Theorem 78 If A is any matrix, then its row space and column space have the same dimension.

Definition 79 Let A be a matrix.

1. The dimension of its row space (or column space) is called the rank of A. It is denoted rank (A).

2. The dimension of its null space is called the nullity of A. It is denoted nullity (A).

Example 80 Find rank (A) and nullity (A) for

    A = [ -2  -5    8  0  -17 ]
        [  1   3   -5  1    5 ]
        [  3  11  -19  7    1 ]
        [  1   7  -13  5   -3 ]

rank (A): It is enough to put A in row-echelon form and count the number of leading ones. The reader will verify that a row-echelon form of A is

    [ 1  3  -5  1   5 ]
    [ 0  1  -2  2  -7 ]
    [ 0  0   0  1  -5 ]
    [ 0  0   0  0   0 ]

There are three leading ones, therefore rank (A) = 3.

nullity (A): For this, we need to find a basis for the solution set of Ax = 0. The reduced row-echelon form of A is

    [ 1  0   1  0   1 ]
    [ 0  1  -2  0   3 ]
    [ 0  0   0  1  -5 ]
    [ 0  0   0  0   0 ]

The corresponding system is

    x1      +  x3       +  x5 = 0
         x2 - 2x3       + 3x5 = 0
                   x4   - 5x5 = 0

The leading variables

4.7. RANK AND NULLITY 161

are x1, x2, and x4. Hence, the free variables are x3 and x5. We can write the solution in parametric form as follows:

    x1 = -s - t
    x2 = 2s - 3t
    x3 = s
    x4 = 5t
    x5 = t

or

    [ x1 ]       [ -1 ]       [ -1 ]
    [ x2 ]       [  2 ]       [ -3 ]
    [ x3 ]  =  s [  1 ]  +  t [  0 ]
    [ x4 ]       [  0 ]       [  5 ]
    [ x5 ]       [  0 ]       [  1 ]

Thus, nullity (A) = 2.

Remark 81 In the previous example, if we had had to find a basis for the row space and column space, we would have used the row-echelon form of A. For the row space, we simply take the row vectors of the row-echelon form with a leading one. Thus, a basis for the row space is

    { (1, 3, -5, 1, 5), (0, 1, -2, 2, -7), (0, 0, 0, 1, -5) }

For the column space, we must use the column vectors from the original matrix corresponding to the columns of the row-echelon form with leading ones. The columns having leading ones are 1, 2, and 4. Hence, a basis for the column space of A is

    { [ -2 ]   [ -5 ]   [ 0 ] }
    { [  1 ]   [  3 ]   [ 1 ] }
    { [  3 ] , [ 11 ] , [ 7 ] }
    { [  1 ]   [  7 ]   [ 5 ] }

Remark 82 In the above example, A was 4 × 5. We see that rank (A) + nullity (A) = 5, the number of columns of A. This is not an accident. We will state it as a theorem below.

Remark 83 Since the rows of A are the columns of A^T, we see that the row space of A is the column space of A^T and vice-versa. So, the dimension of the row space of A is the same as that of the column space of A^T. We state this as a theorem.

Theorem 84 rank (A) = rank (A^T).

Now, we state an important result regarding the relationship between rank (A) and nullity (A).

Theorem 85 If A is m × n, then rank (A) + nullity (A) = n. In other words, rank (A) + nullity (A) = number of columns of A.

Proof. The variables in a system can be separated into two categories: the leading variables, the ones corresponding to the leading 1's, and the free variables, the ones to which we usually assign a parameter. We have

    (number of leading variables) + (number of free variables) = n.

But the number of leading variables is the same as the number of leading 1's, so it is rank (A).
Similarly, the number of free variables is the number of parameters in the solution of the homogeneous system, hence it is the dimension of the null space, that is nullity (A).
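The computation in Example 80 and the rank–nullity theorem can be checked mechanically. Below is a minimal sketch in Python (not part of the original notes) that row-reduces a matrix over the rationals, counts the pivots to get the rank, and confirms rank (A) + nullity (A) = n for the matrix of Example 80.

```python
from fractions import Fraction

def rank(matrix):
    """Compute the rank by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(M), len(M[0])
    r = 0  # current pivot row
    for c in range(cols):
        # find a pivot in column c at or below row r
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # eliminate below the pivot
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# The 4 x 5 matrix of Example 80.
A = [[-2, -5,   8, 0, -17],
     [ 1,  3,  -5, 1,   5],
     [ 3, 11, -19, 7,   1],
     [ 1,  7, -13, 5,  -3]]

n = len(A[0])
print(rank(A), n - rank(A))  # -> 3 2, so rank + nullity = 5 = n
```

The exact `Fraction` arithmetic avoids the round-off issues that make rank detection with floating point unreliable.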

162 CHAPTER 4. VECTOR SPACES

Let us state as a corollary two important results used in the proof.

Corollary 86 If A is m × n, then:

1. rank (A) = the number of leading variables in the solution of Ax = 0.

2. nullity (A) = the number of parameters in the solution of Ax = 0.

Remark 87 One important consequence of the theorem is that once we know the rank of a matrix, we also know its nullity, and vice-versa. We illustrate it with an example.

Example 88 Find the rank and nullity of a matrix A with 5 columns, given that its row-echelon form has three leading 1's. There is no need to solve Ax = 0: we see that rank (A) = 3, therefore nullity (A) = 5 - 3 = 2.

Proposition 89 Let A be an m × n matrix. Then rank (A) ≤ min (m, n).

Proof. Let us note that min (m, n) is the smaller of the two values m and n. The column vectors of A are in R^m, hence the dimension of the column space is at most m. Similarly, the row vectors of A are in R^n, hence the dimension of the row space is at most n. Since the column space and the row space have the same dimension, which is rank (A), we see that rank (A) is at most m and at most n. Hence, we must have rank (A) ≤ min (m, n).

We now see how this applies to linear systems.

4.7.2 Applications to Linear Systems

We look at conditions which must be satisfied in order for a system of m linear equations in n variables to have no solution, a unique solution, or infinitely many solutions. Let us begin with some definitions.

Definition 90 A linear system with more equations than unknowns is called overdetermined. If it has fewer equations than unknowns, it is called underdetermined.

We already know certain results, which we have developed so far. We will recall these results, then add new ones. We first consider the case m = n. Then, we will look at what happens if m ≠ n.
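Theorem 84 and Proposition 89 can likewise be illustrated numerically. The sketch below (the 3 × 4 matrix is a made-up example, not one from the text) computes the rank of a matrix and of its transpose, and checks that the rank never exceeds min (m, n).

```python
from fractions import Fraction

def rank(matrix):
    # Gaussian elimination over the rationals; returns the number of pivots.
    M = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# A hypothetical 3 x 4 matrix whose third row is the sum of the first
# two, so its rank is 2.
A = [[1, 2, 0, 1],
     [0, 1, 1, 3],
     [1, 3, 1, 4]]
At = [list(col) for col in zip(*A)]  # the transpose, a 4 x 3 matrix

print(rank(A), rank(At))                  # -> 2 2, as Theorem 84 asserts
print(rank(A) <= min(len(A), len(A[0])))  # -> True, as Proposition 89 asserts
```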

4.7. RANK AND NULLITY 163

Linear Systems Having m Equations and n Unknowns, m = n

Theorem 91 If A is an n × n matrix, then the following statements are equivalent:

1. A is invertible.

2. Ax = b has a unique solution for any n × 1 column matrix b.

3. Ax = 0 has only the trivial solution.

4. A is row equivalent to I_n.

5. A can be written as the product of elementary matrices.

6. |A| ≠ 0.

7. The column vectors of A are linearly independent.

8. The row vectors of A are linearly independent.

9. The column vectors of A span R^n.

10. The row vectors of A span R^n.

11. The column vectors of A form a basis for R^n.

12. The row vectors of A form a basis for R^n.

13. rank (A) = n.

14. nullity (A) = 0.

The statements added are 7 through 14. It is easy to see why they are true; we look briefly at the reasons. Numbers 7 and 8 follow from properties of determinants and previous theorems. If, say, the rows of A were dependent, then one row would be a linear combination of the others. But in this case, the determinant of A would be 0. The same applies to the columns. Statements 9, 10, 11, and 12 follow immediately because we have n independent vectors in a space of dimension n; hence, they must also span and form a basis. Since rank (A) is the dimension of the row space and the row vectors are a basis for R^n, it follows that rank (A) = n, and hence nullity (A) = 0, since we must have rank (A) + nullity (A) = n.

Linear Systems Having m Equations and n Unknowns, m ≠ n

This case is a little more difficult. We already know some results. In the case of a homogeneous system, if the system has more unknowns than equations, then it has infinitely many solutions. We now derive more results of this kind. First, we look at conditions under which, if A is an m × n matrix, the system Ax = b is consistent for every m × 1 matrix b. We already know that this
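A few of the equivalent statements in Theorem 91 (numbers 6, 13, and 14) can be verified together, since one pass of Gaussian elimination yields both the rank and the determinant of a square matrix. The following sketch uses a hypothetical 3 × 3 matrix, not one from the text.

```python
from fractions import Fraction

def ref_stats(matrix):
    """Row-reduce a square matrix; return (rank, determinant)."""
    M = [[Fraction(x) for x in row] for row in matrix]
    n = len(M)
    det = Fraction(1)
    r = 0
    for c in range(n):
        pivot = next((i for i in range(r, n) if M[i][c] != 0), None)
        if pivot is None:
            det = Fraction(0)  # a pivot-free column forces det = 0
            continue
        if pivot != r:
            M[r], M[pivot] = M[pivot], M[r]
            det = -det  # a row swap flips the sign of the determinant
        det *= M[r][c]
        for i in range(r + 1, n):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r, det

# A hypothetical invertible 3 x 3 matrix.
A = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 2]]
rank, det = ref_stats(A)
n = len(A)
print(rank == n, det != 0, n - rank == 0)  # -> True True True
```

The three printed booleans correspond to statements 13, 6, and 14: for this matrix they hold simultaneously, as the theorem predicts.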

164 CHAPTER 4. VECTOR SPACES

system has a solution if b is in the column space of A. Since b is an element of R^m, for the system to have a solution for every b, the column vectors of A must span R^m. But the column space of A is also a subspace of R^m. This means that the column space of A must be equal to R^m. This implies in particular that rank (A) = m. We state these facts as a theorem.

Theorem 92 If A is an m × n matrix, then the following statements are equivalent:

1. The system Ax = b is consistent for every m × 1 matrix b.

2. The column vectors of A span R^m.

3. rank (A) = m.

This has important consequences. Recall, we saw earlier that if A is an m × n matrix, then rank (A) ≤ min (m, n). So, if m > n (more equations than unknowns, that is, the system is overdetermined), then rank (A) ≤ n, hence we cannot have rank (A) = m, so the system cannot be consistent for every m × 1 matrix b.

Next, we look at the homogeneous system Ax = 0. We know that such a system always has the trivial solution. We also know the conditions which guarantee uniqueness of the solution when m = n. What about when m ≠ n? It is easy to see that the system has only the trivial solution if and only if the columns of A are linearly independent. This is so because Ax is a linear combination of the columns of A. Saying that Ax = 0 has only the trivial solution is therefore the same as saying that the only linear combination of the columns of A equal to the zero vector is the one for which all the coefficients are 0. This is the definition of linear independence. It also turns out that in this case, the nonhomogeneous system Ax = b has either no solution or a unique solution. We state this as a theorem.

Theorem 93 If A is an m × n matrix, then the following statements are equivalent:

1. Ax = 0 has only the trivial solution.

2. The columns of A are linearly independent.

3. Ax = b has at most one solution (1 or 0) for every m × 1 matrix b.

4.7.3 Concepts Review

Know the definition of rank and nullity.

Be able to find the rank and nullity of any matrix A.

Know the relationship between rank and nullity.

Know the various conditions under which a system of m equations in n unknowns is consistent.
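The rank criteria behind Theorems 92 and 93 suggest a simple consistency test: Ax = b is consistent exactly when adjoining b to A as an extra column does not increase the rank, that is, when b lies in the column space of A. The sketch below applies this to a hypothetical overdetermined system (3 equations, 2 unknowns, not from the text); since its two columns are independent, Theorem 93 says each b yields at most one solution.

```python
from fractions import Fraction

def rank(matrix):
    # Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for c in range(len(M[0])):
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def consistent(A, b):
    """Ax = b is consistent iff appending b to A as a column does not
    raise the rank, i.e. iff b lies in the column space of A."""
    aug = [row + [bi] for row, bi in zip(A, b)]
    return rank(aug) == rank(A)

# A hypothetical overdetermined system: 3 equations, 2 unknowns,
# with linearly independent columns.
A = [[1, 0],
     [0, 1],
     [1, 1]]

print(consistent(A, [1, 2, 3]))  # -> True: b = col1 + 2*col2 is in the column space
print(consistent(A, [1, 2, 4]))  # -> False: this b is not, so there is no solution
```

Note that rank (A) = 2 < m = 3 here, so by Theorem 92 the system cannot be consistent for every b, which is exactly what the second test shows.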

4.7. RANK AND NULLITY 165

4.7.4 Problems

Do # 1, 2, 3, 4, 5, 6, 1, 1 on pages 88, 89.