Using pivot positions to prove the Invertible Matrix Theorem in Lay's Linear Algebra


NYCCT: MAT 258
Dr. Terence Kivran-Swaine
Using pivot positions to prove the Invertible Matrix Theorem in Lay's Linear Algebra
11/4/2011

This handout provides an alternate proof of the Invertible Matrix Theorem from Chapter 2 of Lay's Linear Algebra, with the intention of helping linear algebra students organize the theorem into easy-to-understand parts. As this is a companion to Lay's book, the perspective is centered on the theory of pivot positions that is well developed by Lay. The various theorems regarding pivot positions from Chapter 1 of Lay's book are digested in another handout, titled Interpreting Pivot Positions. That handout will be cited in the proof presented here. Following the proof of the theorem will be some commentary on the consequences of the proof for the construction of left and right inverses of not-necessarily-invertible matrices. While this handout is written with the 4th edition ([2]) in mind, references for the 3rd edition ([1]) are also included if they differ from those for the 4th.

Statement of the Theorem

Here is the theorem in question.

Theorem (The Invertible Matrix Theorem). [2, Theorem 8 from Chapter 2, page 112] Let A be a square n × n matrix. Then the following statements are equivalent. That is, for a given A, the statements are either all true or all false.

a. A is an invertible matrix.
b. A is row equivalent to the n × n identity matrix.
c. A has n pivot positions.
d. The equation Ax = 0 has only the trivial solution.
e. The columns of A form a linearly independent set.
f. The linear transformation x ↦ Ax is one-to-one.
g. The equation Ax = b has at least one solution for each b in R^n.
h. The columns of A span R^n.
i. The linear transformation x ↦ Ax is onto.
j. There is an n × n matrix C such that CA = I.
k. There is an n × n matrix D such that AD = I.
l. A^T is an invertible matrix.
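Condition (c) turns invertibility into a finite computation: row reduce and count pivot positions. Below is a minimal sketch of that test (my own illustration, not part of the handout; the helper names `rref` and `is_invertible` are assumed), using exact rational arithmetic so that no pivot is lost to floating-point rounding:

```python
from fractions import Fraction

def rref(M):
    """Row reduce M to reduced row echelon form; return (R, pivot columns)."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(cols):
        if r == rows:
            break
        p = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if p is None:
            continue                        # no pivot in this column
        A[r], A[p] = A[p], A[r]             # swap the pivot row up
        A[r] = [x / A[r][c] for x in A[r]]  # scale the pivot entry to 1
        for i in range(rows):
            if i != r and A[i][c] != 0:     # clear the rest of the column
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return A, pivots

def is_invertible(M):
    """Condition (c): a square n x n matrix is invertible iff it has n pivots."""
    return len(M) == len(M[0]) and len(rref(M)[1]) == len(M)

print(is_invertible([[2, 1], [1, 1]]))   # two pivot positions: True
print(is_invertible([[1, 2], [2, 4]]))   # dependent rows, one pivot: False
```

The same `rref` helper also exposes conditions (d) and (g): a column without a pivot signals a free variable, and a row without a pivot signals a b for which Ax = b is inconsistent.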

Proof of the Theorem

Throughout this proof, the fact that only one pivot position can be found in a particular row or column is used. In light of this, a matrix with n columns (or rows) and n pivot positions must have a pivot position in each column (or row). This fact follows from the definition of reduced row echelon form in Section 1.2 of [2]. In this proof I use the following lemmas. The first two include part of the content of Interpreting Pivot Positions.

Lemma 1. Let A be a matrix with n rows. The following are equivalent.

c. A has n pivot positions.
g. The equation Ax = b has at least one solution for each b in R^n.
h. The columns of A span R^n.
i. The linear transformation x ↦ Ax is onto.
k. There is a matrix D such that AD = I.

Proof. Let m denote the number of columns of A. As in Lay's proof, (k) ⇒ (h) is Exercise 26 of Section 2.1 of [2] (Exercise 24 in [1]). To prove the converse, let e_i denote the i-th column of the identity matrix I_n. If (h) holds, the equation Ax = e_i always has a solution. Let D be an m × n matrix, the i-th column of which is a solution to Ax = e_i. In this case AD = I_n. This proves (h) ⇒ (k). The conditions other than (k) are shown to be equivalent in Chapter 1 of [2]. These equivalences are organized in the Interpreting Pivot Positions handout. The equivalence of (c), (g) and (h) is Theorem 4 of Chapter 1 of [2]. That (i) ⇔ (h) is part (a) of Theorem 12 of Chapter 1 of [2].

Lemma 2. Let A be a matrix with n columns. The following are equivalent.

c. A has n pivot positions.
d. The equation Ax = 0 has only the trivial solution.
e. The columns of A form a linearly independent set.
f. The linear transformation x ↦ Ax is one-to-one.
j. There is a matrix C such that CA = I.

Proof. The conditions other than (c) and (j) are shown to be equivalent in Chapter 1 of [2]. These equivalences are organized in the Interpreting Pivot Positions handout. The equivalence (d) ⇔ (e) is Theorem 11 of Chapter 1 of [2].
That (f) ⇔ (e) is part (b) of Theorem 12 of Chapter 1 of [2]. The equivalence of (c) and (d) is implicitly noted in a boxed statement at the beginning of Section 1.5 of [2]: "The homogeneous equation Ax = 0 has a nontrivial solution if and only if the equation has at least one free variable." Free variables are represented by columns without pivot positions. As a result, if A has a pivot position in each column, there are no free variables and no nontrivial solution to the homogeneous system. Conversely, no nontrivial solution implies no free variables and thereby a pivot position in each column.

As in Lay's proof, (j) ⇒ (d) is Exercise 23 of Section 2.1 of [2]. Let m denote the number of rows of A. Denote by R_A the matrix which is the product of the elementary matrices that perform the reduced row reduction on a matrix A by multiplication on the left. Let C be the n × m matrix with entries c_ij, where c_ij = 1 if the j-th row of A contains the pivot position of the i-th pivot column of A, and c_ij = 0 otherwise. (For examples of this construction, see the Comments section of this handout.) If A has n pivot positions, then (C R_A) A = C (R_A A) = I, so the matrix C R_A satisfies (j). This proves (c) ⇒ (j).

Lemma 3. Let A be a matrix. Then the following statements are equivalent.

a. A is an invertible matrix.
b. A is row equivalent to the n × n identity matrix.
c′. A has dimensions n × n and has n pivot positions.
l. A^T is an invertible matrix.

Proof. As is pointed out in Lay's proof, (a) ⇔ (l) is a consequence of part (c) of Theorem 6 from Chapter 2 of [2]. To prove the other statements are equivalent I follow Lay's lead, nearly quoting him: A is n × n and has n pivot positions if and only if the pivots lie on the main diagonal, if and only if the reduced echelon form of A is I_n. Thus (c′) ⇔ (b). Also (b) ⇔ (a) by Theorem 7 of Section 2.2.

These three lemmas in fact complete the proof of the theorem, as A being an n × n matrix means it has n rows and n columns. q.e.d.
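The construction in the proof of Lemma 2 can be carried out mechanically: row reduce [A | I] to obtain R_A, form the 0/1 matrix C that selects the pivot rows, and multiply. A minimal sketch (my own small example, not from the handout; `rref`, `matmul` and the 3 × 2 matrix are illustrative assumptions) with exact arithmetic:

```python
from fractions import Fraction

def rref(M, stop=None):
    """RREF of M, searching for pivots only in the first `stop` columns."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    stop = cols if stop is None else stop
    pivots, r = [], 0
    for c in range(stop):
        if r == rows:
            break
        p = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if p is None:
            continue
        A[r], A[p] = A[p], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    return A, pivots

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# A tall matrix with a pivot in each of its n = 2 columns, so CA = I exists.
A = [[1, 1], [1, 2], [1, 3]]
m, n = len(A), len(A[0])

# Row reduce [A | I_m]; the right block becomes R_A, the product of the
# elementary matrices that reduce A.
aug = [list(map(Fraction, A[i])) + [Fraction(int(i == j)) for j in range(m)]
       for i in range(m)]
R, pivots = rref(aug, stop=n)
R_A = [row[n:] for row in R]

# In RREF the i-th pivot column has its pivot in row i, so C = [I_n | 0].
C = [[Fraction(int(j == i)) for j in range(m)] for i in range(len(pivots))]

left_inv = matmul(C, R_A)            # C R_A, as in the proof of Lemma 2
assert matmul(left_inv, A) == [[1, 0], [0, 1]]
```

The assertion checks (C R_A) A = I, i.e. condition (j) of the lemma for this A.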

Here is a diagram of the structure of this proof:

    Lemma 3:  (l) ⇔ (a) ⇔ (b) ⇔ (c′)
    Lemma 2:  (c) ⇔ (d) ⇔ (e) ⇔ (f) ⇔ (j)
    Lemma 1:  (c) ⇔ (g) ⇔ (h) ⇔ (i) ⇔ (k)        (1)

where (c′) coincides with (c) for an n × n matrix.

Comments

In addition to providing the granularity of the three lemmas, this proof provides the construction of left and right inverses when they exist.

Constructing Right Inverses

To construct a right inverse of a matrix, first find the reduced row echelon form of the partitioned matrix [A I]. For this discussion, label the result [A B], where A now stands for its own reduced form. This matrix will reveal whether A has a right inverse, as it will establish the number of pivot positions in A. Let b_i denote the columns of B. Note that a solution of Ax = b_i is a solution to the original equation Ax = e_i. Any such solution is a good candidate for the i-th column of a right inverse of A. In particular, one can construct a solution for which all of the parameters are zero. The columns may then have a nontrivial solution to the corresponding homogeneous system added to them. Here is an example. Let

    A = [  1   0   0   1   0
          -1   1   0   0   0
           0  -1   1   0   0
           0   0   0   0   1 ].        (2)

Now I row reduce the matrix

    [A I_4] = [  1   0   0   1   0 | 1 0 0 0
                -1   1   0   0   0 | 0 1 0 0
                 0  -1   1   0   0 | 0 0 1 0
                 0   0   0   0   1 | 0 0 0 1 ],        (3)

which yields

    [A B] = [ 1 0 0 1 0 | 1 0 0 0
              0 1 0 1 0 | 1 1 0 0
              0 0 1 1 0 | 1 1 1 0
              0 0 0 0 1 | 0 0 0 1 ].        (4)

This reveals that A has four pivot positions and therefore has a right inverse. Note that the fourth column represents the parameter. Setting the parameter to 0, the following vectors are solutions:

    (1, 1, 1, 0, 0),   (0, 1, 1, 0, 0),   (0, 0, 1, 0, 0),   (0, 0, 0, 0, 1),        (5)

which provide the following right inverse of A:

    [ 1 0 0 0
      1 1 0 0
      1 1 1 0
      0 0 0 0
      0 0 0 1 ].        (6)

Note that this is B with a zero row inserted in the row of the parameter, that is, the fourth row. To employ a non-zero parameter r in any column would be to add to that column the vector

    (-r, -r, -r, r, 0).        (7)

Writing r_i for the parameter in the i-th column, the general form for a right inverse of A is

    [ 1 - r_1     -r_2      -r_3      -r_4
      1 - r_1   1 - r_2     -r_3      -r_4
      1 - r_1   1 - r_2   1 - r_3     -r_4
          r_1       r_2       r_3       r_4
          0         0         0         1   ].        (8)

By specifying the values of each parameter,

    r_1 = 3,   r_2 = 2,   r_3 = 0   and   r_4 = 1,        (9)

one specifies the following right inverse of A:

    [ -2  -2   0  -1
      -2  -1   0  -1
      -2  -1   1  -1
       3   2   0   1
       0   0   0   1 ].        (10)

Constructing Left Inverses

In order to construct left inverses, one again begins with the reduced row echelon form of the partitioned matrix [A I]. The result will again be denoted [A B]. Here B = R_A from the proof of Lemma 2. Multiplying B by the matrix C specified in that proof (the n × m matrix with entries c_ij, where c_ij = 1 if the j-th row of A contains the pivot position of the i-th pivot column of A, and c_ij = 0 otherwise) produces a left inverse. In fact, any non-pivot column of C may be filled with whatever entries one likes and the result will still be a left inverse. That is because that column will be multiplied by a row of zeroes in the reduced matrix BA. These non-pivot columns will always be on the right-most side of C, since the non-pivot rows of BA are at the bottom of the matrix. Here is an example. Let

    A = [  1   0   0   0
          -1   1   0   0
           0  -1   1   0
           0   0  -1   1
           0   0   0  -1 ].        (11)

The reduced row echelon form of [A I_5] is

    [A B] = [ 1 0 0 0 | 1 0 0 0 0
              0 1 0 0 | 1 1 0 0 0
              0 0 1 0 | 1 1 1 0 0
              0 0 0 1 | 1 1 1 1 0
              0 0 0 0 | 1 1 1 1 1 ].        (12)

Here

    C = [ 1 0 0 0 0
          0 1 0 0 0
          0 0 1 0 0
          0 0 0 1 0 ].        (13)

Filling the non-pivot (fifth) column of C with parameters r_1, ..., r_4 and multiplying by B, the general form of a left inverse of A is

    [ 1 + r_1     r_1       r_1       r_1       r_1
      1 + r_2   1 + r_2     r_2       r_2       r_2
      1 + r_3   1 + r_3   1 + r_3     r_3       r_3
      1 + r_4   1 + r_4   1 + r_4   1 + r_4     r_4 ].        (14)

Setting

    r_1 = 3,   r_2 = 2,   r_3 = 0   and   r_4 = 1,        (15)

produces the left inverse

    [ 4  3  3  3  3
      3  3  2  2  2
      1  1  1  0  0
      2  2  2  2  1 ].        (16)

Using the transpose

Due to Theorem 3 of Chapter 2 of [2], these methods are interchangeable. That is, to find a left inverse C of A, one may find a right inverse D of A^T. Setting C = D^T will suffice, since

    D^T A = (A^T D)^T = I^T = I.        (17)

Likewise, finding a right inverse of A can be reduced to finding a left inverse of A^T.

To Be Continued

In Section 2.9 of Lay's Linear Algebra, the Invertible Matrix Theorem is continued. If this is discussed in class, a companion handout building on this one will be distributed.

Bibliography

[1] David C. Lay. Linear Algebra and Its Applications. Addison-Wesley, 3rd edition update, 2006.
[2] David C. Lay. Linear Algebra and Its Applications. Addison-Wesley, 4th edition, 2012.
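As a closing numerical check of the transpose method: the sketch below (my own small example and helper names, not taken from the handout) takes a right inverse D of A^T, whose columns solve A^T x = e_i with the free variable set to 0, and verifies that D^T is a left inverse of A, exactly as in equation (17):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(M):
    return [list(col) for col in zip(*M)]

# A tall matrix with a pivot in each column, so it has left inverses.
A = [[1, 0],
     [1, 1],
     [0, 1]]
At = transpose(A)                   # 2 x 3, a pivot in every row

# A right inverse of A^T, found by row reducing [A^T | I] and setting the
# free variable to 0 (the construction from "Constructing Right Inverses"):
D = [[1, -1],
     [0,  1],
     [0,  0]]
I2 = [[1, 0], [0, 1]]
assert matmul(At, D) == I2          # A^T D = I

# Equation (17): D^T A = (A^T D)^T = I^T = I, so D^T is a left inverse of A.
C = transpose(D)
assert matmul(C, A) == I2
```

The check works for any right inverse of A^T, including the parameterized family described above, since transposing A^T D = I always yields D^T A = I.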