1.5 Elementary Matrices and a Method for Finding the Inverse


Definition 1: An n × n matrix is called an elementary matrix if it can be obtained from I_n by performing a single elementary row operation.

Reminder, the elementary row operations:
1. Multiply a row by a nonzero constant k.
2. Exchange two rows.
3. Add a multiple of one row to another row.

Theorem 1: If the elementary matrix E results from performing a certain row operation on I_m, and A is an m × n matrix, then EA is the matrix that results when the same row operation is performed on A.

Proof: Has to be done separately for each elementary row operation.

Example 1: Let

A = [a b; c d],  E = [0 1; 1 0]

E was obtained from I_2 by exchanging the two rows. Then

EA = [0 1; 1 0][a b; c d] = [c d; a b]

so EA is the matrix which results from A by exchanging the two rows, as is to be expected according to the theorem above.

Theorem 2: Every elementary matrix is invertible, and its inverse is also an elementary matrix.

Theorem 3: If A is an n × n matrix, then the following statements are equivalent:
(a) A is invertible.
(b) Ax = 0 has only the trivial solution.
(c) The reduced row echelon form of A is I_n.
(d) A can be expressed as a product of elementary matrices.
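Theorem 1 can be checked computationally. The following Python sketch builds the row-exchange elementary matrix E of Example 1 and verifies that EA equals A with its two rows exchanged; the concrete entries 1, 2, 3, 4 stand in for a, b, c, d and are only illustrative.

```python
from fractions import Fraction

def identity(n):
    # n-by-n identity matrix as nested lists of exact fractions
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def matmul(A, B):
    # plain triple-loop matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# E results from exchanging the two rows of I_2
E = identity(2)
E[0], E[1] = E[1], E[0]

A = [[1, 2], [3, 4]]            # concrete stand-in for [a b; c d]
EA = matmul(E, A)
print(EA == [[3, 4], [1, 2]])   # True: EA is A with its rows exchanged
```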

Proof: We prove the theorem by proving (a) ⇒ (b) ⇒ (c) ⇒ (d) ⇒ (a).

(a) ⇒ (b): Assume A is invertible and x_0 is a solution of Ax = 0, so Ax_0 = 0. Multiplying both sides with A^{-1} gives (A^{-1}A)x_0 = A^{-1}0, that is I_n x_0 = 0, so x_0 = 0. Thus x = 0 is the only solution of Ax = 0.

(b) ⇒ (c): Assume that Ax = 0 only has the trivial solution. Then the augmented matrix

[ a_11 a_12 ... a_1n | 0 ]
[ a_21 a_22 ... a_2n | 0 ]
[  ...               | 0 ]
[ a_n1 a_n2 ... a_nn | 0 ]

can be transformed by Gauss-Jordan elimination so that the corresponding linear system is

x_1 = 0
x_2 = 0
...
x_n = 0

(this is the only linear system that has only the trivial solution); that is, the reduced row echelon form of the augmented matrix is

[ 1 0 ... 0 | 0 ]
[ 0 1 ... 0 | 0 ]
[  ...      | 0 ]
[ 0 0 ... 1 | 0 ]

This means the reduced row echelon form of A is I_n.

(c) ⇒ (d): Assume the reduced row echelon form of A is I_n. Then I_n is the matrix resulting from Gauss-Jordan elimination applied to A (say, in k steps). Each of the k transformations in the Gauss-Jordan elimination is equivalent to multiplication with an elementary matrix. Let E_i be the elementary matrix corresponding to the i-th transformation in the Gauss-Jordan elimination for the matrix A. Then

E_k E_{k-1} ... E_2 E_1 A = I_n     (1)

By Theorem 2 each E_i is invertible, and since a product of invertible matrices is invertible, E_k E_{k-1} ... E_2 E_1 is invertible with inverse E_1^{-1} E_2^{-1} ... E_k^{-1}. Multiplying equation (1) on both sides by this inverse from the left we get

(E_1^{-1} E_2^{-1} ... E_k^{-1})(E_k E_{k-1} ... E_2 E_1) A = (E_1^{-1} E_2^{-1} ... E_k^{-1}) I_n
A = E_1^{-1} E_2^{-1} ... E_k^{-1}

Since the inverses of elementary matrices are also elementary matrices, we have found that A can be expressed as a product of elementary matrices.
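The step (c) ⇒ (d) can be made concrete on a small example. The sketch below (with an illustrative 2 × 2 matrix, not one from the notes) writes down the elementary matrix of each Gauss-Jordan step and checks that their product applied to A gives I_2, exactly as in equation (1).

```python
from fractions import Fraction as F

def matmul(A, B):
    # plain triple-loop matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[F(1), F(2)], [F(3), F(4)]]

# Elementary matrices for the three Gauss-Jordan steps on A:
E1 = [[F(1), F(0)], [F(-3), F(1)]]     # add -3*(row 1) to row 2
E2 = [[F(1), F(0)], [F(0), F(-1, 2)]]  # multiply row 2 by -1/2
E3 = [[F(1), F(-2)], [F(0), F(1)]]     # add -2*(row 2) to row 1

# E3 E2 E1 A = I_2, as in equation (1)
result = matmul(E3, matmul(E2, matmul(E1, A)))
print(result == [[1, 0], [0, 1]])      # True
```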

(d) ⇒ (a): If A can be expressed as a product of elementary matrices, then A can be expressed as a product of invertible matrices, and is therefore invertible (a product of invertible matrices is invertible).

From this theorem we obtain a method to compute the inverse of a matrix A. Multiplying equation (1) from the right by A^{-1} yields

E_k E_{k-1} ... E_2 E_1 I_n = A^{-1}

Therefore, when we apply the elementary row operations that transform A to I_n to the matrix I_n, we obtain A^{-1}. The following example illustrates how this result can be used to find A^{-1}.

Example 2:

1. Let

A = [1 3 1; 1 1 2; 2 3 4]

Reduce A through elementary row operations to the identity matrix, while applying the same operations to I_3, resulting in A^{-1}. To do so we set up the augmented matrix [A | I] and transform it by row operations into [I | A^{-1}].

[ 1  3  1 | 1  0  0 ]
[ 1  1  2 | 0  1  0 ]
[ 2  3  4 | 0  0  1 ]

-1(r1) + r2, -2(r1) + r3:

[ 1  3  1 |  1  0  0 ]
[ 0 -2  1 | -1  1  0 ]
[ 0 -3  2 | -2  0  1 ]

-1/2(r2):

[ 1  3  1   |  1    0    0 ]
[ 0  1 -1/2 |  1/2 -1/2  0 ]
[ 0 -3  2   | -2    0    1 ]

3(r2) + r3:

[ 1  3  1   |  1    0    0 ]
[ 0  1 -1/2 |  1/2 -1/2  0 ]
[ 0  0  1/2 | -1/2 -3/2  1 ]

2(r3):

[ 1  3  1   |  1    0    0 ]
[ 0  1 -1/2 |  1/2 -1/2  0 ]
[ 0  0  1   | -1   -3    2 ]

1/2(r3) + r2, -1(r3) + r1:

[ 1  3  0 |  2  3 -2 ]
[ 0  1  0 |  0 -2  1 ]
[ 0  0  1 | -1 -3  2 ]

-3(r2) + r1:

[ 1  0  0 |  2  9 -5 ]
[ 0  1  0 |  0 -2  1 ]
[ 0  0  1 | -1 -3  2 ]

The inverse of A is

A^{-1} = [2 9 -5; 0 -2 1; -1 -3 2]

2. Consider the matrix

A = [1 3; 2 6]

Set up [A | I] and transform:

[ 1  3 | 1  0 ]
[ 2  6 | 0  1 ]

-2(r1) + r2:

[ 1  3 |  1  0 ]
[ 0  0 | -2  1 ]

The last row of the left block is a zero row, indicating that A cannot be transformed into I_2 by elementary row operations. According to Theorem 3, A must be singular, i.e. A is not invertible and has no inverse.
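The [A | I] → [I | A^{-1}] procedure of Example 2 can be implemented in a few lines. The following Python sketch uses exact fractions; `inverse` returns None when the reduction produces a zero row in the left block, i.e. when A is singular.

```python
from fractions import Fraction

def inverse(A):
    """Invert A by reducing [A | I] to [I | A^{-1}]; return None if singular."""
    n = len(A)
    # build the augmented matrix [A | I]
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        # find a pivot row with a nonzero entry in this column
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None                          # zero column -> A is singular
        M[col], M[pivot] = M[pivot], M[col]      # exchange two rows
        M[col] = [x / M[col][col] for x in M[col]]   # scale pivot row to 1
        for r in range(n):                       # clear the rest of the column
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

print(inverse([[1, 3, 1], [1, 1, 2], [2, 3, 4]]))
# [[2, 9, -5], [0, -2, 1], [-1, -3, 2]], as in Example 2
print(inverse([[1, 3], [2, 6]]))   # None: the reduction hits a zero row
```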

1.6 Results on Linear Systems and Invertibility

Theorem 4: Every system of linear equations has no solution, exactly one solution, or infinitely many solutions.

Proof: A linear system Ax = b can have no solution; this we saw in examples. So all we need to show is: if the linear system has two different solutions, then it has infinitely many solutions. Assume x_1 and x_2 with x_1 ≠ x_2 are two different solutions of Ax = b, so Ax_1 = b and Ax_2 = b. Let k ∈ R; I claim that x_1 + k(x_1 - x_2) is also a solution of Ax = b. To show this, calculate

A(x_1 + k(x_1 - x_2)) = Ax_1 + kA(x_1 - x_2) = b + kAx_1 - kAx_2 = b + kb - kb = b

Since x_1 - x_2 ≠ 0, different values of k yield different solutions, so the system has infinitely many solutions.

Theorem 5: If A is an invertible n × n matrix, then for each n-vector b the linear system Ax = b has exactly one solution, namely x = A^{-1}b.

Proof: First show that x = A^{-1}b is a solution. Calculate

Ax = A(A^{-1}b) = Ib = b

Now we have to show that if there is any solution, it must be A^{-1}b. Assume x is a solution; then

Ax = b
A^{-1}Ax = A^{-1}b
x = A^{-1}b

We proved that if x is a solution, then it must be A^{-1}b, which concludes the proof.

Example 3: Let us solve

x_1 + 3x_2 + x_3 = 4
x_1 + x_2 + 2x_3 = 2
2x_1 + 3x_2 + 4x_3 = 1

The coefficient matrix is

A = [1 3 1; 1 1 2; 2 3 4]

with (see Example 2 above)

A^{-1} = [2 9 -5; 0 -2 1; -1 -3 2]

Then the solution of the above linear system is

x = A^{-1}b = [2 9 -5; 0 -2 1; -1 -3 2][4; 2; 1]
  = [2(4) + 9(2) - 5(1); 0(4) - 2(2) + 1(1); -1(4) - 3(2) + 2(1)]
  = [21; -3; -8]

Theorem 6: Let A be a square matrix.
(a) If B is a square matrix with BA = I, then B = A^{-1}.
(b) If B is a square matrix with AB = I, then B = A^{-1}.

Proof: Assume BA = I. First we show that A is invertible. According to the equivalence theorem (Theorem 3), we only have to show that Ax = 0 has only the trivial solution; then A must be invertible. If Ax = 0, then

BAx = B0
Ix = 0
x = 0

Therefore, if x is a solution then x = 0, thus the linear system has only the trivial solution, and A must be invertible. To show that B = A^{-1}:

BA = I
BAA^{-1} = IA^{-1}
B = A^{-1}

So B = A^{-1}, which concludes the proof of part (a); try (b) by yourself.

We extend the equivalence theorem:

Theorem 7: If A is an n × n matrix, then the following statements are equivalent:
(a) A is invertible.
(b) For every n-vector b the linear system Ax = b is consistent.
(c) For every n-vector b the linear system Ax = b has exactly one solution.

Proof: Show (a) ⇒ (c) ⇒ (b) ⇒ (a).

(a) ⇒ (c): This we proved above (Theorem 5).

(c) ⇒ (b): If the linear system has exactly one solution for every n-vector b, then it has at least one solution for every b. Thus (b) holds.
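Theorem 5 and Example 3 can be verified numerically; the sketch below hard-codes the inverse found in Example 2 and multiplies it with b = (4, 2, 1), then checks the result against the original system.

```python
def matvec(A, x):
    # matrix-vector product with plain lists
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

# A^{-1} from Example 2 and the right-hand side of Example 3
A_inv = [[2, 9, -5], [0, -2, 1], [-1, -3, 2]]
b = [4, 2, 1]
x = matvec(A_inv, b)
print(x)              # [21, -3, -8]

# check: x really solves the original system
A = [[1, 3, 1], [1, 1, 2], [2, 3, 4]]
print(matvec(A, x))   # [4, 2, 1]
```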

(b) ⇒ (a): Assume that for every n-vector b the linear system Ax = b has at least one solution. Let x_1 be a solution of

Ax = [1; 0; ...; 0]

and x_2 be a solution of

Ax = [0; 1; 0; ...; 0]

and choose x_3, ..., x_n similarly, using the remaining standard basis vectors as right-hand sides. Then let

C = [x_1 x_2 ... x_n]

be the matrix whose columns are x_1, ..., x_n. Then

AC = [Ax_1 Ax_2 ... Ax_n] = I_n

According to the theorem above (Theorem 6), A is then invertible, thus (a) holds.

Example 4: One common problem that arises is the following: given a matrix A, what are the vectors b so that Ax = b has a solution? Let

A = [1 1; -1 2]

Then we are asking: for which values of b_1, b_2 does the linear system

x_1 + x_2 = b_1
-x_1 + 2x_2 = b_2

have a solution? Use Gaussian elimination to give the answer. The augmented matrix is

[  1  1 | b_1 ]
[ -1  2 | b_2 ]

1(r1) + r2:

[ 1  1 | b_1       ]
[ 0  3 | b_2 + b_1 ]

1/3(r2):

[ 1  1 | b_1                 ]
[ 0  1 | (1/3)b_2 + (1/3)b_1 ]

-1(r2) + r1:

[ 1  0 | (2/3)b_1 - (1/3)b_2 ]
[ 0  1 | (1/3)b_2 + (1/3)b_1 ]

From this we conclude that for any b_1, b_2 the linear system has a solution, namely

x_1 = (2/3)b_1 - (1/3)b_2,  x_2 = (1/3)b_2 + (1/3)b_1
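The conclusion of Example 4 is easy to spot-check: the closed-form solution satisfies both equations no matter which right-hand side is chosen. The sample values of b_1, b_2 below are arbitrary.

```python
from fractions import Fraction as F

def solve(b1, b2):
    # closed-form solution read off from the reduced augmented matrix
    x1 = F(2, 3) * b1 - F(1, 3) * b2
    x2 = F(1, 3) * b2 + F(1, 3) * b1
    return x1, x2

# the system x1 + x2 = b1, -x1 + 2 x2 = b2 is consistent for every b
for b1, b2 in [(0, 0), (3, 3), (7, -5), (1, 100)]:
    x1, x2 = solve(b1, b2)
    assert x1 + x2 == b1 and -x1 + 2 * x2 == b2
print("every right-hand side gives a solution")
```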

1.7 Special Matrices

Definition 2: A square matrix A is called diagonal iff all entries NOT on the main diagonal are 0, i.e. for all i ≠ j: a_ij = 0.

Example 5:

A = [1 0 0; 0 5 0; 0 0 -4]

is diagonal. The inverse is

A^{-1} = [1 0 0; 0 1/5 0; 0 0 -1/4]

The inverse of an invertible diagonal matrix is also diagonal, with entries (A^{-1})_ii = 1/(A)_ii on the diagonal. Furthermore,

A^k = [1 0 0; 0 5^k 0; 0 0 (-4)^k]

The power of a diagonal matrix is also a diagonal matrix, with entries (A^k)_ii = ((A)_ii)^k on the diagonal.

Definition 3: A square matrix A is called an upper triangular matrix if all entries below the main diagonal are zero, i.e. for i > j: a_ij = 0. A square matrix A is called a lower triangular matrix if all entries above the main diagonal are zero, i.e. for i < j: a_ij = 0.

Theorem 8:
(a) The transpose of a lower triangular matrix is upper triangular, and the transpose of an upper triangular matrix is lower triangular.
(b) The product of lower (upper) triangular matrices is lower (upper) triangular.
(c) A triangular matrix is invertible iff all its diagonal entries are nonzero.
(d) The inverse of an invertible lower (upper) triangular matrix is lower (upper) triangular.
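The claims of Example 5 about diagonal matrices can be confirmed directly; the helper `diag` below is an illustrative construction, not notation from the notes.

```python
from fractions import Fraction as F

d = [F(1), F(5), F(-4)]      # diagonal entries of A from Example 5

def diag(entries):
    # build a diagonal matrix from a list of diagonal entries
    n = len(entries)
    return [[entries[i] if i == j else F(0) for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = diag(d)
A_inv = diag([F(1) / x for x in d])   # reciprocals on the diagonal
A_cubed = diag([x ** 3 for x in d])   # entrywise powers on the diagonal (k = 3)

I3 = diag([F(1)] * 3)
print(matmul(A, A_inv) == I3)   # True: reciprocal diagonal gives the inverse
print(A_cubed[2][2])            # (-4)**3 = -64
```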

Example 6: Consider the matrices

A = [1 5 3; 0 -2 2; 0 0 1],  B = [2 1 0; 0 0 1; 0 0 2]

Then A is invertible, because all its diagonal entries are nonzero, but B is not invertible because b_22 = 0. Calculate

AB = [2 1 11; 0 0 2; 0 0 2]

which, as predicted by Theorem 8, is upper triangular.

Definition 4: A square matrix A is called symmetric iff A = A^T, i.e. a_ij = a_ji.

Example 7:

A = [1 5 -3; 5 -2 2; -3 2 0]

is symmetric.

Theorem 9: If A and B are symmetric matrices of the same size, and k ∈ R, then:
(a) A^T is symmetric.
(b) A + B is symmetric.
(c) kA is symmetric.

Proof:
(a) (A^T)^T = A = A^T, so A^T is symmetric.
(b) (A + B)_ji = (A)_ji + (B)_ji = (A)_ij + (B)_ij = (A + B)_ij, so A + B is symmetric.
(c) Similar to (b).

Remark: Usually AB is not symmetric, even if A and B are symmetric. But if the matrices commute, i.e. AB = BA, then AB is a symmetric matrix.

Theorem 10: If A is an invertible symmetric matrix, then A^{-1} is also symmetric.
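Theorem 8(b) and (c) can be checked on a pair of upper triangular matrices; the concrete entries below mirror Example 6 but are otherwise illustrative.

```python
def is_upper_triangular(M):
    # every entry below the main diagonal must be zero
    n = len(M)
    return all(M[i][j] == 0 for i in range(n) for j in range(n) if i > j)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 5, 3], [0, -2, 2], [0, 0, 1]]   # invertible: diagonal 1, -2, 1
B = [[2, 1, 0], [0, 0, 1], [0, 0, 2]]    # singular: b_22 = 0

AB = matmul(A, B)
print(is_upper_triangular(AB))              # True, as Theorem 8(b) predicts
print(all(A[i][i] != 0 for i in range(3)))  # True  -> A invertible
print(all(B[i][i] != 0 for i in range(3)))  # False -> B singular
```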

Proof: Assume A is invertible and A = A^T. Then, using (A^T)^{-1} = (A^{-1})^T from a theorem before,

(A^{-1})^T = (A^T)^{-1} = A^{-1}

so A^{-1} is symmetric.

Remark: In many situations, for an m × n matrix A the matrices A^T A and AA^T are of interest. A^T A is an n × n matrix and AA^T is an m × m matrix. Both are symmetric, because

(A^T A)^T = A^T (A^T)^T = A^T A

and similarly for AA^T.

Example 8: Let

A = [1 5 -3; 5 -2 2]

Then

AA^T = [1 5 -3; 5 -2 2][1 5; 5 -2; -3 2] = [35 -11; -11 33]

and

A^T A = [1 5; 5 -2; -3 2][1 5 -3; 5 -2 2] = [26 -5 7; -5 29 -19; 7 -19 13]

Theorem 11: If A is invertible, then A^T A and AA^T are also invertible.

Proof: Since A is invertible, A^T is also invertible. Because a product of invertible matrices is invertible, we conclude that AA^T and A^T A must also be invertible.
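The symmetry of A^T A and AA^T, and the products of Example 8, can be verified with a few lines of Python:

```python
def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 5, -3], [5, -2, 2]]   # the 2 x 3 matrix of Example 8
AtA = matmul(transpose(A), A)  # 3 x 3
AAt = matmul(A, transpose(A))  # 2 x 2

print(AtA == transpose(AtA))   # True: A^T A is symmetric
print(AAt == transpose(AAt))   # True: A A^T is symmetric
print(AAt)                     # [[35, -11], [-11, 33]]
```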