Inverse and Elementary Matrices

1. Inverse matrices

Recall the economy problem:

$x_1$ = total production of coal
$x_2$ = total production of steel
$x_3$ = total production of electricity

Based on an economic model, we have a system of equations for the $x_i$:

(*)
\[ \begin{aligned} x_1 - .2x_2 - .4x_3 &= 1 \\ -.1x_1 + .9x_2 - .2x_3 &= .7 \\ -.1x_1 - .2x_2 + .9x_3 &= 2.9 \end{aligned} \]

Recall: we defined
\[ X = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}; \qquad A = \begin{pmatrix} 1 & -.2 & -.4 \\ -.1 & .9 & -.2 \\ -.1 & -.2 & .9 \end{pmatrix} \ \text{[known]}; \qquad B = \begin{pmatrix} 1 \\ .7 \\ 2.9 \end{pmatrix} \ \text{[known]}. \]

Then the equation for the unknown matrix $X$ is
\[ AX = B \]
[recall that any system of equations can be written this way].
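For readers who want to experiment, here is a minimal NumPy sketch (an addition to these notes, not part of the original lecture) that solves the system directly; `np.linalg.solve` uses Gaussian elimination (an LU factorization) rather than forming $A^{-1}$ explicitly:

```python
import numpy as np

# Coefficient matrix and right-hand side of the economy problem (*)
A = np.array([[ 1.0, -0.2, -0.4],
              [-0.1,  0.9, -0.2],
              [-0.1, -0.2,  0.9]])
B = np.array([1.0, 0.7, 2.9])

X = np.linalg.solve(A, B)   # solves AX = B
print(X)                    # [3. 2. 4.] -- the three production levels
print(A @ X - B)            # residual, numerically zero
```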

How do we solve for $X$ in general?

Definition 1: An $n \times n$ matrix is called the identity matrix $I$ if it has 1's along the diagonal and zeroes elsewhere.

Example 1: The $4 \times 4$ identity matrix is
\[ I_{4 \times 4} = \begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{pmatrix} \]
[we usually write just $I$; the size of the matrix is understood from context].

NOTE: $I_{23} = 0$, $I_{32} = 0$, $I_{33} = 1$. That is, $I_{mn} = 0$ unless $m = n$.

For any 4-vector $\mathbf{b}$,
\[ I\mathbf{b} = \begin{pmatrix} 1&0&0&0 \\ 0&1&0&0 \\ 0&0&1&0 \\ 0&0&0&1 \end{pmatrix} \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{pmatrix} = \mathbf{b}, \]
and similarly for every size $n$ of the identity.

Proposition 1: If $I$ is the $n \times n$ identity matrix and $\mathbf{a}$ is an $n$-vector, then $I\mathbf{a} = \mathbf{a}$.

Proof: Seen directly by multiplication:
\[ I\mathbf{a} = \begin{pmatrix} 1&0&0&\cdots&0 \\ 0&1&0&\cdots&0 \\ 0&0&1&\cdots&0 \\ \vdots&\vdots&\vdots&\ddots&\vdots \\ 0&0&0&\cdots&1 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_n \end{pmatrix} = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \\ \vdots \\ a_n \end{pmatrix}. \]

Proposition 2: For any matrix $A$ for which the product makes sense, $I_n A = A$.

Proof: Writing $A$ in terms of its columns,
\[ IA = I[\mathbf{a}_1 \cdots \mathbf{a}_m] = [I\mathbf{a}_1 \cdots I\mathbf{a}_m] = [\mathbf{a}_1 \cdots \mathbf{a}_m] = A. \]

Definition 2: Given a square matrix $A$, a matrix $B$ is called the inverse of $A$ if $AB = I$ and $BA = I$. A matrix for which an inverse exists is called invertible.

Example 2:
\[ A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}; \qquad A^{-1} = \frac{1}{3} \begin{pmatrix} -1 & 2 \\ 2 & -1 \end{pmatrix}. \]

Theorem 2: If $A$ has an inverse matrix $B$, then $B$ is unique (i.e., there is no other inverse matrix).

Proof: If $B$ and $C$ are both inverses of $A$, then
\[ B = BI = B(AC) = (BA)C = IC = C, \]
so that any two inverses are the same, i.e., there is only one inverse.

Theorem: Given square matrices $A$ and $B$, if $BA = I$, then $AB = I$; i.e., $B$ is automatically the inverse of $A$. (Proved later in this lecture, after the invertible matrix theorem.)
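As a quick numerical sanity check of Definition 2 applied to Example 2 (again an addition to the notes):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
A_inv = (1/3) * np.array([[-1.0,  2.0],
                          [ 2.0, -1.0]])

# Definition 2 requires both products to equal the identity
print(np.allclose(A @ A_inv, np.eye(2)))   # True
print(np.allclose(A_inv @ A, np.eye(2)))   # True
```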

Now let's find the inverse of a general $2 \times 2$ matrix
\[ A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}. \]
Find $X = \begin{pmatrix} x & z \\ y & w \end{pmatrix}$ such that
\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x & z \\ y & w \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \]
We get two systems:
\[ \begin{aligned} ax + by &= 1 \\ cx + dy &= 0 \end{aligned} \qquad\qquad \begin{aligned} az + bw &= 0 \\ cz + dw &= 1. \end{aligned} \]

The augmented matrix for the first pair is
\[ \left(\begin{array}{cc|c} a & b & 1 \\ c & d & 0 \end{array}\right) \]
and for the second pair
\[ \left(\begin{array}{cc|c} a & b & 0 \\ c & d & 1 \end{array}\right). \]
Since both systems have the same coefficient matrix, we can solve them in parallel:
\[ \left(\begin{array}{cc|cc} a & b & 1 & 0 \\ c & d & 0 & 1 \end{array}\right). \]

Use row reduction (assuming $a \neq 0$):
\[ \left(\begin{array}{cc|cc} a & b & 1 & 0 \\ c & d & 0 & 1 \end{array}\right) \rightarrow \left(\begin{array}{cc|cc} 1 & b/a & 1/a & 0 \\ c & d & 0 & 1 \end{array}\right) \rightarrow \left(\begin{array}{cc|cc} 1 & b/a & 1/a & 0 \\ 0 & (ad-bc)/a & -c/a & 1 \end{array}\right) \]
\[ \rightarrow \left(\begin{array}{cc|cc} 1 & b/a & 1/a & 0 \\ 0 & 1 & -c/(ad-bc) & a/(ad-bc) \end{array}\right) \rightarrow \left(\begin{array}{cc|cc} 1 & 0 & d/(ad-bc) & -b/(ad-bc) \\ 0 & 1 & -c/(ad-bc) & a/(ad-bc) \end{array}\right). \]

But the entries in the first column on the right should be the solution for $x$ and $y$:
\[ \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} d/(ad-bc) \\ -c/(ad-bc) \end{pmatrix}, \]
since the solution for $x$ and $y$ is obtained by manipulating the first column of the right-hand side. Similarly, the second column on the right gives
\[ \begin{pmatrix} z \\ w \end{pmatrix} = \begin{pmatrix} -b/(ad-bc) \\ a/(ad-bc) \end{pmatrix}. \]

So the inverse is
\[ A^{-1} = B = \begin{pmatrix} x & z \\ y & w \end{pmatrix} = \begin{pmatrix} d/(ad-bc) & -b/(ad-bc) \\ -c/(ad-bc) & a/(ad-bc) \end{pmatrix} = \frac{1}{ad-bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}. \]

Moral: to invert $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, row reduce
\[ \left(\begin{array}{cc|cc} a & b & 1 & 0 \\ c & d & 0 & 1 \end{array}\right) \rightarrow \left(\begin{array}{cc|cc} 1 & 0 & x & z \\ 0 & 1 & y & w \end{array}\right) = [\, I \mid A^{-1} \,]; \]
that is, after row reduction the identity appears in the left block, and the desired inverse matrix appears in the right block. By construction $AX = I$; direct multiplication confirms this:
\[ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \cdot \frac{1}{ad-bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \]
and one can check $XA = I$ in the same way.

Thus we have a general formula for $2 \times 2$ matrices. Note that the row-reduction procedure itself is a general method, for matrices of any size.

Definition 3: A matrix which has an inverse is called nonsingular.
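The $2 \times 2$ formula translates directly into code. A small sketch (an addition; the helper name `inverse_2x2` is invented here):

```python
import numpy as np

def inverse_2x2(A):
    """Invert a 2x2 matrix via A^{-1} = 1/(ad-bc) * [[d, -b], [-c, a]]."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if np.isclose(det, 0.0):
        raise ValueError("matrix is singular: ad - bc = 0")
    return (1 / det) * np.array([[ d, -b],
                                 [-c,  a]])

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
print(inverse_2x2(A))        # matches Example 2: (1/3) * [[-1, 2], [2, -1]]
print(A @ inverse_2x2(A))    # the identity
```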

Theorem 3: If $A$ and $B$ are nonsingular, then $AB$ is also nonsingular, and
\[ (AB)^{-1} = B^{-1}A^{-1}. \]

Proof: Note that
\[ (B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I, \]
so that $(AB)^{-1} = B^{-1}A^{-1}$. The product in the other order also gives the identity, as can be checked the same way.

[Note: this generalizes to larger products of matrices in the expected way; see Corollary 1.1. You will do the proof in the exercises.]

[There are some other theorems along the same lines in the book; the proofs are simple.]
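A quick numerical illustration of Theorem 3 (an added sketch; the matrices are random, which makes them invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)   # note the reversed order
print(np.allclose(lhs, rhs))                # True
```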

2. Conclusion of the original matrix problem

Recall that we had

(*)
\[ \begin{aligned} x_1 - .2x_2 - .4x_3 &= 1 \\ -.1x_1 + .9x_2 - .2x_3 &= .7 \\ -.1x_1 - .2x_2 + .9x_3 &= 2.9 \end{aligned} \]

and showed that, with $X$, $A$, and $B$ defined as before, finding $X$ means solving $AX = B$ for the unknown matrix $X$. How do we do that? Suppose the matrix $A$ has an inverse matrix $A^{-1}$. Then
\[ AX = B \implies (A^{-1}A)X = A^{-1}B \implies X = A^{-1}B. \]
Thus we would have the unknown vector $X$ as a product of two known matrices.

How do we find $A^{-1}$ for a $3 \times 3$ matrix? We'll find out later.

Example ($2 \times 2$; a previous example revisited): Consider a metro area with a constant population of 1 million, consisting of the city and the suburbs; we analyze the changing urban and suburban populations. Let $C_1$ and $S_1$ denote the two populations in the first year of our study, and in general let $C_n$ and $S_n$ denote the populations in the $n$-th year. Assume that each year 15% of the people in the city move to the suburbs, and 10% of the people in the suburbs move to the city. Thus next year's city population will be .85 times this year's city population plus .10 times this year's suburban population, and similarly for the suburbs:
\[ C_{n+1} = .85\,C_n + .10\,S_n \]
\[ S_{n+1} = .15\,C_n + .90\,S_n. \]

A) If the populations in year 2 were .5 and .5 million, what were they in year 1?

\[ .5 = .85\,C_1 + .10\,S_1 \]
\[ .5 = .15\,C_1 + .90\,S_1 \]

This is a system of equations for $C_1$ and $S_1$. Write it in matrix form:
\[ \underbrace{\begin{pmatrix} .85 & .10 \\ .15 & .90 \end{pmatrix}}_{A} \underbrace{\begin{pmatrix} C_1 \\ S_1 \end{pmatrix}}_{X} = \underbrace{\begin{pmatrix} .5 \\ .5 \end{pmatrix}}_{B}. \]

By the $2 \times 2$ formula,
\[ A^{-1} = \frac{1}{(.85)(.90) - (.10)(.15)} \begin{pmatrix} .90 & -.10 \\ -.15 & .85 \end{pmatrix} = \frac{1}{.75} \begin{pmatrix} .90 & -.10 \\ -.15 & .85 \end{pmatrix} = \begin{pmatrix} 1.2 & -.1333 \\ -.2 & 1.1333 \end{pmatrix}. \]

Thus
\[ X = A^{-1}B = \begin{pmatrix} 1.2 & -.1333 \\ -.2 & 1.1333 \end{pmatrix} \begin{pmatrix} .5 \\ .5 \end{pmatrix} = \begin{pmatrix} .5334 \\ .4667 \end{pmatrix}, \]
which gives the original populations (in millions) of the city and the suburbs.
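The same computation in NumPy (an added check of the hand calculation above):

```python
import numpy as np

A = np.array([[0.85, 0.10],
              [0.15, 0.90]])
B = np.array([0.5, 0.5])

# X = A^{-1} B recovers year-1 populations from year-2 populations
X = np.linalg.inv(A) @ B
print(X)   # approximately [0.5333, 0.4667] (millions): city, suburbs
```

The small difference from the .5334 above comes from rounding the entries of $A^{-1}$ in the hand computation.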

3. Elementary Matrices

Definition: An elementary matrix is a matrix obtained from the identity by a single elementary row operation (ERO).

Ex 1: Elementary matrix of type I [interchange two rows]:
\[ \begin{pmatrix} 0&0&1 \\ 0&1&0 \\ 1&0&0 \end{pmatrix} \]
Elementary matrix of type II [multiply a row by a nonzero constant]:
\[ \begin{pmatrix} 1&0&0 \\ 0&3&0 \\ 0&0&1 \end{pmatrix} \]
Elementary matrix of type III [add a multiple of one row to another]:
\[ \begin{pmatrix} 1&0&0 \\ 0&1&0 \\ 3&0&1 \end{pmatrix} \]

4. Some theorems about elementary matrices

We will now prove some statements about elementary matrices (most of which will be proved here; we will say so when a proof is omitted).

Theorem 1: Let $A$ be a matrix, and let $B$ be the result of applying an ERO to $A$. Let $E$ be the elementary matrix obtained from $I$ by performing the same ERO. Then $B = EA$.

The proof is an exercise.

Example 1: Start with
\[ I = \begin{pmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{pmatrix} \]
and add 3(row 1) to row 3 to get the elementary matrix
\[ E = \begin{pmatrix} 1&0&0 \\ 0&1&0 \\ 3&0&1 \end{pmatrix}. \]
Now note that multiplying any matrix $A$ by $E$ performs the same operation on $A$:
\[ EA = \begin{pmatrix} 1&0&0 \\ 0&1&0 \\ 3&0&1 \end{pmatrix} \begin{pmatrix} 2&1&0 \\ 1&1&2 \\ 2&1&3 \end{pmatrix} = \begin{pmatrix} 2&1&0 \\ 1&1&2 \\ 8&4&3 \end{pmatrix}. \]
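Example 1 is easy to reproduce numerically (an added sketch; the helper name `elementary_row_add` is invented here):

```python
import numpy as np

def elementary_row_add(n, src, dst, mult):
    """Type III elementary matrix: add mult * (row src) to (row dst)."""
    E = np.eye(n)
    E[dst, src] = mult
    return E

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 1.0, 2.0],
              [2.0, 1.0, 3.0]])

E = elementary_row_add(3, src=0, dst=2, mult=3.0)
print(E @ A)   # last row becomes 3*(row 1) + (row 3) = [8, 4, 3]
```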

Thus we conclude: if we have a matrix $A$ and apply a series of ERO's to it,
\[ A \rightarrow A_1 \rightarrow A_2 \rightarrow A_3 \rightarrow \cdots \rightarrow A_n, \]
then each ERO can be effected by multiplying the previous matrix by an elementary matrix:
\[ A_1 = E_1 A \]
\[ A_2 = E_2 A_1 = E_2 E_1 A \]
\[ A_3 = E_3 A_2 = E_3 E_2 E_1 A \]
\[ \vdots \]
\[ A_n = E_n E_{n-1} \cdots E_2 E_1 A. \tag{**} \]

Conclusion 1: If $A_n$ is the result of applying a series of row operations to $A$, then $A_n$ can be expressed in terms of $A$ as in (**) above.

Note that every elementary row operation can be reversed by an elementary row operation of the same type. Since ERO's are equivalent to multiplication by elementary matrices, we have the parallel statement for elementary matrices:

Theorem 2: Every elementary matrix has an inverse which is an elementary matrix of the same type.

Proof: See the book.
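A short check of formula (**) and of Theorem 2 (an added sketch; the two EROs below are arbitrary choices):

```python
import numpy as np

# E1: type II, multiply row 2 by 3;  E2: type III, add 2*(row 1) to row 3
E1 = np.eye(3); E1[1, 1] = 3.0
E2 = np.eye(3); E2[2, 0] = 2.0

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 1.0, 2.0],
              [2.0, 1.0, 3.0]])

# (**): applying the EROs one after another equals multiplying by E2 E1
A1 = E1 @ A
A2 = E2 @ A1
print(np.allclose(A2, (E2 @ E1) @ A))   # True

# Theorem 2: the inverses are elementary of the same type
print(np.linalg.inv(E1))   # multiplies row 2 by 1/3
print(np.linalg.inv(E2))   # adds -2*(row 1) to row 3
```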

5. More facts about matrices

Assume henceforth that $A$ is a square $n \times n$ matrix. Suppose we have the homogeneous system
\[ AX = \mathbf{0}, \qquad A = [a_{ij}], \quad X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}. \]
Assume that the only solution is $X = \mathbf{0}$ (i.e., the zero matrix for $X$). What can we conclude about $A$?

Reduce $A$ to reduced row echelon form:
\[ A \rightarrow B. \]
Then $BX = \mathbf{0}$ has the same solutions. But $B$ must have all nonzero rows, since if a row of $B$ is zero, then there are nontrivial solutions for $X$ (fewer equations than unknowns; see the results from last class). Therefore $B$ must be the identity (a square matrix in reduced echelon form with a pivot in every row is the identity).

Conversely, if $A$ is row equivalent to $I$, then $AX = \mathbf{0}$ is equivalent to $IX = \mathbf{0}$, so the only solution is $X = \mathbf{0}$.

We have just proved:

Conclusion 2: $A$ is row equivalent to $I$, the identity matrix, iff $AX = \mathbf{0}$ has no nontrivial solutions.

Example:
\[ A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix}. \]
The system $AX = \mathbf{0}$ reduces to
\[ x_1 + 2x_2 = 0 \]
\[ 0 = 0, \]
which has lots of solutions: $x_2$ arbitrary, $x_1 = -2x_2$. Thus
\[ X = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = x_2 \begin{pmatrix} -2 \\ 1 \end{pmatrix}, \qquad x_2 \text{ arbitrary} \]
(an infinite number of solutions). We conclude that nontrivial solutions exist, so $A$ is not row equivalent to $I$ [as can be checked directly].
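This reduction can be reproduced with SymPy's `rref` (an added illustration):

```python
from sympy import Matrix

A = Matrix([[1, 2],
            [2, 4]])

rref_A, pivots = A.rref()
print(rref_A)         # Matrix([[1, 2], [0, 0]]) -- not the identity
print(pivots)         # (0,) -- only one pivot column

# The null space is nontrivial, in line with Conclusion 2
print(A.nullspace())  # [Matrix([[-2], [1]])]
```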

Again we assume here that our matrices are square.

1. Nonsingular matrices are just products of elementary matrices

Notice that if a matrix $A$ is nonsingular (invertible), then
\[ AX = \mathbf{0} \implies A^{-1}AX = A^{-1}\mathbf{0} \implies IX = \mathbf{0} \implies X = \mathbf{0}, \]
so $AX = \mathbf{0}$ has no nontrivial solutions. Thus $A$ is row equivalent to $I$ (by Conclusion 2), and so there exist elementary matrices $E_1, \ldots, E_k$ such that
\[ E_k E_{k-1} \cdots E_2 E_1 A = I \]
\[ \implies A = (E_k E_{k-1} \cdots E_1)^{-1} = E_1^{-1} E_2^{-1} \cdots E_k^{-1}. \]
So $A$ is a product of elementary matrices (note that inverses of elementary matrices are also elementary).

Also note that if $A$ is a product of elementary matrices, then $A$ is nonsingular, since the product of nonsingular matrices is nonsingular. Thus:

Conclusion 3: $A$ is a product of elementary matrices iff $A$ is nonsingular.

Now notice that if $A$ is nonsingular, then
\[ A = E_1 E_2 \cdots E_k = E_1 E_2 \cdots E_k I = \text{a matrix row equivalent to } I. \]
We can reverse the argument: if $A$ is row equivalent to $I$, then
\[ A = E_1 E_2 \cdots E_k I, \]
so $A$ is nonsingular. Thus:

Conclusion 4: $A$ is row equivalent to $I$ iff $A$ is nonsingular.

Example 2:
\[ A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \xrightarrow{\;\frac{1}{2}R_1\;} \begin{pmatrix} 1 & 1/2 \\ 1 & 2 \end{pmatrix} \xrightarrow{\;-R_1 + R_2\;} \begin{pmatrix} 1 & 1/2 \\ 0 & 3/2 \end{pmatrix} \xrightarrow{\;\frac{2}{3}R_2\;} \begin{pmatrix} 1 & 1/2 \\ 0 & 1 \end{pmatrix} \xrightarrow{\;-\frac{1}{2}R_2 + R_1\;} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \]
Each step is reversible by an ERO of the same type ($2R_1$; $R_1 + R_2$; $\frac{3}{2}R_2$; $\frac{1}{2}R_2 + R_1$, respectively).

Thus $A$ is row equivalent to the identity, so by Conclusion 4, $A$ is nonsingular, and $A$ must therefore be a product of elementary matrices. Indeed, we can reverse the above process and go from $I$ back to $A$:

Thus
\[ A = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 3/2 \end{pmatrix} \begin{pmatrix} 1 & 1/2 \\ 0 & 1 \end{pmatrix} = E_1 E_2 E_3 E_4 I, \]
where $E_i$ undoes the $i$-th row operation above. Thus, since $A$ is nonsingular, it is indeed a product of elementary matrices.
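Verifying the factorization numerically (an added check):

```python
import numpy as np

E1 = np.array([[2.0, 0.0], [0.0, 1.0]])   # undoes (1/2)R1
E2 = np.array([[1.0, 0.0], [1.0, 1.0]])   # undoes -R1 + R2
E3 = np.array([[1.0, 0.0], [0.0, 1.5]])   # undoes (2/3)R2
E4 = np.array([[1.0, 0.5], [0.0, 1.0]])   # undoes -(1/2)R2 + R1

print(E1 @ E2 @ E3 @ E4)   # [[2, 1], [1, 2]], which is A
```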

2. Conclusions

We have shown that the following are equivalent (if you follow through the arguments):

1. $A$ is nonsingular.
2. $AX = \mathbf{0}$ has only the trivial solution.
3. $A$ is row equivalent to $I$.
4. $AX = B$ has a unique solution $X$ for every given column matrix $B$.
5. $A$ is a product of elementary matrices.

[Other equivalences are shown in section 2.2; these are the important ones for now.]

Proof: We showed that 2 and 3 are equivalent in Conclusion 2; 1 and 5 are equivalent by Conclusion 3; and 1 and 3 are equivalent by Conclusion 4. To show that 4 is equivalent to 1, note that if 1 holds, then
\[ AX = B \implies A^{-1}AX = A^{-1}B \implies X = A^{-1}B, \]
so $X$ exists and is unique, and 4 holds. Conversely, if 4 holds, then $AX = B$ has a unique solution for each $B$. Setting $B = \mathbf{0}$, we get that $AX = \mathbf{0}$ has only the trivial solution, which shows that $A$ is nonsingular via condition 2 above.

We thus have 2 ⇔ 3 ⇔ 1 ⇔ 5, and 1 ⇔ 4. This shows that all the statements are equivalent.

3. A useful Corollary

Corollary: Let $A$ be an $n \times n$ matrix. If the square matrix $B$ satisfies
\[ BA = I, \tag{1} \]
then automatically also
\[ AB = I, \tag{2} \]
i.e., $B = A^{-1}$. Similarly, (2) also implies (1).

Proof: Suppose (1) holds. Then
\[ AX = \mathbf{0} \implies BAX = \mathbf{0} \implies X = \mathbf{0}, \]
so $AX = \mathbf{0}$ has only the trivial solution, and hence $A^{-1}$ exists (by condition 2 above). Then from (1),
\[ BAA^{-1} = IA^{-1} \implies B = A^{-1}. \]
Now suppose (2) holds. Then, reversing the roles of $A$ and $B$, it follows that (1) holds.
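The corollary can be seen in action numerically (an added sketch): a one-sided inverse found by solving $BA = I$, equivalently $A^{\mathsf{T}}B^{\mathsf{T}} = I$, automatically works on the other side.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

# Find B with BA = I by solving A^T B^T = I for B^T
B = np.linalg.solve(A.T, np.eye(3)).T

print(np.allclose(B @ A, np.eye(3)))   # True by construction
print(np.allclose(A @ B, np.eye(3)))   # True automatically, per the corollary
```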

4. Inverting a matrix

If $A$ is invertible, then $A$ is row equivalent to $I$, so
\[ E_k E_{k-1} \cdots E_1 A = I, \]
where the $E_i$ are the elementary matrices of the ERO's which produce $I$. Multiplying both sides on the right by $A^{-1}$ gives
\[ A^{-1} = E_k E_{k-1} \cdots E_1 I; \]
i.e., to get $A^{-1}$ we perform on $I$ the same elementary row operations that turn $A$ into the identity.

Example 1: Find the inverse of
\[ A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \]
[note we already know how to invert this in another way].

Inversion process:
\[ \left(\begin{array}{cc|cc} 2 & 1 & 1 & 0 \\ 1 & 2 & 0 & 1 \end{array}\right) \rightarrow \left(\begin{array}{cc|cc} 1 & 1/2 & 1/2 & 0 \\ 1 & 2 & 0 & 1 \end{array}\right) \rightarrow \left(\begin{array}{cc|cc} 1 & 1/2 & 1/2 & 0 \\ 0 & 3/2 & -1/2 & 1 \end{array}\right) \]
\[ \rightarrow \left(\begin{array}{cc|cc} 1 & 1/2 & 1/2 & 0 \\ 0 & 1 & -1/3 & 2/3 \end{array}\right) \rightarrow \left(\begin{array}{cc|cc} 1 & 0 & 2/3 & -1/3 \\ 0 & 1 & -1/3 & 2/3 \end{array}\right). \]

So
\[ A^{-1} = \begin{pmatrix} 2/3 & -1/3 \\ -1/3 & 2/3 \end{pmatrix}. \]

[Note that the material below completes our coverage of material related to the inverse matrix theorem.]
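The $[\,A \mid I\,] \rightarrow [\,I \mid A^{-1}\,]$ procedure is straightforward to implement as Gauss-Jordan elimination. A compact sketch (an addition; the name `gauss_jordan_inverse` is invented here, and partial pivoting is included for numerical stability, which the hand computation above did not need):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row reducing the augmented matrix [A | I] to [I | A^{-1}]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])      # the augmented matrix [A | I]
    for col in range(n):
        # partial pivoting: bring up the row with the largest entry in this column
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]            # type I: interchange rows
        M[col] /= M[col, col]                        # type II: scale the pivot row
        for row in range(n):                         # type III: clear the column
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                                  # the right block is now A^{-1}

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(gauss_jordan_inverse(A))   # [[ 2/3, -1/3], [-1/3, 2/3]]
```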

5. Inverses and linear transformations

Recall that if $A$ is a nonsingular square matrix and $\mathbf{x}$ is a vector, then we can define the function
\[ T(\mathbf{x}) = A\mathbf{x}. \]
Recall the properties
\[ T(\mathbf{x} + \mathbf{y}) = A(\mathbf{x} + \mathbf{y}) = A\mathbf{x} + A\mathbf{y} = T(\mathbf{x}) + T(\mathbf{y}), \]
\[ T(c\mathbf{x}) = A(c\mathbf{x}) = cA\mathbf{x} = cT(\mathbf{x}). \]
Thus $T$ is a linear transformation. [Recall that $T$ can be viewed as a rule taking the vector $\mathbf{x}$ to $A\mathbf{x}$.]

We define the inverse $T^{-1}$ of $T$ as the operation which takes $T\mathbf{x}$ back to $\mathbf{x}$:
\[ T^{-1}(T(\mathbf{x})) = \mathbf{x} \]
[note this is possible only if $T$ is one-to-one and onto].

[Figure: $A$ viewed as a linear transformation, $A\mathbf{x} = T(\mathbf{x})$; $T^{-1}$ inverts the operation.]

Note that
\[ A^{-1}(T\mathbf{x}) = A^{-1}(A\mathbf{x}) = \mathbf{x}. \]
Thus it must be that
\[ T^{-1}(T\mathbf{x}) = A^{-1}(T\mathbf{x}), \]
so that $T^{-1}$ corresponds to multiplication by $A^{-1}$. [This is stated as a theorem in the text.]
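As a final added illustration, the transformation and its inverse compose to the identity map:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
A_inv = np.linalg.inv(A)

def T(x):
    return A @ x        # the linear transformation x -> Ax

def T_inv(y):
    return A_inv @ y    # multiplication by A^{-1} undoes T

x = np.array([1.0, -4.0])
print(T_inv(T(x)))      # recovers [ 1. -4.]
```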