Lecture 23: The Inverse of a Matrix


Lecture 23: The Inverse of a Matrix
Winfried Just, Ohio University
March 9, 2016

The definition of the matrix inverse

Let A be an n x n square matrix. The inverse of A is an n x n matrix A^{-1} such that A^{-1} A = I_n.

Theorem. The inverse A^{-1}, if it exists, is unique and satisfies A A^{-1} = I_n.

Note that the inverse of a matrix is the analogue of the reciprocal a^{-1} = 1/a of a number, and that the reciprocal 1/a of a number exists if, and only if, a ≠ 0.

An example of an inverse matrix

Let A = [ 1  2 ]   and let B = [ -2     1   ]
        [ 3  4 ]               [  1.5  -0.5 ]

Then

AB = [ 1  2 ] [ -2     1   ] = [ 1  0 ] = I_2
     [ 3  4 ] [  1.5  -0.5 ]   [ 0  1 ]

When we switch the order of multiplication:

BA = [ -2     1   ] [ 1  2 ] = [ 1  0 ] = I_2
     [  1.5  -0.5 ] [ 3  4 ]   [ 0  1 ]

We can see that B = A^{-1} and A = B^{-1}. We can also see that verifying whether a given pair of matrices are inverses of each other is easy: you just need to multiply them and check whether the product is an identity matrix I.
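The verification described above is mechanical, so it is easy to sketch in code. Assuming NumPy is available, the check amounts to two matrix products and a comparison with the identity:

```python
import numpy as np

# The matrices A and B from the slide.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[-2.0,  1.0],
              [ 1.5, -0.5]])

# Verifying an inverse only requires multiplication in both orders.
print(np.allclose(A @ B, np.eye(2)))  # True
print(np.allclose(B @ A, np.eye(2)))  # True
```

Here `np.allclose` is used instead of exact equality to tolerate floating-point rounding.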

Examples of non-invertible matrices

Example 1: Let A = [ 1  0 ]  and consider C = [ c11  c12 ]
                   [ 0  0 ]                   [ c21  c22 ]

Then

AC = [ 1  0 ] [ c11  c12 ] = [ c11  c12 ] ≠ [ 1  0 ] = I_2
     [ 0  0 ] [ c21  c22 ]   [ 0    0   ]   [ 0  1 ]

No matrix C can be the inverse matrix A^{-1}.

Example 2: Let B = [ 1  2 ]  and consider C = [ c11  c12 ]
                   [ 3  6 ]                   [ c21  c22 ]

BC = [ 1  2 ] [ c11  c12 ] = [ c11 + 2c21    c12 + 2c22  ] ≠ [ 1  0 ]
     [ 3  6 ] [ c21  c22 ]   [ 3c11 + 6c21  3c12 + 6c22 ]   [ 0  1 ]

No matrix C can be the inverse matrix B^{-1}, since the second row of BC is always 3 times the first.

A square matrix A without an inverse A^{-1} is called non-invertible or singular. If A^{-1} exists, then A is invertible or non-singular.

Neither of the singular matrices A, B of our examples was an (obviously singular) zero matrix O. But note that neither of them had full rank.
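The rank observation on this slide can be checked numerically. A minimal sketch, assuming NumPy: both singular matrices above have rank 1 rather than 2, and asking NumPy for an inverse fails outright.

```python
import numpy as np

# The two singular matrices from the slide.
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[1.0, 2.0],
              [3.0, 6.0]])

# Neither has full rank, so neither is invertible.
print(np.linalg.matrix_rank(A))  # 1
print(np.linalg.matrix_rank(B))  # 1

# Asking for an inverse of an exactly singular matrix raises LinAlgError.
try:
    np.linalg.inv(B)
except np.linalg.LinAlgError as err:
    print("not invertible:", err)
```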

Linear equations for numbers and matrices

Numbers: Consider a linear equation ax = b.
If a = 1, then x = b is the unique solution.
If a ≠ 0, then a^{-1}ax = 1x = a^{-1}b, so that x = a^{-1}b = b/a is the unique solution.
If a = 0, then there may be infinitely many solutions or none.

Matrices: Consider a linear equation Ax = b, where A is square.
If A = I, then Ix = x = b is the unique solution.
If A is invertible, then A^{-1}Ax = Ix = x = A^{-1}b, so that x = A^{-1}b is the unique solution.
If A is non-invertible, then the system is either underdetermined or inconsistent.

Solving a system with the help of A^{-1}: An example

Consider the system Ax = b of linear equations

x1 + 2x2 = 5
3x1 + 4x2 = 11

The coefficient matrix and its inverse are

A = [ 1  2 ]     A^{-1} = [ -2     1   ]
    [ 3  4 ]              [  1.5  -0.5 ]

The unique solution can be obtained as

x = [ x1 ] = A^{-1} b = [ -2     1   ] [ 5  ] = [ 1 ]
    [ x2 ]              [  1.5  -0.5 ] [ 11 ]   [ 2 ]
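The multiplication x = A^{-1} b on this slide can be reproduced directly. A short sketch, assuming NumPy; note that in numerical practice `np.linalg.solve` is preferred over forming the inverse explicitly:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 11.0])

# The inverse computed on the earlier slide.
A_inv = np.array([[-2.0,  1.0],
                  [ 1.5, -0.5]])

x = A_inv @ b      # x = A^{-1} b
print(x)           # [1. 2.]

# Same answer without ever forming A^{-1}.
print(np.allclose(np.linalg.solve(A, b), x))  # True
```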

More on systems with square coefficient matrices

Let A be an n x n square matrix. The following statements are equivalent, that is, express the same property of A:

- r(A) = n (that is, A has full rank).
- The column vectors of A form a linearly independent set.
- The row vectors of A form a linearly independent set.
- Every b in R^n is a linear combination of the columns of A.
- T_A maps R^n onto R^n.
- Every system Ax = b is consistent.
- The system Ax = 0 has exactly one solution.
- Every system Ax = b has exactly one solution.
- The linear transformation T_A is a one-to-one map.
- A is invertible, that is, A^{-1} exists.
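Several of these equivalent conditions can be observed numerically on the invertible matrix from the earlier example. A small sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
n = A.shape[0]

# Full rank <=> invertible <=> det(A) != 0.
print(np.linalg.matrix_rank(A) == n)  # True
print(np.linalg.det(A) != 0)          # True (det = -2)

# A x = 0 has exactly one solution, namely x = 0.
x = np.linalg.solve(A, np.zeros(n))
print(np.allclose(x, 0))              # True
```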

Inverse matrices and linear transformations

Consider transformations (functions) T: R^n -> R^n. The identity transformation T_I maps every vector to itself: T_I(x) = Ix = x. It does nothing.

The inverse of a transformation T: R^n -> R^n is a transformation T^{-1}: R^n -> R^n that undoes the action of T, that is, such that

T^{-1}(T(x)) = (T^{-1} ∘ T)(x) = x = (T ∘ T^{-1})(x) = T(T^{-1}(x)).

If T, T^{-1} are linear transformations of the form T = T_A, T^{-1} = T_B for some n x n matrices A, B, then we must have:

T^{-1} ∘ T = T_I = T_B ∘ T_A = T_{BA}   and   T ∘ T^{-1} = T_I = T_A ∘ T_B = T_{AB}.

Thus BA = I = AB, and we must have B = A^{-1} and A = B^{-1}. The inverse matrix A^{-1} defines the inverse transformation T_{A^{-1}} = (T_A)^{-1}. It undoes the action of A on any vector (or matrix).

Inverses of rotations of R^2

Let T_α: R^2 -> R^2 be a rotation by an angle α. It is of the form T_α = T_{R_α}, where

R_α = [ cos α  -sin α ]
      [ sin α   cos α ]

To undo this transformation, we need to rotate by an angle -α, that is, (T_α)^{-1} = T_{-α} = T_{R_{-α}}.

R_{-α} R_α = [ cos(-α)  -sin(-α) ] [ cos α  -sin α ]
             [ sin(-α)   cos(-α) ] [ sin α   cos α ]

           = [ cos(α - α)  -sin(α - α) ] = [ cos 0  -sin 0 ] = [ 1  0 ] = I_2
             [ sin(α - α)   cos(α - α) ]   [ sin 0   cos 0 ]   [ 0  1 ]

We can see that (R_α)^{-1} = R_{-α}.
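The identity (R_α)^{-1} = R_{-α} is easy to test numerically for a concrete angle. A sketch assuming NumPy; the angle 0.7 is an arbitrary choice:

```python
import numpy as np

def rotation(alpha):
    """The 2x2 rotation matrix R_alpha."""
    return np.array([[np.cos(alpha), -np.sin(alpha)],
                     [np.sin(alpha),  np.cos(alpha)]])

alpha = 0.7  # arbitrary angle in radians
R = rotation(alpha)

# Rotating by -alpha undoes rotating by alpha.
print(np.allclose(rotation(-alpha) @ R, np.eye(2)))  # True

# For a rotation matrix the inverse is also just the transpose.
print(np.allclose(rotation(-alpha), R.T))            # True
```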

Stretching and compressing along coordinate axes

Consider the transformation T_A: R^2 -> R^2, where

A = [ 3  0   ]      T_A( [x] ) = [ 3x   ]
    [ 0  0.5 ]           [y]     [ 0.5y ]

This transformation corresponds to a threefold stretch in the horizontal (x-) direction and a twofold compression in the vertical (y-) direction. We can undo this transformation by a threefold compression in the x-direction and a twofold stretch in the y-direction:

[ 1/3  0 ] [ 3  0   ] = [ 1  0 ]      so      A^{-1} = [ 1/3  0 ]
[ 0    2 ] [ 0  0.5 ]   [ 0  1 ]                       [ 0    2 ]

Note that the matrix A in this example is a diagonal matrix.

More coordinates: Inverses of diagonal matrices

Consider a diagonal matrix of order n x n:

D = [ λ1  0   ...  0  ]
    [ 0   λ2  ...  0  ]
    [ ...             ]
    [ 0   0   ...  λn ]

If λi ≠ 0 for all i = 1, 2, ..., n, then

D^{-1} = [ 1/λ1  0     ...  0    ]
         [ 0     1/λ2  ...  0    ]
         [ ...                   ]
         [ 0     0     ...  1/λn ]

If λi = 0 for at least one i = 1, 2, ..., n, then r(D) < n and D^{-1} does not exist.
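Both halves of this rule are easy to demonstrate. A sketch assuming NumPy; the diagonal entries are arbitrary nonzero choices:

```python
import numpy as np

# Inverting a diagonal matrix: take reciprocals of the diagonal entries.
lam = np.array([3.0, 0.5, -2.0])   # arbitrary nonzero diagonal entries
D = np.diag(lam)
D_inv = np.diag(1.0 / lam)

print(np.allclose(D @ D_inv, np.eye(3)))  # True

# If any diagonal entry is 0, the rank drops below n: no inverse exists.
D_sing = np.diag([3.0, 0.0, -2.0])
print(np.linalg.matrix_rank(D_sing))      # 2
```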

Why does this work? By Homework 18, the product of any two n x n diagonal matrices is:

[ λ1  0  ...  0  ] [ κ1  0  ...  0  ]   [ λ1κ1  0     ...  0    ]
[ 0   λ2 ...  0  ] [ 0   κ2 ...  0  ] = [ 0     λ2κ2  ...  0    ]
[ ...            ] [ ...            ]   [ ...                   ]
[ 0   0  ...  λn ] [ 0   0  ...  κn ]   [ 0     0     ...  λnκn ]

In particular:

[ λ1  0  ...  0  ] [ 1/λ1  0     ...  0    ]   [ λ1/λ1  0      ...  0     ]
[ 0   λ2 ...  0  ] [ 0     1/λ2  ...  0    ] = [ 0      λ2/λ2  ...  0     ] = I_n
[ ...            ] [ ...                   ]   [ ...                      ]
[ 0   0  ...  λn ] [ 0     0     ...  1/λn ]   [ 0      0      ...  λn/λn ]

An example: Elementary row operation (E2)

Recall that performing elementary row operation (E2), "Multiply row i of A by λ ≠ 0", is the same as computing EA, where E is the identity matrix with its (i, i) entry replaced by λ:

E = [ 1 ... 0  0  0 ... 0 ]
    [ ...                 ]
    [ 0 ... 1  0  0 ... 0 ]
    [ 0 ... 0  λ  0 ... 0 ]   <- e_ii = λ
    [ 0 ... 0  0  1 ... 0 ]
    [ ...                 ]
    [ 0 ... 0  0  0 ... 1 ]

To undo this operation, we divide row i of A by λ, which is another instance of (E2). The inverse E^{-1} is given by replacing e_ii = λ with e_ii = 1/λ in E.
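The claim that replacing e_ii = λ with 1/λ inverts E can be checked for a concrete case. A sketch assuming NumPy; the size n = 4, row index i, and λ = 5 are arbitrary choices:

```python
import numpy as np

n, i, lam = 4, 1, 5.0   # multiply row i (0-based here) by lam != 0

E = np.eye(n)
E[i, i] = lam           # elementary matrix for (E2)

E_inv = np.eye(n)
E_inv[i, i] = 1.0 / lam  # undo: divide row i by lam

print(np.allclose(E_inv @ E, np.eye(n)))  # True
print(np.allclose(E @ E_inv, np.eye(n)))  # True
```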

How about elementary row operation (E1)?

Recall that performing elementary row operation (E1), "Exchange rows i and j of A", amounts to computing EA, where E was described in Lecture 10. For the special case n = 5 and rows i = 2, j = 5, E looks like this:

E = [ 1  0  0  0  0 ]
    [ 0  0  0  0  1 ]
    [ 0  0  1  0  0 ]
    [ 0  0  0  1  0 ]
    [ 0  1  0  0  0 ]

Homework 48: How would one undo elementary row operation (E1)? Form a conjecture about how E^{-1} is in general related to E for this operation and verify it for the special case shown above.

How about elementary row operation (E3)?

Recall that performing elementary row operation (E3), "Add λ(row i) to row j of A", amounts to computing EA, where E was constructed in Homework 21 for the special case n = 4, λ = 3, and rows i = 3, j = 4:

E = [ 1  0  0  0 ]
    [ 0  1  0  0 ]
    [ 0  0  1  0 ]
    [ 0  0  3  1 ]

Homework 49: (a) How would one undo elementary row operation (E3)? (b) Form a conjecture about how E^{-1} is in general related to E for this operation. Hint: Look up the solution for Homework 21 first. (c) Verify your conjecture for the special case shown above.

So far so good... We have seen that:

- It is easy to verify that a given pair of matrices are inverses of each other (multiply them and check whether the product is an identity matrix I).
- It can be relatively easy to find A^{-1} when A has an intuitive interpretation in terms of linear transformations of vectors or other matrices.

But how do we find A^{-1} in general? Even for a seemingly simple matrix like

A = [ 0.5  0.5  0   ]
    [ 0.5  0    0.5 ]
    [ 0    0.5  0.5 ]

this seems hard.

An observation

An n x n matrix A can have an inverse only if r(A) = n. Then Gaussian elimination produces a matrix with all diagonal elements equal to 1. For n = 3 this looks as follows:

[ a11  a12  a13 ]                              [ 1  ?  ? ]
[ a21  a22  a23 ]   -- Gaussian elimination -> [ 0  1  ? ]
[ a31  a32  a33 ]                              [ 0  0  1 ]

Now we can keep going and apply elementary row operation (E3) a few more times until we get I:

[ 1  ?  ? ]                                    [ 1  0  0 ]
[ 0  1  ? ]   -- more applications of (E3) ->  [ 0  1  0 ]
[ 0  0  1 ]                                    [ 0  0  1 ]

So what? How could this help?

A magic trick: Gauss-Jordan elimination

Let A be an n x n matrix. Form an n x 2n matrix C by dropping the internal brackets in [A, I_n] and replacing them with a vertical dividing line for visual clarity. For n = 3 we get:

[ a11  a12  a13 | 1  0  0 ]
[ a21  a22  a23 | 0  1  0 ]
[ a31  a32  a33 | 0  0  1 ]

Perform Gaussian elimination. If the first half of the resulting row-reduced matrix has a zero row, then r(A) < n and A is not invertible. Otherwise keep going and apply instances of (E3) until the first half turns into I_n. For n = 3 the result will look like:

[ 1  0  0 | b11  b12  b13 ]
[ 0  1  0 | b21  b22  b23 ]
[ 0  0  1 | b31  b32  b33 ]

Let's see what we get for the matrix B in the second half.
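The procedure on this slide can be sketched as a short program. This is a minimal illustration assuming NumPy, not the lecture's official algorithm: the function name `gauss_jordan_inverse` is our own, and it adds partial pivoting (choosing the largest available pivot), a detail the slide does not discuss but which keeps the arithmetic stable.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.hstack([A, np.eye(n)])          # the n x 2n matrix [A | I]
    for col in range(n):
        # Pick the row with the largest pivot candidate in this column.
        pivot = col + np.argmax(np.abs(C[col:, col]))
        if np.isclose(C[pivot, col], 0.0):
            raise ValueError("matrix is singular (rank < n)")
        C[[col, pivot]] = C[[pivot, col]]  # (E1) swap rows
        C[col] /= C[col, col]              # (E2) scale pivot row to 1
        for row in range(n):
            if row != col:                 # (E3) clear rest of the column
                C[row] -= C[row, col] * C[col]
    return C[:, n:]                        # right half is A^{-1}

A = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
print(gauss_jordan_inverse(A))
```

The result agrees with `np.linalg.inv(A)` up to rounding.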

Trying out the magic trick

Let A = [ 1  2 ].  Here we already know A^{-1} = [ -2     1   ]
        [ 3  4 ]                                 [  1.5  -0.5 ]

Form a 2 x 4 matrix C and do Gaussian elimination on it:

C = [ 1  2 | 1  0 ]  -- subtract 3(row 1) from row 2 ->  [ 1   2 |  1  0 ]
    [ 3  4 | 0  1 ]                                      [ 0  -2 | -3  1 ]

[ 1   2 |  1  0 ]  -- divide row 2 by -2 ->  [ 1  2 | 1     0   ]
[ 0  -2 | -3  1 ]                            [ 0  1 | 1.5  -0.5 ]

Apply (E3) one more time to turn the first half into I_2:

[ 1  2 | 1     0   ]  -- subtract 2(row 2) from row 1 ->  [ 1  0 | -2     1   ]
[ 0  1 | 1.5  -0.5 ]                                      [ 0  1 |  1.5  -0.5 ]

Magically, the matrix B in the right half is A^{-1}!

Trying the magic trick on another matrix

Let A = [ 0.5  0.5  0   ]   Here we don't know A^{-1}.
        [ 0.5  0    0.5 ]
        [ 0    0.5  0.5 ]

Form a 3 x 6 matrix C and do Gaussian elimination on it. Start by subtracting row 1 from row 2:

C = [ 0.5  0.5  0   | 1  0  0 ]      [ 0.5   0.5  0   |  1  0  0 ]
    [ 0.5  0    0.5 | 0  1  0 ]  ->  [ 0    -0.5  0.5 | -1  1  0 ]
    [ 0    0.5  0.5 | 0  0  1 ]      [ 0     0.5  0.5 |  0  0  1 ]

Next add row 2 to row 3:

[ 0.5   0.5  0   |  1  0  0 ]      [ 0.5   0.5  0   |  1  0  0 ]
[ 0    -0.5  0.5 | -1  1  0 ]  ->  [ 0    -0.5  0.5 | -1  1  0 ]
[ 0     0.5  0.5 |  0  0  1 ]      [ 0     0    1   | -1  1  1 ]

Trying the magic trick on another matrix, continued

Multiply row 1 by 2:

[ 0.5   0.5  0   |  1  0  0 ]      [ 1     1    0   |  2  0  0 ]
[ 0    -0.5  0.5 | -1  1  0 ]  ->  [ 0    -0.5  0.5 | -1  1  0 ]
[ 0     0    1   | -1  1  1 ]      [ 0     0    1   | -1  1  1 ]

Multiply row 2 by -2:

[ 1     1    0   |  2  0  0 ]      [ 1  1   0 |  2   0  0 ]
[ 0    -0.5  0.5 | -1  1  0 ]  ->  [ 0  1  -1 |  2  -2  0 ]
[ 0     0    1   | -1  1  1 ]      [ 0  0   1 | -1   1  1 ]

The first half is now in row-reduced form. We still need to get rid of its off-diagonal nonzero elements.

Trying the magic trick on another matrix, completed

Add row 3 to row 2:

[ 1  1   0 |  2   0  0 ]      [ 1  1  0 |  2   0  0 ]
[ 0  1  -1 |  2  -2  0 ]  ->  [ 0  1  0 |  1  -1  1 ]
[ 0  0   1 | -1   1  1 ]      [ 0  0  1 | -1   1  1 ]

Subtract row 2 from row 1:

[ 1  1  0 |  2   0  0 ]      [ 1  0  0 |  1   1  -1 ]
[ 0  1  0 |  1  -1  1 ]  ->  [ 0  1  0 |  1  -1   1 ]
[ 0  0  1 | -1   1  1 ]      [ 0  0  1 | -1   1   1 ]

Did the magic work?

Homework 50: Check whether the matrix B on the right is A^{-1}.
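One way to carry out the check asked for in Homework 50 is the multiplication test from the start of the lecture. A sketch assuming NumPy (do try the hand computation first):

```python
import numpy as np

A = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
# The matrix B that appeared in the right half after Gauss-Jordan elimination.
B = np.array([[ 1.0,  1.0, -1.0],
              [ 1.0, -1.0,  1.0],
              [-1.0,  1.0,  1.0]])

# Is the product the identity, in both orders?
print(np.allclose(A @ B, np.eye(3)))  # True
print(np.allclose(B @ A, np.eye(3)))  # True
```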