Lecture 23: The Inverse of a Matrix
Winfried Just, Ohio University
March 9, 2016
The definition of the matrix inverse

Let A be an n × n square matrix. The inverse of A is an n × n matrix A^{-1} such that A^{-1} A = I_n.

Theorem. The inverse A^{-1}, if it exists, is unique and satisfies A A^{-1} = I_n.

Note that the inverse of a matrix is the analogue of the reciprocal a^{-1} = 1/a of a number. The reciprocal 1/a of a number exists if, and only if, a ≠ 0.
An example of an inverse matrix

Let A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} and let B = \begin{pmatrix} -2 & 1 \\ 1.5 & -0.5 \end{pmatrix}.

AB = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \begin{pmatrix} -2 & 1 \\ 1.5 & -0.5 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I_2

When we switch the order of multiplication:

BA = \begin{pmatrix} -2 & 1 \\ 1.5 & -0.5 \end{pmatrix} \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I_2

We can see that B = A^{-1} and A = B^{-1}. We can also see that: Verifying whether a given pair of matrices are inverses of each other is easy. You just need to multiply them and check whether the product is an identity matrix I.
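The verification described on this slide is easy to carry out numerically. A minimal sketch in NumPy, using the matrices A and B of this example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[-2.0, 1.0],
              [1.5, -0.5]])

# To verify that B = A^{-1}, multiply in both orders and
# check that each product is the identity matrix I_2.
print(np.allclose(A @ B, np.eye(2)))  # True
print(np.allclose(B @ A, np.eye(2)))  # True
```

np.allclose is used rather than exact equality to allow for floating-point round-off.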
Examples of non-invertible matrices

Example 1: Let A = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} and consider C = \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix}.

Then AC = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} = \begin{pmatrix} c_{11} & c_{12} \\ 0 & 0 \end{pmatrix} ≠ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I_2.

No matrix C can be the inverse matrix A^{-1}.

Example 2: Let B = \begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix} and consider C = \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix}.

BC = \begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{pmatrix} = \begin{pmatrix} c_{11} + 2c_{21} & c_{12} + 2c_{22} \\ 3c_{11} + 6c_{21} & 3c_{12} + 6c_{22} \end{pmatrix} ≠ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},

since the second row of the product is always 3 times the first. No matrix C can be the inverse matrix B^{-1}.

A square matrix A without an inverse A^{-1} is called non-invertible or singular. If A^{-1} exists, then A is invertible or non-singular.

Neither of the singular matrices A, B of our examples was an (obviously singular) zero matrix O. But note that neither of them had full rank.
Linear equations for numbers and matrices

Numbers: Consider a linear equation ax = b.
- If a = 1, then x = b is the unique solution.
- If a ≠ 0, then a^{-1}ax = 1x = a^{-1}b, so that x = a^{-1}b = b/a is the unique solution.
- If a = 0, then there may be infinitely many solutions or none.

Matrices: Consider a linear equation A\vec{x} = \vec{b}, where A is square.
- If A = I, then I\vec{x} = \vec{x} = \vec{b} is the unique solution.
- If A is invertible, then A^{-1}A\vec{x} = I\vec{x} = \vec{x} = A^{-1}\vec{b}, so that \vec{x} = A^{-1}\vec{b} is the unique solution.
- If A is non-invertible, then the system is either underdetermined or inconsistent.
Solving a system with the help of A^{-1}: An example

Consider the system A\vec{x} = \vec{b} of linear equations

x_1 + 2x_2 = 5
3x_1 + 4x_2 = 11

The coefficient matrix and its inverse are

A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}, \quad A^{-1} = \begin{pmatrix} -2 & 1 \\ 1.5 & -0.5 \end{pmatrix}

The unique solution can be obtained as

\vec{x} = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = A^{-1}\vec{b} = \begin{pmatrix} -2 & 1 \\ 1.5 & -0.5 \end{pmatrix} \begin{pmatrix} 5 \\ 11 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}
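The same computation can be sketched in NumPy. The coefficient matrix is the one from this example; the right-hand side b = (5, 11) is an assumption of this sketch, reconstructed from the garbled slide:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 11.0])  # assumed right-hand side for illustration

x = np.linalg.inv(A) @ b   # x = A^{-1} b
print(x)                   # [1. 2.]

# In numerical practice, np.linalg.solve is preferred over
# explicitly forming the inverse:
print(np.linalg.solve(A, b))  # [1. 2.]
```

Solving directly avoids the extra round-off and cost of computing A^{-1} when only one solution vector is needed.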
More on systems with square coefficient matrices

Let A be an n × n square matrix. The following statements are equivalent, that is, they express the same property of A:
- r(A) = n (that is, A has full rank).
- The column vectors of A form a linearly independent set.
- The row vectors of A form a linearly independent set.
- Every \vec{b} in R^n is a linear combination of the columns of A.
- T_A maps R^n onto R^n.
- Every system A\vec{x} = \vec{b} is consistent.
- The system A\vec{x} = \vec{0} has exactly one solution.
- Every system A\vec{x} = \vec{b} has exactly one solution.
- The linear transformation T_A is a one-to-one map.
- A is invertible, that is, A^{-1} exists.
Inverse matrices and linear transformations

Consider transformations (functions) T : R^n → R^n. The identity transformation T_I maps every vector to itself: T_I(\vec{x}) = I\vec{x} = \vec{x}. It does nothing.

The inverse of a transformation T : R^n → R^n is a transformation T^{-1} : R^n → R^n that undoes the action of T, that is, such that

T^{-1}(T(\vec{x})) = (T^{-1} ∘ T)(\vec{x}) = \vec{x} = (T ∘ T^{-1})(\vec{x}) = T(T^{-1}(\vec{x})).

If T, T^{-1} are linear transformations of the form T = T_A, T^{-1} = T_B for some n × n matrices A, B, then we must have:

T^{-1} ∘ T = T_I = T_B ∘ T_A = T_{BA} and T ∘ T^{-1} = T_I = T_A ∘ T_B = T_{AB}.

Thus BA = I = AB, and we must have B = A^{-1} and A = B^{-1}. The inverse matrix A^{-1} defines the inverse transformation T_{A^{-1}} = T_A^{-1}. It undoes the action of A on any vector (or matrix).
Inverses of rotations of R^2

Let T_α : R^2 → R^2 be a rotation by an angle α. It is of the form T_α = T_{R_α}, where

R_α = \begin{pmatrix} \cos α & -\sin α \\ \sin α & \cos α \end{pmatrix}

To undo this transformation, we need to rotate by the angle -α, that is, T_α^{-1} = T_{-α} = T_{R_{-α}}.

R_α R_{-α} = \begin{pmatrix} \cos α & -\sin α \\ \sin α & \cos α \end{pmatrix} \begin{pmatrix} \cos(-α) & -\sin(-α) \\ \sin(-α) & \cos(-α) \end{pmatrix} = \begin{pmatrix} \cos(α - α) & -\sin(α - α) \\ \sin(α - α) & \cos(α - α) \end{pmatrix} = \begin{pmatrix} \cos 0 & -\sin 0 \\ \sin 0 & \cos 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I_2

We can see that R_α^{-1} = R_{-α}.
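The identity R_α^{-1} = R_{-α} can be spot-checked numerically; a small sketch (the angle 0.7 is an arbitrary choice, not from the lecture):

```python
import numpy as np

def rotation(alpha):
    """The 2x2 rotation matrix R_alpha."""
    return np.array([[np.cos(alpha), -np.sin(alpha)],
                     [np.sin(alpha),  np.cos(alpha)]])

alpha = 0.7  # arbitrary test angle

# Rotating by alpha and then by -alpha returns every vector to its start,
# so R_alpha R_{-alpha} = I_2 and R_alpha^{-1} = R_{-alpha}.
print(np.allclose(rotation(alpha) @ rotation(-alpha), np.eye(2)))     # True
print(np.allclose(np.linalg.inv(rotation(alpha)), rotation(-alpha)))  # True
```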
Stretching and compressing along coordinate axes

Consider the transformation T_A : R^2 → R^2, where

A = \begin{pmatrix} 3 & 0 \\ 0 & 0.5 \end{pmatrix}, \quad T_A \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 3x \\ 0.5y \end{pmatrix}

This transformation corresponds to a threefold stretch in the horizontal (x-) direction and a twofold compression in the vertical (y-) direction. We can undo it by a threefold compression in the x-direction and a twofold stretch in the y-direction:

\begin{pmatrix} 3 & 0 \\ 0 & 0.5 \end{pmatrix} \begin{pmatrix} \frac{1}{3} & 0 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad so \quad A^{-1} = \begin{pmatrix} \frac{1}{3} & 0 \\ 0 & 2 \end{pmatrix}

Note that the matrix A in this example is a diagonal matrix.
More coordinates: Inverses of diagonal matrices

Consider a diagonal matrix of order n × n:

D = \begin{pmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{pmatrix}

If λ_i ≠ 0 for all i = 1, 2, ..., n, then

D^{-1} = \begin{pmatrix} \frac{1}{\lambda_1} & 0 & \dots & 0 \\ 0 & \frac{1}{\lambda_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{1}{\lambda_n} \end{pmatrix}

If λ_i = 0 for at least one i = 1, 2, ..., n, then r(D) < n and D^{-1} does not exist.
Why does this work?

By Homework 18, the product of any two n × n diagonal matrices is:

\begin{pmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{pmatrix} \begin{pmatrix} \kappa_1 & 0 & \dots & 0 \\ 0 & \kappa_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \kappa_n \end{pmatrix} = \begin{pmatrix} \lambda_1 \kappa_1 & 0 & \dots & 0 \\ 0 & \lambda_2 \kappa_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \kappa_n \end{pmatrix}

In particular:

\begin{pmatrix} \lambda_1 & 0 & \dots & 0 \\ 0 & \lambda_2 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \lambda_n \end{pmatrix} \begin{pmatrix} \frac{1}{\lambda_1} & 0 & \dots & 0 \\ 0 & \frac{1}{\lambda_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{1}{\lambda_n} \end{pmatrix} = \begin{pmatrix} \frac{\lambda_1}{\lambda_1} & 0 & \dots & 0 \\ 0 & \frac{\lambda_2}{\lambda_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{\lambda_n}{\lambda_n} \end{pmatrix} = I_n
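The entrywise rule above is easy to confirm numerically; a sketch with arbitrary nonzero diagonal entries (the particular values are not from the lecture):

```python
import numpy as np

lam = np.array([2.0, -0.5, 4.0])   # arbitrary nonzero diagonal entries
D = np.diag(lam)
D_inv = np.diag(1.0 / lam)         # invert each diagonal entry separately

# D D^{-1} has diagonal entries lambda_i * (1/lambda_i) = 1,
# and zeros elsewhere, i.e. it equals I_n.
print(np.allclose(D @ D_inv, np.eye(3)))  # True
```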
An example: Elementary row operation (E2)

Recall that performing elementary row operation (E2): "Multiply row i of A by λ ≠ 0" is the same as computing EA, where E is the identity matrix with the diagonal entry e_{ii} replaced by λ:

E = \begin{pmatrix} 1 & \dots & 0 & \dots & 0 \\ \vdots & \ddots & \vdots & & \vdots \\ 0 & \dots & \lambda & \dots & 0 \\ \vdots & & \vdots & \ddots & \vdots \\ 0 & \dots & 0 & \dots & 1 \end{pmatrix} \quad (e_{ii} = \lambda)

To undo this operation, we divide row i of A by λ, which is another instance of (E2). The inverse E^{-1} is given by replacing e_{ii} = λ with e_{ii} = 1/λ in E.
How about elementary row operation (E1)?

Recall that performing elementary row operation (E1): "Exchange rows i and j of A" amounts to computing EA, where E was described in Lecture 10. For the special case n = 5 and rows i = 2, j = 5, E looks like this:

E = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 \end{pmatrix}

Homework 48: How would one undo elementary row operation (E1)? Form a conjecture about how E^{-1} is in general related to E for this operation and verify it for the special case shown above.
How about elementary row operation (E3)?

Recall that performing elementary row operation (E3): "Add λ(row i) to row j of A" amounts to computing EA, where E was constructed in Homework 21 for the special case n = 4, λ = 3, and rows i = 3, j = 4:

E = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 3 & 1 \end{pmatrix}

Homework 49: (a) How would one undo elementary row operation (E3)? (b) Form a conjecture about how E^{-1} is in general related to E for this operation. Hint: Look up the solution for Homework 21 first. (c) Verify your conjecture for the special case shown above.
So far so good...

We have seen that:
- It is easy to verify that a given pair of matrices are inverses of each other (multiply them and check whether the product is an identity matrix I).
- It can be relatively easy to find A^{-1} when A has an intuitive interpretation in terms of linear transformations of vectors or other matrices.

But how do we find A^{-1} in general? Even for a seemingly simple matrix like

A = \begin{pmatrix} 0.5 & 0.5 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 0.5 & 0.5 \end{pmatrix}

this seems hard.
An observation

An n × n matrix A can have an inverse only if r(A) = n. Then Gaussian elimination produces a matrix with all diagonal elements equal to 1. For n = 3 this looks as follows:

\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \xrightarrow{\text{Gaussian elimination}} \begin{pmatrix} 1 & ? & ? \\ 0 & 1 & ? \\ 0 & 0 & 1 \end{pmatrix}

Now we can keep going and apply elementary row operation (E3) a few more times until we get I:

\begin{pmatrix} 1 & ? & ? \\ 0 & 1 & ? \\ 0 & 0 & 1 \end{pmatrix} \xrightarrow{\text{more applications of (E3)}} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}

So what? How could this help?
A magic trick: Gauss-Jordan elimination

Let A be an n × n matrix. Form an n × 2n matrix C by dropping the internal brackets in [A, I_n] and replacing them with a vertical dividing line for visual clarity. For n = 3 we get:

\left(\begin{array}{ccc|ccc} a_{11} & a_{12} & a_{13} & 1 & 0 & 0 \\ a_{21} & a_{22} & a_{23} & 0 & 1 & 0 \\ a_{31} & a_{32} & a_{33} & 0 & 0 & 1 \end{array}\right)

Perform Gaussian elimination. If the first half of the resulting row-reduced matrix has a zero row, then r(A) < n and A is not invertible. Otherwise keep going and apply instances of (E3) until the first half turns into I_n. For n = 3 the result will look like:

\left(\begin{array}{ccc|ccc} 1 & 0 & 0 & b_{11} & b_{12} & b_{13} \\ 0 & 1 & 0 & b_{21} & b_{22} & b_{23} \\ 0 & 0 & 1 & b_{31} & b_{32} & b_{33} \end{array}\right)

Let's see what we get for the matrix B in the second half.
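The procedure on this slide can be sketched in code. Below is a minimal NumPy implementation of Gauss-Jordan elimination on [A | I_n]; the row swaps for pivoting are a practical safeguard added here, not something the slide discusses:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I_n].
    Raises ValueError when r(A) < n, i.e. when A is singular."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.hstack([A, np.eye(n)])          # the n x 2n matrix [A | I_n]
    for col in range(n):
        # Choose the largest available pivot (a row exchange, operation (E1)).
        pivot = col + np.argmax(np.abs(C[col:, col]))
        if np.isclose(C[pivot, col], 0.0):
            raise ValueError("matrix is singular: r(A) < n")
        C[[col, pivot]] = C[[pivot, col]]
        C[col] /= C[col, col]              # scale the pivot row to 1: (E2)
        for row in range(n):
            if row != col:                 # clear the rest of the column: (E3)
                C[row] -= C[row, col] * C[col]
    return C[:, n:]                        # the second half is now A^{-1}

A = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
print(gauss_jordan_inverse(A))
```

Multiplying the returned matrix by A in either order reproduces I_3, which is exactly the check that Homework 50 below asks for.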
Trying out the magic trick

Let A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}. Here we already know A^{-1} = \begin{pmatrix} -2 & 1 \\ 1.5 & -0.5 \end{pmatrix}.

Form a 2 × 4 matrix C and do Gaussian elimination on it:

C = \left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 3 & 4 & 0 & 1 \end{array}\right) \xrightarrow{\text{subtract 3(row 1) from row 2}} \left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 0 & -2 & -3 & 1 \end{array}\right) \xrightarrow{\text{divide row 2 by } -2} \left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 0 & 1 & 1.5 & -0.5 \end{array}\right)

Apply (E3) one more time to turn the first half into I_2:

\left(\begin{array}{cc|cc} 1 & 2 & 1 & 0 \\ 0 & 1 & 1.5 & -0.5 \end{array}\right) \xrightarrow{\text{subtract 2(row 2) from row 1}} \left(\begin{array}{cc|cc} 1 & 0 & -2 & 1 \\ 0 & 1 & 1.5 & -0.5 \end{array}\right)

Magically, the matrix B in the right half is A^{-1}!
Trying the magic trick on another matrix

Let A = \begin{pmatrix} 0.5 & 0.5 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 0.5 & 0.5 \end{pmatrix}. Here we don't know A^{-1}.

Form a 3 × 6 matrix C and do Gaussian elimination on it. Start by subtracting row 1 from row 2:

C = \left(\begin{array}{ccc|ccc} 0.5 & 0.5 & 0 & 1 & 0 & 0 \\ 0.5 & 0 & 0.5 & 0 & 1 & 0 \\ 0 & 0.5 & 0.5 & 0 & 0 & 1 \end{array}\right) \rightarrow \left(\begin{array}{ccc|ccc} 0.5 & 0.5 & 0 & 1 & 0 & 0 \\ 0 & -0.5 & 0.5 & -1 & 1 & 0 \\ 0 & 0.5 & 0.5 & 0 & 0 & 1 \end{array}\right)

Next add row 2 to row 3:

\rightarrow \left(\begin{array}{ccc|ccc} 0.5 & 0.5 & 0 & 1 & 0 & 0 \\ 0 & -0.5 & 0.5 & -1 & 1 & 0 \\ 0 & 0 & 1 & -1 & 1 & 1 \end{array}\right)
Trying the magic trick on another matrix, continued

Multiply row 1 by 2:

\rightarrow \left(\begin{array}{ccc|ccc} 1 & 1 & 0 & 2 & 0 & 0 \\ 0 & -0.5 & 0.5 & -1 & 1 & 0 \\ 0 & 0 & 1 & -1 & 1 & 1 \end{array}\right)

Multiply row 2 by -2:

\rightarrow \left(\begin{array}{ccc|ccc} 1 & 1 & 0 & 2 & 0 & 0 \\ 0 & 1 & -1 & 2 & -2 & 0 \\ 0 & 0 & 1 & -1 & 1 & 1 \end{array}\right)

The first half is now in row-reduced form. We still need to get rid of its off-diagonal nonzero elements.
Trying the magic trick on another matrix, completed

Add row 3 to row 2:

\rightarrow \left(\begin{array}{ccc|ccc} 1 & 1 & 0 & 2 & 0 & 0 \\ 0 & 1 & 0 & 1 & -1 & 1 \\ 0 & 0 & 1 & -1 & 1 & 1 \end{array}\right)

Subtract row 2 from row 1:

\rightarrow \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & 1 & -1 \\ 0 & 1 & 0 & 1 & -1 & 1 \\ 0 & 0 & 1 & -1 & 1 & 1 \end{array}\right)

Did the magic work?

Homework 50: Check whether the matrix B on the right is A^{-1}.