Math 313 Lecture #10 2.2: The Inverse of a Matrix



Matrix algebra provides tools for creating many useful formulas, just as real number algebra does. For example, a real number $a$ is invertible if there is a real number $b$ such that $ba = ab = 1$; when this is the case, the real number $b$ is called the multiplicative inverse of $a$, and of course $b = a^{-1} = 1/a$. The only non-invertible real number is $0$.

An $n\times n$ matrix $A$ is said to be invertible or nonsingular if there is an $n\times n$ matrix $B$ such that $AB = BA = I$; otherwise, $A$ is said to be non-invertible or singular. When a square matrix $A$ is invertible, the matrix $B$ for which $AB = BA = I$ is called a multiplicative inverse of $A$. The matrix $B$ is unique (as you should have shown in homework problem #25 in Section 2.1). So there is only one multiplicative inverse of an invertible matrix; the unique multiplicative inverse of an invertible matrix $A$ is denoted by $A^{-1}$.

Example. Let
\[ A = \begin{bmatrix} 2 & -1 & 0 \\ 0 & 2 & -1 \\ -7 & 0 & 2 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} 4 & 2 & 1 \\ 7 & 4 & 2 \\ 14 & 7 & 4 \end{bmatrix}. \]
Then
\[ AB = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad\text{and}\quad BA = \begin{bmatrix} 4 & 2 & 1 \\ 7 & 4 & 2 \\ 14 & 7 & 4 \end{bmatrix} \begin{bmatrix} 2 & -1 & 0 \\ 0 & 2 & -1 \\ -7 & 0 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \]
Thus $A$ is invertible and $A^{-1} = B$. [We will learn later in this lecture how to find $A^{-1}$ when $A$ is invertible, by row reduction.]

When $A$ is a $2\times 2$ matrix, there is a nice formula for its inverse when it is invertible.

Theorem 4. For $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, if the quantity $ad - bc \ne 0$, then $A$ is invertible and
\[ A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}. \]
We can verify that the given $A^{-1}$ (when it exists, i.e., when $ad - bc \ne 0$) indeed satisfies $AA^{-1} = I$ and $A^{-1}A = I$. The contrapositive of Theorem 4 is that if $A$ is not invertible, then $ad - bc = 0$. [You have a homework problem #25 to prove that the converse, "if $ad - bc = 0$, then $A$ is not invertible," is true.]
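As a quick numerical spot-check of Theorem 4 (not part of the lecture; NumPy and the particular entries $a, b, c, d$ below are my own test values), the following Python sketch builds the inverse from the formula and confirms $AA^{-1} = A^{-1}A = I$:

```python
import numpy as np

# Hypothetical 2x2 test matrix with ad - bc != 0.
a, b, c, d = 3.0, 1.0, 5.0, 2.0
A = np.array([[a, b], [c, d]])

det = a * d - b * c            # the quantity ad - bc from Theorem 4
assert det != 0, "Theorem 4 only applies when ad - bc is nonzero."

# Inverse from the formula in Theorem 4.
A_inv = (1.0 / det) * np.array([[d, -b], [-c, a]])

# Check A A^{-1} = I and A^{-1} A = I, and compare with a library inverse.
I = np.eye(2)
print(np.allclose(A @ A_inv, I), np.allclose(A_inv @ A, I))  # True True
print(np.allclose(A_inv, np.linalg.inv(A)))                  # True
```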

Thus a $2\times 2$ matrix is invertible if and only if $ad - bc \ne 0$; we call this number the determinant of $A$. The existence of an inverse can be used to solve $A\mathbf{x} = \mathbf{b}$ when $A$ is a square matrix.

Theorem 5. Suppose an $n\times n$ matrix $A$ is invertible. Then for each $\mathbf{b} \in \mathbb{R}^n$ the equation $A\mathbf{x} = \mathbf{b}$ has a unique solution $\mathbf{x} = A^{-1}\mathbf{b}$.

Proof. Let $\mathbf{b}$ be in $\mathbb{R}^n$. The guess $A^{-1}\mathbf{b}$ is a solution of $A\mathbf{x} = \mathbf{b}$ because $A(A^{-1}\mathbf{b}) = (AA^{-1})\mathbf{b} = I\mathbf{b} = \mathbf{b}$. To show that $A^{-1}\mathbf{b}$ is the only solution, we suppose that $\mathbf{u}$ is a solution of $A\mathbf{x} = \mathbf{b}$ and show that $\mathbf{u} = A^{-1}\mathbf{b}$. To this end we multiply both sides of $A\mathbf{u} = \mathbf{b}$ by $A^{-1}$ (on the left of each side) to get $A^{-1}(A\mathbf{u}) = A^{-1}\mathbf{b}$. This simplifies to $\mathbf{u} = A^{-1}\mathbf{b}$, and so the solution is unique. Notice that this proof used both $A^{-1}A = I$ and $AA^{-1} = I$.

Theorem 6 (Properties of an Inverse). Let $A$ and $B$ be $n\times n$ matrices.
a. If $A$ is invertible, then $A^{-1}$ is invertible and $(A^{-1})^{-1} = A$.
b. If $A$ and $B$ are invertible, then $AB$ is invertible and $(AB)^{-1} = B^{-1}A^{-1}$.
c. If $A$ is invertible, then $A^T$ is invertible and $(A^T)^{-1} = (A^{-1})^T$.

Proof. (a) If $A$ is invertible, then $A^{-1}$ exists, and $A^{-1}A = I$ and $AA^{-1} = I$, so that $A^{-1}$ is invertible with inverse $(A^{-1})^{-1} = A$.

(b) If $A$ and $B$ are invertible, then $B^{-1}A^{-1}$ exists and $(AB)(B^{-1}A^{-1}) = AIA^{-1} = I$ and $(B^{-1}A^{-1})(AB) = B^{-1}IB = I$, so that $AB$ is invertible with inverse $B^{-1}A^{-1}$.

(c) If $A$ is invertible, then $A^{-1}$ exists and $A^T(A^{-1})^T = (A^{-1}A)^T = I^T = I$ and $(A^{-1})^T A^T = (AA^{-1})^T = I^T = I$, so that $A^T$ is invertible with inverse $(A^{-1})^T$.

Property (b) extends by induction, so that the inverse of a product of a finite number of invertible matrices is the product of the inverse matrices in reverse order:
\[ (A_1 A_2 \cdots A_p)^{-1} = A_p^{-1} \cdots A_2^{-1} A_1^{-1}. \]
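The identities in Theorem 6 are easy to test numerically. Here is a short Python/NumPy sketch (an illustration of the theorem, not part of the lecture; the random test matrices are my own choice and are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Random n x n test matrices (almost surely invertible).
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

inv = np.linalg.inv

# (a) (A^{-1})^{-1} = A
print(np.allclose(inv(inv(A)), A))               # True
# (b) (AB)^{-1} = B^{-1} A^{-1}  (note the reversed order)
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))  # True
# (c) (A^T)^{-1} = (A^{-1})^T
print(np.allclose(inv(A.T), inv(A).T))           # True
```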

Elementary Matrices. The theoretical basis for finding the inverse of an invertible matrix is the use of invertible matrices close to the identity matrix. A square matrix obtained by performing one elementary row operation on the identity matrix is called an elementary matrix.

Examples. Let
\[ A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}, \]
and consider the following three matrices obtained from the identity matrix by a single elementary row operation:
\[ E_1 = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \ (R_1 \leftrightarrow R_2), \quad E_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 1 \end{bmatrix} \ (3R_2 \to R_2), \quad E_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 1 \end{bmatrix} \ (R_3 + 2R_1 \to R_3). \]
Then
\[ E_1 A = \begin{bmatrix} 4 & 5 & 6 \\ 1 & 2 & 3 \\ 7 & 8 & 9 \end{bmatrix}, \quad E_2 A = \begin{bmatrix} 1 & 2 & 3 \\ 12 & 15 & 18 \\ 7 & 8 & 9 \end{bmatrix}, \quad E_3 A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 9 & 12 & 15 \end{bmatrix}. \]
Each row operation is encoded as multiplication by a matrix on the left!

Which of the three elementary matrices are invertible? All of them are, and each inverse is another elementary matrix for the same type of row operation: $E_1^{-1} = E_1$ (swap the rows back), $E_2^{-1}$ scales $R_2$ by $1/3$, and $E_3^{-1}$ performs $R_3 - 2R_1 \to R_3$.

Theorem 7. An $n\times n$ matrix $A$ is invertible if and only if $A$ is row equivalent to $I$, i.e., there exist finitely many elementary matrices $E_1, E_2, \ldots, E_p$ such that $E_p \cdots E_2 E_1 A = I$.

The proof of this is in the next lecture. How can we use Theorem 7 to find the inverse of an invertible matrix? Look at $E_p \cdots E_2 E_1 A = I$. Multiplying both sides of this on the right by $A^{-1}$ gives
\[ A^{-1} = E_p \cdots E_2 E_1. \]
So if we augment $A$ with $I$, written $[\,A \ \ I\,]$, and row reduce this augmented matrix by the row operations in the order $E_1, E_2, \ldots, E_p$, we get
\[ (E_p \cdots E_2 E_1)[\,A \ \ I\,] = [\,E_p \cdots E_2 E_1 A \ \ \ E_p \cdots E_2 E_1\,] = [\,I \ \ A^{-1}\,]. \]
We use row reduction to find the inverse of an invertible matrix $A$. What would happen if $A$ were not invertible and we row reduced $[\,A \ \ I\,]$?
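To see the "row operation = left multiplication" principle in code, here is a small Python/NumPy sketch (illustration only; it mirrors the matrices $A$, $E_1$, $E_2$, $E_3$ above) that builds each elementary matrix by applying the corresponding row operation to the identity:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=float)

I = np.eye(3)

E1 = I.copy(); E1[[0, 1]] = E1[[1, 0]]   # R1 <-> R2
E2 = I.copy(); E2[1] *= 3                # 3*R2 -> R2
E3 = I.copy(); E3[2] += 2 * E3[0]        # R3 + 2*R1 -> R3

print(E1 @ A)   # rows 1 and 2 of A swapped
print(E2 @ A)   # row 2 of A tripled: [12, 15, 18]
print(E3 @ A)   # row 3 becomes [9, 12, 15]

# Each elementary matrix is invertible; the inverse undoes the row operation.
print(np.allclose(np.linalg.inv(E3), np.array([[ 1, 0, 0],
                                               [ 0, 1, 0],
                                               [-2, 0, 1]])))  # True
```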

Example. Find, if possible, the inverse of
\[ A = \begin{bmatrix} 1 & 2 & 2 \\ 1 & 1 & 2 \\ 1 & 1 & 1 \end{bmatrix}. \]
Augment $A$ with $I$ and row reduce:
\[ [\,A \ \ I\,] = \begin{bmatrix} 1 & 2 & 2 & 1 & 0 & 0 \\ 1 & 1 & 2 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 \end{bmatrix} \xrightarrow[\ R_3 - R_1 \to R_3\ ]{\ R_2 - R_1 \to R_2\ } \begin{bmatrix} 1 & 2 & 2 & 1 & 0 & 0 \\ 0 & -1 & 0 & -1 & 1 & 0 \\ 0 & -1 & -1 & -1 & 0 & 1 \end{bmatrix} \]
\[ \xrightarrow[\ R_3 - R_2 \to R_3\ ]{\ R_1 + 2R_2 \to R_1\ } \begin{bmatrix} 1 & 0 & 2 & -1 & 2 & 0 \\ 0 & -1 & 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 0 & -1 & 1 \end{bmatrix} \xrightarrow{\ R_1 + 2R_3 \to R_1\ } \begin{bmatrix} 1 & 0 & 0 & -1 & 0 & 2 \\ 0 & -1 & 0 & -1 & 1 & 0 \\ 0 & 0 & -1 & 0 & -1 & 1 \end{bmatrix} \]
\[ \xrightarrow[\ -R_3 \to R_3\ ]{\ -R_2 \to R_2\ } \begin{bmatrix} 1 & 0 & 0 & -1 & 0 & 2 \\ 0 & 1 & 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & 0 & 1 & -1 \end{bmatrix}. \]
So
\[ A^{-1} = \begin{bmatrix} -1 & 0 & 2 \\ 1 & -1 & 0 \\ 0 & 1 & -1 \end{bmatrix}. \]
We can (and should) verify this:
\[ AA^{-1} = \begin{bmatrix} 1 & 2 & 2 \\ 1 & 1 & 2 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} -1 & 0 & 2 \\ 1 & -1 & 0 \\ 0 & 1 & -1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad\text{and}\quad A^{-1}A = I \ \text{(this is left to you)}. \]
We can use $A^{-1}$ to solve the system $A\mathbf{x} = \mathbf{b}$ for
\[ \mathbf{b} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}. \]
Premultiplying both sides of $A\mathbf{x} = \mathbf{b}$ by $A^{-1}$, i.e., $A^{-1}A\mathbf{x} = A^{-1}\mathbf{b}$, gives
\[ \mathbf{x} = A^{-1}\mathbf{b} = \begin{bmatrix} -1 & 0 & 2 \\ 1 & -1 & 0 \\ 0 & 1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 5 \\ -1 \\ -1 \end{bmatrix}. \]
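The row-reduction recipe used in this example is straightforward to automate. The following Python/NumPy sketch is my own illustration (the function name and the partial pivoting are not from the lecture, which works by hand): it row reduces $[\,A \ \ I\,]$ to $[\,I \ \ A^{-1}\,]$ and then solves $A\mathbf{x} = \mathbf{b}$ for the $\mathbf{b}$ above.

```python
import numpy as np

def inverse_by_row_reduction(A):
    """Row reduce [A | I] to [I | A^{-1}]; raise an error if A is singular."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])   # the augmented matrix [A | I]
    for j in range(n):
        # Partial pivoting: bring the largest entry of column j into the pivot row.
        p = j + np.argmax(np.abs(M[j:, j]))
        if np.isclose(M[p, j], 0.0):
            raise ValueError("A is not invertible")
        M[[j, p]] = M[[p, j]]           # row interchange (one elementary matrix)
        M[j] /= M[j, j]                 # scale the pivot row (another one)
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]  # row replacements (more elementary matrices)
    return M[:, n:]                     # the right half is now A^{-1}

A = np.array([[1, 2, 2],
              [1, 1, 2],
              [1, 1, 1]])
b = np.array([1, 2, 3])

A_inv = inverse_by_row_reduction(A)
print(A_inv)       # [[-1, 0, 2], [1, -1, 0], [0, 1, -1]] (as floats)
print(A_inv @ b)   # [5, -1, -1], the solution of A x = b
```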

We can (and should) verify this:
\[ A\mathbf{x} = \begin{bmatrix} 1 & 2 & 2 \\ 1 & 1 & 2 \\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} 5 \\ -1 \\ -1 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = \mathbf{b}. \]
We would get the same solution if we row reduced $[\,A \ \ \mathbf{b}\,]$. [Left for you to verify.]
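As a final numerical check (again an illustrative sketch, not part of the notes), solving $A\mathbf{x} = \mathbf{b}$ directly with a library solver, which performs its own row reduction, agrees with $\mathbf{x} = A^{-1}\mathbf{b}$:

```python
import numpy as np

A = np.array([[1., 2., 2.],
              [1., 1., 2.],
              [1., 1., 1.]])
b = np.array([1., 2., 3.])

x_from_inverse = np.linalg.inv(A) @ b    # x = A^{-1} b
x_from_solver  = np.linalg.solve(A, b)   # solves A x = b by (LU-based) elimination

print(x_from_inverse)                               # [ 5. -1. -1.]
print(np.allclose(x_from_inverse, x_from_solver))   # True
```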