Solving Linear Systems, Continued and The Inverse of a Matrix



Solving Linear Systems, Continued and The Inverse of a Matrix. Calculus III, Summer 2013, Session II. Monday, July 15, 2013.

Agenda
1. The rank of a matrix
2. The inverse of a square matrix

Gaussian elimination
Gaussian elimination solves a linear system by reducing the augmented matrix to row-echelon form (REF) via elementary row operations and then using back substitution. (Here Pij swaps rows i and j, Aij(c) adds c times row i to row j, and Mi(c) multiplies row i by c.)

Example

    3x1 - 2x2 + 2x3 =  9        [ 3  -2   2 |  9 ]
     x1 - 2x2 +  x3 =  5        [ 1  -2   1 |  5 ]
    2x1 -  x2 - 2x3 = -1        [ 2  -1  -2 | -1 ]

Row reduction brings the augmented matrix to REF:

     x1 - 2x2 +  x3 =  5        [ 1  -2   1 |  5 ]
          x2 + 3x3  =  5        [ 0   1   3 |  5 ]
                x3  =  2        [ 0   0   1 |  2 ]

Steps: 1. P12   2. A12(-3)   3. A13(-2)   4. A32(-1)   5. A23(-3)   6. M3(-1/13)

Back substitution gives the solution (1, -1, 2).
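The elimination-then-back-substitution procedure above can be sketched in code. This is a minimal illustration in Python using exact Fraction arithmetic on the example's augmented matrix; it is not the notes' own notation, just one straightforward implementation of the same steps.

```python
from fractions import Fraction as F

# Augmented matrix of the example system, in exact arithmetic:
#   3x1 - 2x2 + 2x3 =  9
#    x1 - 2x2 +  x3 =  5
#   2x1 -  x2 - 2x3 = -1
M = [[F(3), F(-2), F(2), F(9)],
     [F(1), F(-2), F(1), F(5)],
     [F(2), F(-1), F(-2), F(-1)]]
n = 3

# Forward elimination to row-echelon form.
for col in range(n):
    pivot = next(r for r in range(col, n) if M[r][col] != 0)
    M[col], M[pivot] = M[pivot], M[col]          # row swap, e.g. P12
    for r in range(col + 1, n):
        factor = M[r][col] / M[col][col]
        # Subtract a multiple of the pivot row: the operation A_{col+1, r+1}(-factor).
        M[r] = [a - factor * b for a, b in zip(M[r], M[col])]

# Back substitution.
x = [F(0)] * n
for i in reversed(range(n)):
    x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]

print(x)  # [1, -1, 2]
```

The same loop structure works for any square system with a unique solution; using Fractions avoids the rounding issues of floating-point pivots.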

Reducing the augmented matrix all the way to reduced row-echelon form (RREF) makes the system even easier to solve.

Example

  [ 1  -2   1 |  5 ]        [ 1   0   0 |  1 ]      x1 =  1
  [ 0   1   3 |  5 ]   ->   [ 0   1   0 | -1 ]      x2 = -1
  [ 0   0   1 |  2 ]        [ 0   0   1 |  2 ]      x3 =  2

Steps: 1. A32(-3)   2. A31(-1)   3. A21(2)

Now, without any back substitution, we can see that the solution is (1, -1, 2). The method of solving a linear system by reducing its augmented matrix to RREF is called Gauss-Jordan elimination.
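Gauss-Jordan continues the elimination upward from the REF, clearing the entries above each pivot. A minimal sketch, starting from the REF already obtained in the Gaussian-elimination example:

```python
from fractions import Fraction as F

# Row-echelon form of the example's augmented matrix.
M = [[F(1), F(-2), F(1), F(5)],
     [F(0), F(1), F(3), F(5)],
     [F(0), F(0), F(1), F(2)]]
n = 3

# Upward elimination: clear the entries above each pivot, right to left.
for col in reversed(range(n)):
    for r in range(col):
        factor = M[r][col]
        M[r] = [a - factor * b for a, b in zip(M[r], M[col])]

# In RREF the last column is the solution, with no back substitution needed.
solution = [row[n] for row in M]
print(solution)  # [1, -1, 2]
```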

The rank of a matrix
The rank of a matrix A is the number of nonzero rows it has after reduction to REF. It is denoted by rank(A). If A is the coefficient matrix of an m x n linear system with augmented matrix A# and rank(A#) = rank(A) = n, then the REF looks like

  [ 1  *  ...  * | * ]      x1 = ...
  [ 0  1  ...  * | * ]      x2 = ...
  [ .     .     .| . ]       ...
  [ 0  0  ...  1 | * ]      xn = ...

so every unknown is determined by back substitution.

Lemma
Suppose Ax = b is an m x n linear system with augmented matrix A#. If rank(A#) = rank(A) = n, then the system has a unique solution.

Example
Determine the solution set of the linear system
   x1 +  x2 -  x3 + x4 = 1,
  2x1 + 3x2 +  x3      = 4,
  3x1 + 5x2 + 3x3 - x4 = 5.

Reduce the augmented matrix:

  [ 1  1  -1   1 | 1 ]   A12(-2)   [ 1  1  -1   1 |  1 ]
  [ 2  3   1   0 | 4 ]   A13(-3)   [ 0  1   3  -2 |  2 ]
  [ 3  5   3  -1 | 5 ]   A23(-2)   [ 0  0   0   0 | -2 ]

The last row says 0 = -2; the system is inconsistent.

Lemma
Suppose Ax = b is a linear system with augmented matrix A#. If rank(A#) > rank(A), then the system is inconsistent.
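The rank comparison in the lemma is easy to check numerically. A minimal sketch using numpy's matrix_rank on the inconsistent example above:

```python
import numpy as np

# Coefficient matrix and right-hand side of the inconsistent example.
A = np.array([[1, 1, -1, 1],
              [2, 3, 1, 0],
              [3, 5, 3, -1]], dtype=float)
b = np.array([1, 4, 5], dtype=float)
A_aug = np.column_stack([A, b])          # the augmented matrix A#

rank_A = np.linalg.matrix_rank(A)        # 2
rank_aug = np.linalg.matrix_rank(A_aug)  # 3

# rank(A#) > rank(A) signals an inconsistent system.
inconsistent = rank_aug > rank_A
print(rank_A, rank_aug, inconsistent)
```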

Example
Determine the solution set of the linear system
  5x1 - 6x2 + x3 = 4,
  2x1 - 3x2 + x3 = 1,
  4x1 - 3x2 - x3 = 5.

Reduce the augmented matrix:

  [ 5  -6   1 | 4 ]        [ 1   0  -1 | 2 ]      x1 - x3 = 2
  [ 2  -3   1 | 1 ]   ->   [ 0   1  -1 | 1 ]      x2 - x3 = 1
  [ 4  -3  -1 | 5 ]        [ 0   0   0 | 0 ]

The unknown x3 can assume any value. Let x3 = t. Then by back substitution we get x2 = t + 1 and x1 = t + 2. Thus, the solution set is the line {(t + 2, t + 1, t) : t in R}.
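A quick way to gain confidence in a parametric solution set is to substitute a few parameter values back into the original system. A small sketch for the line found above:

```python
import numpy as np

# Original coefficient matrix and right-hand side.
A = np.array([[5, -6, 1],
              [2, -3, 1],
              [4, -3, -1]], dtype=float)
b = np.array([4, 1, 5], dtype=float)

# The claimed solution set is the line (t+2, t+1, t); spot-check several t.
points = [np.array([t + 2, t + 1, t]) for t in (-1.0, 0.0, 2.5)]
line_ok = all(np.allclose(A @ x, b) for x in points)
print(line_ok)
```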

When an unknown variable in a linear system is free to assume any value, we call it a free variable. Variables that are not free are called bound variables. The value of a bound variable is uniquely determined by a choice of values for all of the free variables in the system.

Lemma
Suppose Ax = b is an m x n linear system with augmented matrix A#. If rank(A#) = rank(A) < n, then the system has an infinite number of solutions. Such a system will have n - rank(A#) free variables.

Solving linear systems with free variables
Example
Use Gaussian elimination to solve
   x1 + 2x2 - 2x3 -   x4 =  3,
  3x1 + 6x2 +  x3 + 11x4 = 16,
  2x1 + 4x2 -  x3 +  4x4 =  9.

Reducing to row-echelon form yields
   x1 + 2x2 - 2x3 - x4 = 3,
              x3 + 2x4 = 1.

Choose as free variables those variables that do not have a pivot in their column. In this case, our free variables are x2 and x4. Setting x2 = s and x4 = t, back substitution gives x3 = 1 - 2t and x1 = 5 - 2s - 3t. The solution set is the plane {(5 - 2s - 3t, s, 1 - 2t, t) : s, t in R}.
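The lemma's count of free variables, and the solution plane itself, can both be verified numerically. A minimal sketch for this example (the choice of spot-check parameter values is arbitrary):

```python
import numpy as np

A = np.array([[1, 2, -2, -1],
              [3, 6, 1, 11],
              [2, 4, -1, 4]], dtype=float)
b = np.array([3, 16, 9], dtype=float)
A_aug = np.column_stack([A, b])

# rank(A#) = 2 and n = 4, so the system has 4 - 2 = 2 free variables.
n_free = A.shape[1] - np.linalg.matrix_rank(A_aug)

# Spot-check points on the claimed solution plane (5-2s-3t, s, 1-2t, t).
params = [(0.0, 0.0), (1.0, -2.0), (-3.0, 0.5)]
plane_ok = all(
    np.allclose(A @ np.array([5 - 2*s - 3*t, s, 1 - 2*t, t]), b)
    for s, t in params
)
print(n_free, plane_ok)
```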

The inverse of a square matrix
Can we divide by a matrix? What properties should an inverse matrix have? Suppose A is a square, n x n matrix. An inverse matrix for A is an n x n matrix B such that AB = I_n and BA = I_n. If A has such an inverse, then we say that A is invertible or nonsingular. Otherwise, we say that A is singular.

Remark
Not every matrix is invertible. If you have a linear system Ax = b and B is an inverse matrix for A, then the linear system has the unique solution x = Bb.

Example
If
      [ 1  -1   2 ]             [ 0  -1   3 ]
  A = [ 2  -3   3 ]   and   B = [ 1  -1   1 ]
      [ 1  -1   1 ]             [ 1   0  -1 ]
then B is the inverse of A: one can check that AB = BA = I_3, so B = A^(-1).

Theorem (Matrix inverses are well-defined)
Suppose A is an n x n matrix. If B and C are two inverses of A, then B = C. Thus, we can write A^(-1) for the inverse of A with no ambiguity.

Useful Example
If A = [ a  b ; c  d ] and ad - bc != 0, then
  A^(-1) = 1/(ad - bc) [  d  -b ]
                       [ -c   a ]
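Both claims above, that B inverts A and that the 2x2 formula produces an inverse, are one matrix multiplication away from verification. A small numeric sketch:

```python
import numpy as np

A = np.array([[1, -1, 2],
              [2, -3, 3],
              [1, -1, 1]], dtype=float)
B = np.array([[0, -1, 3],
              [1, -1, 1],
              [1, 0, -1]], dtype=float)

# B is an inverse of A exactly when both products give the identity.
is_inverse = np.allclose(A @ B, np.eye(3)) and np.allclose(B @ A, np.eye(3))

# The 2x2 formula: if ad - bc != 0, A^(-1) = (1/(ad-bc)) [[d, -b], [-c, a]].
a, b2, c, d = 1.0, 3.0, 2.0, 5.0
M = np.array([[a, b2], [c, d]])
M_inv = np.array([[d, -b2], [-c, a]]) / (a * d - b2 * c)
formula_ok = np.allclose(M @ M_inv, np.eye(2))
print(is_inverse, formula_ok)
```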

Finding the inverse of a matrix
Inverses sound great! How do I find one? Suppose A is a 3 x 3 invertible matrix. If A^(-1) = [ x1  x2  x3 ] (written in columns), then
        [ 1 ]          [ 0 ]               [ 0 ]
  Ax1 = [ 0 ],   Ax2 = [ 1 ],   and  Ax3 = [ 0 ].
        [ 0 ]          [ 0 ]               [ 1 ]
We can find A^(-1) by solving 3 linear systems at once! In general, form the augmented matrix [ A | I_n ] and reduce to RREF. You end up with A^(-1) on the right:
  [ A | I_n ]  ->  [ I_n | A^(-1) ]

Example
Let's find the inverse of A = [ 1 -1 2 ; 2 -3 3 ; 1 -1 1 ].

Take the augmented matrix and row reduce:

  [ 1  -1   2 | 1  0  0 ]        [ 1  0  0 | 0  -1   3 ]
  [ 2  -3   3 | 0  1  0 ]   ->   [ 0  1  0 | 1  -1   1 ]
  [ 1  -1   1 | 0  0  1 ]        [ 0  0  1 | 1   0  -1 ]

The right-hand block is A^(-1).

Steps: 1. A12(-2)   2. A13(-1)   3. M2(-1)   4. M3(-1)   5. A32(-1)   6. A31(-2)   7. A21(1)
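The [ A | I ] -> [ I | A^(-1) ] reduction can be sketched as a short Gauss-Jordan routine. This is one minimal implementation in exact arithmetic, applied to the example matrix:

```python
from fractions import Fraction as F

A = [[1, -1, 2],
     [2, -3, 3],
     [1, -1, 1]]
n = 3

# Build the augmented matrix [ A | I ].
M = [[F(v) for v in row] + [F(int(i == j)) for j in range(n)]
     for i, row in enumerate(A)]

# Gauss-Jordan: for each column, pick a pivot, scale it to 1,
# and clear every other entry in that column.
for col in range(n):
    pivot = next(r for r in range(col, n) if M[r][col] != 0)
    M[col], M[pivot] = M[pivot], M[col]
    M[col] = [v / M[col][col] for v in M[col]]
    for r in range(n):
        if r != col and M[r][col] != 0:
            f = M[r][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[col])]

# The right-hand block is now A^(-1).
A_inv = [row[n:] for row in M]
print(A_inv)  # [[0, -1, 3], [1, -1, 1], [1, 0, -1]]
```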

In order to find the inverse of a matrix A, we row reduce an augmented matrix with A on the left. What if we don't end up with I_n on the left?

Theorem
An n x n matrix A is invertible if and only if rank(A) = n.

Example
Find the inverse of the matrix A = [ 1  3 ; 2  6 ].

Try to reduce the matrix to RREF:

  [ 1  3 ]   A12(-2)   [ 1  3 ]
  [ 2  6 ]     ->      [ 0  0 ]

Since rank(A) < 2, we conclude that A is not invertible. Notice that (1)(6) - (3)(2) = 0.
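The theorem's rank criterion can be checked directly; for this singular example the rank and the determinant both flag non-invertibility. A brief numeric sketch:

```python
import numpy as np

A = np.array([[1, 3],
              [2, 6]], dtype=float)

# rank(A) = 1 < 2, so A is not invertible; consistently, det(A) = 0.
rank = np.linalg.matrix_rank(A)
det = np.linalg.det(A)
print(rank, det)
```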

Diagonal matrices have simple inverses.

Proposition
The inverse of a diagonal matrix with nonzero diagonal entries is the diagonal matrix with reciprocal entries:

  [ a11       0  ]^(-1)     [ 1/a11        0  ]
  [      ...     ]       =  [        ...      ]
  [ 0        ann ]          [ 0        1/ann  ]

Upper and lower triangular matrices have inverses of the same form.

Proposition
The inverse of an invertible upper triangular matrix is upper triangular. The inverse of an invertible lower triangular matrix is lower triangular.
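Both propositions are easy to see on concrete matrices. A minimal check (the particular diagonal and triangular entries are arbitrary choices):

```python
import numpy as np

# Diagonal example: the inverse has reciprocal entries on the diagonal.
D = np.diag([2.0, -4.0, 0.5])
diag_ok = np.allclose(np.linalg.inv(D), np.diag([0.5, -0.25, 2.0]))

# Triangular example: the inverse of an upper triangular matrix
# is again upper triangular.
U = np.array([[1.0, 2.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])
U_inv = np.linalg.inv(U)
upper_ok = np.allclose(U_inv, np.triu(U_inv))   # nothing below the diagonal
print(diag_ok, upper_ok)
```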

Properties of the inverse
Suppose A and B are invertible n x n matrices.
- A^(-1) is invertible and (A^(-1))^(-1) = A.
- AB is invertible and (AB)^(-1) = B^(-1) A^(-1).
- A^T is invertible and (A^T)^(-1) = (A^(-1))^T.

Corollary
Suppose A1, A2, ..., Ak are invertible n x n matrices. Then their product A1 A2 ... Ak is invertible, and
  (A1 A2 ... Ak)^(-1) = Ak^(-1) A(k-1)^(-1) ... A1^(-1).
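These identities can be spot-checked numerically on random invertible matrices. In this sketch, adding 3I to a random matrix is an assumption used to keep the test matrices comfortably away from singular:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)
B = rng.standard_normal((3, 3)) + 3 * np.eye(3)
inv = np.linalg.inv

double_inverse_ok = np.allclose(inv(inv(A)), A)
product_ok = np.allclose(inv(A @ B), inv(B) @ inv(A))   # note the reversed order
transpose_ok = np.allclose(inv(A.T), inv(A).T)
print(double_inverse_ok, product_ok, transpose_ok)
```

The reversed order in (AB)^(-1) = B^(-1) A^(-1) is the key point: matrix multiplication is not commutative, so undoing "apply B, then A" requires undoing A first.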

Recall that if A is an invertible matrix, then the linear system Ax = b has the unique solution x = A^(-1) b.

Example
Solve the linear system
   x1 + 3x2 = 1,
  2x1 + 5x2 = 3.

The coefficient matrix is A = [ 1  3 ; 2  5 ]. Since ad - bc = (1)(5) - (3)(2) = -1 != 0, the 2 x 2 formula gives
  A^(-1) = 1/(-1) [  5  -3 ]   =   [ -5   3 ]
                  [ -2   1 ]       [  2  -1 ]
Hence,
  [ x1 ]   [ -5   3 ] [ 1 ]   [  4 ]
  [ x2 ] = [  2  -1 ] [ 3 ] = [ -1 ]
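The same computation, carried out numerically with the 2x2 inverse formula and checked against the original system:

```python
import numpy as np

A = np.array([[1.0, 3.0],
              [2.0, 5.0]])
b = np.array([1.0, 3.0])

# ad - bc = 5 - 6 = -1, so A is invertible and the 2x2 formula applies.
A_inv = np.array([[5.0, -3.0],
                  [-2.0, 1.0]]) / (1.0 * 5.0 - 3.0 * 2.0)
x = A_inv @ b
print(x)  # [ 4. -1.]
```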

Inverses are an elegant way of solving linear systems. They do have some drawbacks:
- They are only applicable when the coefficient matrix is square.
- Even in the case of a square matrix, an inverse may not exist.
- They are hard to compute: finding an inverse is at least as complicated as doing Gaussian elimination.
However, they can be useful if:
- the coefficient matrix has an obvious inverse, or
- you need to solve multiple linear systems with the same coefficient matrix.