Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix multiplication).




MAT 2 (Badger, Spring 2012) LU Factorization Selected Notes, September 2, 2012

What is an elementary matrix?

Our goal in these notes is to rewrite Gaussian elimination using matrix multiplication. The starting point for this project is a special kind of matrix: an elementary matrix E is a square matrix with 1s on the main diagonal and at most one non-zero entry off of the main diagonal. For example,

    E21 = [ 1 0 0 ]
          [ 3 1 0 ]
          [ 0 0 1 ]

is a 3 x 3 elementary matrix. We call this matrix E21 to signify that the non-zero entry in a non-diagonal position appears in the (2,1) entry.

Let's find the inverse of E21. The general procedure states that we should form the matrix [E21 | I] and apply elimination until the identity matrix is on the left, leaving [I | E21^-1]:

    [ 1 0 0 | 1 0 0 ]  R2 - 3R1   [ 1 0 0 |  1 0 0 ]
    [ 3 1 0 | 0 1 0 ]  -------->  [ 0 1 0 | -3 1 0 ]
    [ 0 0 1 | 0 0 1 ]             [ 0 0 1 |  0 0 1 ]

We are done after one step (there was only one non-zero entry in a non-diagonal position, so we only needed to clear one entry to obtain the identity); the inverse of E21 is given by

    E21^-1 = [  1 0 0 ]
             [ -3 1 0 ]
             [  0 0 1 ]

This pattern is easily shown to hold in general. Let Eij (i not equal to j) be an elementary matrix with non-zero (i,j) entry. Then the inverse of Eij is the elementary matrix formed by negating the (i,j) entry of Eij.

Why do we care about elementary matrices? We care because multiplication by an elementary matrix on the left is equivalent to performing an elementary row operation. For example, if A is any 3 x 3 matrix and E21 is the matrix from above, then multiplying A by E21 on the left is equivalent to adding 3 times Row 1 of A to Row 2 of A:

    E21 A = [ 1 0 0 ] [ Row 1 ]   [          Row 1          ]
            [ 3 1 0 ] [ Row 2 ] = [ 3 (Row 1) + (Row 2) ]
            [ 0 0 1 ] [ Row 3 ]   [          Row 3          ]

This is the type of operation we do in Gaussian elimination!
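The correspondence between left-multiplication by E21 and the row operation R2 + 3R1 is easy to check numerically. Here is a minimal sketch using NumPy (NumPy and the test matrix A are not part of the notes; any 3 x 3 matrix works):

```python
import numpy as np

# Elementary matrix E21: identity with a 3 in the (2,1) position
# (indices are 1-based in the notes, 0-based in NumPy).
E21 = np.array([[1.0, 0.0, 0.0],
                [3.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])

# An arbitrary 3x3 test matrix.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Left-multiplying by E21 adds 3*(row 1) to row 2 and leaves the rest alone.
B = E21 @ A
row_op = A.copy()
row_op[1] += 3 * row_op[0]
assert np.allclose(B, row_op)

# The inverse of E21 is E21 with its (2,1) entry negated.
E21_inv = np.array([[ 1.0, 0.0, 0.0],
                    [-3.0, 1.0, 0.0],
                    [ 0.0, 0.0, 1.0]])
assert np.allclose(E21 @ E21_inv, np.eye(3))
```

The same check works for any Eij: negating the single off-diagonal entry undoes the row operation.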

Rewriting Gaussian Elimination in Terms of Multiplication

As usual we will illustrate the general process with an example. Let A be the matrix

    A = [  5 -1  2 ]
        [ 10  3  7 ]
        [ 15 17 19 ]

Forward elimination will convert A to an upper triangular matrix U. After each step of elimination, we will record the same step using multiplication by an elementary matrix.

Step 1: We need to zero out the (2,1) entry. In elimination notation,

    [  5 -1  2 ]  R2 - 2R1   [  5 -1  2 ]
    [ 10  3  7 ]  -------->  [  0  5  3 ]
    [ 15 17 19 ]             [ 15 17 19 ]

In matrix notation,

    E21 A = [  1 0 0 ] [  5 -1  2 ]   [  5 -1  2 ]
            [ -2 1 0 ] [ 10  3  7 ] = [  0  5  3 ]
            [  0 0 1 ] [ 15 17 19 ]   [ 15 17 19 ]

Notice the (2,1) entry of E21 came from the multiple of row 1 added to row 2.

Step 2: We need to zero out the (3,1) entry. In elimination notation,

    [  5 -1  2 ]  R3 - 3R1   [ 5 -1  2 ]
    [  0  5  3 ]  -------->  [ 0  5  3 ]
    [ 15 17 19 ]             [ 0 20 13 ]

In matrix notation,

    E31 E21 A = [  1 0 0 ] [  5 -1  2 ]   [ 5 -1  2 ]
                [  0 1 0 ] [  0  5  3 ] = [ 0  5  3 ]
                [ -3 0 1 ] [ 15 17 19 ]   [ 0 20 13 ]

Notice the (3,1) entry of E31 came from the multiple of row 1 added to row 3.

Step 3: We need to zero out the (3,2) entry. In elimination notation,

    [ 5 -1  2 ]  R3 - 4R2   [ 5 -1  2 ]
    [ 0  5  3 ]  -------->  [ 0  5  3 ]
    [ 0 20 13 ]             [ 0  0  1 ]

In matrix notation,

    E32 E31 E21 A = [ 1  0 0 ] [ 5 -1  2 ]   [ 5 -1  2 ]
                    [ 0  1 0 ] [ 0  5  3 ] = [ 0  5  3 ] = U
                    [ 0 -4 1 ] [ 0 20 13 ]   [ 0  0  1 ]

Notice the (3,2) entry of E32 came from the multiple of row 2 added to row 3.
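The three steps above can be replayed entirely by matrix multiplication. This NumPy sketch (not part of the notes) multiplies the recorded elementary matrices against A and confirms the product is upper triangular:

```python
import numpy as np

# The matrix from the worked example.
A = np.array([[ 5.0, -1.0,  2.0],
              [10.0,  3.0,  7.0],
              [15.0, 17.0, 19.0]])

# The elementary matrices recorded during forward elimination:
# R2 - 2R1, R3 - 3R1, R3 - 4R2.
E21 = np.array([[ 1.0, 0.0, 0.0], [-2.0, 1.0, 0.0], [ 0.0,  0.0, 1.0]])
E31 = np.array([[ 1.0, 0.0, 0.0], [ 0.0, 1.0, 0.0], [-3.0,  0.0, 1.0]])
E32 = np.array([[ 1.0, 0.0, 0.0], [ 0.0, 1.0, 0.0], [ 0.0, -4.0, 1.0]])

# Applying the three steps by multiplication on the left gives U.
U = E32 @ E31 @ E21 @ A
assert np.allclose(U, [[5, -1, 2], [0, 5, 3], [0, 0, 1]])
```

Note that the matrices multiply in reverse order of the steps: E21 acts on A first, so it sits closest to A.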

What have we done so far? We have used the process of Gaussian elimination to find elementary matrices E21, E31, E32 so that E32 E31 E21 A = U is upper triangular. This is what we wanted to do (write elimination using multiplication), but let's probe further. What happens if we move the matrices E21, E31, E32 to the other side of the equation E32 E31 E21 A = U? To do this, we just multiply the equation on the left by E21^-1 E31^-1 E32^-1:

    A = (E21^-1 E31^-1 E32^-1) U.

Fortunately, it is easy to compute the inverse of an elementary matrix (remember the example above: just negate the (i,j) entry of Eij). Let's calculate E21^-1 E31^-1 E32^-1:

    E21^-1 E31^-1 E32^-1 = [ 1 0 0 ] [ 1 0 0 ] [ 1 0 0 ]   [ 1 0 0 ] [ 1 0 0 ]   [ 1 0 0 ]
                           [ 2 1 0 ] [ 0 1 0 ] [ 0 1 0 ] = [ 2 1 0 ] [ 0 1 0 ] = [ 2 1 0 ] = L.
                           [ 0 0 1 ] [ 3 0 1 ] [ 0 4 1 ]   [ 3 0 1 ] [ 0 4 1 ]   [ 3 4 1 ]

The matrix E21^-1 E31^-1 E32^-1 is just a lower triangular matrix L! Thus we have shown that

    A = [  5 -1  2 ]   [ 1 0 0 ] [ 5 -1  2 ]
        [ 10  3  7 ] = [ 2 1 0 ] [ 0  5  3 ] = LU,
        [ 15 17 19 ]   [ 3 4 1 ] [ 0  0  1 ]

a lower triangular matrix times an upper triangular matrix!!!

LU Factorization Theorem. Suppose that A is an n x n matrix that can be reduced to an upper triangular matrix U by Gaussian elimination without using row exchanges. Then A = LU where L is lower triangular and U is upper triangular. Each entry on the diagonal of L is 1. For each i > j, the (i,j) entry of L is the multiple m of row j that was subtracted from row i during the elimination process. The diagonal entries of U are the pivots from the elimination process.

Example 1: Reading Off Information from an LU Factorization

Suppose we are given a matrix A already written as A = LU:

    A = [ 1 0 0 ] [ 3 1 2 ]
        [ 5 1 0 ] [ 0 4 1 ]
        [ 2 3 1 ] [ 0 0 2 ]

Question: What multiple of row 1 was subtracted from row 3 during elimination?
Answer: By looking at the (3,1) entry of L, we know that 2 times row 1 was subtracted from row 3 during elimination.

Question: What were the pivots?
Answer: The pivots were 3, 4 and 2 (the diagonal entries of U).
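The calculation of L as a product of inverse elementary matrices can be checked mechanically. A short NumPy sketch (not part of the notes) for the example of this section:

```python
import numpy as np

# Inverses of the elementary matrices: negate the off-diagonal entry.
E21_inv = np.array([[1.0, 0.0, 0.0], [2.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
E31_inv = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [3.0, 0.0, 1.0]])
E32_inv = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 4.0, 1.0]])

# Their product is lower triangular with the multipliers below the diagonal.
L = E21_inv @ E31_inv @ E32_inv
assert np.allclose(L, [[1, 0, 0], [2, 1, 0], [3, 4, 1]])

# L times U reproduces the original matrix A.
U = np.array([[ 5.0, -1.0, 2.0], [0.0, 5.0, 3.0], [0.0, 0.0, 1.0]])
A = np.array([[ 5.0, -1.0, 2.0], [10.0, 3.0, 7.0], [15.0, 17.0, 19.0]])
assert np.allclose(L @ U, A)
```

This is the content of the theorem: the multipliers 2, 3, 4 from elimination land directly in the (2,1), (3,1) and (3,2) entries of L.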

Example 2: Finding an LU Factorization

Suppose we are asked to find the factors L and U for the matrix

    A = [   6 0  2 ]
        [  24 1  8 ]
        [ -12 1 -3 ]

To do this, we perform Gaussian elimination and then read off the information we need from the elimination steps.

    [   6 0  2 ]  R2 - 4R1   [   6 0  2 ]  R3 + 2R1   [ 6 0 2 ]
    [  24 1  8 ]  -------->  [   0 1  0 ]  R3 - R2    [ 0 1 0 ]
    [ -12 1 -3 ]             [ -12 1 -3 ]  -------->  [ 0 0 1 ]

The matrix U is just the upper triangular matrix we end with after doing elimination,

    U = [ 6 0 2 ]
        [ 0 1 0 ]
        [ 0 0 1 ]

To record L, we just start with a lower triangular matrix with 1s on the diagonal and then for each i > j put the multiple m of row j which was subtracted from row i in the (i,j) entry of L:

    L = [  1 0 0 ]
        [  4 1 0 ]
        [ -2 1 1 ]

(Note: since we added 2 times Row 1 to Row 3, we subtracted -2 times Row 1 from Row 3.)

Example 3: Solving a System of Equations Given the LU Factorization

Once one knows the LU factorization of a matrix A, solving Ax = b is very quick. Let A, L and U be the matrices from Example 2. Suppose we are asked to solve Ax = b where

    b = [  4 ]
        [ 19 ]
        [ -6 ]

Since we already know the LU factorization A = LU, we can solve Ax = b as follows: first solve the system of equations Lc = b; second solve the system of equations Ux = c. (This works because if Lc = b and Ux = c, then Ax = LUx = L(Ux) = Lc = b.)

The system of equations Lc = b looks like

    [  1 0 0 ] [ c1 ]   [  4 ]
    [  4 1 0 ] [ c2 ] = [ 19 ]
    [ -2 1 1 ] [ c3 ]   [ -6 ]

Since L is lower triangular, we can find c1, c2 and c3 using forward substitution:

    c1 = 4
    4c1 + c2 = 19        =>  4(4) + c2 = 19         =>  c2 = 3
    -2c1 + c2 + c3 = -6  =>  -2(4) + (3) + c3 = -6  =>  c3 = -1
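The read-off procedure of Example 2 is exactly what a no-pivoting LU routine does: eliminate below each pivot and store the multipliers in L as you go. Here is a minimal NumPy sketch of that procedure (the function `lu_no_pivot` is our own illustration, not from the notes; it assumes every pivot is nonzero, as in the theorem):

```python
import numpy as np

def lu_no_pivot(A):
    """LU factorization by Gaussian elimination without row exchanges.

    Assumes every pivot is nonzero. Returns (L, U) with L unit lower
    triangular (the multipliers below the diagonal) and U upper triangular.
    """
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for j in range(n):                  # eliminate below the pivot U[j, j]
        for i in range(j + 1, n):
            m = U[i, j] / U[j, j]       # multiple of row j subtracted from row i
            L[i, j] = m                 # record the multiplier in L
            U[i] -= m * U[j]
    return L, U

# The matrix from Example 2.
A = np.array([[6.0, 0.0, 2.0], [24.0, 1.0, 8.0], [-12.0, 1.0, -3.0]])
L, U = lu_no_pivot(A)
assert np.allclose(L, [[1, 0, 0], [4, 1, 0], [-2, 1, 1]])
assert np.allclose(U, [[6, 0, 2], [0, 1, 0], [0, 0, 1]])
assert np.allclose(L @ U, A)
```

The multipliers 4, -2 and 1 land in L just as in the hand computation, including the sign flip for the R3 + 2R1 step.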

The system of equations Ux = c looks like

    [ 6 0 2 ] [ x1 ]   [  4 ]
    [ 0 1 0 ] [ x2 ] = [  3 ]
    [ 0 0 1 ] [ x3 ]   [ -1 ]

Since U is upper triangular, we can find x1, x2 and x3 using back substitution:

    x3 = -1
    x2 = 3
    6x1 + 2x3 = 4  =>  6x1 + 2(-1) = 4  =>  x1 = 1

Therefore, the solution to Ax = b is x1 = 1, x2 = 3 and x3 = -1. The point of this example is that if you already know A = LU, then to solve Ax = b no elimination steps are required! Just use substitution (forward on Lc = b, then backward on Ux = c)!
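The whole factor-solve method of Example 3 fits in a few lines of NumPy. This self-contained sketch (not from the notes) runs forward substitution on Lc = b and then back substitution on Ux = c, with no elimination on A itself:

```python
import numpy as np

# L and U from Example 2, and b from Example 3.
L = np.array([[ 1.0, 0.0, 0.0], [4.0, 1.0, 0.0], [-2.0, 1.0, 1.0]])
U = np.array([[ 6.0, 0.0, 2.0], [0.0, 1.0, 0.0], [ 0.0, 0.0, 1.0]])
b = np.array([4.0, 19.0, -6.0])
n = len(b)

# Forward substitution: solve Lc = b from the top row down.
c = np.zeros(n)
for i in range(n):
    c[i] = (b[i] - L[i, :i] @ c[:i]) / L[i, i]

# Back substitution: solve Ux = c from the bottom row up.
x = np.zeros(n)
for i in reversed(range(n)):
    x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]

# x solves Ax = b because A = LU; no elimination steps were needed.
assert np.allclose(L @ U @ x, b)
print(x)  # x1 = 1, x2 = 3, x3 = -1, matching the hand computation
```

Each triangular solve costs about n^2/2 multiplications, which is why factoring A once and reusing L and U pays off when solving Ax = b for many right-hand sides b.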