Elementary Matrices and The LU Factorization



Abstract: We describe the beautiful LU factorization of a square matrix (or how to write Gaussian elimination in terms of matrix multiplication).


Definition: Any matrix obtained by performing a single elementary row operation (ERO) on the identity (unit) matrix is called an elementary matrix.

There are three elementary row operations:
1. Permute rows i and j.
2. Multiply row i by a non-zero scalar λ.
3. Add λ times row i to row j.

Corresponding to the three EROs, we then have three elementary matrices:

Type 1: P_ij - permute rows i and j in I_n.
Type 2: M_i(λ) - multiply row i of I_n by the non-zero scalar λ.
Type 3: E_ij(λ) - add λ times row i of I_n to row j.

For example, in the 3 × 3 case the three types look like this:

Permutation matrix (rows 1 and 2 swapped):

    P_12 = [ 0  1  0 ]
           [ 1  0  0 ]
           [ 0  0  1 ]

Scaling matrix (row 2 multiplied by λ):

    M_2(λ) = [ 1  0  0 ]
             [ 0  λ  0 ]
             [ 0  0  1 ]

Row combination (λ times row 1 added to row 3):

    E_13(λ) = [ 1  0  0 ]
              [ 0  1  0 ]
              [ λ  0  1 ]

Pre-multiplying an n × p matrix A by an n × n elementary matrix has the effect of performing the corresponding ERO on A.
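The effect of pre-multiplication is easy to confirm numerically. Here is a minimal NumPy sketch (my own illustration, not part of the original notes; the helper names permutation, scaling and row_combination and the test matrix are assumptions):

```python
import numpy as np

def permutation(n, i, j):
    """P_ij: the identity with rows i and j swapped (0-based indices)."""
    P = np.eye(n)
    P[[i, j]] = P[[j, i]]
    return P

def scaling(n, i, lam):
    """M_i(lam): the identity with row i multiplied by the non-zero scalar lam."""
    M = np.eye(n)
    M[i, i] = lam
    return M

def row_combination(n, i, j, lam):
    """E_ij(lam): the identity with lam placed in position (j, i), so that
    E @ A adds lam times row i of A to row j of A."""
    E = np.eye(n)
    E[j, i] = lam
    return E

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])

# Pre-multiplying by the scaling matrix multiplies the first row of A by 3,
# exactly as if the ERO had been applied to A directly.
print(scaling(3, 0, 3.0) @ A)
```

The same check works for the other two helpers: permutation(3, 0, 1) @ A swaps the first two rows of A, and row_combination(3, 0, 2, 5.0) @ A adds five times the first row to the third row.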

Example: We can multiply the first row of a matrix A by a non-zero scalar λ directly (an elementary row operation). We can achieve the same result by pre-multiplying A by the corresponding elementary matrix M_1(λ).

An ERO can be performed on a matrix by pre-multiplying the matrix by the corresponding elementary matrix. Therefore, any matrix A can be reduced to a row echelon form (REF) by multiplication by a sequence of elementary matrices, and we can write

    E_m ... E_2 E_1 A = R    (1)

where R denotes an REF of A.

Consider a nonsingular n × n matrix A. Since the unique reduced row echelon form (RREF) of such a matrix is the identity matrix I_n, it follows that there exist elementary matrices E_1, E_2, ..., E_m (i.e. there exist elementary row operations) such that

    E_m ... E_2 E_1 A = I_n    (2)

But we know that A^(-1) A = I_n, and this implies from Eqn. (2) that

    A^(-1) = E_m ... E_2 E_1, or equivalently A^(-1) = E_m ... E_2 E_1 I_n    (3)

This shows that A^(-1) can be obtained by applying to I_n the same sequence of EROs that reduces A to the identity matrix. This is what we do to find A^(-1) using the Gauss-Jordan method.

LU decomposition of a nonsingular matrix

A nonsingular matrix A can be reduced to an upper triangular matrix using elementary row operations of Type 3 only (provided the pivots encountered along the way are non-zero). The elementary matrices corresponding to Type 3 EROs are unit lower triangular matrices. We can write

    E_m ... E_2 E_1 A = U    (4)

where E_1, E_2, ..., E_m are unit lower triangular Type 3 elementary matrices and U is an upper triangular matrix. Since each elementary matrix is nonsingular (meaning its inverse exists), we can write from Eqn. (4) that

    A = E_1^(-1) E_2^(-1) ... E_m^(-1) U    (5)

We know that the product of two lower triangular matrices is also a lower triangular matrix. Therefore Eqn. (5) can be written as

    A = LU,  where  L = E_1^(-1) E_2^(-1) ... E_m^(-1)    (6)

Of course we need to know the inverses of the Type 3 elementary matrices. The inverses of the three types of n × n elementary matrices are:

    P_ij^(-1) = P_ij,   M_i(λ)^(-1) = M_i(1/λ),   E_ij(λ)^(-1) = E_ij(-λ)
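Eqns. (4)-(6) translate almost line for line into code. The sketch below is again my own NumPy illustration under the assumption that no zero pivot is encountered (the function name and test matrix are not from the notes); it eliminates below each pivot with Type 3 elementary matrices and accumulates their inverses into L.

```python
import numpy as np

def lu_by_elementary_matrices(A):
    """Return (L, U) with A = L @ U, built explicitly from Type 3 elementary
    matrices E_ij(-m_ji) and their inverses E_ij(m_ji).
    Assumes every pivot is non-zero, so no row interchanges are needed."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    for i in range(n - 1):              # pivot column
        for j in range(i + 1, n):       # rows below the pivot
            m = U[j, i] / U[i, i]       # multiplier m_ji
            E = np.eye(n)
            E[j, i] = -m                # E_ij(-m_ji) zeros the (j, i) entry
            U = E @ U                   # Eqn. (4): apply the ERO to U
            E_inv = np.eye(n)
            E_inv[j, i] = m             # E_ij(-m)^(-1) = E_ij(m)
            L = L @ E_inv               # Eqn. (6): L = E_1^(-1) E_2^(-1) ...
    return L, U

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
L, U = lu_by_elementary_matrices(A)
print(L)                       # unit lower triangular
print(U)                       # upper triangular
print(np.allclose(L @ U, A))   # True: the factors reproduce A
```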

Example: Determine the LU factorization of a 3 × 3 matrix A.

First, let us do the EROs to reduce A into an upper triangular matrix in the following manner:

    subtract m_21 times row 1 from row 2 (to put a zero in the (2,1) position),
    subtract m_31 times row 1 from row 3 (to put a zero in the (3,1) position),
    subtract m_32 times row 2 from row 3 (to put a zero in the (3,2) position).

These EROs can be written in terms of their equivalent elementary matrices as

    E_3 E_2 E_1 A = U    (7)

where

    E_1 = E_12(-m_21),  E_2 = E_13(-m_31),  E_3 = E_23(-m_32).

Note the order of multiplication in Eqn. (7): the elementary matrix for the first ERO is applied first and therefore sits closest to A.

We can compute the inverses of the elementary matrices very easily:

    E_1^(-1) = E_12(m_21),  E_2^(-1) = E_13(m_31),  E_3^(-1) = E_23(m_32).

Therefore,

    L = E_1^(-1) E_2^(-1) E_3^(-1) = E_12(m_21) E_13(m_31) E_23(m_32)

      = [ 1     0     0 ]
        [ m_21  1     0 ]
        [ m_31  m_32  1 ]

and A can therefore be written as A = LU.
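As a quick sanity check on that product, here is a small NumPy snippet (illustrative numerical values of the multipliers are assumed, since none are specified above) showing that the scalars simply land below the diagonal:

```python
import numpy as np

def E(n, i, j, lam):
    """Type 3 elementary matrix (0-based): add lam times row i to row j."""
    M = np.eye(n)
    M[j, i] = lam
    return M

m21, m31, m32 = 2.0, 4.0, 3.0   # assumed illustrative multipliers

# L = E_12(m21) E_13(m31) E_23(m32) in the notes' 1-based notation:
# each multiplier ends up in its own (j, i) slot below the diagonal.
L = E(3, 0, 1, m21) @ E(3, 0, 2, m31) @ E(3, 1, 2, m32)
print(L)
# [[1. 0. 0.]
#  [2. 1. 0.]
#  [4. 3. 1.]]
```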

We can construct the lower triangular matrix L without multiplying the elementary matrices if we utilize the multipliers obtained while converting the matrix A into an upper triangular matrix. But what exactly are those multipliers?

Definition: When using an ERO of Type 3, the multiple of a specific row i that is subtracted from row j in order to put a zero in the (j, i) position is called a multiplier, and is denoted m_ji.

In our example we have three multipliers: m_21, m_31 and m_32. If we look at the unit lower triangular matrix L carefully, we see that the elements beneath the leading diagonal are just the corresponding multipliers: m_ji sits in the (j, i) position of L. This relationship holds in general. Therefore, we can do elementary row operations of Type 3 to reduce A to upper triangular form and then utilize the corresponding multipliers to write down L directly.
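That observation is exactly how LU factorization is usually coded: no elementary matrices are formed at all, and each multiplier is stored straight into L as it is computed. A minimal sketch, again assuming non-zero pivots and using a hypothetical helper name of my own:

```python
import numpy as np

def lu_no_pivot(A):
    """Doolittle-style LU without pivoting: U is reduced row by row and each
    multiplier m_ji is written directly into L[j, i]. Assumes non-zero pivots."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    L = np.eye(n)
    for i in range(n - 1):
        for j in range(i + 1, n):
            L[j, i] = U[j, i] / U[i, i]   # multiplier m_ji goes straight into L
            U[j, :] -= L[j, i] * U[i, :]  # Type 3 ERO on row j
    return L, U

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])
L, U = lu_no_pivot(A)
print(L)                      # the multipliers 2, 4, 3 sit below the unit diagonal
print(np.allclose(L @ U, A))  # True
```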

Example: Determine the LU factors of a second matrix A, this time 4 × 4. The Type 3 EROs that reduce A to upper triangular form can again be achieved by pre-multiplying A by the corresponding elementary matrices. Listed in the order they are applied, they are

    E_1 = E_12(-m_21),  E_2 = E_13(-m_31),  E_3 = E_14(-m_41),
    E_4 = E_23(-m_32),  E_5 = E_24(-m_42),  E_6 = E_34(-m_43),

and the result of the elimination is the upper triangular matrix

    U = E_6 E_5 E_4 E_3 E_2 E_1 A

(whose entries may be recorded in decimal or, if you like the fractional form, as fractions).

The lower triangular matrix L can then be found from

    L = E_1^(-1) E_2^(-1) E_3^(-1) E_4^(-1) E_5^(-1) E_6^(-1)
      = E_12(m_21) E_13(m_31) E_14(m_41) E_23(m_32) E_24(m_42) E_34(m_43),

again in decimal or fractional form as you prefer.
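For a concrete run of a 4 × 4 case one can let a library do the bookkeeping and then verify by multiplying the factors back together. Note that SciPy's routine uses partial pivoting, so it returns P, L and U with A = P L U rather than the pivot-free factorization developed here; the test matrix below is an arbitrary assumption, not the matrix of this example.

```python
import numpy as np
from scipy.linalg import lu

# Arbitrary 4 x 4 test matrix (assumed for illustration).
A = np.array([[2., 1., 1., 0.],
              [4., 3., 3., 1.],
              [8., 7., 9., 5.],
              [6., 7., 9., 8.]])

P, L, U = lu(A)                    # partial pivoting: A = P @ L @ U
print(np.allclose(P @ L @ U, A))   # True: multiplying the factors recovers A
print(np.allclose(np.tril(L), L))  # True: L is lower triangular (unit diagonal)
print(np.allclose(np.triu(U), U))  # True: U is upper triangular
```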

Note that the multipliers corresponding to the six EROs are m_21, m_31, m_41, m_32, m_42 and m_43, and that in the unit lower triangular matrix L they appear directly beneath the leading diagonal:

    L = [ 1     0     0     0 ]
        [ m_21  1     0     0 ]
        [ m_31  m_32  1     0 ]
        [ m_41  m_42  m_43  1 ]

So, as in the first example, L can be constructed directly from the multipliers, without forming or multiplying any elementary matrices. To verify the factorization, multiply L and U and check that LU = A.

THE END