Linear Systems. Singular and Nonsingular Matrices


Linear Systems

Example: Find x_1, x_2, x_3 such that the following three equations hold:

     2 x_1 + 2 x_2 +   x_3 =  1
     4 x_1 + 2 x_2 + 2 x_3 = -2
    -2 x_1 + 3 x_2 + 2 x_3 =  6

We can write this using matrix-vector notation as

    [  2  2  1 ] [ x_1 ]   [  1 ]
    [  4  2  2 ] [ x_2 ] = [ -2 ]
    [ -2  3  2 ] [ x_3 ]   [  6 ]
         A          x          b

General case: We can have n equations for n unknowns.

Given: coefficients a_11, ..., a_nn and right-hand side entries b_1, ..., b_n.
Wanted: numbers x_1, ..., x_n such that

    a_11 x_1 + ... + a_1n x_n = b_1
                  :
    a_n1 x_1 + ... + a_nn x_n = b_n

Using matrix-vector notation this can be written as follows: given a matrix A ∈ R^(n×n) and a right-hand side vector b ∈ R^n, find a vector x ∈ R^n such that

    [ a_11 ... a_1n ] [ x_1 ]   [ b_1 ]
    [  :         :  ] [  :  ] = [  :  ]
    [ a_n1 ... a_nn ] [ x_n ]   [ b_n ]

Singular and Nonsingular Matrices

Definition: We call a matrix A ∈ R^(n×n) singular if there exists a nonzero vector x ∈ R^n such that Ax = 0.

Example: The matrix

    A = [ 1  2 ]
        [ 2  4 ]

is singular, since for x = (2, -1)^T we have Ax = (0, 0)^T. The matrix

    A = [ 1 -2 ]
        [ 0  4 ]

is nonsingular: Ax = (b_1, b_2)^T implies x_2 = b_2/4 and x_1 = b_1 + 2 x_2. Therefore Ax = 0 implies x = 0.
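These definitions translate directly into a quick numerical experiment. The following is a minimal sketch (assuming NumPy, which the notes themselves do not use) that sets up the 3-by-3 example in matrix-vector form and solves it; the determinant check is only a stand-in for the Gaussian elimination test described below:

    import numpy as np

    # Matrix and right-hand side of the example above
    A = np.array([[ 2.0,  2.0,  1.0],
                  [ 4.0,  2.0,  2.0],
                  [-2.0,  3.0,  2.0]])
    b = np.array([1.0, -2.0, 6.0])

    # A nonzero determinant means A is nonsingular (in exact arithmetic)
    print(np.linalg.det(A))       # -12.0, so A is nonsingular

    x = np.linalg.solve(A, b)     # the unique solution of Ax = b
    print(x)                      # [-1.  2. -1.]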

Observation: If A is singular, the linear system Ax = b has either no solution or infinitely many solutions: as A is singular there exists a nonzero vector y with Ay = 0. If Ax = b has a solution x, then x + αy is also a solution for any α ∈ R.

We will later prove: If A is nonsingular, then the linear system Ax = b has a unique solution x for any given b ∈ R^n.

We only want to consider problems where there is a unique solution, i.e., where the matrix A is nonsingular. How can we check whether a given matrix A is nonsingular? If we use exact arithmetic we can use Gaussian elimination with pivoting (which will be explained later) to show that A is nonsingular. But in machine arithmetic we can only assume that a machine approximation for the matrix A is known, and we will have to use a different method to decide whether A is nonsingular in this case.

Gaussian Elimination without Pivoting

Basic algorithm: We can add (or subtract) a multiple of one equation to another equation without changing the solution of the linear system. By repeatedly using this idea we can eliminate unknowns from the equations until we finally get an equation which contains just one unknown variable.

Elimination:
step 1: eliminate x_1 from equation 2, ..., equation n by subtracting multiples of equation 1:
    eq_2 := eq_2 - l_21 eq_1, ..., eq_n := eq_n - l_n1 eq_1
step 2: eliminate x_2 from equation 3, ..., equation n by subtracting multiples of equation 2:
    eq_3 := eq_3 - l_32 eq_2, ..., eq_n := eq_n - l_n2 eq_2
    :
step n-1: eliminate x_{n-1} from equation n by subtracting a multiple of equation n-1:
    eq_n := eq_n - l_{n,n-1} eq_{n-1}

Back substitution: Solve equation n for x_n. Solve equation n-1 for x_{n-1}. ... Solve equation 1 for x_1.

The elimination transforms the original linear system Ax = b into a new linear system Ux = y with an upper triangular matrix U and a new right-hand side vector y.

Example: We consider the linear system

    [  2  2  1 ] [ x_1 ]   [  1 ]
    [  4  2  2 ] [ x_2 ] = [ -2 ]
    [ -2  3  2 ] [ x_3 ]   [  6 ]
         A          x          b

Elimination: To eliminate x_1 from equation 2 we choose l_21 = 4/2 = 2 and subtract l_21 times equation 1 from equation 2. To eliminate x_1 from equation 3 we choose l_31 = -2/2 = -1 and subtract l_31 times equation 1 from equation 3. Then the

linear system becomes

    [ 2  2  1 ] [ x_1 ]   [  1 ]
    [ 0 -2  0 ] [ x_2 ] = [ -4 ]
    [ 0  5  3 ] [ x_3 ]   [  7 ]

To eliminate x_2 from equation 3 we choose l_32 = 5/(-2) and subtract l_32 times equation 2 from equation 3, yielding the linear system

    [ 2  2  1 ] [ x_1 ]   [  1 ]
    [ 0 -2  0 ] [ x_2 ] = [ -4 ]
    [ 0  0  3 ] [ x_3 ]   [ -3 ]
         U          x          y

Now we have obtained a linear system with an upper triangular matrix (denoted by U) and a new right-hand side vector (denoted by y).

Back substitution: The third equation is 3 x_3 = -3 and gives x_3 = -1. Then the second equation becomes -2 x_2 = -4, yielding x_2 = 2. Then the first equation becomes 2 x_1 + 4 - 1 = 1, yielding x_1 = -1.

Transforming the matrix and right-hand side vector separately: We can split the elimination into two parts: the first part acts only on the matrix A, generating the upper triangular matrix U and the multipliers l_jk. The second part uses the multipliers l_jk to transform the right-hand side vector b to the vector y.

Elimination for the matrix (part 1): Given the matrix A, find an upper triangular matrix U and multipliers l_jk. Let U := A and perform the following operations on the rows of U:

step 1: l_21 := u_21/u_11, row_2 := row_2 - l_21 row_1, ..., l_n1 := u_n1/u_11, row_n := row_n - l_n1 row_1
step 2: l_32 := u_32/u_22, row_3 := row_3 - l_32 row_2, ..., l_n2 := u_n2/u_22, row_n := row_n - l_n2 row_2
    :
step n-1: l_{n,n-1} := u_{n,n-1}/u_{n-1,n-1}, row_n := row_n - l_{n,n-1} row_{n-1}

Transform the right-hand side vector (part 2): Given the vector b and the multipliers l_jk, find the transformed vector y. Let y := b and perform the following operations on y:

step 1: y_2 := y_2 - l_21 y_1, ..., y_n := y_n - l_n1 y_1
step 2: y_3 := y_3 - l_32 y_2, ..., y_n := y_n - l_n2 y_2
    :
step n-1: y_n := y_n - l_{n,n-1} y_{n-1}

Back substitution (part 3): Given the matrix U and the vector y, find the vector x by solving Ux = y:

    x_n := y_n / u_nn
    x_{n-1} := (y_{n-1} - u_{n-1,n} x_n) / u_{n-1,n-1}
    :
    x_1 := (y_1 - u_12 x_2 - ... - u_1n x_n) / u_11
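The three parts translate almost line by line into code. The following is a sketch in Python with NumPy (an assumption; the notes are language-independent) of Gaussian elimination without pivoting, applied to the example above; the zero-pivot check anticipates the next paragraph:

    import numpy as np

    def eliminate(A):
        """Part 1: return the multipliers l_jk (strict lower triangle of L) and U."""
        n = A.shape[0]
        U = A.astype(float).copy()
        L = np.zeros((n, n))
        for j in range(n - 1):                 # elimination step j+1
            if U[j, j] == 0.0:
                raise ZeroDivisionError("zero pivot u_jj")
            for k in range(j + 1, n):
                L[k, j] = U[k, j] / U[j, j]    # multiplier l_kj
                U[k, :] -= L[k, j] * U[j, :]
        return L, U

    def transform_rhs(L, b):
        """Part 2: apply the stored multipliers to b, yielding y."""
        y = b.astype(float).copy()
        n = len(y)
        for j in range(n - 1):
            for k in range(j + 1, n):
                y[k] -= L[k, j] * y[j]
        return y

    def back_subst(U, y):
        """Part 3: solve the upper triangular system Ux = y."""
        n = len(y)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x

    A = np.array([[2.0, 2, 1], [4, 2, 2], [-2, 3, 2]])
    b = np.array([1.0, -2, 6])
    L, U = eliminate(A)
    x = back_subst(U, transform_rhs(L, b))
    print(x)                                   # [-1.  2. -1.]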

What can possibly go wrong: Note that we have to divide by the so-called pivot elements u_11, ..., u_{n-1,n-1} in part 1, and we divide by u_11, ..., u_nn in part 3. If we encounter a zero pivot we have to stop the algorithm with an error message. Part 1 of the algorithm therefore becomes:

step 1: If u_11 = 0 then stop with error message, else l_21 := u_21/u_11, row_2 := row_2 - l_21 row_1, ...
step 2: If u_22 = 0 then stop with error message, else l_32 := u_32/u_22, row_3 := row_3 - l_32 row_2, ...
    :
step n-1: If u_{n-1,n-1} = 0 then stop with error message, else l_{n,n-1} := u_{n,n-1}/u_{n-1,n-1}, row_n := row_n - l_{n,n-1} row_{n-1}
If u_nn = 0 then stop with error message.

Observation: If for a given matrix A part 1 of the algorithm finishes without an error message, then parts 2 and 3 work, and the linear system has a unique solution. Hence the matrix A must be nonsingular. However, the converse is not true: for the matrix

    A = [ 0  1 ]
        [ 1  0 ]

the algorithm stops immediately since u_11 = 0, but the matrix A is nonsingular.

Reformulating part 2 as forward substitution: Now we want to express the relation between b and y in a different way. Assume we are given y, and we want to reconstruct b by reversing the operations: for n = 3 we would perform the operations y_3 := y_3 + l_32 y_2, y_3 := y_3 + l_31 y_1, y_2 := y_2 + l_21 y_1 and obtain

    [ b_1 ]   [ y_1                        ]   [  1    0    0 ] [ y_1 ]
    [ b_2 ] = [ y_2 + l_21 y_1             ] = [ l_21  1    0 ] [ y_2 ]
    [ b_3 ]   [ y_3 + l_31 y_1 + l_32 y_2  ]   [ l_31 l_32  1 ] [ y_3 ]
                                                       L

with the lower triangular matrix L. Therefore we can rephrase the transformation step as follows: Given L and b, solve the linear system Ly = b for y. Since L is lower triangular, we can solve this linear system by forward substitution: we first solve the first equation for y_1, then solve the second equation for y_2, and so on.

The LU decomposition: In the same way as we reconstructed the vector b from the vector y, we can reconstruct the matrix A from the matrix U: for n = 3 we obtain

    [ row 1 of A ]   [ (row 1 of U)                                        ]   [  1    0    0 ] [ row 1 of U ]
    [ row 2 of A ] = [ (row 2 of U) + l_21 (row 1 of U)                    ] = [ l_21  1    0 ] [ row 2 of U ]
    [ row 3 of A ]   [ (row 3 of U) + l_31 (row 1 of U) + l_32 (row 2 of U)]   [ l_31 l_32  1 ] [ row 3 of U ]
                                                                                       L

Therefore we have A = LU. We have written the matrix A as the product of a lower triangular matrix L (with 1s on the diagonal) and an upper triangular matrix U. This is called the LU decomposition of A.
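Forward substitution and the identity A = LU are easy to check in the same setting (a sketch continuing the code above; L and U are the computed factors of the example matrix):

    def forward_subst(L, b):
        """Solve Ly = b where L is lower triangular with 1s on the diagonal."""
        y = b.astype(float).copy()
        for i in range(len(y)):
            y[i] -= L[i, :i] @ y[:i]   # subtract l_i1 y_1 + ... + l_{i,i-1} y_{i-1}
        return y

    L1 = L + np.eye(3)                 # put 1s on the diagonal of the multiplier matrix
    print(np.allclose(L1 @ U, A))      # True: this is the LU decomposition A = LU
    print(forward_subst(L1, b))        # [ 1. -4. -3.], the vector y from the example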

Summary: Now we can rephrase parts 1, 2, 3 of the algorithm as follows:

1. Find the LU decomposition A = LU: perform elimination on the matrix A, yielding an upper triangular matrix U. Store the multipliers in a matrix L and put 1s on the diagonal:

    L := [  1                      0 ]
         [ l_21    1                 ]
         [  :            ...         ]
         [ l_n1  ...  l_{n,n-1}    1 ]

2. Solve Ly = b using forward substitution.
3. Solve Ux = y using back substitution.

The matrix decomposition A = LU allows us to solve the linear system Ax = b in two steps: since L(Ux) = b, we can first solve Ly = b to find y, and then solve Ux = y to find x.

Example: We start with U := A and L being the zero matrix:

    L = [ 0 0 0 ]      U = [  2  2  1 ]
        [ 0 0 0 ]          [  4  2  2 ]
        [ 0 0 0 ]          [ -2  3  2 ]

step 1:

    L = [  0 0 0 ]     U = [ 2  2  1 ]
        [  2 0 0 ]         [ 0 -2  0 ]
        [ -1 0 0 ]         [ 0  5  3 ]

step 2:

    L = [  0    0  0 ]     U = [ 2  2  1 ]
        [  2    0  0 ]         [ 0 -2  0 ]
        [ -1 -5/2  0 ]         [ 0  0  3 ]

We put 1s on the diagonal of L and obtain

    L = [  1    0  0 ]
        [  2    1  0 ]
        [ -1 -5/2  1 ]

We solve the linear system Ly = b,

    [  1    0  0 ] [ y_1 ]   [  1 ]
    [  2    1  0 ] [ y_2 ] = [ -2 ]
    [ -1 -5/2  1 ] [ y_3 ]   [  6 ]

by forward substitution: The first equation gives y_1 = 1. Then the second equation becomes 2 + y_2 = -2, yielding y_2 = -4. Then the third equation becomes -1 - (5/2)(-4) + y_3 = 6, yielding y_3 = -3.

The back substitution for solving Ux = y is performed as explained above.
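The point of the factor-solve method is that L and U depend only on A: the factorization is computed once and can then be reused for every new right-hand side, repeating only the cheap forward and back substitutions. A sketch, continuing the code above:

    L, U = eliminate(A)                # factor once
    L1 = L + np.eye(3)
    for b in (np.array([1.0, -2, 6]),  # ... then solve for several right-hand sides
              np.array([0.0, 1, 0]),
              np.array([1.0, 0, 0])):
        x = back_subst(U, forward_subst(L1, b))
        print(x, np.allclose(A @ x, b))   # True for each b; L1 and U are reused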

Gaussian Elimination with Pivoting

There is a problem with Gaussian elimination without pivoting: if at step j we have u_jj = 0 we cannot continue, since we have to divide by u_jj. This element u_jj by which we have to divide is called the pivot.

Example: For

    A = [ 4 4 4 ]
        [ 1 1 2 ]
        [ 1 2 3 ]

we use l_21 = 1/4, l_31 = 1/4 and obtain after step 1 of the elimination

    U = [ 4 4 4 ]
        [ 0 0 1 ]
        [ 0 1 2 ]

Now we have u_22 = 0 and cannot continue.

Gaussian elimination with pivoting uses row interchanges to overcome this problem. For step j of the algorithm we consider the pivot candidates u_jj, u_{j+1,j}, ..., u_nj, i.e., the diagonal element and all elements below it. If there is a nonzero pivot candidate, say u_kj, we interchange rows j and k of the matrix U. Then we can continue with the elimination. Since we want the multipliers to correspond to the appropriate rows of U, we also interchange the rows of L whenever we interchange the rows of U. In order to keep track of the interchanges we use a vector p which is initially (1, 2, ..., n), and we interchange its entries whenever we interchange the rows of U.

Algorithm: Gaussian elimination with pivoting
Input: matrix A
Output: matrix L (lower triangular), matrix U (upper triangular), vector p (contains a permutation of 1, ..., n)

    L := zero matrix; U := A; p := (1, 2, ..., n)
    For j = 1 to n-1
        If (U_jj, ..., U_nj) = (0, ..., 0)
            Stop with error message "matrix is singular"
        Else
            Pick k in {j, j+1, ..., n} such that U_kj ≠ 0
        End
        Interchange row j and row k of U
        Interchange row j and row k of L
        Interchange p_j and p_k
        For k = j+1 to n
            L_kj := U_kj / U_jj
            (row k of U) := (row k of U) - L_kj (row j of U)
        End
    End
    If U_nn = 0
        Stop with error message "matrix is singular"
    End
    For j = 1 to n
        L_jj := 1
    End

Note that we can have several nonzero pivot candidates, and we still have to specify which one we pick. This is called a pivoting strategy. Here are two examples:

Pivoting strategy 1: Pick the smallest k such that U_kj ≠ 0.
Pivoting strategy 2: Pick k so that |U_kj| is maximal.

If we use exact arithmetic, it does not matter which nonzero pivot we choose. However, in machine arithmetic the choice of the pivot can make a big difference for the accuracy of the result, as we will see below.

Example: We apply the algorithm to the matrix from the previous example:

    L = [ 0 0 0 ]      U = [ 4 4 4 ]      p = (1, 2, 3)
        [ 0 0 0 ]          [ 1 1 2 ]
        [ 0 0 0 ]          [ 1 2 3 ]

For column 1 the pivot candidates are 4, 1, 1, and we use 4; no rows are interchanged, and elimination with l_21 = 1/4, l_31 = 1/4 gives

    L = [  0  0 0 ]    U = [ 4 4 4 ]      p = (1, 2, 3)
        [ 1/4 0 0 ]        [ 0 0 1 ]
        [ 1/4 0 0 ]        [ 0 1 2 ]

For column 2 the pivot candidates are 0, 1, and we use 1. Therefore we interchange rows 2 and 3 of L, U, p:

    L = [  0  0 0 ]    U = [ 4 4 4 ]      p = (1, 3, 2)
        [ 1/4 0 0 ]        [ 0 1 2 ]
        [ 1/4 0 0 ]        [ 0 0 1 ]

For column 2 we then have l_32 = 0 and U does not change. Finally we put 1s on the diagonal of L and get the final result

    L = [  1  0 0 ]    U = [ 4 4 4 ]      p = (1, 3, 2)
        [ 1/4 1 0 ]        [ 0 1 2 ]
        [ 1/4 0 1 ]        [ 0 0 1 ]

LU decomposition for Gaussian elimination with pivoting: If we perform row interchanges during the algorithm we no longer have LU = A. Instead we obtain a matrix Ã which contains the rows of A in a different order:

    LU = Ã := [ row p_1 of A ]
              [       :      ]
              [ row p_n of A ]

The reason for this is the following: if we applied Gaussian elimination without pivoting to the matrix Ã, we would get the same L and U that we got for the matrix A with pivoting.

Solving a linear system using Gaussian elimination with pivoting: In order to solve a linear system Ax = b we first apply Gaussian elimination with pivoting to the matrix A, yielding L, U, p. Then we know that LU = Ã. By reordering the equations of Ax = b in the order p_1, ..., p_n we obtain the linear system

    [ row p_1 of A ]       [ b_{p_1} ]
    [       :      ] x  =  [    :    ]
    [ row p_n of A ]       [ b_{p_n} ]
        Ã = LU

i.e., we have L(Ux) = (b_{p_1}, ..., b_{p_n})^T. We set y = Ux. Then we solve the linear system Ly = (b_{p_1}, ..., b_{p_n})^T using forward substitution, and finally we solve the linear system Ux = y using back substitution.
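The pseudocode above can be written out as follows (a sketch assuming NumPy; it uses pivoting strategy 2 and returns a 0-based permutation vector p, so that L @ U equals A with its rows reordered as A[p]):

    import numpy as np

    def lu_pivot(A):
        """Gaussian elimination with partial pivoting: returns L, U, p with L @ U = A[p]."""
        n = A.shape[0]
        U = A.astype(float).copy()
        L = np.zeros((n, n))
        p = np.arange(n)
        for j in range(n - 1):
            k = j + int(np.argmax(np.abs(U[j:, j])))   # pivoting strategy 2
            if U[k, j] == 0.0:
                raise ValueError("matrix is singular")
            for M in (U, L):                           # interchange rows j and k
                M[[j, k], :] = M[[k, j], :]
            p[[j, k]] = p[[k, j]]
            for i in range(j + 1, n):
                L[i, j] = U[i, j] / U[j, j]
                U[i, :] -= L[i, j] * U[j, :]
        if U[n - 1, n - 1] == 0.0:
            raise ValueError("matrix is singular")
        return L + np.eye(n), U, p

    A = np.array([[4.0, 4, 4], [1, 1, 2], [1, 2, 3]])
    L, U, p = lu_pivot(A)
    print(p)                         # [0 2 1], the 0-based version of p = (1, 3, 2)
    print(np.allclose(L @ U, A[p]))  # True: LU reproduces the reordered rows of A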

Summary:
1. Apply Gaussian elimination with pivoting to the matrix A, yielding L, U, p such that LU = (rows p_1, ..., p_n of A).
2. Solve Ly = (b_{p_1}, ..., b_{p_n})^T using forward substitution.
3. Solve Ux = y using back substitution.

Note that this algorithm is also known as Gaussian elimination with partial pivoting (partial means that we perform row interchanges, but no column interchanges).

Example: Solve Ax = b for

    A = [ 4 4 4 ]      b = [ 12 ]
        [ 1 1 2 ]          [  4 ]
        [ 1 2 3 ]          [  8 ]

With L and p from the Gaussian elimination above, the linear system Ly = (b_{p_1}, b_{p_2}, b_{p_3})^T = (b_1, b_3, b_2)^T is

    [  1  0 0 ] [ y_1 ]   [ 12 ]
    [ 1/4 1 0 ] [ y_2 ] = [  8 ]
    [ 1/4 0 1 ] [ y_3 ]   [  4 ]

yielding y = (12, 5, 1)^T by forward substitution. Then we solve Ux = y,

    [ 4 4 4 ] [ x_1 ]   [ 12 ]
    [ 0 1 2 ] [ x_2 ] = [  5 ]
    [ 0 0 1 ] [ x_3 ]   [  1 ]

by back substitution and obtain x_3 = 1, x_2 = 3, x_1 = -1.
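In practice one calls a library routine that implements exactly this factor-solve scheme. A sketch using SciPy (an assumption, not part of the notes; scipy.linalg.lu_factor performs the factorization with partial pivoting, and lu_solve does the two substitutions):

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    A = np.array([[4.0, 4, 4], [1, 1, 2], [1, 2, 3]])
    b = np.array([12.0, 4, 8])

    lu, piv = lu_factor(A)        # factor once (L and U packed into one array)
    x = lu_solve((lu, piv), b)    # forward + back substitution
    print(x)                      # [-1.  3.  1.]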

Existence and Uniqueness of Solutions

If we perform Gaussian elimination with pivoting on a matrix A (in exact arithmetic), there are two possibilities:

1. The algorithm can be performed without an error message. In this case the diagonal elements u_11, ..., u_nn are all nonzero. For an arbitrary right-hand side vector b we can then perform forward substitution (since the diagonal elements of L are 1) and back substitution without having to divide by zero. Therefore this yields exactly one solution x for our linear system.

2. The algorithm stops with an error message. E.g., assume that the algorithm stops in column j = 3 since all pivot candidates are zero. This means that the linear system Ax = b is equivalent to a linear system of the form

    [ ■  ∗  ∗  ∗ ... ∗ ] [ x_1 ]   [ c_1 ]
    [ 0  ■  ∗  ∗ ... ∗ ] [ x_2 ]   [ c_2 ]
    [ 0  0  0  ∗ ... ∗ ] [ x_3 ] = [ c_3 ]
    [ :  :  :  :     : ] [  :  ]   [  :  ]
    [ 0  0  0  ∗ ... ∗ ] [ x_n ]   [ c_n ]

Here ∗ denotes an arbitrary number, and ■ denotes a nonzero number. Let us first consider equations 3 through n of this system. These equations only contain x_4, ..., x_n. There are two possibilities:

(a) There is a solution x_4, ..., x_n of equations 3 through n. Then we can choose x_3 arbitrarily. Now we can use equation 2 to find x_2, and finally equation 1 to find x_1 (note that the diagonal elements ■ are nonzero). Therefore we have infinitely many solutions for our linear system.

(b) There is no solution x_4, ..., x_n of equations 3 through n. That means that our original linear system has no solution.

Observation: In case 1 the linear system has a unique solution for any b. In particular, Ax = 0 implies x = 0, hence A is nonsingular. In case 2 we can construct a nonzero solution x for the linear system Ax = 0, hence A is singular. We have shown:

Theorem: If a square matrix A is nonsingular, then the linear system Ax = b has a unique solution x for any right-hand side vector b. If the matrix A is singular, then the linear system Ax = b has either infinitely many solutions or no solution, depending on the right-hand side vector.

Example: Consider

    A = [ 4 -2 ]
        [ 2 -1 ]

The first step of Gaussian elimination gives l_21 = 2/4 = 1/2 and

    U = [ 4 -2 ]
        [ 0  0 ]

Now we have u_22 = 0, therefore the algorithm stops: the matrix A is singular.

Consider the linear system

    [ 4 -2 ] [ x_1 ]   [ 4 ]
    [ 2 -1 ] [ x_2 ] = [ 7 ]

By the first step of elimination this becomes

    [ 4 -2 ] [ x_1 ]   [ 4 ]
    [ 0  0 ] [ x_2 ] = [ 5 ]

Note that the second equation, 0 = 5, has no solution, and therefore the linear system has no solution.

Now consider

    [ 4 -2 ] [ x_1 ]   [ 2 ]
    [ 2 -1 ] [ x_2 ] = [ 1 ]

By the first step of elimination this becomes

    [ 4 -2 ] [ x_1 ]   [ 2 ]
    [ 0  0 ] [ x_2 ] = [ 0 ]

Note that the second equation is always true. We can choose x_2 arbitrarily and then determine x_1 from equation 1: 4 x_1 - 2 x_2 = 2, yielding x_1 = (2 + 2 x_2)/4. This gives infinitely many solutions.
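Both failure modes of a singular matrix can be seen numerically (a sketch, again assuming NumPy; np.linalg.solve refuses the singular matrix, while np.linalg.lstsq returns one representative of the infinitely many solutions when the system is consistent):

    import numpy as np

    A = np.array([[4.0, -2], [2, -1]])
    try:
        np.linalg.solve(A, np.array([4.0, 7]))
    except np.linalg.LinAlgError as err:
        print("solve failed:", err)            # A is singular

    x, *_ = np.linalg.lstsq(A, np.array([2.0, 1]), rcond=None)
    print(x, A @ x)                            # A @ x recovers (2, 1)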

Gaussian Elimination with Pivoting in Machine Arithmetic

For a numerical computation we can only expect to find a reasonable answer if the original problem has a unique solution. For a linear system Ax = b this means that the matrix A should be nonsingular. In this case we have found that Gaussian elimination with pivoting, together with forward and back substitution, always gives us the answer, at least in exact arithmetic. It turns out, however, that Gaussian elimination with pivoting can give us solutions with an unnecessarily large roundoff error, depending on the choice of the pivots.

Example: Consider the linear system

    [ 0.001  1 ] [ x_1 ]   [ 1 ]
    [   1    1 ] [ x_2 ] = [ 2 ]

In the first column we have the two pivot candidates 0.001 and 1. If we choose 0.001 as pivot we get l_21 = 1000 and

    [ 0.001   1  ] [ x_1 ]   [    1 ]
    [   0   -999 ] [ x_2 ] = [ -998 ]

Now back substitution gives x_2 = 998/999 and x_1 = (1 - 998/999)/0.001. Note that we have subtractive cancellation in the computation of x_1: we have to expect a roundoff error of size ε_M in x_2. When we compute 1 - x_2 ≈ 1/999 we therefore get a relative error of about 999 ε_M, i.e., we lose about three digits of accuracy.

Alternatively, we can choose 1 as the pivot in the first column and interchange the two equations:

    [   1    1 ] [ x_1 ]   [ 2 ]
    [ 0.001  1 ] [ x_2 ] = [ 1 ]

Now we get l_21 = 0.001 and

    [ 1    1   ] [ x_1 ]   [   2   ]
    [ 0  0.999 ] [ x_2 ] = [ 0.998 ]

This yields x_2 = 0.998/0.999 = 998/999 and x_1 = 2 - 998/999 = 1000/999. Note that there is no subtractive cancellation, and we obtain x_1 with almost the full machine accuracy.

This example is typical: choosing very small pivot elements leads to subtractive cancellation during back substitution. Therefore we have to use a pivoting strategy which avoids small pivot elements. The simplest way to do this is the following: select the pivot candidate with the largest absolute value. In most practical cases this leads to a numerically stable algorithm, i.e., no unnecessary magnification of roundoff error.
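The loss of accuracy caused by the small pivot is easy to reproduce in single precision, where ε_M ≈ 1.2e-7. The following is a sketch; the helper solve2 is hypothetical and simply mechanizes the 2-by-2 elimination from the example, with and without the row interchange:

    import numpy as np

    def solve2(A, b, swap_rows):
        """Eliminate and back-substitute in float32, optionally swapping the rows first."""
        A = A.astype(np.float32)
        b = b.astype(np.float32)
        if swap_rows:
            A, b = A[::-1].copy(), b[::-1].copy()
        l21 = A[1, 0] / A[0, 0]
        A[1] -= l21 * A[0]
        b[1] -= l21 * b[0]
        x2 = b[1] / A[1, 1]
        x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
        return x1

    A = np.array([[1e-3, 1.0], [1.0, 1.0]])
    b = np.array([1.0, 2.0])
    exact = 1000.0 / 999.0                         # exact x_1
    for swap in (False, True):
        x1 = solve2(A, b, swap)
        print(swap, x1, abs(x1 - exact) / exact)   # relative error is far larger without the swap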