
Linear System of Equations

GOAL
- Understand linear systems
- Learn solution methods for triangular linear systems
- Learn how to do vectorization in high performance computing
- Solve linear systems by LU factorization

KEY WORDS
Matrix, lower triangular matrix, upper triangular matrix, tridiagonal system, LU factorization, Gaussian elimination, pivoting

1 Matrix Representation of Linear Equations

A system of linear equations is a set of linear equations in several variables. For example, if the system involves m variables x = (x_1, x_2, ..., x_m)^T and n equations, then it can be written as

    a_11 x_1 + a_12 x_2 + ... + a_1m x_m = b_1
    a_21 x_1 + a_22 x_2 + ... + a_2m x_m = b_2
    ...
    a_n1 x_1 + a_n2 x_2 + ... + a_nm x_m = b_n

This system of linear equations can be represented in the matrix-vector form Ax = b, where the matrix A = (a_ij) is n x m, the vector x = (x_1, x_2, ..., x_m)^T, and the vector b = (b_1, b_2, ..., b_n)^T.

If the number of unknowns m is greater than the number of equations n, then the system is said to be under-determined and may have infinitely many solutions. On the other hand, if the number of unknowns m is less than the number of equations n, then the system is said to be over-determined, and the system might have no solution. For simplicity, we only consider the case m = n with A invertible in this chapter. The case m < n is addressed in the chapter on least squares methods. When m = n and A is invertible, x = A^{-1} b.

Solving the linear system Ax = b arises in earlier parts of the course, for example in Lecture 3, the power series form of polynomial interpolation, where the system size is relatively small. The problem also arises in solving differential equations, e.g. boundary value problems, in areas of scientific computing and engineering applications, where the size of the system can be in the hundreds, thousands, or even millions.

We are concerned with solving the linear system Ax = b by numerical algorithms. Several questions are to be addressed; the answer to each question is one topic of this chapter.
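The matrix-vector form above can be sketched in a few lines of code. The following is a minimal illustration (the 2-by-2 system used here is made up for demonstration, not taken from the notes): a matrix stored as a list of rows, a matrix-vector product, and a residual check b - Ax that confirms a candidate solution.

```python
# A minimal sketch of the matrix-vector form Ax = b using plain Python
# lists; the small system below is illustrative only.
def matvec(A, x):
    """Compute the product Ax for a matrix A stored as a list of rows."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

# The system 2x + y = 5, x - y = 1 in matrix form:
A = [[2.0,  1.0],
     [1.0, -1.0]]
b = [5.0, 1.0]

x = [2.0, 1.0]                                    # candidate solution
residual = [bi - ri for bi, ri in zip(b, matvec(A, x))]
print(residual)                                   # zero residual: Ax = b
```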

1. How do computer softwares, e.g. Matlab, compute inv(A) or A^{-1} b? Gaussian elimination.

2. Assuming A^{-1} exists (det A != 0), is the numerical algorithm robust enough to compute inv(A) or A^{-1} b for all such A? If not, what can be done to improve the numerical algorithm? There will be some instability associated with Gaussian elimination, which can be remedied by Gaussian elimination with pivoting.

3. How long does it take to compute the answer? Or, what is the complexity of the algorithm, i.e. the computational cost as a function of n, the size of A? We will study the complexity of a numerical algorithm, e.g. Gaussian elimination, by counting the number of flops (floating point operations) as a function of n.

4. There is always a limit on our computational resources and computational time. If the current algorithm takes too long to compute the answer, what can be done to improve the efficiency of the algorithm (or reduce its cost), so that the computational cost is affordable? Iterative methods are an alternative approach to solving Ax = b with reduced cost, compared with Gaussian elimination.

2 Triangular linear systems

The direct method that solves the general matrix problem Ax = b is called Gaussian elimination. The goal of Gaussian elimination is to convert a given linear system into an equivalent triangular system. Solving a triangular system is easy because the unknowns can be found without any further manipulation of the matrix of coefficients. The goal here is to study direct methods for triangular linear systems.

2.1 Lower triangular system: general forward substitution

A general lower triangular system looks like Lx = b, where the matrix L = (l_ij) of size n x n has the property that l_ij = 0 if i < j, and looks like

    [ l_11                   ]
    [ l_21  l_22             ]
    [ ...   ...   ...        ]
    [ l_n1  l_n2  ...  l_nn  ]

Such a linear system can be written as

    l_i1 x_1 + l_i2 x_2 + ... + l_ii x_i = b_i,   for i = 1, 2, ..., n.

The solution for x is given by

    x_i = ( b_i - sum_{j=1}^{i-1} l_ij x_j ) / l_ii.

This procedure of solving a lower triangular system is called general forward substitution.

Example 1 Solve for x from

    [  2    0    0 ] [ x_1 ]   [ 1 ]
    [ -1    3    0 ] [ x_2 ] = [ 2 ]
    [  3   1/2  -1 ] [ x_3 ]   [ 3 ]

From the first equation, we have

    x_1 = b_1 / l_11 = 1/2.

From the second equation -x_1 + 3 x_2 = 2, we have

    x_2 = (b_2 - l_21 x_1) / l_22 = (2 + x_1)/3 = 5/6.

From the third equation, we have

    x_3 = (b_3 - l_31 x_1 - l_32 x_2) / l_33 = (3 - 3 x_1 - (1/2) x_2)/(-1) = -13/12.

Therefore x = (1/2, 5/6, -13/12)^T.

Code 2 (Pseudo code) The pseudo code for forward substitution is as follows:

    for i = 1 : n
        x_i = ( b_i - sum_{j=1}^{i-1} l_ij x_j ) / l_ii
    end

Code 3 (Matlab code) The following is an implementation in Matlab. You might mimic this function in a language of your choice.

    for i = 1 : n
        x(i) = b(i);
        for j = 1 : i-1
            x(i) = x(i) - L(i,j)*x(j);
        end
        x(i) = x(i)/L(i,i);
    end
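The loop above translates directly into other languages. Below is a sketch of general forward substitution in Python; exact rational arithmetic (fractions.Fraction) is used so the output can be compared directly with the hand computation in Example 1.

```python
from fractions import Fraction

# A sketch of general forward substitution for a lower triangular
# system Lx = b, checked against Example 1.
def forward_substitution(L, b):
    n = len(b)
    x = [Fraction(0)] * n
    for i in range(n):
        # x_i = (b_i - sum_{j<i} l_ij x_j) / l_ii
        s = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / L[i][i]
    return x

F = Fraction
L = [[F(2),  F(0),    F(0)],
     [F(-1), F(3),    F(0)],
     [F(3),  F(1, 2), F(-1)]]
b = [F(1), F(2), F(3)]
print(forward_substitution(L, b))   # -> [1/2, 5/6, -13/12]
```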

Code 4 (Vectorization of Matlab code) Notice that the j-loop in the Matlab implementation in fact subtracts the inner product

    sum_{j=1}^{i-1} l_ij x_j = L(i, 1:i-1) * x(1:i-1)

from the component b_i. Thus we can have a vectorized implementation as follows:

    for i = 1 : n
        x(i) = ( b(i) - L(i, 1:i-1)*x(1:i-1) ) / L(i,i);
    end

Complexity Based on the pseudo/Matlab code above, the forward substitution for computing x involves

- 2 flops in each pass of the j-loop, therefore 2(i-1) flops over the j-loop;
- 2i (= 2(i-1) + 2) flops in each pass of the i-loop.

Thus the computation of x involves

    sum_{i=1}^{n} 2i = 2(1 + 2 + ... + n) = n(n+1) = O(n^2)

flops.

2.2 Upper triangular system: general backward substitution

A general upper triangular system looks like Ux = b, where the matrix U = (u_ij) of size n x n has the property that u_ij = 0 if i > j, and looks like

    [ u_11  u_12  ...  u_1n ]
    [       u_22  ...  u_2n ]
    [             ...  ...  ]
    [                  u_nn ]

Such a linear system can be written as

    u_ii x_i + u_{i,i+1} x_{i+1} + ... + u_in x_n = b_i,   for i = 1, 2, ..., n.

The solution for x is given by

    x_i = ( b_i - sum_{j=i+1}^{n} u_ij x_j ) / u_ii,   for i = n, ..., 1.

This procedure of solving an upper triangular system is called general backward substitution.

Example 5 Solve for x from

    [ 3   1  -1 ] [ x_1 ]   [ 3 ]
    [ 0   3  -1 ] [ x_2 ] = [ 2 ]
    [ 0   0   2 ] [ x_3 ]   [ 2 ]

From the last equation, we have

    x_3 = b_3 / u_33 = 1.

From the second equation 3 x_2 - x_3 = 2, we have

    x_2 = (b_2 - u_23 x_3)/u_22 = (2 + x_3)/3 = 1.

From the first equation, we have

    x_1 = (b_1 - u_13 x_3 - u_12 x_2)/u_11 = (3 + x_3 - x_2)/3 = 1.

Therefore x = (1, 1, 1)^T.

Code 6 (Pseudo code) The pseudo code for backward substitution is as follows:

    for i = n : -1 : 1
        x_i = ( b_i - sum_{j=i+1}^{n} u_ij x_j ) / u_ii
    end

Code 7 (Matlab code) The following is an implementation in Matlab. You might mimic this function in a language of your choice.

    for i = n : -1 : 1
        x(i) = b(i);
        for j = i+1 : n
            x(i) = x(i) - U(i,j)*x(j);
        end
        x(i) = x(i)/U(i,i);
    end

Code 8 (Vectorization of Matlab code) Notice that the j-loop in the Matlab implementation in fact subtracts the inner product

    sum_{j=i+1}^{n} u_ij x_j = U(i, i+1:n) * x(i+1:n)

from the component b_i. Thus we can have a vectorized implementation as follows:

    for i = n : -1 : 1
        x(i) = ( b(i) - U(i, i+1:n)*x(i+1:n) ) / U(i,i);
    end
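Backward substitution is the mirror image of the forward loop: the row index runs from n down to 1. A Python sketch, checked against the system of Example 5, is:

```python
# A sketch of general backward substitution for an upper triangular
# system Ux = b, checked against Example 5.
def backward_substitution(U, b):
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # x_i = (b_i - sum_{j>i} u_ij x_j) / u_ii
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

U = [[3.0, 1.0, -1.0],
     [0.0, 3.0, -1.0],
     [0.0, 0.0,  2.0]]
b = [3.0, 2.0, 2.0]
print(backward_substitution(U, b))   # -> [1.0, 1.0, 1.0]
```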

Complexity The backward substitution for computing x involves

- 2 flops in each pass of the j-loop, therefore 2(n - (i+1) + 1) = 2(n - i) flops over the j-loop;
- 2(n - i + 1) (= 2(n - i) + 2) flops in each pass of the i-loop.

Thus the computation of x involves

    sum_{i=1}^{n} 2(n - i + 1) = 2(n + (n-1) + ... + 1) = n(n+1) = O(n^2)

flops.

3 LU Factorization

We have seen that triangular linear systems are easy to solve via forward and backward substitution, without any further manipulation of the matrix. The goal of this section is to devise a method that solves a linear system by reducing it to a triangular linear system.

Consider the general matrix problem Ax = b. An LU factorization of A is an expression

    A = LU,

where L = (l_ij) is an n x n lower triangular matrix and U = (u_ij) is an n x n upper triangular matrix. If the matrix A is non-singular, then both L and U are non-singular (think about why!).

What can we do with the factorization? It turns out that the solution x is easy to recover once an LU factorization is known. To see why, we substitute A = LU into Ax = b to obtain LUx = b. By letting y = Ux, we see that x and the new variable y satisfy the enlarged system

    Ux = y
    Ly = b

The above system is decoupled in the sense that the second equation can be solved independently of the first one. Therefore, the following procedure can be adopted to recover x:

- Forward substitution: solve Ly = b for y.
- Backward substitution: solve Ux = y for x.

That sounds great and easy, doesn't it? But wait a minute: how can we find an LU factorization of a general matrix A? The answer is given by Gaussian elimination, to be discussed later on.

3.1 Tridiagonal linear systems

Let's consider a special class of linear systems in which the matrices are tridiagonal. A tridiagonal matrix is one with the following property:

    A = (a_ij), n x n, with a_ij = 0 if |i - j| > 1.

For simplicity, we shall represent a tridiagonal matrix by using three vectors

    a = (a_2, ..., a_n),   c = (c_1, ..., c_{n-1}),   d = (d_1, ..., d_n),

and shall write

    A = [ d_1  c_1                          ]
        [ a_2  d_2  c_2                     ]
        [      a_3  d_3  ...                ]
        [           ...  d_{n-1}  c_{n-1}   ]
        [                a_n      d_n       ]

We look for an LU factorization of the matrix A in the following form:

    L = [ 1               ]      U = [ u_1  c_1                   ]
        [ l_2  1          ]          [      u_2  c_2              ]
        [      ...  ...   ]          [           ...   c_{n-1}    ]
        [          l_n  1 ]          [                 u_n        ]

Equating A = LU, we find that u_1 = d_1 and, for i = 2 : n,

- at (i, i-1):  a_i = l_i u_{i-1}, so l_i = a_i / u_{i-1};
- at (i, i):    d_i = l_i c_{i-1} + u_i, so u_i = d_i - l_i c_{i-1};
- at (i, i+1):  c_i = c_i.

The pseudo code for computing the triangular matrices L and U is then given by:

    u_1 = d_1
    for i = 2 : n
        l_i = a_i / u_{i-1}
        u_i = d_i - l_i c_{i-1}
    end

Once L and U are computed, the solution of the tridiagonal matrix problem Ax = b can be obtained by a forward and backward substitution.
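Since only the three diagonals are stored, the factorization costs O(n) work and storage. A Python sketch of the recursion above, run on the 3-by-3 tridiagonal matrix with diagonal 2 and off-diagonals -1, is:

```python
# A sketch of the tridiagonal LU factorization: the matrix is stored as
# three vectors (sub-, main, and super-diagonal), so only O(n) work and
# storage are needed.
def tridiag_lu(a, c, d):
    """a: sub-diagonal (a_2..a_n), c: super-diagonal (c_1..c_{n-1}),
    d: main diagonal (d_1..d_n). Returns (l, u) with
    l_i = a_i/u_{i-1} and u_i = d_i - l_i c_{i-1}."""
    n = len(d)
    u = [0.0] * n
    l = [0.0] * (n - 1)
    u[0] = d[0]
    for i in range(1, n):
        l[i - 1] = a[i - 1] / u[i - 1]
        u[i] = d[i] - l[i - 1] * c[i - 1]
    return l, u

# Tridiagonal matrix with diagonal 2 and off-diagonals -1:
l, u = tridiag_lu([-1.0, -1.0], [-1.0, -1.0], [2.0, 2.0, 2.0])
print(l)   # multipliers l_2, l_3
print(u)   # pivots u_1, u_2, u_3 = 2, 3/2, 4/3
```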

Example 3.1 The LU factorization of the 3-by-3 tridiagonal matrix

    A = [  2  -1   0 ]
        [ -1   2  -1 ]
        [  0  -1   2 ]

is A = LU with

    L = [ 1           ]      U = [ u_1  -1      ]
        [ l_2  1      ]          [      u_2  -1 ]
        [      l_3  1 ]          [           u_3 ]

where, according to the procedure outlined above, we can obtain

    u_1 = d_1 = 2,
    l_2 = a_2 / u_1 = -1/2,    u_2 = d_2 - l_2 c_1 = 2 - 1/2 = 3/2,
    l_3 = a_3 / u_2 = -2/3,    u_3 = d_3 - l_3 c_2 = 2 - (-2/3)(-1) = 2 - 2/3 = 4/3.

Therefore, the LU factorization of A is

    A = [  1            ] [ 2  -1       ]
        [ -1/2   1      ] [    3/2  -1  ]
        [      -2/3   1 ] [         4/3 ]

4 Naive Gaussian Elimination

The goal here is to develop a solver for the general matrix problem Ax = b. The approach is again to find a lower triangular matrix L and an upper triangular matrix U such that A = LU. Once such a factorization is accomplished, the solution x can be obtained by a forward and backward substitution. The method for computing L and U is called Gaussian elimination.

4.1 An example of a 3-by-3 system

We wish to find the solution of the following linear system:

    2 x_1 +   x_2 - 3 x_3 = 0
     -x_1 +   x_2 -   x_3 = 2
      x_1 + 5 x_2         = 3        (1)

Solution procedure: The augmented matrix representation for the above linear system is given by

    D = (A, b) = [  2   1  -3 |  0 ]
                 [ -1   1  -1 |  2 ]
                 [  1   5   0 |  3 ]

where the fourth column corresponds to the right-hand-side vector b. In the following, our goal is to transform the linear system into an upper triangular system via elementary row eliminations. In other words, we want to make a_ij = 0 for i > j. We follow the order a_21, a_31, a_32.

1. To make the entry a_21 zero, we perform the following elementary row elimination:

    D(2,:) <- D(2,:) - l_21 D(1,:),   where l_21 = a_21/a_11 = -1/2.

In other words, we have

    A(2,:) <- A(2,:) + (1/2) A(1,:),   b_2 <- b_2 + (1/2) b_1.

This leads to A(2,:) = [0, 3/2, -5/2], b_2 = 2. In matrix notation, this step can be illustrated by

    D = [  2  1  -3 | 0 ]   ->   D = [ 2   1    -3  | 0 ]
        [ -1  1  -1 | 2 ]        [ 0  3/2  -5/2 | 2 ]
        [  1  5   0 | 3 ]        [ 1   5     0  | 3 ]

2. To eliminate a_31, we have

    D(3,:) <- D(3,:) - l_31 D(1,:),   where l_31 = a_31/a_11 = 1/2.

In other words, we have

    A(3,:) <- A(3,:) - (1/2) A(1,:),   b_3 <- b_3 - (1/2) b_1.

This leads to A(3,:) = [0, 9/2, 3/2], b_3 = 3. In matrix notation, this step can be illustrated by

    D = [ 2   1    -3  | 0 ]   ->   D = [ 2   1    -3  | 0 ]
        [ 0  3/2  -5/2 | 2 ]        [ 0  3/2  -5/2 | 2 ]
        [ 1   5     0  | 3 ]        [ 0  9/2   3/2 | 3 ]

3. To eliminate a_32 in the updated matrix A, we perform the row operation

    D(3,:) <- D(3,:) - (a_32/a_22) D(2,:).

By letting l_32 = a_32/a_22 = (9/2)/(3/2) = 3, we have

    D(3,:) <- D(3,:) - l_32 D(2,:) = D(3,:) - 3 D(2,:).

We illustrate the computation in matrix form as follows:

    D = [ 2   1    -3  | 0 ]   ->   D = [ 2   1    -3  |  0 ]
        [ 0  3/2  -5/2 | 2 ]        [ 0  3/2  -5/2 |  2 ]
        [ 0  9/2   3/2 | 3 ]        [ 0   0     9  | -3 ]

Lastly, we solve the upper triangular matrix problem Ux = b produced in the previous step, where

    U = [ 2   1    -3  ]
        [ 0  3/2  -5/2 ],    b = (0, 2, -3)^T.
        [ 0   0     9  ]

The solution via backward substitution is x = (-8/9, 7/9, -1/3)^T.

Remark 4.1 The first three steps in the solution procedure (Gaussian elimination) provide an LU factorization of the original coefficient matrix A. To see how it works, we collect the multipliers l_ij from the above elimination process and form a lower triangular matrix L as follows:

    L = [  1            ]   [   1          ]
        [ l_21   1      ] = [ -1/2   1     ]
        [ l_31  l_32  1 ]   [  1/2   3   1 ]

Standard matrix multiplication shows that

    LU = [   1          ] [ 2   1    -3  ]   [  2  1  -3 ]
         [ -1/2   1     ] [ 0  3/2  -5/2 ] = [ -1  1  -1 ] = A.
         [  1/2   3   1 ] [ 0   0     9  ]   [  1  5   0 ]

Remark 4.2 Gaussian elimination on the augmented matrix and the LU factorization of the matrix A are two different (but similar) approaches to solving for x.

Code 4.3 The pseudo code for the LU factorization of this 3-by-3 matrix is as follows:

    L = eye(3);
    for j = 1 : 2
        for i = j+1 : 3
            L(i,j) = A(i,j)/A(j,j);
            A(i,:) = A(i,:) - L(i,j)*A(j,:);
        end
    end
    U = A;

4.2 General case

The objective here is to obtain an LU factorization for the general linear system Ax = b, where A is n by n:

    A = [ a_11  a_12  a_13  ...  a_1,n-1  a_1n ]
        [ a_21  a_22  a_23  ...  a_2,n-1  a_2n ]
        [ a_31  a_32  a_33  ...  ...      a_3n ]
        [ ...                                  ]
        [ a_n1  a_n2  a_n3  ...  a_n,n-1  a_nn ]
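Code 4.3 can be mimicked in other languages. The following Python sketch LU-factors the 3-by-3 matrix of the worked example by the same double loop and then multiplies L and U back together to confirm that LU = A.

```python
# A sketch of Code 4.3 in Python: LU-factor the 3-by-3 matrix from the
# worked example by Gaussian elimination, then confirm that LU = A.
def lu_3x3(A):
    n = 3
    L = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    U = [row[:] for row in A]                                  # copy of A
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i][j] = U[i][j] / U[j][j]                        # multiplier
            U[i] = [U[i][k] - L[i][j] * U[j][k] for k in range(n)]
    return L, U

A = [[ 2.0, 1.0, -3.0],
     [-1.0, 1.0, -1.0],
     [ 1.0, 5.0,  0.0]]
L, U = lu_3x3(A)
LU = [[sum(L[i][k] * U[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
print(L)    # multipliers: L[1][0] = -0.5, L[2][0] = 0.5, L[2][1] = 3.0
print(LU)   # reproduces A
```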

By following the spirit illustrated in the example of the 3-by-3 case, we eliminate the elements a_i1 for i = 2, ..., n by performing

    A(i,:) <- A(i,:) - l_i1 A(1,:),   with l_i1 = a_i1/a_11.

With the above elementary row operation, the matrix A is updated as

    A^(1) = [ a_11  a_12      a_13      ...  a_1,n-1      a_1n     ]
            [  0    a_22^(1)  a_23^(1)  ...  a_2,n-1^(1)  a_2n^(1) ]
            [  0    a_32^(1)  a_33^(1)  ...  ...          a_3n^(1) ]
            [ ...                                                  ]
            [  0    a_n2^(1)  a_n3^(1)  ...  a_n,n-1^(1)  a_nn^(1) ]

where a_ij^(1) = a_ij - l_i1 a_1j for i, j = 2, ..., n. If we represent the above procedure as a matrix multiplication process, then we have

    L^(1) A = A^(1),   with   L^(1) = [  1                ]
                                      [ -l_21   1         ]
                                      [ -l_31       1     ]
                                      [  ...           ...]
                                      [ -l_n1           1 ]

The same procedure can then be applied to the matrix A^(1), where we eliminate the elements a_i2^(1) for i = 3, ..., n by performing

    A(i,:) <- A(i,:) - l_i2 A(2,:),   with l_i2 = a_i2^(1) / a_22^(1).

If we represent the above procedure as a matrix multiplication process, then we have

    L^(2) A^(1) = A^(2),   with   L^(2) = [ 1                  ]
                                          [ 0    1             ]
                                          [ 0  -l_32   1       ]
                                          [ ...            ... ]
                                          [ 0  -l_n2        1  ]

Repeat this process until the matrix A is reduced to an upper triangular one. At the k-th step of the elimination, the required multipliers are given by

    for i = k+1 : n
        L(i,k) = A(i,k)/A(k,k)
    end

The act of multiplying row k by L(i,k) and subtracting it from row i can be implemented as

    A(i, k:n) = A(i, k:n) - L(i,k)*A(k, k:n)

Incorporating these ideas, we get the following procedure for reaching the upper triangular matrix:

    L^(k) A^(k-1) = A^(k),   k = 1, ..., n-1,

with A^(0) = A and A^(n-1) being an upper triangular matrix, denoted U. Combining the operations above,

    L^(n-1) ... L^(2) L^(1) A = U,

hence

    A = (L^(1))^{-1} ... (L^(n-1))^{-1} U = LU,   with L = (L^(1))^{-1} ... (L^(n-1))^{-1}.

A simple computation shows that

    (L^(k))^{-1} = [ 1                          ]
                   [    ...                     ]
                   [         1                  ]
                   [      l_{k+1,k}  1          ]
                   [        ...           ...   ]
                   [      l_{n,k}            1  ]

and

    L = (L^(1))^{-1} ... (L^(n-1))^{-1} = [ 1                           ]
                                          [ l_21   1                    ]
                                          [ ...          ...            ]
                                          [ l_n1  l_n2  ...  l_n,n-1  1 ]

Below is a pseudo code for computing the LU factorization of a general matrix A using Gaussian elimination.

Code 4.4

    L = eye(n);
    for k = 1 : n-1
        L(k+1:n, k) = A(k+1:n, k)/A(k,k);
        for i = k+1 : n
            A(i, k+1:n) = A(i, k+1:n) - L(i,k)*A(k, k+1:n);
        end
    end
    U = A;

Once an LU factorization is completed, one can get the solution of Ax = b by doing a forward and backward substitution.

Complexity Based on the pseudo/Matlab code above, the LU factorization of a general matrix A of size n x n involves, for each pass of the k-loop,

- (n - k) flops for the multipliers L(k+1:n, k);
- 2(n - k) + 1 flops in each pass of the i-loop, therefore (2(n - k) + 1)(n - k) flops over the i-loop, with i ranging from k+1 to n.

Thus the computation involves

    sum_{k=1}^{n-1} [ (n - k) + (2(n - k) + 1)(n - k) ]
        = sum_{k=1}^{n-1} (2(n - k) + 2)(n - k)
        = sum_{k=1}^{n-1} (2k + 2) k
        = 2 (n-1)n(2n-1)/6 + 2 (n-1)n/2
        = O(n^3)                                              (2)

flops.

Remark 4.5 Alternatively, one can form the augmented matrix D = [A, b] and perform Gaussian elimination on the matrix D to form an upper triangular system. Finally, one can perform backward substitution on the upper triangular system to solve for x. The complexity of this algorithm is the same as that of the LU factorization of A above, that is O(n^3).

Remark 4.6 If one has to solve many linear systems with the same matrix A but different right-hand-side vectors b, then first performing the LU factorization of A with
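Code 4.4 together with the factor-solve procedure can be sketched in Python as follows (assuming, as in the naive algorithm, that no zero pivot is encountered); it is checked on the 3-by-3 system of Section 4.1.

```python
# A Python sketch of Code 4.4 (naive LU factorization) followed by the
# factor-solve procedure: forward substitution for Ly = b, then backward
# substitution for Ux = y. Assumes no zero pivots arise.
def lu_factor(A):
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]        # multiplier l_ik
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]   # row update
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                          # forward: Ly = b (unit diagonal)
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):              # backward: Ux = y
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

# The 3-by-3 system of Section 4.1.
A = [[2.0, 1.0, -3.0], [-1.0, 1.0, -1.0], [1.0, 5.0, 0.0]]
b = [0.0, 2.0, 3.0]
Lf, Uf = lu_factor(A)
x = lu_solve(Lf, Uf, b)
print(x)   # close to (-8/9, 7/9, -1/3)
```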

O(n^3) computational cost, and then performing a forward and backward substitution for the lower and upper triangular systems with O(n^2) computational cost per right-hand side, is preferred to optimize the efficiency (save computational cost).

Remark 4.7 It turns out that the LU factorization (when it exists) is not unique. If L has 1's on its diagonal, the factorization is called a Doolittle factorization. If U has 1's on its diagonal, it is called a Crout factorization. The LU factorization introduced above has 1's on the diagonal of L, therefore it is a Doolittle factorization. Note that the Doolittle factorization does not always exist (see the example in the next section with ε = 0).

Remark 4.8 In the above algorithm, there is a division operation in computing l_ik. This leads to a potential danger of division by zero. The Gaussian elimination with pivoting in the next section is designed to avoid this problem.

5 Gaussian Elimination with Pivoting

The elimination process in the naive Gaussian elimination is based on the following elementary row operation:

    A(i,:) <- A(i,:) - l_ik A(k,:),   l_ik = a_ik/a_kk.

When the updated diagonal entry a_kk vanishes, the above process cannot be completed because the multiplier l_ik is not defined. We want to modify the naive Gaussian elimination in such a way that the elimination can continue even if a_kk = 0.

5.1 Stability: the need for pivoting

Consider the matrix problem Ax = b, where

    A = [ ε  1 ],   b = [ 1 ]
        [ 1  1 ]        [ 2 ]

Assume that the parameter ε is sufficiently small; we would like to see how the naive Gaussian elimination is affected by the change of ε. The LU factorization of A is given by

    A = [ ε  1 ] = [  1    0 ] [ ε     1     ] = LU.
        [ 1  1 ]   [ 1/ε   1 ] [ 0  1 - 1/ε  ]

We then solve for x in the following two steps: (1) use forward substitution for Ly = b, (2) backward substitution for Ux = y. In doing so, the intermediate variable y is given by

    y_1 = 1,   y_2 = 2 - y_1/ε = 2 - 1/ε,

and the final solution is given by

    x_2 = y_2 / (1 - ε^{-1}) = (2ε - 1)/(ε - 1),   x_1 = (1 - x_2)/ε.

When ε -> 0, x_2 -> 1, and the computed x_1 is unstable since we end up with 0/0. Notice that when ε = 0, the exact solution of the given linear system is x_1 = x_2 = 1. Therefore, the LU factorization will not provide a stable solution. (See the homework for examples with very small ε.)

5.2 Pivoting

The instability seen in the previous subsection is really caused by the fact that the elimination step involves division by the very small number ε. The entry a_11 = ε in the naive Gaussian elimination is known as the pivot element, and the computation of l_21 involves division by this small pivot element. How shall we avoid such a situation?

Consider the k-th step in the Gaussian elimination. To eliminate the entries a_ik, for i = k+1, ..., n, the following row operations are performed:

    A(i,:) <- A(i,:) - l_ik A(k,:),   l_ik = a_ik/a_kk.

Wouldn't it be nice if a_kk were the largest entry, in absolute value, among the column entries a_kk, a_{k+1,k}, ..., a_nk? This would ensure that all the multipliers are less than or equal to 1 in absolute value. This suggests that at the beginning of the k-th step, we swap row k and row q, where a_qk is the entry of largest absolute value among a_kk, a_{k+1,k}, ..., a_nk. The swapping idea is known as the selection of pivot elements. The resulting elimination is called Gaussian elimination with pivoting.

Code 5.1 Let D be the augmented matrix, i.e. D = [A, b].

    for k = 1 : n-1
        r : index of the maximum of |D(k:n, k)|
        swap row r and row k
        for i = k+1 : n
            L(i,k) = D(i,k)/D(k,k);
            D(i, k+1:n+1) = D(i, k+1:n+1) - L(i,k)*D(k, k+1:n+1);
        end
    end

Let the updated D = [A, b], and use backward substitution to solve for x, since A is now an upper triangular matrix.

Example 5.2 Perform the Gaussian elimination with pivoting to solve the following linear system Ax = b with

    A = [ 2   4  1 ]
        [ 4  -1  2 ],   b = (5, -8, 13)^T.
        [ 1   2  4 ]

Solution

Step 1: form the augmented matrix

    D = [ 2   4  1 |  5 ]
        [ 4  -1  2 | -8 ]
        [ 1   2  4 | 13 ]

Step 2 (pivoting): the largest entry in absolute value in the first column is 4, so swap rows 1 and 2:

    D = [ 4  -1  2 | -8 ]
        [ 2   4  1 |  5 ]
        [ 1   2  4 | 13 ]

Step 3: apply Gaussian elimination to eliminate D_21 and D_31 (multipliers 1/2 and 1/4):

    D = [ 4  -1    2  | -8 ]
        [ 0  9/2   0  |  9 ]
        [ 0  9/4  7/2 | 15 ]

Step 4 (pivoting): check |D_22| = 9/2 >= |D_32| = 9/4: no need for swapping.

Step 5: apply Gaussian elimination to eliminate D_32 (multiplier (9/4)/(9/2) = 1/2):

    D = [ 4  -1    2  |  -8  ]
        [ 0  9/2   0  |   9  ]
        [ 0   0   7/2 | 21/2 ]

Step 6: backward substitution gives x_3 = 3, x_2 = 2, x_1 = -3, i.e. x = (-3, 2, 3)^T.
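Code 5.1 can be sketched in Python as follows. The function performs elimination with partial pivoting on the augmented matrix and finishes with backward substitution; it is checked against the worked example above, and then run on the small-pivot matrix of Section 5.1, which pivoting handles accurately.

```python
# A sketch of Gaussian elimination with pivoting on the augmented matrix
# (Code 5.1), followed by backward substitution.
def solve_with_pivoting(A, b):
    n = len(b)
    D = [row[:] + [bi] for row, bi in zip(A, b)]      # augmented matrix [A, b]
    for k in range(n - 1):
        # pivot selection: row with the largest |D[i][k]| for i = k..n-1
        r = max(range(k, n), key=lambda i: abs(D[i][k]))
        D[k], D[r] = D[r], D[k]                       # swap rows r and k
        for i in range(k + 1, n):
            l = D[i][k] / D[k][k]
            D[i] = [D[i][j] - l * D[k][j] for j in range(n + 1)]
    # backward substitution on the upper triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(D[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (D[i][n] - s) / D[i][i]
    return x

A = [[2.0, 4.0, 1.0], [4.0, -1.0, 2.0], [1.0, 2.0, 4.0]]
b = [5.0, -8.0, 13.0]
print(solve_with_pivoting(A, b))   # -> [-3.0, 2.0, 3.0]

# The small-pivot example of Section 5.1: pivoting recovers x = (1, 1)
# even for tiny ε, where the naive factorization loses x_1 to roundoff.
eps = 1e-17
print(solve_with_pivoting([[eps, 1.0], [1.0, 1.0]], [1.0, 2.0]))
```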