Iterative Methods for Solving Linear Systems



Chapter 5  Iterative Methods for Solving Linear Systems

5.1 Convergence of Sequences of Vectors and Matrices

In Chapter 2 we discussed some of the main methods for solving systems of linear equations. These methods are direct methods, in the sense that they yield exact solutions (assuming infinite precision!). Another class of methods for solving linear systems consists in approximating solutions using iterative methods.

The basic idea is this: given a linear system $Ax = b$ (with $A$ a square invertible matrix), find another matrix $B$ and a vector $c$ such that

1. the matrix $I - B$ is invertible;
2. the unique solution $\tilde{x}$ of the system $Ax = b$ is identical to the unique solution $\tilde{u}$ of the system $u = Bu + c$;

and then, starting from any vector $u_0$, compute the sequence $(u_k)$ given by
$$u_{k+1} = B u_k + c, \quad k \in \mathbb{N}.$$
Under certain conditions (to be clarified soon), the sequence $(u_k)$ converges to a limit $\tilde{u}$ which is the unique solution of $u = Bu + c$, and thus of $Ax = b$.
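As a concrete illustration of this scheme, here is a minimal Python/NumPy sketch of the generic iteration $u_{k+1} = B u_k + c$ with a simple stopping test; the function name, tolerance, and iteration cap are illustrative choices of mine, not part of the notes.

```python
import numpy as np

def fixed_point_iteration(B, c, u0, tol=1e-10, max_iter=10_000):
    """Iterate u_{k+1} = B u_k + c until successive iterates are close."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        u_next = B @ u + c
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u  # may not have converged if rho(B) >= 1
```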

Let $(E, \| \cdot \|)$ be a normed vector space. Recall that a sequence $(u_k)$ of vectors $u_k \in E$ converges to a limit $u \in E$ if for every $\epsilon > 0$ there is some natural number $N$ such that
$$\|u_k - u\| \le \epsilon \quad \text{for all } k \ge N.$$
We write $u = \lim_{k \to \infty} u_k$.

If $E$ is a finite-dimensional vector space and $\dim(E) = n$, we know from Theorem 4.3 that any two norms are equivalent, and if we choose the norm $\| \cdot \|_\infty$, we see that the convergence of the sequence of vectors $u_k$ is equivalent to the convergence of the $n$ sequences of scalars formed by the components of these vectors (over any basis).

The same property applies to the finite-dimensional vector space $\mathrm{M}_{m,n}(K)$ of $m \times n$ matrices (with $K = \mathbb{R}$ or $K = \mathbb{C}$), which means that the convergence of a sequence of matrices $A_k = (a^{(k)}_{ij})$ is equivalent to the convergence of the $m \times n$ sequences of scalars $(a^{(k)}_{ij})$, with $i, j$ fixed ($1 \le i \le m$, $1 \le j \le n$).

The first theorem below gives a necessary and sufficient condition for the sequence $(B^k)$ of powers of a matrix $B$ to converge to the zero matrix. Recall that the spectral radius $\rho(B)$ of a matrix $B$ is the maximum of the moduli $|\lambda_i|$ of the eigenvalues of $B$.

Theorem 5.1. For any square matrix $B$, the following conditions are equivalent:
(1) $\lim_{k \to \infty} B^k = 0$,
(2) $\lim_{k \to \infty} B^k v = 0$ for all vectors $v$,
(3) $\rho(B) < 1$,
(4) $\|B\| < 1$ for some subordinate matrix norm $\| \cdot \|$.

The following proposition is needed to study the rate of convergence of iterative methods.

Proposition 5.2. For every square matrix $B$ and every matrix norm $\| \cdot \|$, we have
$$\lim_{k \to \infty} \|B^k\|^{1/k} = \rho(B).$$

We now apply the above results to the convergence of iterative methods.
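Condition (3) is easy to test numerically. The small sketch below (my own illustration, not part of the notes) computes the spectral radius with NumPy and uses it to decide whether the iteration $u_{k+1} = B u_k + c$ can converge for every starting vector.

```python
import numpy as np

def spectral_radius(B):
    """Maximum modulus of the eigenvalues of B."""
    return max(abs(np.linalg.eigvals(B)))

def is_convergent(B):
    """Theorem 5.1/5.3: the iteration converges for every u0 iff rho(B) < 1."""
    return spectral_radius(B) < 1.0
```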

5.2 Convergence of Iterative Methods

Recall that an iterative method for solving a linear system $Ax = b$ (with $A$ invertible) consists in finding some matrix $B$ and some vector $c$ such that $I - B$ is invertible, and the unique solution $\tilde{x}$ of $Ax = b$ is equal to the unique solution $\tilde{u}$ of $u = Bu + c$.

Then, starting from any vector $u_0$, compute the sequence $(u_k)$ given by
$$u_{k+1} = B u_k + c, \quad k \in \mathbb{N},$$
and say that the iterative method is convergent iff
$$\lim_{k \to \infty} u_k = \tilde{u} \quad \text{for every initial vector } u_0.$$

Here is a fundamental criterion for the convergence of any iterative method based on a matrix $B$, called the matrix of the iterative method.

Theorem 5.3. Given a system $u = Bu + c$ as above, where $I - B$ is invertible, the following statements are equivalent:
(1) The iterative method is convergent.
(2) $\rho(B) < 1$.
(3) $\|B\| < 1$ for some subordinate matrix norm $\| \cdot \|$.

The next proposition is needed to compare the rates of convergence of iterative methods. It shows that, asymptotically, the error vector $e_k = B^k e_0$ behaves at worst like $(\rho(B))^k$.

Proposition 5.4. Let $\| \cdot \|$ be any vector norm, let $B$ be a matrix such that $I - B$ is invertible, and let $\tilde{u}$ be the unique solution of $u = Bu + c$.

(1) If $(u_k)$ is any sequence defined iteratively by
$$u_{k+1} = B u_k + c, \quad k \in \mathbb{N},$$
then
$$\lim_{k \to \infty} \left[ \sup_{\|u_0 - \tilde{u}\| = 1} \|u_k - \tilde{u}\|^{1/k} \right] = \rho(B).$$

(2) Let $B_1$ and $B_2$ be two matrices such that $I - B_1$ and $I - B_2$ are invertible, assume that both $u = B_1 u + c_1$ and $u = B_2 u + c_2$ have the same unique solution $\tilde{u}$, and consider any two sequences $(u_k)$ and $(v_k)$ defined inductively by
$$u_{k+1} = B_1 u_k + c_1, \qquad v_{k+1} = B_2 v_k + c_2,$$
with $u_0 = v_0$. If $\rho(B_1) < \rho(B_2)$, then for any $\epsilon > 0$, there is some integer $N(\epsilon)$ such that for all $k \ge N(\epsilon)$, we have
$$\sup_{\|u_0 - \tilde{u}\| = 1} \left[ \frac{\|v_k - \tilde{u}\|}{\|u_k - \tilde{u}\|} \right]^{1/k} \ge \frac{\rho(B_2)}{\rho(B_1) + \epsilon}.$$

In light of the above, we see that when we investigate new iterative methods, we have to deal with the following two problems:

1. Given an iterative method with matrix $B$, determine whether the method is convergent. This involves determining whether $\rho(B) < 1$, or equivalently whether there is a subordinate matrix norm such that $\|B\| < 1$.

By Proposition 4.8, this implies that $I - B$ is invertible (since $\|-B\| = \|B\|$, Proposition 4.8 applies).

2. Given two convergent iterative methods, compare them. The faster iterative method is the one whose matrix has the smaller spectral radius.

We now discuss three iterative methods for solving linear systems:
1. Jacobi's method
2. The Gauss-Seidel method
3. The relaxation method.

5.3 Description of the Methods of Jacobi, Gauss-Seidel, and Relaxation

The methods described in this section are instances of the following scheme: Given a linear system $Ax = b$, with $A$ invertible, suppose we can write $A$ in the form
$$A = M - N,$$
with $M$ invertible and easy to invert, which means that $M$ is close to being a diagonal or a triangular matrix (perhaps by blocks).

Then $Au = b$ is equivalent to
$$Mu = Nu + b,$$
that is,
$$u = M^{-1}Nu + M^{-1}b.$$
Therefore, we are in the situation described in the previous sections with $B = M^{-1}N$ and $c = M^{-1}b$.

In fact, since $A = M - N$, we have
$$B = M^{-1}N = M^{-1}(M - A) = I - M^{-1}A,$$
which shows that $I - B = M^{-1}A$ is invertible.

The iterative method associated with the matrix $B = M^{-1}N$ is given by
$$u_{k+1} = M^{-1}N u_k + M^{-1}b, \quad k \ge 0,$$
starting from any arbitrary vector $u_0$.

From a practical point of view, we do not invert $M$; instead, we iteratively solve the systems
$$M u_{k+1} = N u_k + b, \quad k \ge 0.$$
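The practical form of the iteration, where one solves a system with $M$ at each step instead of inverting it, might look like the following sketch (a generic solve is used here for brevity; for a diagonal or triangular $M$ one would use a correspondingly cheaper solve).

```python
import numpy as np

def splitting_iteration(M, N, b, u0, tol=1e-10, max_iter=10_000):
    """Iteratively solve M u_{k+1} = N u_k + b, never forming M^{-1} explicitly."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        u_next = np.linalg.solve(M, N @ u + b)
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u
```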

Various methods correspond to various ways of choosing $M$ and $N$ from $A$. The first two methods choose $M$ and $N$ as disjoint submatrices of $A$, but the relaxation method allows some overlap between $M$ and $N$.

To describe the various choices of $M$ and $N$, it is convenient to write $A$ in terms of three submatrices $D$, $E$, $F$, as
$$A = D - E - F,$$
where the only nonzero entries in $D$ are the diagonal entries of $A$, the only nonzero entries in $E$ are the negatives of the entries of $A$ below the diagonal, and the only nonzero entries in $F$ are the negatives of the entries of $A$ above the diagonal.

More explicitly, if
$$A = \begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1\,n-1} & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2\,n-1} & a_{2n} \\
a_{31} & a_{32} & a_{33} & \cdots & a_{3\,n-1} & a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
a_{n-1\,1} & a_{n-1\,2} & a_{n-1\,3} & \cdots & a_{n-1\,n-1} & a_{n-1\,n} \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{n\,n-1} & a_{nn}
\end{pmatrix},$$
then
$$D = \begin{pmatrix}
a_{11} & 0 & 0 & \cdots & 0 & 0 \\
0 & a_{22} & 0 & \cdots & 0 & 0 \\
0 & 0 & a_{33} & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & a_{n-1\,n-1} & 0 \\
0 & 0 & 0 & \cdots & 0 & a_{nn}
\end{pmatrix}.$$

$$E = \begin{pmatrix}
0 & 0 & 0 & \cdots & 0 & 0 \\
-a_{21} & 0 & 0 & \cdots & 0 & 0 \\
-a_{31} & -a_{32} & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
-a_{n-1\,1} & -a_{n-1\,2} & -a_{n-1\,3} & \cdots & 0 & 0 \\
-a_{n1} & -a_{n2} & -a_{n3} & \cdots & -a_{n\,n-1} & 0
\end{pmatrix},$$
$$F = \begin{pmatrix}
0 & -a_{12} & -a_{13} & \cdots & -a_{1\,n-1} & -a_{1n} \\
0 & 0 & -a_{23} & \cdots & -a_{2\,n-1} & -a_{2n} \\
0 & 0 & 0 & \cdots & -a_{3\,n-1} & -a_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & -a_{n-1\,n} \\
0 & 0 & 0 & \cdots & 0 & 0
\end{pmatrix}.$$
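Assuming the convention above, the split $A = D - E - F$ can be computed directly with NumPy; the following helper is my own illustrative sketch, not part of the notes.

```python
import numpy as np

def split_DEF(A):
    """Write A = D - E - F with D diagonal, -E strictly lower, -F strictly upper."""
    D = np.diag(np.diag(A))
    E = -np.tril(A, k=-1)   # strictly lower part of A, negated
    F = -np.triu(A, k=1)    # strictly upper part of A, negated
    assert np.allclose(A, D - E - F)
    return D, E, F
```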

In Jacobi's method, we assume that all diagonal entries of $A$ are nonzero, and we pick
$$M = D, \qquad N = E + F,$$
so that
$$B = M^{-1}N = D^{-1}(E + F) = I - D^{-1}A.$$
As a matter of notation, we let
$$J = I - D^{-1}A = D^{-1}(E + F),$$
which is called Jacobi's matrix.

The corresponding method, Jacobi's iterative method, computes the sequence $(u_k)$ using the recurrence
$$u_{k+1} = D^{-1}(E + F) u_k + D^{-1} b, \quad k \ge 0.$$
In practice, we iteratively solve the systems
$$D u_{k+1} = (E + F) u_k + b, \quad k \ge 0.$$
If we write $u_k = (u^k_1, \ldots, u^k_n)$, we iteratively solve the following system:
$$\begin{aligned}
a_{11} u^{k+1}_1 &= -a_{12} u^k_2 - a_{13} u^k_3 - \cdots - a_{1n} u^k_n + b_1 \\
a_{22} u^{k+1}_2 &= -a_{21} u^k_1 - a_{23} u^k_3 - \cdots - a_{2n} u^k_n + b_2 \\
&\;\;\vdots \\
a_{n-1\,n-1} u^{k+1}_{n-1} &= -a_{n-1\,1} u^k_1 - \cdots - a_{n-1\,n-2} u^k_{n-2} - a_{n-1\,n} u^k_n + b_{n-1} \\
a_{nn} u^{k+1}_n &= -a_{n1} u^k_1 - a_{n2} u^k_2 - \cdots - a_{n\,n-1} u^k_{n-1} + b_n.
\end{aligned}$$
Observe that we can try to speed up the method by using the new value $u^{k+1}_1$ instead of $u^k_1$ in solving for $u^{k+1}_2$ using the second equation, and more generally, by using $u^{k+1}_1, \ldots, u^{k+1}_{i-1}$ instead of $u^k_1, \ldots, u^k_{i-1}$ in solving for $u^{k+1}_i$ in the $i$-th equation.
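A vectorized NumPy sketch of Jacobi's recurrence $D u_{k+1} = (E + F) u_k + b$, written in the equivalent update form $u_{k+1} = u_k + D^{-1}(b - A u_k)$; this is my own illustration, assuming nonzero diagonal entries.

```python
import numpy as np

def jacobi(A, b, u0, tol=1e-10, max_iter=10_000):
    """Jacobi iteration: u_{k+1} = u_k + D^{-1} (b - A u_k)."""
    d = np.diag(A)                      # diagonal of A, assumed nonzero
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        u_next = u + (b - A @ u) / d    # componentwise division by the diagonal
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u
```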

This observation leads to the system
$$\begin{aligned}
a_{11} u^{k+1}_1 &= -a_{12} u^k_2 - a_{13} u^k_3 - \cdots - a_{1n} u^k_n + b_1 \\
a_{22} u^{k+1}_2 &= -a_{21} u^{k+1}_1 - a_{23} u^k_3 - \cdots - a_{2n} u^k_n + b_2 \\
&\;\;\vdots \\
a_{n-1\,n-1} u^{k+1}_{n-1} &= -a_{n-1\,1} u^{k+1}_1 - \cdots - a_{n-1\,n-2} u^{k+1}_{n-2} - a_{n-1\,n} u^k_n + b_{n-1} \\
a_{nn} u^{k+1}_n &= -a_{n1} u^{k+1}_1 - a_{n2} u^{k+1}_2 - \cdots - a_{n\,n-1} u^{k+1}_{n-1} + b_n,
\end{aligned}$$
which, in matrix form, is written
$$D u_{k+1} = E u_{k+1} + F u_k + b.$$
Because $D$ is invertible and $E$ is strictly lower triangular, the matrix $D - E$ is invertible, so the above equation is equivalent to
$$u_{k+1} = (D - E)^{-1} F u_k + (D - E)^{-1} b, \quad k \ge 0.$$
The above corresponds to choosing $M$ and $N$ to be
$$M = D - E, \qquad N = F,$$
and the matrix $B$ is given by
$$B = M^{-1} N = (D - E)^{-1} F.$$

Since $M = D - E$ is invertible, we know that $I - B = M^{-1} A$ is also invertible.

The method that we just described is the iterative method of Gauss-Seidel, and the matrix $B$ is called the Gauss-Seidel matrix, denoted by $\mathcal{L}_1$, with
$$\mathcal{L}_1 = (D - E)^{-1} F.$$
One of the advantages of the Gauss-Seidel method is that it requires only half the memory used by Jacobi's method, since we only need
$$u^{k+1}_1, \ldots, u^{k+1}_{i-1}, \; u^k_{i+1}, \ldots, u^k_n$$
to compute $u^{k+1}_i$.

We also show that in certain important cases (for example, if $A$ is a tridiagonal matrix), the method of Gauss-Seidel converges faster than Jacobi's method (in this case, they both converge or diverge simultaneously).
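A straightforward in-place sketch of the Gauss-Seidel sweep (again an illustration of mine): each component is updated using the most recent values, so a single vector of storage suffices, which is exactly the memory advantage mentioned above.

```python
import numpy as np

def gauss_seidel(A, b, u0, tol=1e-10, max_iter=10_000):
    """Gauss-Seidel: solve (D - E) u_{k+1} = F u_k + b by forward sweeps."""
    n = len(b)
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_iter):
        u_old = u.copy()
        for i in range(n):
            # components 0..i-1 already hold the new values u^{k+1}
            s = A[i, :i] @ u[:i] + A[i, i + 1:] @ u[i + 1:]
            u[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(u - u_old) < tol:
            return u
    return u
```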

The new ingredient in the relaxation method is to incorporate part of the matrix $D$ into $N$: we define $M$ and $N$ by
$$M = \frac{D}{\omega} - E, \qquad N = \frac{1 - \omega}{\omega} D + F,$$
where $\omega \neq 0$ is a real parameter to be suitably chosen.

Actually, we show in Section 5.4 that for the relaxation method to converge, we must have $\omega \in (0, 2)$. Note that the case $\omega = 1$ corresponds to the method of Gauss-Seidel.

If we assume that all diagonal entries of $D$ are nonzero, the matrix $M$ is invertible.

The matrix $B$ is denoted by $\mathcal{L}_\omega$ and called the matrix of relaxation, with
$$\mathcal{L}_\omega = \left( \frac{D}{\omega} - E \right)^{-1} \left( \frac{1 - \omega}{\omega} D + F \right) = (D - \omega E)^{-1} \bigl( (1 - \omega) D + \omega F \bigr).$$
The number $\omega$ is called the parameter of relaxation. When $\omega > 1$, the relaxation method is known as successive overrelaxation, abbreviated as SOR.

At first glance, the relaxation matrix $\mathcal{L}_\omega$ seems a lot more complicated than the Gauss-Seidel matrix $\mathcal{L}_1$, but the iterative system associated with the relaxation method is very similar to the method of Gauss-Seidel, and is quite simple.

Indeed, the system associated with the relaxation method is given by
$$\left( \frac{D}{\omega} - E \right) u_{k+1} = \left( \frac{1 - \omega}{\omega} D + F \right) u_k + b,$$
which is equivalent to
$$(D - \omega E) u_{k+1} = \bigl( (1 - \omega) D + \omega F \bigr) u_k + \omega b,$$
and can be written
$$D u_{k+1} = D u_k - \omega (D u_k - E u_{k+1} - F u_k - b).$$
Explicitly, this is the system
$$\begin{aligned}
a_{11} u^{k+1}_1 &= a_{11} u^k_1 - \omega (a_{11} u^k_1 + a_{12} u^k_2 + a_{13} u^k_3 + \cdots + a_{1\,n-2} u^k_{n-2} + a_{1\,n-1} u^k_{n-1} + a_{1n} u^k_n - b_1) \\
a_{22} u^{k+1}_2 &= a_{22} u^k_2 - \omega (a_{21} u^{k+1}_1 + a_{22} u^k_2 + a_{23} u^k_3 + \cdots + a_{2\,n-2} u^k_{n-2} + a_{2\,n-1} u^k_{n-1} + a_{2n} u^k_n - b_2) \\
&\;\;\vdots \\
a_{nn} u^{k+1}_n &= a_{nn} u^k_n - \omega (a_{n1} u^{k+1}_1 + a_{n2} u^{k+1}_2 + \cdots + a_{n\,n-2} u^{k+1}_{n-2} + a_{n\,n-1} u^{k+1}_{n-1} + a_{nn} u^k_n - b_n).
\end{aligned}$$
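The last displayed form makes the implementation almost identical to Gauss-Seidel: each row update is the previous value corrected by $\omega$ times the scaled row residual, where the residual uses the new values for the components already updated. A NumPy sketch of my own, where $\omega = 1$ recovers the Gauss-Seidel sweep:

```python
import numpy as np

def sor(A, b, u0, omega, tol=1e-10, max_iter=10_000):
    """Relaxation (SOR): u_i^{k+1} = u_i^k - omega * (row residual) / a_ii."""
    n = len(b)
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(max_iter):
        u_old = u.copy()
        for i in range(n):
            # row residual: new values for j < i, old values for j >= i
            r = A[i, :i] @ u[:i] + A[i, i:] @ u[i:] - b[i]
            u[i] = u[i] - omega * r / A[i, i]
        if np.linalg.norm(u - u_old) < tol:
            return u
    return u
```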

What remains to be done is to find conditions that ensure the convergence of the relaxation method (and the Gauss-Seidel method), that is:

1. Find conditions on $\omega$, namely some interval $I \subseteq \mathbb{R}$, so that $\omega \in I$ implies $\rho(\mathcal{L}_\omega) < 1$; we will prove that $\omega \in (0, 2)$ is a necessary condition.
2. Find whether there exists some optimal value $\omega_0$ of $\omega \in I$, so that
$$\rho(\mathcal{L}_{\omega_0}) = \inf_{\omega \in I} \rho(\mathcal{L}_\omega).$$

We will give partial answers to the above questions in the next section.

It is also possible to extend the methods of this section by using block decompositions of the form $A = D - E - F$, where $D$, $E$, and $F$ consist of blocks, and with $D$ an invertible block-diagonal matrix.

5.4 Convergence of the Methods of Jacobi, Gauss-Seidel, and Relaxation

We begin with a general criterion for the convergence of an iterative method associated with a (complex) Hermitian positive definite matrix $A = M - N$. Next, we apply this result to the relaxation method.

Proposition 5.5. Let $A$ be any Hermitian positive definite matrix, written as $A = M - N$, with $M$ invertible. Then $M^* + N$ is Hermitian, and if it is positive definite, then
$$\rho(M^{-1} N) < 1,$$
so that the iterative method converges.

Now, as in the previous sections, we assume that $A$ is written as $A = D - E - F$, with $D$ invertible, possibly in block form. The next theorem provides a sufficient condition (which turns out to be also necessary) for the relaxation method to converge (and thus, for the method of Gauss-Seidel to converge). This theorem is known as the Ostrowski-Reich theorem.

Theorem 5.6. If $A = D - E - F$ is Hermitian positive definite, and if $0 < \omega < 2$, then the relaxation method converges. This also holds for a block decomposition of $A$.

Remark: What if we allow the parameter $\omega$ to be a nonzero complex number $\omega \in \mathbb{C}$? In this case, the relaxation method also converges for $\omega \in \mathbb{C}$, provided that
$$|\omega - 1| < 1.$$
This condition reduces to $0 < \omega < 2$ if $\omega$ is real.

Unfortunately, Theorem 5.6 does not apply to Jacobi's method, but in special cases, Proposition 5.5 can be used to prove its convergence. On the positive side, if a matrix is strictly column (or row) diagonally dominant, then it can be shown that the method of Jacobi and the method of Gauss-Seidel both converge.
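Strict row (or column) diagonal dominance is easy to test. The following sketch (my own, not from the notes) checks the row version, which already suffices to guarantee convergence of both Jacobi and Gauss-Seidel.

```python
import numpy as np

def is_strictly_row_diagonally_dominant(A):
    """True if |a_ii| > sum_{j != i} |a_ij| for every row i."""
    absA = np.abs(np.asarray(A, dtype=float))
    off_diag = absA.sum(axis=1) - np.diag(absA)
    return bool(np.all(np.diag(absA) > off_diag))
```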

The relaxation method also converges if $\omega \in (0, 1]$, but this is not a very useful result because the speed-up of convergence usually occurs for $\omega > 1$.

We now prove that, without any assumption on $A = D - E - F$ other than the fact that $A$ and $D$ are invertible, in order for the relaxation method to converge, we must have $\omega \in (0, 2)$.

Proposition 5.7. Given any matrix $A = D - E - F$, with $A$ and $D$ invertible, for any $\omega \neq 0$, we have
$$\rho(\mathcal{L}_\omega) \ge |\omega - 1|.$$
Therefore, the relaxation method (possibly by blocks) does not converge unless $\omega \in (0, 2)$. If we allow $\omega$ to be complex, then we must have
$$|\omega - 1| < 1$$
for the relaxation method to converge.

We now consider the case where $A$ is a tridiagonal matrix, possibly by blocks. We begin with the case $\omega = 1$, which is technically easier to deal with. The following proposition gives us the precise relationship between the spectral radii $\rho(J)$ and $\rho(\mathcal{L}_1)$ of the Jacobi matrix and the Gauss-Seidel matrix.

Proposition 5.8. Let $A$ be a tridiagonal matrix (possibly by blocks). If $\rho(J)$ is the spectral radius of the Jacobi matrix and $\rho(\mathcal{L}_1)$ is the spectral radius of the Gauss-Seidel matrix, then we have
$$\rho(\mathcal{L}_1) = (\rho(J))^2.$$
Consequently, the method of Jacobi and the method of Gauss-Seidel both converge or both diverge simultaneously (even when $A$ is tridiagonal by blocks); when they converge, the method of Gauss-Seidel converges faster than Jacobi's method.

We now consider the more general situation where $\omega$ is any real number in $(0, 2)$.

Proposition 5.9. Let $A$ be a tridiagonal matrix (possibly by blocks), and assume that the eigenvalues of the Jacobi matrix are all real. If $\omega \in (0, 2)$, then the method of Jacobi and the method of relaxation both converge or both diverge simultaneously (even when $A$ is tridiagonal by blocks). When they converge, the function $\omega \mapsto \rho(\mathcal{L}_\omega)$ (for $\omega \in (0, 2)$) has a unique minimum equal to $\omega_0 - 1$ for
$$\omega_0 = \frac{2}{1 + \sqrt{1 - (\rho(J))^2}},$$
where $1 < \omega_0 < 2$ if $\rho(J) > 0$. We also have $\rho(\mathcal{L}_1) = (\rho(J))^2$, as before.

Combining the results of Theorem 5.6 and Proposition 5.9, we obtain the following result, which gives precise information about the spectral radii of the matrices $J$, $\mathcal{L}_1$, and $\mathcal{L}_\omega$.

Proposition 5.10. Let $A$ be a tridiagonal matrix (possibly by blocks) which is Hermitian positive definite. Then the methods of Jacobi, Gauss-Seidel, and relaxation all converge for $\omega \in (0, 2)$. There is a unique optimal relaxation parameter
$$\omega_0 = \frac{2}{1 + \sqrt{1 - (\rho(J))^2}},$$
such that
$$\rho(\mathcal{L}_{\omega_0}) = \inf_{0 < \omega < 2} \rho(\mathcal{L}_\omega) = \omega_0 - 1.$$

Furthermore, if $\rho(J) > 0$, then
$$\rho(\mathcal{L}_{\omega_0}) < \rho(\mathcal{L}_1) = (\rho(J))^2 < \rho(J),$$
and if $\rho(J) = 0$, then $\omega_0 = 1$ and $\rho(\mathcal{L}_1) = \rho(J) = 0$.

Remark: It is preferable to overestimate rather than underestimate the relaxation parameter when the optimal relaxation parameter is not known exactly.
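For a tridiagonal, Hermitian positive definite matrix, Proposition 5.10 gives the optimal parameter in closed form from $\rho(J)$. The sketch below (my own illustration, under exactly that assumption) computes $\omega_0$ numerically; forming the Jacobi matrix and its eigenvalues explicitly is only reasonable for small examples.

```python
import numpy as np

def optimal_omega(A):
    """omega_0 = 2 / (1 + sqrt(1 - rho(J)^2)), valid for tridiagonal HPD A."""
    D = np.diag(np.diag(A))
    J = np.eye(len(A)) - np.linalg.solve(D, A)   # Jacobi matrix J = I - D^{-1} A
    rho_J = max(abs(np.linalg.eigvals(J)))       # rho(J) < 1 under the stated assumptions
    return 2.0 / (1.0 + np.sqrt(1.0 - rho_J**2))
```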
