
Chapter 2: Iterative Methods

2.1 Introduction

In this chapter we will consider three different iterative methods for solving a set of equations. First, we consider a series of examples to illustrate iterative methods. To construct an iterative method, we try to rearrange the system of equations so that it generates a sequence of approximations.

2.1.1 Simple Iteration Example

Example 2.1.1: Let us consider the equation

f(x) = x - 2 + e^(-x) = 0.    (2.1)

[Figure: the curves y = e^(-x) and y = 2 - x, which intersect at the root α with 0 < α < 2.]

When solving an equation such as (2.1) for α, where f(α) = 0 and 0 < α < 2, we can generate a sequence {x^(k)}, k = 0, 1, 2, ..., from some initial value (guess) x^(0) by re-writing the equation as x = 2 - e^(-x), i.e. by computing

x^(k+1) = 2 - e^(-x^(k))

from some x^(0). If the sequence converges, it will converge to a solution. For example, let us consider x^(0) = 1 and x^(0) = -1:

 k    x^(k) (x^(0) = 1)    x^(k) (x^(0) = -1)
 0        1.0                  -1.0
 1        1.6321               -0.71828
 2        1.80449              -0.05091
 3        1.83544               0.947776
 4        1.84046               1.61240
 5        1.84126               1.80059
 6        1.84138               1.83480
 7        1.84140               1.84124
 8        1.84141               1.84138
 9        ...                   ...

In this example, both sequences appear to converge to a value close to the root α = 1.84141, where 0 < α < 2. Hence, we have constructed a simple algorithm for solving an equation, and it appears to be a robust iterative method. However, (2.1) has two solutions: a positive root at 1.84141 and a negative root at -1.14619. Why do we only find one root?

If f(x) = 0 has a solution x = α, then x^(k+1) = g(x^(k)) will converge to α provided |g'(α)| < 1 and x^(0) is suitably chosen. The condition |g'(α)| < 1 is a necessary condition. In the above example,

g(x) = 2 - e^(-x)  and  g'(x) = e^(-x),

and |g'(x)| < 1 if x > 0. So this method can be used to find the positive root of (2.1). However, it will never converge to the negative root. Hence, this kind of approach will not always converge to a solution.
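A minimal Python sketch of this fixed-point iteration (the function name and the number of steps shown are illustrative choices, not part of the notes):

import math

def fixed_point(g, x0, n_steps=9):
    # Return the sequence x(0), x(1), ..., x(n_steps) with x(k+1) = g(x(k)).
    xs = [x0]
    for _ in range(n_steps):
        xs.append(g(xs[-1]))
    return xs

g = lambda x: 2.0 - math.exp(-x)   # rearrangement of f(x) = x - 2 + e^(-x) = 0

for x0 in (1.0, -1.0):
    print(f"x(0) = {x0}:", [round(x, 5) for x in fixed_point(g, x0)])
# Both sequences approach the positive root 1.84141; neither finds -1.14619.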

2.1.2 Linear Systems

Let us adopt the same approach for a linear system.

Example 2.1.2: Consider the following set of linear equations:

10x_1 + x_2 = 12
x_1 + 10x_2 = 21.

Let us re-write these equations as

x_1 = (12 - x_2)/10
x_2 = (21 - x_1)/10.

Thus, we can use the following:

x_1^(k+1) = 1.2 - x_2^(k)/10
x_2^(k+1) = 2.1 - x_1^(k)/10,

to generate a sequence of vectors x^(k) = (x_1^(k), x_2^(k))^T from some starting vector x^(0). If

x^(0) = (0, 0)^T,

then

x^(1) = (1.2, 2.1)^T,  x^(2) = (0.99, 1.98)^T,  x^(3) = (1.002, 2.001)^T, ...,

where

x^(k) -> (1, 2)^T as k -> infinity,

which is indeed the correct answer. So we have generated a convergent sequence.

Let us consider the above set of linear equations again. Possibly the more obvious rearrangement was

x_1 = 12 - 10x_2
x_2 = 21 - 10x_1.

Thus, we can generate a sequence using:

x_1^(k+1) = 12 - 10x_2^(k)
x_2^(k+1) = 21 - 10x_1^(k).

If we again use x^(0) = (0, 0)^T, then

x^(1) = (12, 21)^T,  x^(2) = (-198, -99)^T,  x^(3) = (1002, 2001)^T, ...
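The same kind of sketch, run on the two rearrangements above (the helper name `iterate` is an illustrative choice), reproduces the convergent and divergent behaviour:

def iterate(step, x0, n_steps):
    # Apply the update map `step` n_steps times, starting from the pair x0.
    xs = [x0]
    for _ in range(n_steps):
        xs.append(step(xs[-1]))
    return xs

# Convergent rearrangement: x1 = (12 - x2)/10, x2 = (21 - x1)/10
good = lambda x: ((12.0 - x[1]) / 10.0, (21.0 - x[0]) / 10.0)
# Divergent rearrangement:  x1 = 12 - 10*x2,  x2 = 21 - 10*x1
bad = lambda x: (12.0 - 10.0 * x[1], 21.0 - 10.0 * x[0])

print("first rearrangement: ", iterate(good, (0.0, 0.0), 3))
print("second rearrangement:", iterate(bad, (0.0, 0.0), 3))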

Clearly, this sequence is not converging! Why?

Example 2.1.3: Let us consider the system of Example 2.1.2 again. Can we find a method that allows the sequence to converge more quickly? Let us look at the computation more carefully. In the first step x_1^(1) is computed from x_2^(0), and in the second step we compute x_2^(1) from x_1^(0). It seems more natural, from a computational point of view, to use x_1^(1) rather than x_1^(0) in the second step, i.e. to use the latest available value. In effect, we want to compute the following:

x_1^(k+1) = 1.2 - x_2^(k)/10
x_2^(k+1) = 2.1 - x_1^(k+1)/10,

which gives

x^(0) = (0, 0)^T,  x^(1) = (1.2, 1.98)^T,  x^(2) = (1.002, 1.9998)^T, ...,

which converges to (1, 2)^T much more rapidly!

In the following sections we will consider, in general terms, iterative methods for solving a system Ax = b. First, though, we introduce some important results about sequences of vectors.

2.2 Sequences of Vectors

2.2.1 The Limit of a Sequence

Let {x^(k)}, k = 0, 1, 2, ..., be a sequence in a vector space V. How do we know if this sequence has a limit? First observe that ||x|| = ||y|| does not imply x = y; i.e. two distinct objects in a vector space can have the same size. However, from rule 1 for norms (1.1) we know that if ||x - y|| = 0, then x = y. So if

lim_{k->inf} ||x^(k) - x|| = 0,  then  lim_{k->inf} x^(k) = x.

The vector x is the limit of the sequence.

2.2.2 Convergence of a Sequence

Suppose the sequence {x^(k)}, k = 0, 1, 2, ..., is generated by x^(k+1) = Bx^(k) + c and converges to x. If x^(k) -> x as k -> infinity, then x satisfies the equation

x = Bx + c,

and so we have

x^(k+1) - x = B(x^(k) - x),

and thus, taking norms,

||x^(k+1) - x|| <= ||B|| ||x^(k) - x||.

If ||B|| < 1, then

||x^(k+1) - x|| < ||x^(k) - x||,

i.e. we have a monotonically decreasing sequence of errors; in other words, the error in the approximations decreases.

Say we start from an initial guess x^(0), so that

x^(1) - x = B(x^(0) - x).

Then

x^(2) - x = B(x^(1) - x) = B(B(x^(0) - x)) = B^2 (x^(0) - x),

and so on, to give

x^(k) - x = B^k (x^(0) - x).

Taking norms, and using rule 5 (1.9) for sub-ordinate matrix norms,

||x^(k) - x|| = ||B^k (x^(0) - x)|| <= ||B^(k-1)|| ||B|| ||x^(0) - x|| <= ... <= ||B||^k ||x^(0) - x||.

If ||B|| < 1, then ||B||^k -> 0 as k -> infinity, and hence x^(k) -> x as k -> infinity.

Recall that ρ(B) <= ||B|| (Section 1.5), so a necessary condition for convergence is ρ(B) < 1. Furthermore, it is possible to show that if ρ(B) < 1, then ||B|| < 1 in some sub-ordinate norm, and if ρ(B) > 1, then ||B|| > 1, although we do not prove these results in this course. Hence, ρ(B) < 1 is not only a necessary condition, but also a sufficient condition, for convergence.
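As a quick illustration of the bound ||x^(k) - x|| <= ||B||^k ||x^(0) - x||, the following sketch uses the convergent iteration of Example 2.1.2 written in the form x^(k+1) = Bx^(k) + c; the matrices B and c below are spelled out explicitly here for illustration (this matrix form is derived formally in Example 2.3.1):

import numpy as np

B = np.array([[0.0, -0.1],
              [-0.1, 0.0]])     # iteration matrix of the convergent rearrangement
c = np.array([1.2, 2.1])
x_true = np.array([1.0, 2.0])   # exact solution of the 2x2 system

x = np.zeros(2)                 # x(0) = (0, 0)
err0 = np.linalg.norm(x - x_true, np.inf)
for k in range(1, 6):
    x = B @ x + c
    err = np.linalg.norm(x - x_true, np.inf)
    bound = np.linalg.norm(B, np.inf) ** k * err0
    print(f"k = {k}: error = {err:.2e}, bound ||B||^k ||x(0) - x|| = {bound:.2e}")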

2.2.3 Spectral radius and rate of convergence

In numerical analysis, to compare different methods for solving systems of equations we are interested in determining the rate of convergence of the method. As we will see below, the spectral radius is a measure of the rate of convergence.

Consider the situation where B, an N x N matrix, has N linearly independent eigenvectors. As before we have

x^(k+1) - x = B(x^(k) - x),

or, substituting v^(k) = x^(k) - x,

v^(k+1) = Bv^(k).

Now write v^(0) = sum_{i=1}^N α_i e_i, where the e_i are the eigenvectors (with associated eigenvalues λ_i) of B. Then continuing the sequence gives

v^(1) = B ( sum_{i=1}^N α_i e_i ) = sum_{i=1}^N α_i B e_i = sum_{i=1}^N α_i λ_i e_i,

v^(2) = B ( sum_{i=1}^N α_i λ_i e_i ) = sum_{i=1}^N α_i λ_i B e_i = sum_{i=1}^N α_i λ_i^2 e_i,

...

v^(k) = sum_{i=1}^N α_i λ_i^k e_i.

Now suppose |λ_1| > |λ_i| (i = 2, ..., N); then

v^(k) = α_1 λ_1^k e_1 + sum_{i=2}^N α_i λ_i^k e_i = λ_1^k [ α_1 e_1 + sum_{i=2}^N α_i (λ_i/λ_1)^k e_i ].

Given that |λ_i/λ_1| < 1, for large k,

v^(k) ~ α_1 λ_1^k e_1.

Hence, the error associated with x^(k), the kth vector in the sequence, is given by v^(k), which varies as the kth power of the magnitude of the largest eigenvalue. In other words, it varies as the kth power of the spectral radius ρ(B) (= |λ_1|). So the spectral radius is a good indication of the rate of convergence.

2.2.4 Gerschgorin's Theorem

The above result means that if we know the magnitude of the largest eigenvalue of the iteration matrix, we can estimate the rate of convergence of a system of equations for a particular method. However, this

requires the magnitudes of all the eigenvalues to be known, and these would probably have to be determined numerically. Gerschgorin's Theorem is a surprisingly simple result concerning eigenvalues that allows us to put bounds on the size of the eigenvalues of a matrix without actually finding the eigenvalues themselves.

The equation Ae = λe, where (λ, e) is an eigenvalue-eigenvector pair of the matrix A, can be written in component form as

sum_{j=1}^N a_ij e_j = a_ii e_i + sum_{j != i} a_ij e_j = λ e_i.

Rearranging gives

e_i (a_ii - λ) = - sum_{j != i} a_ij e_j,

and thus

|e_i| |a_ii - λ| <= sum_{j != i} |a_ij| |e_j|.

Suppose the component of the eigenvector e with the largest absolute value is e_l, so that |e_l| >= |e_j| for all j (note that e != 0, so e_l != 0). Then, from above,

|e_l| |a_ll - λ| <= sum_{j != l} |a_lj| |e_j| <= sum_{j != l} |a_lj| |e_l|,

so, dividing by |e_l|, we get

|a_ll - λ| <= sum_{j != l} |a_lj|.

Each eigenvalue lies inside a circle with centre a_ll and radius sum_{j != l} |a_lj|. However, we do not know l without finding λ and e. But we can say that the union of all such circles must contain all the eigenvalues. This is Gerschgorin's Theorem.

Example 2.2.1: Determine the bounds on the eigenvalues of the matrix

A = [  2  -1   0   0 ]
    [ -1   2  -1   0 ]
    [  0  -1   2  -1 ]
    [  0   0  -1   2 ].

Gerschgorin's Theorem implies that the union of all the circles

|a_ll - λ| <= sum_{j != l} |a_lj|

must contain all the eigenvalues. For l = 1 and 4 we get the relation |λ - 2| <= 1. For l = 2 and 3 we get |λ - 2| <= 2. The matrix is symmetric, so the eigenvalues are real, and Gerschgorin's Theorem therefore implies 0 <= λ <= 4. The eigenvalues of A are in fact λ_1 = 3.618, λ_2 = 2.618, λ_3 = 1.382 and λ_4 = 0.382, so the largest eigenvalue is indeed less than 4.
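A small numerical check of this example (using numpy; purely illustrative):

import numpy as np

A = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])

# Gerschgorin discs: centre a_ll, radius = sum over j != l of |a_lj|
for l in range(A.shape[0]):
    radius = np.abs(A[l]).sum() - abs(A[l, l])
    print(f"row {l + 1}: centre {A[l, l]:.0f}, radius {radius:.0f}")

print("eigenvalues:", np.round(np.linalg.eigvalsh(A), 3))
# All four eigenvalues lie in the union of the discs, i.e. in [0, 4].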

2.3 The Jacobi Iterative Method

The Jacobi Iterative Method follows the iterative approach shown in Example 2.1.2. Consider the linear system Ax = b, where A = [a_ij] is an N x N matrix and x = [x_i], b = [b_i] are N-vectors. Let us try to isolate x_i. The ith equation reads

sum_{j=1}^N a_ij x_j = b_i.

Assuming a_ii != 0 for all i, we can re-write this as

a_ii x_i = b_i - sum_{j != i} a_ij x_j,

so

x_i = (1/a_ii) ( b_i - sum_{j != i} a_ij x_j ),

giving the recurrence relation

x_i^(k+1) = (1/a_ii) ( b_i - sum_{j != i} a_ij x_j^(k) ),    (2.2)

for each i = 1, ..., N. This is known as the Jacobi Iterative Method.

In matrix form, we have

A = D - L - U,    (2.3)

where D is a diagonal matrix with elements a_ii, L is a strictly lower triangular matrix, L = [l_ij], such that

l_ij = -a_ij for i > j,  l_ij = 0 for i <= j,

and U is a strictly upper triangular matrix, U = [u_ij], such that

u_ij = -a_ij for i < j,  u_ij = 0 for i >= j.

The system becomes

(D - L - U)x = b,

or

Dx = (L + U)x + b.

Dividing each equation by a_ii is equivalent to writing

x = D^(-1)(L + U)x + D^(-1)b,

where the elements of the diagonal matrix D^(-1) are 1/a_ii, so we have pre-multiplied by the inverse of D. Hence, the matrix form of the iterative method (2.2), known as the Jacobi Iteration Method, is

x^(k+1) = D^(-1)(L + U)x^(k) + D^(-1)b.    (2.4)

The matrix B_J = D^(-1)(L + U) is called the iteration matrix for the Jacobi Iteration Method.

2.3.1 Convergence of the Jacobi Iteration Method

From Section 2.2.2, recall that an iterative method of the form x^(k+1) = Bx^(k) + c will converge provided ||B|| < 1, and that a necessary and sufficient condition for this to be true is ρ(B) < 1. Thus, for the Jacobi method, we require ||B_J|| = ||D^(-1)(L + U)|| < 1 for convergence and, hence, ρ(B_J) < 1.
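A minimal implementation of the Jacobi iteration (2.2)/(2.4); the function name, tolerance and iteration cap are illustrative choices:

import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    # Jacobi iteration: x(k+1) = D^(-1)((L + U) x(k) + b), with A = D - L - U.
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b) if x0 is None else np.array(x0, dtype=float)
    d = np.diag(A)                 # the diagonal entries a_ii
    R = A - np.diagflat(d)         # off-diagonal part of A, i.e. -(L + U)
    for k in range(max_iter):
        x_new = (b - R @ x) / d
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

x, its = jacobi([[10.0, 1.0], [1.0, 10.0]], [12.0, 21.0])
print(x, its)    # approaches (1, 2)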

Example 2.3.1: Let us return once more to Example 2.1.2 and recast it in the form of the Jacobi iterative method. The linear system we wish to solve is

Ax = [ 10  1 ] [ x_1 ]   [ 12 ]
     [  1 10 ] [ x_2 ] = [ 21 ] = b.

The first thing we need to do is find D and L + U, where A = D - L - U:

A = [ 10  1 ],  D = [ 10  0 ]  and  L + U = [  0 -1 ],
    [  1 10 ]       [  0 10 ]               [ -1  0 ]

hence,

D^(-1)(L + U) = B_J = [    0  -1/10 ].
                      [ -1/10     0 ]

Now, choosing the matrix norm sub-ordinate to the infinity norm, we find

||B_J||_inf = 1/10 < 1.

Alternatively, we can consider the spectral radius of B_J. The eigenvalues of B_J are given by

λ^2 - 1/100 = 0,

and so

ρ(B_J) = 1/10,

which in this case is equal to ||B_J||_inf.

So if x is the limit of our sequence, then

||x^(k+1) - x|| <= (1/10) ||x^(k) - x||.

In Example 2.1.2 we had

x^(0) = (0, 0)^T and x = (1, 2)^T,

so ||x^(0) - x||_inf = 2 and

||x^(1) - x||_inf <= 2/10 = 0.2.

Remember,

x^(1) = (1.2, 2.1)^T,  so  x^(1) - x = (0.2, 0.1)^T,

and indeed ||x^(1) - x||_inf = 0.2.

Since the size of ρ(B_J) is an indication of the rate of convergence, we see here that this system converges at a rate of ρ(B_J) = 0.1. The smaller the spectral radius, the more rapid the convergence. So is it possible to modify this method to make it faster?
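Before moving on, the iteration matrix, its norm and its spectral radius for this example can be checked directly (illustrative):

import numpy as np

A = np.array([[10.0, 1.0], [1.0, 10.0]])
D = np.diag(np.diag(A))
B_J = np.linalg.solve(D, D - A)          # D^(-1)(L + U), since L + U = D - A

print(B_J)                               # [[0, -0.1], [-0.1, 0]]
print(np.linalg.norm(B_J, np.inf))       # 0.1
print(max(abs(np.linalg.eigvals(B_J))))  # spectral radius rho(B_J) = 0.1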

2.4 The Gauss-Seidel Iterative Method

To produce a faster iterative method we amend the Jacobi Method to make use of the new values as they become available (e.g. as in Example 2.1.3). Expanding out the Jacobi Method (2.4) we have

x^(k+1) = D^(-1)(L + U)x^(k) + D^(-1)b = D^(-1)Lx^(k) + D^(-1)Ux^(k) + D^(-1)b.

Here D^(-1)L is a strictly lower triangular matrix, so the ith row of D^(-1)Lx^(k) contains the values x_1^(k), x_2^(k), ..., x_{i-1}^(k) (components up to, but not including, the diagonal). Likewise, D^(-1)U is a strictly upper triangular matrix, so the ith row of D^(-1)Ux^(k) contains x_{i+1}^(k), x_{i+2}^(k), ..., x_N^(k). If we compute the x_i^(k+1) in order of increasing i (i.e. from the top of the vector to the bottom), then when computing x_i^(k+1) we already have available x_1^(k+1), x_2^(k+1), ..., x_{i-1}^(k+1). Hence, a more efficient version of the Jacobi Method is to compute (in order of increasing i)

x^(k+1) = D^(-1)Lx^(k+1) + D^(-1)Ux^(k) + D^(-1)b.

This is equivalent to finding x^(k+1) from

(I - D^(-1)L)x^(k+1) = D^(-1)Ux^(k) + D^(-1)b,

or

x^(k+1) = (I - D^(-1)L)^(-1)D^(-1)Ux^(k) + (I - D^(-1)L)^(-1)D^(-1)b.

This is known as the Gauss-Seidel Iterative Method. The iteration matrix becomes

B_GS = (I - D^(-1)L)^(-1)D^(-1)U = [D(I - D^(-1)L)]^(-1)U = (D - L)^(-1)U.
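A matching sketch of the Gauss-Seidel iteration, sweeping through the components in order of increasing i and using each new value immediately (names and stopping rule are illustrative):

import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # components 0..i-1 already hold the new values, i+1..n-1 the old ones
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

x, its = gauss_seidel([[10.0, 1.0], [1.0, 10.0]], [12.0, 21.0])
print(x, its)    # reaches (1, 2) in roughly half as many iterations as Jacobi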

The formal way of deriving the Gauss-Seidel method is as follows. With A = D - L - U, the system Ax = b becomes

(D - L)x = Ux + b,

and hence,

x = (D - L)^(-1)Ux + (D - L)^(-1)b,

generating the recurrence relation

x^(k+1) = (D - L)^(-1)Ux^(k) + (D - L)^(-1)b.    (2.5)

The iteration matrix for the Gauss-Seidel method is given by B_GS = (D - L)^(-1)U. Thus, for convergence (from Section 2.2.2) we require that

||B_GS|| = ||(D - L)^(-1)U|| < 1.

Example 2.4.1: Again we reconsider the linear system used in Examples 2.1.2, 2.1.3 and 2.3.1, and recast it in the form of the Gauss-Seidel Method:

A = [ 10  1 ],
    [  1 10 ]

and since A = D - L - U, we have

D - L = [ 10  0 ]  and  U = [ 0 -1 ].
        [  1 10 ]           [ 0  0 ]

Then

(D - L)^(-1) = [  1/10    0   ],   so   (D - L)^(-1)U = [ 0  -1/10  ],
               [ -1/100  1/10 ]                         [ 0   1/100 ]

and thus the Gauss-Seidel iteration matrix is

B_GS = (D - L)^(-1)U = [ 0  -1/10  ].
                       [ 0   1/100 ]

Clearly, the norm of the iteration matrix is

||B_GS||_inf = ||(D - L)^(-1)U||_inf = 1/10 < 1,

and hence, the method will converge for this example. Let us look at the eigenvalues to get a feel for the rate of convergence. The eigenvalues are given by

det(B_GS - λI) = det [ -λ     -1/10    ] = 0,
                     [  0    1/100 - λ ]

or,

-λ (1/100 - λ) = 0,

so we have

λ = 0 or λ = 1/100,

and hence,

ρ(B_GS) = ρ[(D - L)^(-1)U] = 1/100.

Observe that in this example, even though ||B_GS||_inf = ||B_J||_inf, we have ρ(B_GS) = [ρ(B_J)]^2 (cf. Example 2.3.1), implying that Gauss-Seidel converges twice as fast as Jacobi.

2.5 The Successive Over-Relaxation Iterative Method

The third iterative method we will consider is a method which accelerates the Gauss-Seidel method. Consider the system Ax = b, with A = D - L - U as before. When trying to solve Ax = b, we obtain an approximate solution x^(k) of the true solution x. The quantity

r^(k) = b - Ax^(k)

is called the residual, and it is a measure of the accuracy of x^(k). Clearly, we would like to make the residual r^(k) as small as possible for each approximate solution x^(k).

Now remember, when calculating x_i^(k+1), the components x_1^(k+1), ..., x_{i-1}^(k+1) are already known. So in the Gauss-Seidel iterative method, the residual vector for the most recent approximation is given by

r^(k) = b - Dx^(k) + Lx^(k+1) + Ux^(k).

Ultimately, we wish to make ||x - x^(k)|| as small as possible. However, as we do not know x yet, we instead consider x^(k+1) - x^(k) as a measure of x - x^(k). We now wish to calculate x^(k+1) such that

D(x^(k+1) - x^(k)) = ω(b - Dx^(k) + Lx^(k+1) + Ux^(k)),

where ω is called the relaxation parameter. Re-arranging, we get

(D - ωL)x^(k+1) = ((1 - ω)D + ωU)x^(k) + ωb,

and hence, the recurrence relation is given by

x^(k+1) = (D - ωL)^(-1)((1 - ω)D + ωU)x^(k) + (D - ωL)^(-1)ωb.    (2.6)

The process of reducing the residuals at each stage is called Successive Relaxation. If 0 < ω < 1, the iterative method is known as Successive Under-Relaxation; such schemes can be used to obtain convergence when the Gauss-Seidel scheme is not convergent. For choices of ω > 1 the scheme

is a Successive Over-Relaxation and is used to accelerate convergent Gauss-Seidel iterations. Note that ω = 1 is simply the Gauss-Seidel Iterative Method.

The iteration matrix for the S.O.R. method (Successive Over-Relaxation, with ω > 1) is given by

B_SOR = (D - ωL)^(-1)[(1 - ω)D + ωU].

The iteration matrix B_SOR can be derived by splitting A in the following way:

A = D - L - U = (1 - 1/ω)D + (1/ω)D - L - U,   ω > 0.

Thus Ax = b can be written as

((1/ω)D - L)x = ((1/ω - 1)D + U)x + b,

or, multiplying through by ω,

(D - ωL)x = ((1 - ω)D + ωU)x + ωb,

so

B_SOR = (D - ωL)^(-1)[(1 - ω)D + ωU].

The aim is to choose ω such that the rate of convergence is maximised, that is, the spectral radius ρ(B_SOR(ω)) is minimised. How do we find the value of ω that does this? There is no complete answer for general N x N systems, but it is known that if a_ii != 0 for each 1 <= i <= N, then

ρ(B_SOR) >= |ω - 1|.

This means that for convergence we must have 0 < ω < 2.

Example 2.5.1: We return once more to the linear system considered throughout this chapter in Examples 2.1.2, 2.1.3, 2.3.1 and 2.4.1, and recast it here in terms of the SOR iterative method. Recall,

A = [ 10  1 ],
    [  1 10 ]

and A = D - L - U, such that

(1 - ω)D + ωU = (1 - ω)[ 10  0 ] + ω[ 0 -1 ] = [ 10(1 - ω)     -ω      ],
                       [  0 10 ]    [ 0  0 ]   [     0      10(1 - ω)  ]

and

D - ωL = [ 10  0 ] - ω[  0  0 ] = [ 10  0 ].
         [  0 10 ]    [ -1  0 ]   [  ω 10 ]

Now

(D - ωL)^(-1) = [  1/10    0   ],
                [ -ω/100  1/10 ]

and thus the iteration matrix is

B_SOR = (D - ωL)^(-1)[(1 - ω)D + ωU] = [      1 - ω              -ω/10        ].
                                       [ -ω(1 - ω)/10    ω^2/100 + (1 - ω)    ]

The eigenvalues of this matrix are given by

[(1 - ω) - λ][ω^2/100 + (1 - ω) - λ] - ω^2(1 - ω)/100 = 0,

λ^2 - λ[2(1 - ω) + ω^2/100] + (1 - ω)[ω^2/100 + (1 - ω)] - ω^2(1 - ω)/100 = 0,

λ^2 - λ[2(1 - ω) + ω^2/100] + (1 - ω)^2 = 0.

Solving this quadratic for λ gives

λ = (1/2) ( 2(1 - ω) + ω^2/100 ± [ (2(1 - ω) + ω^2/100)^2 - 4(1 - ω)^2 ]^(1/2) )
  = (1 - ω) + ω^2/200 ± (1/2)[ 4(1 - ω)ω^2/100 + ω^4/10^4 ]^(1/2)
  = (1 - ω) + ω^2/200 ± (ω/20)[ 4(1 - ω) + ω^2/100 ]^(1/2).

When ω = 1 (the Gauss-Seidel Method), one root is 0 and the other is 1/100. Changing ω changes these roots. Suppose we select ω such that

4(1 - ω) + ω^2/100 = 0,

so that there are equal roots to the equation. This implies ω^2 = 400(ω - 1), and the repeated root is λ = ω - 1. The smallest value of ω (with ω > 1) producing equal roots is ω = 1.002513, which is not very different (ω is close to 1) from Gauss-Seidel! However, the spectral radius of the SOR iteration matrix is then just

ρ(B_SOR) = 0.002513,

compared with ρ(B_GS) = 0.01.

ρ(B_SOR) is very sensitive to ω. If you can hit the right value, the improvement in the speed of convergence of the iteration method is significant. Although this example is only a 2 x 2 matrix, the comments apply in general. For a larger set of equations, convergence of Gauss-Seidel can be slow, and SOR with an optimum value of ω (if it can be found) can be a major improvement.
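To see how sensitive the convergence rate is to ω, the following sketch builds B_SOR for the 2 x 2 example and evaluates its spectral radius on a small, arbitrarily chosen grid of ω values:

import numpy as np

def sor_spectral_radius(A, omega):
    # rho(B_SOR) with B_SOR = (D - omega L)^(-1) [(1 - omega) D + omega U]
    A = np.asarray(A, dtype=float)
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)          # strictly lower part, with A = D - L - U
    U = -np.triu(A, 1)           # strictly upper part
    B = np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)
    return max(abs(np.linalg.eigvals(B)))

A = np.array([[10.0, 1.0], [1.0, 10.0]])
for omega in (1.0, 1.001, 1.002513, 1.005, 1.01, 1.1):
    print(f"omega = {omega:.6f}: rho(B_SOR) = {sor_spectral_radius(A, omega):.6f}")
# omega = 1 recovers Gauss-Seidel (rho = 0.01); the radius dips to about 0.0025
# near omega = 1.002513 and grows again as omega moves away from the optimum.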

2.6 Convergence of the SOR Method for Consistently Ordered Matrices

In general, it is not easy to find an appropriate ω for the SOR method, and so an ω is usually chosen which lies in the range 1 < ω < 2 and leads to a spectral radius ρ(B_SOR) which is as small as reasonably possible. However, there is a class of matrices for which it is relatively easy to find the optimum ω. Consider the linear system Ax = b and let A = D - L - U. If the eigenvalues of

αD^(-1)L + (1/α)D^(-1)U,   α != 0,

are independent of α, then the matrix is said to be Consistently Ordered, and the optimum ω for the SOR iterative method is

ω = 2 / ( 1 + sqrt(1 - ρ(B_J)^2) ).

Explanation

First, we note that for such a matrix, being consistently ordered (the eigenvalues are the same for all α) implies that the eigenvalues of αD^(-1)L + (1/α)D^(-1)U are the same as those of D^(-1)L + D^(-1)U = B_J, the Jacobi iteration matrix (i.e. put α = 1).

Now consider the eigenvalues of B_SOR. They satisfy the polynomial equation

det(B_SOR - λI) = 0,

or

det[ (D - ωL)^(-1)((1 - ω)D + ωU) - λI ] = 0,

and hence,

det[(D - ωL)^(-1)] det[ (1 - ω)D + ωU - λ(D - ωL) ] = 0,

where the first factor is non-zero, so the λ satisfy

det[ (1 - ω - λ)D + ωU + λωL ] = 0.

Since ω != 0, the non-zero eigenvalues satisfy

det[ ((1 - ω - λ)/(ω sqrt(λ))) D + (1/sqrt(λ)) U + sqrt(λ) L ] = 0,

and thus,

det[ sqrt(λ) D^(-1)L + (1/sqrt(λ)) D^(-1)U - ((λ + ω - 1)/(ω sqrt(λ))) I ] = 0.

When the matrix is consistently ordered, the eigenvalues of

sqrt(λ) D^(-1)L + (1/sqrt(λ)) D^(-1)U

are the same as those of D^(-1)(L + U) = B_J (take α = sqrt(λ)). Let the eigenvalues of B_J be µ; then the non-zero eigenvalues λ of B_SOR satisfy

µ = (λ + ω - 1)/(ω sqrt(λ)).

If we put ω = 1 (i.e. recover Gauss-Seidel), then µ = λ/sqrt(λ) = sqrt(λ), or λ = µ^2. (Recall Example 2.4.1, where this result was also found.)

For ω != 0, we have

µ^2 ω^2 λ = λ^2 + 2λ(ω - 1) + (ω - 1)^2,

or,

λ^2 + λ(2(ω - 1) - µ^2 ω^2) + (ω - 1)^2 = 0.

The eigenvalues λ of B_SOR are then given by

λ = (1 - ω) + µ^2 ω^2 / 2 ± (1/2)[ 4(ω - 1)^2 - 4(ω - 1)µ^2 ω^2 + µ^4 ω^4 - 4(ω - 1)^2 ]^(1/2)
  = 1 - ω + µ^2 ω^2 / 2 ± (1/2)[ 4(1 - ω)µ^2 ω^2 + µ^4 ω^4 ]^(1/2)
  = 1 - ω + µ^2 ω^2 / 2 ± µω [ (1 - ω) + µ^2 ω^2 / 4 ]^(1/2).

For each µ there are 2 values of λ; these may be real or complex. If they are complex (note ω > 1), then λ times its conjugate equals (ω - 1)^2, i.e. |λ| = ω - 1. Hence, ρ(B_SOR) = ω - 1 in this case.

For the fastest convergence we require ρ(B_SOR) to be as small as possible. It can be shown that the best outcome is to make the roots equal when µ = ρ(B_J), i.e. when µ is largest. This implies

µ^2 ω^2 / 4 - ω + 1 = 0.

Solving for ω yields

ω = 2(1 ± sqrt(1 - µ^2)) / µ^2 = 2(1 - (1 - µ^2)) / ( µ^2 (1 ∓ sqrt(1 - µ^2)) ) = 2 / (1 ∓ sqrt(1 - µ^2)).

We are looking for the smallest such value of ω, and so we take the positive square root in the denominator. Hence, with µ = ρ(B_J), the best possible choice for ω is

ω = 2 / (1 + sqrt(1 - ρ(B_J)^2)).

Example 2.6.1: We again return to the linear system of Examples 2.1.2, 2.3.1 and 2.4.1, show that its matrix is consistently ordered, and determine the optimum ω, and hence the fastest rate of convergence, for the SOR method. As before we have

A = [ 10  1 ],
    [  1 10 ]

so

αD^(-1)L + (1/α)D^(-1)U = [    0     -1/(10α) ],
                          [ -α/10        0    ]

and the eigenvalues are given by

λ^2 - (1/(10α))(α/10) = 0,   i.e.   λ^2 = 1/100,

which is independent of α; hence, the matrix is consistently ordered. Then, applying the above formula and recalling that the eigenvalues of B_J are µ = ±1/10 (Example 2.3.1), we have

ω = 2 / (1 + sqrt(1 - ρ(B_J)^2)) = 2 / (1 + sqrt(1 - 1/100)) = 1.00251.

This is essentially the same value as we found in Example 2.5.1. Thus the fastest rate of convergence for this particular system is

ρ(B_SOR) = ω - 1 = 0.00251,

as found in Example 2.5.1.
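Finally, the optimum ω formula and the resulting spectral radius can be verified numerically for this system (illustrative):

import numpy as np

A = np.array([[10.0, 1.0], [1.0, 10.0]])
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

B_J = np.linalg.solve(D, L + U)
rho_J = max(abs(np.linalg.eigvals(B_J)))                 # 0.1

omega = 2.0 / (1.0 + np.sqrt(1.0 - rho_J ** 2))          # about 1.00251
B_SOR = np.linalg.solve(D - omega * L, (1 - omega) * D + omega * U)
rho_SOR = max(abs(np.linalg.eigvals(B_SOR)))

print(f"omega = {omega:.6f}, rho(B_SOR) = {rho_SOR:.6f}, omega - 1 = {omega - 1:.6f}")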