Iterative Techniques in Matrix Algebra. Jacobi & Gauss-Seidel Iterative Techniques II
Iterative Techniques in Matrix Algebra: Jacobi & Gauss-Seidel Iterative Techniques II
Numerical Analysis (9th Edition), R. L. Burden & J. D. Faires
Beamer presentation slides prepared by John Carroll, Dublin City University
(c) 2011 Brooks/Cole, Cengage Learning
Outline
1 The Gauss-Seidel Method
2 The Gauss-Seidel Algorithm
3 Convergence Results for General Iteration Methods
4 Application to the Jacobi & Gauss-Seidel Methods
The Gauss-Seidel Method

Looking at the Jacobi Method

A possible improvement to the Jacobi Algorithm can be seen by re-considering

\[ x_i^{(k)} = \frac{1}{a_{ii}} \Biggl[ \sum_{\substack{j=1 \\ j \ne i}}^{n} \bigl( -a_{ij} x_j^{(k-1)} \bigr) + b_i \Biggr], \quad \text{for } i = 1, 2, \ldots, n \]

The components of x^{(k-1)} are used to compute all the components x_i^{(k)} of x^{(k)}. But, for i > 1, the components x_1^{(k)}, ..., x_{i-1}^{(k)} of x^{(k)} have already been computed and are expected to be better approximations to the actual solutions x_1, ..., x_{i-1} than are x_1^{(k-1)}, ..., x_{i-1}^{(k-1)}.
The Gauss-Seidel Method

Instead of using only the components of x^{(k-1)}, it seems reasonable to compute x_i^{(k)} using these most recently calculated values.

The Gauss-Seidel Iterative Technique

\[ x_i^{(k)} = \frac{1}{a_{ii}} \Biggl[ -\sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} + b_i \Biggr], \quad \text{for each } i = 1, 2, \ldots, n \]
The Gauss-Seidel Method

Example: Use the Gauss-Seidel iterative technique to find approximate solutions to

\[ \begin{aligned} 10x_1 - x_2 + 2x_3 &= 6 \\ -x_1 + 11x_2 - x_3 + 3x_4 &= 25 \\ 2x_1 - x_2 + 10x_3 - x_4 &= -11 \\ 3x_2 - x_3 + 8x_4 &= 15 \end{aligned} \]

starting with x^{(0)} = (0, 0, 0, 0)^t and iterating until

\[ \frac{\| x^{(k)} - x^{(k-1)} \|_\infty}{\| x^{(k)} \|_\infty} < 10^{-3} \]

Note: The solution x = (1, 2, -1, 1)^t was approximated by Jacobi's method in an earlier example.
The Gauss-Seidel Method

Solution (1/3): For the Gauss-Seidel method, we write the system, for each k = 1, 2, ..., as

\[ \begin{aligned} x_1^{(k)} &= \tfrac{1}{10} x_2^{(k-1)} - \tfrac{1}{5} x_3^{(k-1)} + \tfrac{3}{5} \\ x_2^{(k)} &= \tfrac{1}{11} x_1^{(k)} + \tfrac{1}{11} x_3^{(k-1)} - \tfrac{3}{11} x_4^{(k-1)} + \tfrac{25}{11} \\ x_3^{(k)} &= -\tfrac{1}{5} x_1^{(k)} + \tfrac{1}{10} x_2^{(k)} + \tfrac{1}{10} x_4^{(k-1)} - \tfrac{11}{10} \\ x_4^{(k)} &= -\tfrac{3}{8} x_2^{(k)} + \tfrac{1}{8} x_3^{(k)} + \tfrac{15}{8} \end{aligned} \]
The Gauss-Seidel Method

Solution (2/3): When x^{(0)} = (0, 0, 0, 0)^t, we have x^{(1)} = (0.6000, 2.3273, -0.9873, 0.8789)^t. Subsequent iterations give the values in the following table:

k            1         2         3         4         5
x_1^{(k)}   0.6000    1.0302    1.0066    1.0009    1.0001
x_2^{(k)}   2.3273    2.0369    2.0036    2.0003    2.0000
x_3^{(k)}  -0.9873   -1.0145   -1.0025   -1.0003   -1.0000
x_4^{(k)}   0.8789    0.9843    0.9984    0.9998    1.0000
The Gauss-Seidel Method

Solution (3/3): Because

\[ \frac{\| x^{(5)} - x^{(4)} \|_\infty}{\| x^{(5)} \|_\infty} = \frac{0.0008}{2.0000} = 4 \times 10^{-4} < 10^{-3} \]

x^{(5)} is accepted as a reasonable approximation to the solution. Note that, in an earlier example, Jacobi's method required twice as many iterations for the same accuracy.
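The componentwise updates above can be carried out in a few lines of Python. This is a minimal sketch (NumPy is assumed available; it is not the book's pseudocode, which appears later) that reproduces the example's iteration count and limit:

```python
import numpy as np

# Coefficient matrix and right-hand side of the example system
A = np.array([[10.0, -1.0,  2.0,  0.0],
              [-1.0, 11.0, -1.0,  3.0],
              [ 2.0, -1.0, 10.0, -1.0],
              [ 0.0,  3.0, -1.0,  8.0]])
b = np.array([6.0, 25.0, -11.0, 15.0])

x = np.zeros(4)                      # x^(0) = (0, 0, 0, 0)^t
for k in range(1, 100):
    x_old = x.copy()
    for i in range(4):
        # x[:i] holds already-updated kth-iterate components;
        # x_old[i+1:] holds the (k-1)th-iterate components
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
        x[i] = (b[i] - s) / A[i, i]
    if np.linalg.norm(x - x_old, np.inf) / np.linalg.norm(x, np.inf) < 1e-3:
        break

print(k, x)  # stops at k = 5, close to the solution (1, 2, -1, 1)
```

Running this stops after the fifth sweep, matching the table above.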
The Gauss-Seidel Method: Matrix Form

Re-Writing the Equations: To write the Gauss-Seidel method in matrix form, multiply both sides of

\[ x_i^{(k)} = \frac{1}{a_{ii}} \Biggl[ -\sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k-1)} + b_i \Biggr] \]

by a_{ii} and collect all kth iterate terms, to give

\[ a_{i1} x_1^{(k)} + a_{i2} x_2^{(k)} + \cdots + a_{ii} x_i^{(k)} = -a_{i,i+1} x_{i+1}^{(k-1)} - \cdots - a_{in} x_n^{(k-1)} + b_i \]

for each i = 1, 2, ..., n.
The Gauss-Seidel Method: Matrix Form

Re-Writing the Equations (Cont'd): Writing all n equations gives

\[ \begin{aligned} a_{11} x_1^{(k)} &= -a_{12} x_2^{(k-1)} - a_{13} x_3^{(k-1)} - \cdots - a_{1n} x_n^{(k-1)} + b_1 \\ a_{21} x_1^{(k)} + a_{22} x_2^{(k)} &= -a_{23} x_3^{(k-1)} - \cdots - a_{2n} x_n^{(k-1)} + b_2 \\ &\;\;\vdots \\ a_{n1} x_1^{(k)} + a_{n2} x_2^{(k)} + \cdots + a_{nn} x_n^{(k)} &= b_n \end{aligned} \]

With the definitions of D, L, and U given previously, the Gauss-Seidel method is represented by

\[ (D - L) x^{(k)} = U x^{(k-1)} + b \]
The Gauss-Seidel Method: Matrix Form

Re-Writing the Equations (Cont'd): Solving for x^{(k)} finally gives

\[ x^{(k)} = (D - L)^{-1} U x^{(k-1)} + (D - L)^{-1} b, \quad \text{for each } k = 1, 2, \ldots \]

Letting T_g = (D - L)^{-1} U and c_g = (D - L)^{-1} b gives the Gauss-Seidel technique the form

\[ x^{(k)} = T_g x^{(k-1)} + c_g \]

For the lower-triangular matrix D - L to be nonsingular, it is necessary and sufficient that a_{ii} ≠ 0 for each i = 1, 2, ..., n.
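The matrix form can be checked numerically. The sketch below (NumPy assumed) builds T_g and c_g for the 4x4 example system used earlier, with the splitting A = D - L - U as defined in the text, and iterates x^{(k)} = T_g x^{(k-1)} + c_g:

```python
import numpy as np

A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])
b = np.array([6., 25., -11., 15.])

# Splitting A = D - L - U (sign convention of the text:
# L and U are the negatives of the strict triangular parts)
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

Tg = np.linalg.solve(D - L, U)   # T_g = (D - L)^{-1} U
cg = np.linalg.solve(D - L, b)   # c_g = (D - L)^{-1} b

x = np.zeros(4)
for _ in range(25):
    x = Tg @ x + cg              # x^(k) = T_g x^(k-1) + c_g

print(x)  # approaches the solution (1, 2, -1, 1)
```

Note that `np.linalg.solve(D - L, U)` is used instead of forming (D - L)^{-1} explicitly, which is the usual numerically preferable choice.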
Gauss-Seidel Iterative Algorithm (1/2)

To solve Ax = b given an initial approximation x^{(0)}:

INPUT: the number of equations and unknowns n; the entries a_{ij}, 1 ≤ i, j ≤ n, of the matrix A; the entries b_i, 1 ≤ i ≤ n, of b; the entries XO_i, 1 ≤ i ≤ n, of XO = x^{(0)}; tolerance TOL; maximum number of iterations N.

OUTPUT: the approximate solution x_1, ..., x_n or a message that the number of iterations was exceeded.
Gauss-Seidel Iterative Algorithm (2/2)

Step 1: Set k = 1.
Step 2: While k ≤ N, do Steps 3 to 6:
    Step 3: For i = 1, ..., n, set
    \[ x_i = \frac{1}{a_{ii}} \Biggl[ -\sum_{j=1}^{i-1} a_{ij} x_j - \sum_{j=i+1}^{n} a_{ij}\, XO_j + b_i \Biggr] \]
    Step 4: If ||x - XO|| < TOL, then OUTPUT (x_1, ..., x_n) (the procedure was successful) and STOP.
    Step 5: Set k = k + 1.
    Step 6: For i = 1, ..., n, set XO_i = x_i.
Step 7: OUTPUT ("Maximum number of iterations exceeded") and STOP (the procedure was unsuccessful).
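The pseudocode above translates almost line for line into Python. This is a sketch under the same conventions (XO holds the previous iterate; the norm left unspecified in Step 4 is taken here as the l∞ norm, one of the choices the comments below allow):

```python
import numpy as np

def gauss_seidel(A, b, x0, tol, N):
    """Solve Ax = b by Gauss-Seidel; mirrors Steps 1-7 of the algorithm."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    XO = np.array(x0, dtype=float)                 # XO = x^(0)
    k = 1                                          # Step 1
    while k <= N:                                  # Step 2
        x = XO.copy()
        for i in range(n):                         # Step 3
            s1 = A[i, :i] @ x[:i]                  # updated components x_j, j < i
            s2 = A[i, i + 1:] @ XO[i + 1:]         # old components XO_j, j > i
            x[i] = (-s1 - s2 + b[i]) / A[i, i]
        if np.linalg.norm(x - XO, np.inf) < tol:   # Step 4
            return x                               # procedure successful
        k += 1                                     # Step 5
        XO = x                                     # Step 6
    raise RuntimeError("Maximum number of iterations exceeded")  # Step 7

x = gauss_seidel([[10., -1.,  2.,  0.],
                  [-1., 11., -1.,  3.],
                  [ 2., -1., 10., -1.],
                  [ 0.,  3., -1.,  8.]],
                 [6., 25., -11., 15.],
                 [0., 0., 0., 0.], 1e-6, 50)
```

Applied to the earlier example system, the returned x agrees with the solution (1, 2, -1, 1)^t.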
Gauss-Seidel Iterative Algorithm

Comments on the Algorithm

Step 3 of the algorithm requires that a_{ii} ≠ 0 for each i = 1, 2, ..., n. If one of the a_{ii} entries is 0 and the system is nonsingular, a reordering of the equations can be performed so that no a_{ii} = 0. To speed convergence, the equations should be arranged so that a_{ii} is as large as possible.

Another possible stopping criterion in Step 4 is to iterate until

\[ \frac{\| x^{(k)} - x^{(k-1)} \|}{\| x^{(k)} \|} \]

is smaller than some prescribed tolerance. For this purpose, any convenient norm can be used, the usual being the l_∞ norm.
Convergence Results for General Iteration Methods

Introduction: To study the convergence of general iteration techniques, we need to analyze the formula

\[ x^{(k)} = T x^{(k-1)} + c, \quad \text{for each } k = 1, 2, \ldots \]

where x^{(0)} is arbitrary. The following lemma and the earlier Theorem on convergent matrices provide the key for this study.
Convergence Results for General Iteration Methods

Lemma: If the spectral radius satisfies ρ(T) < 1, then (I - T)^{-1} exists, and

\[ (I - T)^{-1} = I + T + T^2 + \cdots = \sum_{j=0}^{\infty} T^j \]

Proof (1/2): Because Tx = λx is true precisely when (I - T)x = (1 - λ)x, we have λ as an eigenvalue of T precisely when 1 - λ is an eigenvalue of I - T. But |λ| ≤ ρ(T) < 1, so λ = 1 is not an eigenvalue of T, and 0 cannot be an eigenvalue of I - T. Hence, (I - T)^{-1} exists.
Convergence Results for General Iteration Methods

Proof (2/2): Let

\[ S_m = I + T + T^2 + \cdots + T^m \]

Then

\[ (I - T) S_m = (I + T + T^2 + \cdots + T^m) - (T + T^2 + \cdots + T^{m+1}) = I - T^{m+1} \]

and, since T is convergent, the Theorem on convergent matrices implies that

\[ \lim_{m \to \infty} (I - T) S_m = \lim_{m \to \infty} \bigl( I - T^{m+1} \bigr) = I \]

Thus,

\[ (I - T)^{-1} = \lim_{m \to \infty} S_m = I + T + T^2 + \cdots = \sum_{j=0}^{\infty} T^j \]
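The lemma can be sanity-checked numerically: for a matrix with ρ(T) < 1, the partial sums S_m approach (I - T)^{-1}. A toy illustration with NumPy (the matrix T below is an arbitrary choice with spectral radius below 1, not one from the text):

```python
import numpy as np

T = np.array([[0.2, 0.1],
              [0.3, 0.4]])
# Eigenvalues are 0.5 and 0.1, so rho(T) < 1 and the series converges

I = np.eye(2)
S = np.zeros((2, 2))
P = np.eye(2)                   # P holds T^j as the loop runs
for j in range(200):            # S approximates I + T + ... + T^199
    S += P
    P = P @ T

inv = np.linalg.inv(I - T)
print(np.max(np.abs(S - inv)))  # essentially zero
```

The tail of the series decays like ρ(T)^m, so 200 terms are far more than enough here.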
Convergence Results for General Iteration Methods

Theorem: For any x^{(0)} ∈ R^n, the sequence {x^{(k)}}_{k=0}^∞ defined by

\[ x^{(k)} = T x^{(k-1)} + c, \quad \text{for each } k \ge 1 \]

converges to the unique solution of x = Tx + c if and only if ρ(T) < 1.
Convergence Results for General Iteration Methods

Proof (1/5): First assume that ρ(T) < 1. Then,

\[ \begin{aligned} x^{(k)} &= T x^{(k-1)} + c \\ &= T \bigl( T x^{(k-2)} + c \bigr) + c \\ &= T^2 x^{(k-2)} + (T + I) c \\ &\;\;\vdots \\ &= T^k x^{(0)} + \bigl( T^{k-1} + \cdots + T + I \bigr) c \end{aligned} \]

Because ρ(T) < 1, the Theorem on convergent matrices implies that T is convergent, and

\[ \lim_{k \to \infty} T^k x^{(0)} = 0 \]
Convergence Results for General Iteration Methods

Proof (2/5): The previous lemma implies that

\[ \lim_{k \to \infty} x^{(k)} = \lim_{k \to \infty} \Biggl( T^k x^{(0)} + \sum_{j=0}^{k-1} T^j c \Biggr) = 0 + (I - T)^{-1} c = (I - T)^{-1} c \]

Hence, the sequence {x^{(k)}} converges to the vector x ≡ (I - T)^{-1} c, and x = Tx + c.
Convergence Results for General Iteration Methods

Proof (3/5): To prove the converse, we will show that for any z ∈ R^n, we have lim_{k→∞} T^k z = 0. Again, by the Theorem on convergent matrices, this is equivalent to ρ(T) < 1. Let z be an arbitrary vector, and let x be the unique solution to x = Tx + c. Define x^{(0)} = x - z and, for k ≥ 1, x^{(k)} = T x^{(k-1)} + c. Then {x^{(k)}} converges to x.
Convergence Results for General Iteration Methods

Proof (4/5): Also,

\[ x - x^{(k)} = (Tx + c) - \bigl( T x^{(k-1)} + c \bigr) = T \bigl( x - x^{(k-1)} \bigr) \]

so

\[ x - x^{(k)} = T \bigl( x - x^{(k-1)} \bigr) = T^2 \bigl( x - x^{(k-2)} \bigr) = \cdots = T^k \bigl( x - x^{(0)} \bigr) = T^k z \]
Convergence Results for General Iteration Methods

Proof (5/5): Hence

\[ \lim_{k \to \infty} T^k z = \lim_{k \to \infty} T^k \bigl( x - x^{(0)} \bigr) = \lim_{k \to \infty} \bigl( x - x^{(k)} \bigr) = 0 \]

But z ∈ R^n was arbitrary, so by the Theorem on convergent matrices, T is convergent and ρ(T) < 1.
Convergence Results for General Iteration Methods

Corollary: If ||T|| < 1 for any natural matrix norm and c is a given vector, then the sequence {x^{(k)}}_{k=0}^∞ defined by x^{(k)} = T x^{(k-1)} + c converges, for any x^{(0)} ∈ R^n, to a vector x ∈ R^n with x = Tx + c, and the following error bounds hold:

\[ \text{(i)} \quad \| x - x^{(k)} \| \le \|T\|^k \, \| x^{(0)} - x \| \]

\[ \text{(ii)} \quad \| x - x^{(k)} \| \le \frac{\|T\|^k}{1 - \|T\|} \, \| x^{(1)} - x^{(0)} \| \]

The proof of this corollary is similar to that of the Corollary to the Fixed-Point Theorem for a single nonlinear equation.
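Error bound (ii) is easy to observe on a small iteration. The sketch below (a toy example of my own, not from the text) uses a 2x2 matrix with ||T||_∞ = 0.5 and verifies that the bound dominates the true error at every step:

```python
import numpy as np

# Toy contraction: both row sums are 0.5, so ||T||_inf = 0.5 < 1
T = np.array([[0.25, 0.25],
              [0.10, 0.40]])
c = np.array([1.0, 2.0])
normT = np.linalg.norm(T, np.inf)

x_true = np.linalg.solve(np.eye(2) - T, c)   # fixed point x = Tx + c

x0 = np.zeros(2)
x1 = T @ x0 + c
x = x0.copy()
ok = True
for k in range(1, 30):
    x = T @ x + c                            # x = x^(k) after this step
    bound = normT**k / (1 - normT) * np.linalg.norm(x1 - x0, np.inf)
    ok = ok and (np.linalg.norm(x_true - x, np.inf) <= bound + 1e-12)

print(ok)  # True: bound (ii) holds at every step
```

The bound is computable from x^{(0)} and x^{(1)} alone, which is what makes it useful in practice: it estimates the error without knowing x.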
Convergence of the Jacobi & Gauss-Seidel Methods

Using the Matrix Formulations: We have seen that the Jacobi and Gauss-Seidel iterative techniques can be written

\[ x^{(k)} = T_j x^{(k-1)} + c_j \quad \text{and} \quad x^{(k)} = T_g x^{(k-1)} + c_g \]

using the matrices

\[ T_j = D^{-1} (L + U) \quad \text{and} \quad T_g = (D - L)^{-1} U \]

respectively. If ρ(T_j) or ρ(T_g) is less than 1, then the corresponding sequence {x^{(k)}}_{k=0}^∞ will converge to the solution x of Ax = b.
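For the 4x4 example system used earlier, both spectral radii can be computed directly (a NumPy sketch; the splitting follows the text's sign convention):

```python
import numpy as np

A = np.array([[10., -1.,  2.,  0.],
              [-1., 11., -1.,  3.],
              [ 2., -1., 10., -1.],
              [ 0.,  3., -1.,  8.]])

D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

Tj = np.linalg.solve(D, L + U)       # Jacobi iteration matrix
Tg = np.linalg.solve(D - L, U)       # Gauss-Seidel iteration matrix

rho = lambda M: max(abs(np.linalg.eigvals(M)))
print(rho(Tj), rho(Tg))  # both below 1, so both methods converge
```

Here ρ(T_g) is noticeably smaller than ρ(T_j), consistent with Gauss-Seidel needing roughly half as many iterations in the example.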
Convergence of the Jacobi & Gauss-Seidel Methods

Example: For example, the Jacobi method has

\[ x^{(k)} = D^{-1} (L + U) x^{(k-1)} + D^{-1} b \]

and, if {x^{(k)}}_{k=0}^∞ converges to x, then

\[ x = D^{-1} (L + U) x + D^{-1} b \]

This implies that

\[ Dx = (L + U) x + b \quad \text{and} \quad (D - L - U) x = b \]

Since D - L - U = A, the solution x satisfies Ax = b.
Convergence of the Jacobi & Gauss-Seidel Methods

The following are easily verified sufficiency conditions for convergence of the Jacobi and Gauss-Seidel methods.

Theorem: If A is strictly diagonally dominant, then for any choice of x^{(0)}, both the Jacobi and Gauss-Seidel methods give sequences {x^{(k)}}_{k=0}^∞ that converge to the unique solution of Ax = b.
Convergence of the Jacobi & Gauss-Seidel Methods

Is Gauss-Seidel better than Jacobi? No general results exist to tell which of the two techniques, Jacobi or Gauss-Seidel, will be most successful for an arbitrary linear system. In special cases, however, the answer is known, as is demonstrated in the following theorem.
Convergence of the Jacobi & Gauss-Seidel Methods

Theorem (Stein-Rosenberg)
If a_{ij} ≤ 0 for each i ≠ j and a_{ii} > 0 for each i = 1, 2, ..., n, then one and only one of the following statements holds:
(i) 0 ≤ ρ(T_g) < ρ(T_j) < 1
(ii) 1 < ρ(T_j) < ρ(T_g)
(iii) ρ(T_j) = ρ(T_g) = 0
(iv) ρ(T_j) = ρ(T_g) = 1

For the proof of this result, see Young, D. M., Iterative Solution of Large Linear Systems, Academic Press, New York, 1971, 570 pp.

Numerical Analysis (Chapter 7) Jacobi & Gauss-Seidel Methods II R L Burden & J D Faires 33 / 38
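Case (i) can be observed directly on a small hypothetical example. Take A = [[2, -1], [-1, 2]], which satisfies the theorem's hypotheses (negative off-diagonal entries, positive diagonal). With A = D - L - U, the iteration matrices T_j = D^{-1}(L + U) and T_g = (D - L)^{-1}U were worked out by hand below; the sketch computes their spectral radii via the 2x2 characteristic polynomial:

```python
import math

# Hypothetical example for the Stein-Rosenberg theorem, case (i):
# A = [[2, -1], [-1, 2]], so D = diag(2, 2), L = [[0,0],[1,0]], U = [[0,1],[0,0]].

def spectral_radius_2x2(M):
    """Spectral radius of a 2x2 matrix from trace and determinant."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2), abs((tr - r) / 2))
    return math.sqrt(det)  # complex conjugate pair: |lambda|^2 = det

T_j = [[0.0, 0.5], [0.5, 0.0]]     # D^{-1}(L + U), computed by hand
T_g = [[0.0, 0.5], [0.0, 0.25]]    # (D - L)^{-1} U, computed by hand

print(spectral_radius_2x2(T_j))    # rho(T_j) = 0.5
print(spectral_radius_2x2(T_g))    # rho(T_g) = 0.25
```

Here 0 ≤ ρ(T_g) = 0.25 < ρ(T_j) = 0.5 < 1, so both methods converge and Gauss-Seidel does so faster, exactly as case (i) predicts.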
Convergence of the Jacobi & Gauss-Seidel Methods

Two Comments on the Theorem
For the special case described in the theorem, we see from part (i), namely

    0 ≤ ρ(T_g) < ρ(T_j) < 1

that when one method gives convergence, then both give convergence, and the Gauss-Seidel method converges faster than the Jacobi method. Part (ii), namely

    1 < ρ(T_j) < ρ(T_g)

indicates that when one method diverges then both diverge, and the divergence is more pronounced for the Gauss-Seidel method.

Numerical Analysis (Chapter 7) Jacobi & Gauss-Seidel Methods II R L Burden & J D Faires 34 / 38
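The faster convergence of Gauss-Seidel in case (i) shows up as a lower sweep count in practice. The sketch below (an illustrative system, not one from the text) runs both methods to the same tolerance; the only difference between the two updates is that Gauss-Seidel overwrites each component as soon as it is computed:

```python
# Compare sweep counts of Jacobi and Gauss-Seidel on one strictly
# diagonally dominant system (illustrative example).

A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
n = len(A)

def iterate(update, tol=1e-10, max_iter=1000):
    """Apply `update` until successive iterates agree to `tol`; return sweeps used."""
    x = [0.0] * n
    for k in range(1, max_iter + 1):
        x_new = update(x)
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return k
        x = x_new
    return max_iter

def jacobi(x):
    # Every component is computed from the previous iterate x.
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel(x):
    # Components are overwritten in place, so later rows see updated values.
    y = x[:]
    for i in range(n):
        y[i] = (b[i] - sum(A[i][j] * y[j] for j in range(n) if j != i)) / A[i][i]
    return y

print(iterate(jacobi), iterate(gauss_seidel))  # Gauss-Seidel needs fewer sweeps
```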
Questions?
Eigenvalues & Eigenvectors: Convergent Matrices

Theorem
The following statements are equivalent:
(i) A is a convergent matrix.
(ii) lim_{n→∞} ||A^n|| = 0, for some natural norm.
(iii) lim_{n→∞} ||A^n|| = 0, for all natural norms.
(iv) ρ(A) < 1.
(v) lim_{n→∞} A^n x = 0, for every x.

The proof of this theorem can be found on p. 14 of Isaacson, E. and H. B. Keller, Analysis of Numerical Methods, John Wiley & Sons, New York, 1966, 541 pp.
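Equivalences (ii) and (iv) are easy to see numerically: for a matrix with ρ(A) < 1, repeated multiplication drives every entry of A^n to zero. A minimal sketch on a hypothetical 2x2 matrix whose row sums are below 1 (so ||A||_∞ < 1, forcing ρ(A) < 1):

```python
# For a convergent matrix (rho(A) < 1), the powers A^n tend to 0.

def matmul(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.5, 0.2],
     [0.1, 0.4]]   # ||A||_inf = 0.7 < 1, so rho(A) < 1

P = [[1.0, 0.0], [0.0, 1.0]]   # start from the identity
for _ in range(60):
    P = matmul(P, A)           # P is now A^60

# Largest entry of A^60 is bounded by ||A||_inf^60 = 0.7^60, essentially zero.
print(max(abs(P[i][j]) for i in range(2) for j in range(2)))
```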
Fixed-Point Theorem
Let g ∈ C[a, b] be such that g(x) ∈ [a, b] for all x in [a, b]. Suppose, in addition, that g' exists on (a, b) and that a constant 0 < k < 1 exists with |g'(x)| ≤ k for all x ∈ (a, b). Then for any number p_0 in [a, b], the sequence defined by

    p_n = g(p_{n-1}),  n ≥ 1

converges to the unique fixed point p in [a, b].
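A classic instance of this theorem (my example, not one from these slides) is g(x) = cos(x) on [0, 1]: cos maps [0, 1] into [cos 1, 1] ⊂ [0, 1] and |g'(x)| = |sin x| ≤ sin 1 < 1, so the iteration converges to the unique fixed point p = cos(p) from any starting value in the interval:

```python
import math

# Fixed-point iteration p_n = g(p_{n-1}) with g(x) = cos(x) on [0, 1].
p = 0.5                 # any p0 in [0, 1] works by the theorem
for _ in range(100):
    p = math.cos(p)

print(p)                      # fixed point of cos, approximately 0.7390851332
print(abs(p - math.cos(p)))   # residual |p - g(p)|, essentially zero
```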
Functional (Fixed-Point) Iteration

Corollary to the Fixed-Point Convergence Result
If g satisfies the hypotheses of the Fixed-Point Theorem, then

    |p_n - p| ≤ (k^n / (1 - k)) |p_1 - p_0|
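The corollary gives an a priori error bound computable from the first two iterates alone. The sketch below checks it for g(x) = cos(x) on [0, 1] with k = sin 1 (my illustrative example; a reference value of p is obtained by iterating far past the tolerance):

```python
import math

# A priori bound |p_n - p| <= k^n / (1 - k) * |p_1 - p_0| for g(x) = cos(x).
g = math.cos
k = math.sin(1.0)       # bounds |g'(x)| = |sin x| on (0, 1)
p0 = 0.5
p1 = g(p0)

# Accurate fixed point for reference (iterate well past convergence).
p = p0
for _ in range(200):
    p = g(p)

# Tenth iterate and the corollary's bound on its error.
n = 10
pn = p0
for _ in range(n):
    pn = g(pn)
bound = k**n / (1 - k) * abs(p1 - p0)

print(abs(pn - p), bound)   # the actual error sits below the bound
```

The bound is typically pessimistic: it uses the worst-case contraction constant k over the whole interval, while the observed error contracts at roughly |g'(p)| per step.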