13 Classical Iterative Methods for the Solution of Linear Systems


13.1 Why Iterative Methods?

Virtually all direct methods for solving Ax = b or Ax = λx require O(m³) operations. In practical applications A often has a certain structure and/or is sparse, i.e., A contains many zeros. A typical problem that arises in practice is the Poisson problem mentioned at the beginning of the class. We want to find u such that

    -∇²u(x, y) = -[u_xx(x, y) + u_yy(x, y)] = f(x, y)   in Ω = [0, 1]²,
    u(x, y) = 0   on ∂Ω.

One of the standard numerical algorithms is a finite difference approach. The Laplacian is discretized on a grid of (n + 1)² equally spaced points (x_i, y_j) = (ih, jh), i, j = 0, ..., n, with h = 1/n. This results in the discrete Laplacian

    ∇²u(x_i, y_j) ≈ ( u_{i-1,j} + u_{i,j-1} + u_{i+1,j} + u_{i,j+1} - 4u_{i,j} ) / h²,

where u_{i,j} = u(x_i, y_j). The boundary conditions of the PDE allow us to set the solution at the points along the boundary as

    u_{i,0} = u_{i,n} = u_{0,j} = u_{n,j} = 0,   i, j = 0, 1, ..., n.

At the (n - 1)² interior grid points we obtain the following system of linear equations for the values of u there:

    4u_{i,j} - u_{i-1,j} - u_{i,j-1} - u_{i+1,j} - u_{i,j+1} = f_{i,j}/n²,   i, j = 1, ..., n - 1.

The system matrix is of size m × m, where m = (n - 1)². Each row contains at most five nonzero entries, so the matrix is very sparse. Thus, special methods are called for that take advantage of this sparsity when we solve this linear system. Obviously, a full-blown LU or Cholesky factorization will be much too costly if m is large (typical values of m are often 10⁶ or even larger).

13.2 The Splitting Approach

The basic iterative scheme to solve Ax = b will be of the form

    x^(k) = G x^(k-1) + c,   k = 1, 2, 3, ....   (38)

Here we assume that A ∈ C^{m×m}, x^(0) is an initial guess for the solution, and G and c are a constant iteration matrix and vector, respectively, defining the iterative scheme. Most classical iterative methods are based on a splitting of the matrix A of the form

    A = M - N

with a nonsingular matrix M. One then defines

    G = M⁻¹N   and   c = M⁻¹b.

Then (38) becomes

    x^(k) = M⁻¹N x^(k-1) + M⁻¹b

or

    M x^(k) = N x^(k-1) + b.   (39)

In practice we will want to choose the splitting factors so that

1. (39) is easily solved,
2. (39) converges rapidly.

Theorem 13.1 If ||G|| = ||M⁻¹N|| < 1, then (38) converges to a solution of Ax = b for any initial guess x^(0).

Proof: (38) describes a fixed point iteration (i.e., it is of the form x = g(x)), and the fixed point of (38) is a solution of Ax = b, as can be seen from

    x = Gx + c
    x = M⁻¹Nx + M⁻¹b
    Mx = Nx + b
    (M - N)x = b,

where M - N = A. Now we let e^(k) = x^(k) - x, where x is the solution of the fixed point problem, and show that this quantity goes to zero as k → ∞. First we observe that

    e^(k) = x^(k) - x = G x^(k-1) + c - (Gx + c) = G( x^(k-1) - x ) = G e^(k-1).

Taking norms we have

    ||e^(k)|| = ||G e^(k-1)|| ≤ ||G|| ||e^(k-1)|| ≤ ... ≤ ||G||^k ||e^(0)||,

where the last inequality is obtained by recursion. If now, as we assume, ||G|| < 1, then ||e^(k)|| → 0 as k → ∞, and therefore x^(k) → x and the method converges. ∎

With some more effort one can show

Theorem 13.2 The iteration (38) converges to the solution of Ax = b for any initial guess x^(0) if and only if ρ(G) = ρ(M⁻¹N) < 1. Here ρ(G) is the spectral radius of G, i.e., the largest eigenvalue of G in modulus.
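The splitting iteration (39) and the convergence criterion of Theorem 13.2 can be checked numerically. The following NumPy sketch is our own illustration (the small matrix A is an arbitrary diagonally dominant example, not from the notes); it uses the Jacobi-style splitting M = D discussed in the next section.

```python
import numpy as np

# A minimal sketch of the splitting iteration M x^(k) = N x^(k-1) + b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])   # an arbitrary small test matrix
b = np.array([1.0, 2.0])

M = np.diag(np.diag(A))   # Jacobi-style splitting: M = D
N = M - A                 # so that A = M - N

G = np.linalg.solve(M, N)             # iteration matrix G = M^{-1} N
rho = max(abs(np.linalg.eigvals(G)))  # spectral radius rho(G)
print(rho)                            # < 1, so Theorem 13.2 guarantees convergence

x = np.zeros(2)
for k in range(50):
    x = np.linalg.solve(M, N @ x + b)   # one step of (38)/(39)

print(np.allclose(x, np.linalg.solve(A, b)))  # True: iterate matches direct solve
```

Note that each step only solves a system with the (here diagonal) matrix M, which is the whole point of choosing an easily invertible splitting factor.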

13.3 How Should We Choose M and N?

13.3.1 The Jacobi Method

We formally decompose A = L + D + U into a strictly lower triangular part L, a diagonal part D, and a strictly upper triangular part U. Then we let

    M = D,   N = -(L + U).

Thus (39) becomes

    D x^(k) = -(L + U) x^(k-1) + b

or

    x^(k) = D⁻¹[ b - (L + U) x^(k-1) ].

By writing this formula componentwise we have

    x_i^(k) = ( b_i - Σ_{j≠i} a_ij x_j^(k-1) ) / a_ii,   i = 1, ..., m.

This means we have the following algorithm.

Algorithm (Jacobi method)

    Let x^(0) be an arbitrary initial guess
    for k = 1, 2, ...
        for i = 1 : m
            x_i^(k) = ( b_i - Σ_{j=1}^{i-1} a_ij x_j^(k-1) - Σ_{j=i+1}^{m} a_ij x_j^(k-1) ) / a_ii
        end
    end

Example: If we apply the Jacobi method to the finite difference discretization of the Poisson problem, then we can be more efficient by taking advantage of the matrix structure. The central part of the algorithm (the loop for i = 1 : m) can then be replaced by

    for i = 1 : n-1
        for j = 1 : n-1
            u_{i,j}^(k) = ( u_{i-1,j}^(k-1) + u_{i,j-1}^(k-1) + u_{i+1,j}^(k-1) + u_{i,j+1}^(k-1) + f_{i,j}/n² ) / 4
        end
    end
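A Python/NumPy version of this Poisson sweep can serve as a sketch (our own, not part of the notes; the helper name jacobi_sweep and the test right-hand side f ≡ 1 are our choices). The unknowns u live on an (n+1) × (n+1) array whose boundary entries stay zero, exactly as in the pseudocode above.

```python
import numpy as np

def jacobi_sweep(u, f, n):
    """One Jacobi sweep for the discrete Poisson problem: every new value
    u^(k)_{i,j} is built only from the previous iterate u^(k-1)."""
    u_new = np.zeros_like(u)   # boundary rows/columns remain zero
    for i in range(1, n):
        for j in range(1, n):
            u_new[i, j] = (u[i-1, j] + u[i, j-1] + u[i+1, j] + u[i, j+1]
                           + f[i, j] / n**2) / 4.0
    return u_new

n = 16
u = np.zeros((n + 1, n + 1))
f = np.ones((n + 1, n + 1))    # simple test case: f(x, y) = 1
for _ in range(500):           # many sweeps; Jacobi converges slowly
    u = jacobi_sweep(u, f, n)
print(u[n // 2, n // 2])       # approximate solution value at the center
```

Since f and the domain are symmetric, the computed u stays symmetric under swapping i and j, which is a quick sanity check on the stencil.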

Note that the unknowns are now u instead of x. This algorithm can be implemented in one line of Matlab (see homework).

Remark: While the Jacobi method is not used that often in practice on serial computers, it does lend itself to a naturally parallel implementation.

In order to get a convergence result for the Jacobi method we need to recall the concept of diagonal dominance. We say a matrix A is strictly row diagonally dominant if

    |a_ii| > Σ_{j≠i} |a_ij|,   i = 1, ..., m.

Theorem 13.3 If A is strictly row diagonally dominant, then the Jacobi method converges for any initial guess x^(0).

Proof: By the definition of diagonal dominance above we have

    |a_ii| > Σ_{j≠i} |a_ij|   ⟺   Σ_{j≠i} |a_ij| / |a_ii| < 1.

We are done if we can show that ||G|| < 1. Here

    G = M⁻¹N = -D⁻¹(L + U).

Since we can take any norm, we pick ||·||_∞. Then

    ||G||_∞ = ||D⁻¹(L + U)||_∞ = max_{1≤i≤m} Σ_{j≠i} |a_ij| / |a_ii| < 1

by the diagonal dominance. ∎

Remark: An analogous result holds if A is strictly column diagonally dominant (defined analogously).

As we will see in some numerical examples, the convergence of the Jacobi method is usually rather slow. A (usually) faster method is discussed next.

13.3.2 Gauss-Seidel Method

To see how the Jacobi method can be improved we consider an example.

Example: For the system

    2x_1 + x_2 = 6
    x_1 + 2x_2 = 6

or

    [ 2  1 ] [ x_1 ]   [ 6 ]
    [ 1  2 ] [ x_2 ] = [ 6 ],

the Jacobi method looks like

    x_1^(k) = ( 6 - x_2^(k-1) ) / 2
    x_2^(k) = ( 6 - x_1^(k-1) ) / 2.

In order to obtain an improvement we notice that the value of x_1^(k-1) used in the second equation is actually outdated, since we have already computed a newer version, x_1^(k), in the first equation. Therefore, we might consider

    x_1^(k) = ( 6 - x_2^(k-1) ) / 2
    x_2^(k) = ( 6 - x_1^(k) ) / 2

instead. This is known as the Gauss-Seidel method. The general algorithm is of the form

Algorithm (Gauss-Seidel method)

    Let x^(0) be an arbitrary initial guess
    for k = 1, 2, ...
        for i = 1 : m
            x_i^(k) = ( b_i - Σ_{j=1}^{i-1} a_ij x_j^(k) - Σ_{j=i+1}^{m} a_ij x_j^(k-1) ) / a_ii
        end
    end

Example: For one step of the finite difference solution of the Poisson problem we get

    for i = 1 : n-1
        for j = 1 : n-1
            u_{i,j}^(k) = ( u_{i-1,j}^(k) + u_{i,j-1}^(k) + u_{i+1,j}^(k-1) + u_{i,j+1}^(k-1) + f_{i,j}/n² ) / 4
        end
    end
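The speedup of Gauss-Seidel over Jacobi can be observed directly on the 2 × 2 system from the example above. The following sketch is our own (helper names jacobi_step and gauss_seidel_step are ours); the only difference between the two sweeps is whether entries j < i use the old or the freshly updated values.

```python
import numpy as np

def jacobi_step(A, b, x):
    """One Jacobi sweep: every update uses only the previous iterate."""
    x_new = np.empty_like(x)
    for i in range(len(b)):
        s = b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]
        x_new[i] = s / A[i, i]
    return x_new

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: entries j < i are already the new values,
    because x is overwritten in place as the sweep proceeds."""
    x = x.copy()
    for i in range(len(b)):
        s = b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]
        x[i] = s / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([6.0, 6.0])
exact = np.array([2.0, 2.0])

xj = np.zeros(2)
xg = np.zeros(2)
for _ in range(10):
    xj = jacobi_step(A, b, xj)
    xg = gauss_seidel_step(A, b, xg)
print(np.abs(xj - exact).max(), np.abs(xg - exact).max())
```

After the same number of sweeps the Gauss-Seidel error is markedly smaller, consistent with its smaller spectral radius for this matrix.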

Remark: Note that the implementation of the Gauss-Seidel algorithm for this example depends on the ordering of the grid points. We used the natural (or typewriter) ordering, i.e., we scan the grid points row by row from left to right. Sometimes a red-black (or chessboard) ordering is used. This is especially useful if the Gauss-Seidel method is to be parallelized.

In order to understand the matrix formulation of the Gauss-Seidel method in the spirit of (39) we again assume that A = L + D + U. Now the splitting matrices are chosen as

    M = D + L,   N = -U,

and (39) becomes

    M x^(k) = N x^(k-1) + b
    (D + L) x^(k) = b - U x^(k-1)

or

    x^(k) = (D + L)⁻¹( b - U x^(k-1) ).

Note that this is also equivalent to

    D x^(k) = b - L x^(k) - U x^(k-1)

or

    x^(k) = D⁻¹( b - L x^(k) - U x^(k-1) ).

This latter formula corresponds nicely to the algorithm above.

The convergence criteria for Gauss-Seidel iteration are a little more general than those for the Jacobi method. One can show

Theorem 13.4 The Gauss-Seidel method converges for any initial guess x^(0) if

1. A is strictly diagonally dominant, or
2. A is symmetric positive definite.

Remark: For a generic problem the Gauss-Seidel method converges faster than the Jacobi method (see the Maple worksheet 473 IterativeSolvers.mws). However, this does not mean that the Jacobi method may not sometimes be faster.

Remark: A careful reader may notice that the sufficient conditions given in the convergence theorems for the Jacobi and Gauss-Seidel methods do not cover the matrix of our finite-difference Poisson problem. However, there are variations of the theorems that do cover this important example.

13.3.3 Successive Over-Relaxation (SOR)

One can accelerate the convergence of the Gauss-Seidel method by using a weighted average of the new Gauss-Seidel value and the one obtained during the previous iteration:

    x_i^(k) = (1 - ω) x_i^(k-1) + ω x_{i,GS}^(k).

Here ω is the so-called relaxation parameter. If ω = 1 we simply have the Gauss-Seidel method. For ω > 1 one speaks of over-relaxation, and for ω < 1 of under-relaxation. The resulting algorithm is known as successive over-relaxation (SOR) and is (obviously) a variation of the Gauss-Seidel algorithm.

Algorithm (SOR)

    Let x^(0) be an arbitrary initial guess
    for k = 1, 2, ...
        for i = 1 : m
            x_i^(k) = (1 - ω) x_i^(k-1) + ω ( b_i - Σ_{j=1}^{i-1} a_ij x_j^(k) - Σ_{j=i+1}^{m} a_ij x_j^(k-1) ) / a_ii
        end
    end

The second term on the right-hand side is ω times the Gauss-Seidel value x_{i,GS}^(k).

Example: For one step of the SOR algorithm applied to the finite difference solution of the Poisson problem we get

    for i = 1 : n-1
        for j = 1 : n-1
            u_{i,j}^(k) = (1 - ω) u_{i,j}^(k-1) + ω ( u_{i-1,j}^(k) + u_{i,j-1}^(k) + u_{i+1,j}^(k-1) + u_{i,j+1}^(k-1) + f_{i,j}/n² ) / 4
        end
    end

Remark: Just as for the Gauss-Seidel algorithm, the implementation of the SOR method depends on the ordering of the grid points.
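The SOR sweep differs from Gauss-Seidel by a single blending line. A small sketch (ours; the helper name sor_step and the sampled ω values are our choices), again on the 2 × 2 system from the Gauss-Seidel example:

```python
import numpy as np

def sor_step(A, b, x, omega):
    """One SOR sweep: blend the old value with the Gauss-Seidel update."""
    x = x.copy()
    for i in range(len(b)):
        gs = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        x[i] = (1.0 - omega) * x[i] + omega * gs   # weighted average
    return x

A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([6.0, 6.0])

for omega in (0.8, 1.0, 1.2):     # under-relaxed, Gauss-Seidel, over-relaxed
    x = np.zeros(2)
    for _ in range(25):
        x = sor_step(A, b, x, omega)
    print(omega, x)               # A is SPD and 0 < omega < 2, so all converge
```

With ω = 1.0 the sweep reproduces Gauss-Seidel exactly, which is a useful regression check when implementing SOR.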

To describe the SOR method in terms of splitting matrices we again assume that A = L + D + U, and take

    M = (1/ω) D + L,   N = ( (1/ω) - 1 ) D - U.

With these choices (39) becomes

    M x^(k) = N x^(k-1) + b
    ( (1/ω) D + L ) x^(k) = [ ( (1/ω) - 1 ) D - U ] x^(k-1) + b.

The rearrangements that show that this formulation is indeed equivalent to the formula used in the algorithm above are:

    ( (1/ω) D + L ) x^(k) = [ ( (1/ω) - 1 ) D - U ] x^(k-1) + b
    (1/ω) D x^(k) = [ ( (1/ω) - 1 ) D - U ] x^(k-1) + b - L x^(k)
    x^(k) = ω D⁻¹[ ( (1/ω) - 1 ) D - U ] x^(k-1) + ω D⁻¹b - ω D⁻¹L x^(k)
          = x^(k-1) - ω x^(k-1) - ω D⁻¹U x^(k-1) + ω D⁻¹b - ω D⁻¹L x^(k)
          = (1 - ω) x^(k-1) + ω [ D⁻¹( b - L x^(k) - U x^(k-1) ) ].

Note that the expression inside the square brackets on the last line is just what we had for the Gauss-Seidel method earlier.

One can prove a general convergence theorem that is similar to those for the Jacobi and Gauss-Seidel methods:

Theorem 13.5 If A is symmetric positive definite, then the SOR method with 0 < ω < 2 converges for any starting value x^(0).

Remark: Note that this theorem says nothing about the speed of convergence. In fact, finding a good value for the relaxation parameter ω is quite difficult. The value of ω that yields the fastest convergence of the SOR method is known only in very special cases. For example, if A is tridiagonal then

    ω_opt = 2 / ( 1 + √(1 - ρ(G)) ),

where G = M⁻¹N = -(D + L)⁻¹U is the iteration matrix for the Gauss-Seidel method. The convergence behavior of all three classical methods is illustrated in the Maple worksheet 473 IterativeSolvers.mws.
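The formula for ω_opt can be verified numerically. The sketch below (ours; the 1-D discrete Laplacian test matrix, the helper rho_sor, and the grid search are our choices) builds the SOR splitting matrices M and N exactly as above, computes ω_opt from the Gauss-Seidel spectral radius, and compares it with the ω that minimizes the SOR spectral radius over a fine grid.

```python
import numpy as np

# Numerical check of omega_opt = 2 / (1 + sqrt(1 - rho(G))) for a
# tridiagonal SPD matrix, where G is the Gauss-Seidel iteration matrix.
m = 10
A = 2*np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)   # 1-D discrete Laplacian
D = np.diag(np.diag(A))
L = np.tril(A, -1)
U = np.triu(A, 1)

G_gs = np.linalg.solve(D + L, -U)          # Gauss-Seidel iteration matrix
rho_gs = max(abs(np.linalg.eigvals(G_gs)))
omega_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho_gs))

def rho_sor(omega):
    """Spectral radius of the SOR iteration matrix M^{-1} N."""
    M = D / omega + L
    N = (1.0 / omega - 1.0) * D - U
    return max(abs(np.linalg.eigvals(np.linalg.solve(M, N))))

# The predicted omega_opt should (nearly) coincide with the grid minimizer.
omegas = np.linspace(1.0, 1.9, 91)
best = omegas[np.argmin([rho_sor(w) for w in omegas])]
print(omega_opt, best)   # the two values should agree to about grid resolution
```

For this matrix the optimum lies well above 1, illustrating why over-relaxation (rather than ω = 1) pays off for Poisson-type problems.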