A Multigrid Tutorial, part two. William L. Briggs, Department of Mathematics, University of Colorado at Denver; Van Emden Henson, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory; Steve McCormick, Department of Applied Mathematics, University of Colorado at Boulder. This work was performed, in part, under the auspices of the United States Department of Energy by University of California Lawrence Livermore National Laboratory under contract number W-7405-Eng-48. UCRL-VG-3689 Pt

Outline: Nonlinear Problems; Neumann Boundary Conditions; Anisotropic Problems; Variable Mesh Problems; Variable Coefficient Problems; Algebraic Multigrid.

Nonlinear Problems. How should we approach the nonlinear system A(u) = f, and can we use multigrid to solve such a system? A fundamental relation we've relied on, the residual equation Au - Av = f - Av, i.e., Ae = r, does not hold, since if A(u) is a nonlinear operator, A(u) - A(v) ≠ A(e).

The Nonlinear Residual Equation. We still base our development around the residual equation, now the nonlinear residual equation: from A(u) = f it follows that A(u) - A(v) = f - A(v), that is, A(u) - A(v) = r. How can we use this equation as the basis for a solution method?

Let's consider Newton's Method, the best known and most important method for solving nonlinear equations! We wish to solve F(x) = 0. Expand F in a Taylor series about x: F(x + s) = F(x) + s F'(x) + (s^2/2) F''(ξ). Dropping higher-order terms, if x + s is a solution, then 0 = F(x) + s F'(x), so s = -F(x)/F'(x). Hence, we develop the iteration x ← x - F(x)/F'(x).
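To make the iteration concrete, here is a minimal sketch of scalar Newton in Python; the helper name and the sample equation are our own illustration, not part of the tutorial.

```python
import math

def newton(F, dF, x, tol=1e-12, max_iter=50):
    """Scalar Newton iteration: x <- x - F(x)/F'(x)."""
    for _ in range(max_iter):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:      # stop when the update is negligible
            break
    return x

# Hypothetical example: solve x + x*exp(x) - 2 = 0
root = newton(lambda x: x + x * math.exp(x) - 2.0,
              lambda x: 1.0 + (1.0 + x) * math.exp(x),
              x=1.0)
```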

Newton's method for systems. We wish to solve the system A(u) = 0. In vector form this is A(u) = ( f_1(u_1, u_2, ..., u_N), f_2(u_1, u_2, ..., u_N), ..., f_N(u_1, u_2, ..., u_N) )^T = 0. Expanding A(v + e) in a Taylor series about v gives A(v + e) = A(v) + J(v) e + higher-order terms.

Newton for systems (cont.) Here J(v) is the Jacobian matrix J(v) = ( ∂f_i/∂u_j ), the N x N matrix of first partial derivatives of the components f_i, evaluated at v. If u = v + e is a solution, then 0 = A(v) + J(v) e, so e = -J(v)^{-1} A(v), leading to the iteration v ← v - J(v)^{-1} A(v).

Newton's method in terms of the residual equation. The nonlinear residual equation is A(v + e) - A(v) = r. Expanding A(v + e) in a two-term Taylor series about v gives A(v) + J(v) e - A(v) = r, that is, J(v) e = r. Newton's method is thus: r = f - A(v), then v ← v + J(v)^{-1} r.

How does multigrid fit in? One obvious method is to use multigrid to solve J(v)e = r at each iteration step. This method is called Newton-multigrid and can be very effective. However, we would like to use multigrid ideas to treat the nonlinearity directly. Hence, we need to specialize the multigrid components (relaxation, grid transfers, coarsening) for the nonlinear case.

What is nonlinear relaxation? Several of the common relaxation schemes have nonlinear counterparts. For A(u) = f, we describe the nonlinear Gauss-Seidel iteration: for each j = 1, 2, ..., N, set the jth component of the residual to zero and solve for v_j; that is, solve (A(v))_j = f_j. Equivalently: for each j = 1, 2, ..., N, find s ∈ R such that ( A(v + s ε_j) )_j = f_j, where ε_j is the canonical jth unit basis vector.

How is nonlinear Gauss-Seidel done? Each (A(v))_j = f_j is a nonlinear scalar equation for v_j. We use the scalar Newton's method to solve! Example: -u''(x) + u(x) e^{u(x)} = f(x) may be discretized so that (A(v))_j = f_j is given by

( -v_{j-1} + 2v_j - v_{j+1} ) / h^2 + v_j e^{v_j} = f_j,  1 ≤ j ≤ N.

The Newton iteration for v_j is

v_j ← v_j - [ ( -v_{j-1} + 2v_j - v_{j+1} ) / h^2 + v_j e^{v_j} - f_j ] / [ 2/h^2 + (1 + v_j) e^{v_j} ].
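A minimal sketch of one nonlinear Gauss-Seidel sweep for this discretization (the array layout, the boundary convention v[0] = v[-1] = 0, and the single inner Newton step per point are our own implementation choices):

```python
import numpy as np

def nonlinear_gauss_seidel_sweep(v, f, h):
    """One sweep for (-v[j-1] + 2v[j] - v[j+1])/h^2 + v[j]*exp(v[j]) = f[j].
    v[0] and v[-1] hold the (homogeneous Dirichlet) boundary values."""
    for j in range(1, len(v) - 1):
        # one scalar Newton step on the j-th equation
        F = (-v[j-1] + 2.0 * v[j] - v[j+1]) / h**2 + v[j] * np.exp(v[j]) - f[j]
        dF = 2.0 / h**2 + (1.0 + v[j]) * np.exp(v[j])
        v[j] -= F / dF
    return v
```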

How do we do coarsening for nonlinear multigrid? Recall the nonlinear residual equation A(v + e) - A(v) = r. In multigrid, we obtain an approximate solution v^h on the fine grid, then solve the residual equation on the coarse grid. The residual equation on Ω^{2h} appears as A^{2h}( v^{2h} + e^{2h} ) - A^{2h}( v^{2h} ) = r^{2h}.

Look at the coarse residual equation. We must evaluate the quantities on Ω^{2h} in A^{2h}( v^{2h} + e^{2h} ) - A^{2h}( v^{2h} ) = r^{2h}. Given v^h, a fine-grid approximation, we restrict the residual to the coarse grid: r^{2h} = I_h^{2h}( f^h - A^h(v^h) ). For v^{2h} we restrict v^h: v^{2h} = I_h^{2h} v^h. Thus,

A^{2h}( I_h^{2h} v^h + e^{2h} ) = A^{2h}( I_h^{2h} v^h ) + I_h^{2h}( f^h - A^h(v^h) ).

We've obtained a coarse-grid equation of the form A^{2h}(u^{2h}) = f^{2h}. Consider the coarse-grid residual equation: in A^{2h}( I_h^{2h} v^h + e^{2h} ) = A^{2h}( I_h^{2h} v^h ) + I_h^{2h}( f^h - A^h(v^h) ), the left side contains the coarse-grid unknown u^{2h} = I_h^{2h} v^h + e^{2h}, and on the right side all quantities are known. We solve A^{2h}(u^{2h}) = f^{2h} for u^{2h} and obtain e^{2h} = u^{2h} - I_h^{2h} v^h. We then apply the correction: v^h ← v^h + I_{2h}^h e^{2h}.

FAS, the Full Approximation Scheme, two-grid form. Perform nonlinear relaxation on A^h(u^h) = f^h to obtain an approximation v^h. Restrict the approximation and its residual: v^{2h} = I_h^{2h} v^h and r^{2h} = I_h^{2h}( f^h - A^h(v^h) ). Solve the coarse-grid residual problem A^{2h}(u^{2h}) = A^{2h}(v^{2h}) + r^{2h}. Extract the coarse-grid error e^{2h} = u^{2h} - v^{2h}. Interpolate and apply the correction: v^h ← v^h + I_{2h}^h e^{2h}.
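The steps above translate directly into a two-grid cycle. A skeleton, with the relaxation, transfer, and coarse-solve routines left as user-supplied (hypothetical) components:

```python
def fas_two_grid(v, f, A_h, A_2h, relax, restrict, interp, coarse_solve,
                 nu1=2, nu2=1):
    """One two-grid FAS cycle. A_h and A_2h apply the nonlinear operators on
    the fine and coarse grids; the remaining arguments are user-supplied."""
    for _ in range(nu1):                    # nonlinear relaxation on A(u) = f
        v = relax(v, f)
    v2 = restrict(v)                        # v^2h = I v^h
    r2 = restrict(f - A_h(v))               # r^2h = I (f^h - A^h(v^h))
    u2 = coarse_solve(A_2h, A_2h(v2) + r2)  # solve A^2h(u^2h) = A^2h(v^2h) + r^2h
    v = v + interp(u2 - v2)                 # correct with e^2h = u^2h - v^2h
    for _ in range(nu2):                    # post-relaxation
        v = relax(v, f)
    return v
```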

A few observations about FAS. If A is a linear operator, then FAS reduces directly to the linear two-grid correction scheme. A fixed point of FAS is an exact solution to the fine-grid problem, and an exact solution to the fine-grid problem is a fixed point of the FAS iteration.

A few observations about FAS, continued. The FAS coarse-grid equation can be written as A^{2h}(u^{2h}) = f^{2h} + τ^{2h}, where τ^{2h} is the so-called tau correction. In general, since τ^{2h} ≠ 0, the solution of the FAS coarse-grid equation is not the same as the solution of the original coarse-grid problem A^{2h}(u^{2h}) = f^{2h}. The tau correction may be viewed as a way to alter the coarse-grid equations to enhance their approximation properties.

Still more observations about FAS. FAS may be viewed as an inner and outer iteration: the outer iteration is the coarse-grid correction, the inner iteration the relaxation method. A true multilevel FAS process is recursive, using FAS to solve the nonlinear Ω^{2h} problem using Ω^{4h}. Hence, FAS is generally employed in a V- or W-cycling scheme.

And yet more observations about FAS! For linear problems we use FMG to obtain a good initial guess on the fine grid. Convergence of nonlinear iterations depends critically on having a good initial guess. When FMG is used for nonlinear problems, the interpolant I_{2h}^h u^{2h} is generally accurate enough to be in the basin of attraction of the fine-grid solver. Thus, one FMG cycle, whether FAS, Newton, or Newton-multigrid is used on each level, should provide a solution accurate to the level of discretization, unless the nonlinearity is extremely strong.

Intergrid transfers for FAS. Generally speaking, the standard operators (linear interpolation, full weighting) work effectively in FAS schemes. In the case of strongly nonlinear problems, the use of higher-order interpolation (e.g., cubic interpolation) may be beneficial. For an FMG scheme, where I_{2h}^h u^{2h} is the interpolation of a coarse-grid solution to become a fine-grid initial guess, higher-order interpolation is always advised.

What is A^{2h} in FAS? As in the linear case, there are two basic possibilities: A^{2h}(u^{2h}) is determined by discretizing the nonlinear operator A(u) in the same fashion as was employed to obtain A^h(u^h), except that the coarser mesh spacing is used; or A^{2h}(u^{2h}) is determined from the Galerkin condition A^{2h}(u^{2h}) = I_h^{2h} A^h( I_{2h}^h u^{2h} ), where the action of the Galerkin product can be captured in an implementable formula. The first method is usually easier, and more common.

Nonlinear problems: an example. Consider -Δu(x, y) + γ u(x, y) e^{u(x, y)} = f(x, y) on the unit square [0,1] x [0,1], with homogeneous Dirichlet boundary conditions and a regular Cartesian grid. Suppose the exact solution is u(x, y) = (x^2 - x^3) sin(3πy).

Discretization of the nonlinear example. The operator can be written (sloppily) as

( 4u_{i,j} - u_{i-1,j} - u_{i+1,j} - u_{i,j-1} - u_{i,j+1} ) / h^2 + γ u_{i,j} e^{u_{i,j}} = f_{i,j}.

The relaxation is given by

u_{i,j} ← u_{i,j} - [ ( A^h(u^h) )_{i,j} - f_{i,j} ] / [ 4/h^2 + γ (1 + u_{i,j}) e^{u_{i,j}} ].

FAS and Newton's method on -Δu + γ u e^u = f(x, y).

FAS:
γ:                      1         10        100       1000
convergence factor:     0.35      0.4       0.098     0.07
number of FAS cycles:   [illegible]

Newton's Method:
γ:                            1         10        100       1000
convergence factor:           4.00E-05  7.00E-05  3.00E-04  .00E-04
number of Newton iterations:  3         3         3         4

Newton, Newton-MG, and FAS on -Δu + γ u e^u = f(x, y). Newton uses an exact solve; Newton-MG is inexact Newton with a fixed number of inner V(2,1)-cycles on the Jacobian problem; the overall stopping criterion is ||r|| < 10^-10.

Method      Outer iterations   Inner iterations   Megaflops
Newton      3                  -                  660.6
Newton-MG   3                  0                  56.4
Newton-MG   4                  0                  38.5
Newton-MG   5                  5                  5.
Newton-MG   [illegible]        [illegible]        0.3
Newton-MG   9                  [illegible]        4.6
FAS         [illegible]        -                  7.

Comparing FMG-FAS and FMG-Newton on -Δu + γ u e^u = f(x, y). We will do one FMG cycle using a single FAS V(2,1)-cycle as the solver at each new level; we then follow that with as many FAS V(2,1)-cycles as necessary to obtain ||r|| < 10^-10. Next, we will do one FMG cycle using a Newton-multigrid step at each new level (with a single linear V(2,1)-cycle as the Jacobian solver); we then follow that with as many Newton-multigrid steps as necessary to obtain ||r|| < 10^-10.

Comparing FMG-FAS and FMG-Newton on -Δu + γ u e^u = f(x, y).

Cycle        ||r||      ||e||     Mflops     ||r||      ||e||     Mflops    Cycle
FMG-FAS      .0E-0      .00E-05   3.         .06E-0     .50E-05   .4        FMG-Newton
FAS V        6.80E-04   .40E-05   5.4        6.70E-04   .49E-05   4.        Newton-MG
FAS V        5.00E-05   .49E-05   7.6        5.0E-05    .49E-05   5.8       Newton-MG
FAS V        3.90E-06   .49E-05   9.9        6.30E-06   .49E-05   7.5       Newton-MG
FAS V        3.0E-07    .49E-05   .          .70E-06    .49E-05   9.        Newton-MG
FAS V        3.00E-08   .49E-05   4.4        5.30E-07   .49E-05   0.9       Newton-MG
FAS V        .90E-09    .49E-05   6.7        .70E-07    .49E-05   .6        Newton-MG
FAS V        3.00E-0    .49E-05   8.9        5.40E-08   .49E-05   4.3       Newton-MG
FAS V        3.0E-      .49E-05   .          .70E-08    .49E-05   6.0       Newton-MG
                                             5.50E-09   .49E-05   7.7       Newton-MG
                                             .80E-09    .49E-05   9.4       Newton-MG
                                             5.60E-0    .49E-05   .         Newton-MG
                                             .80E-0     .49E-05   .8        Newton-MG
                                             5.70E-     .49E-05   4.5       Newton-MG

Outline: Nonlinear Problems; Neumann Boundary Conditions; Anisotropic Problems; Variable Mesh Problems; Variable Coefficient Problems; Algebraic Multigrid.

Neumann Boundary Conditions. Consider the (1-d) problem -u''(x) = f(x), 0 < x < 1, u'(0) = u'(1) = 0. We discretize this on the interval [0,1] with grid spacing h = 1/(N+1) and grid points x_j = jh, j = 0, 1, ..., N+1. We extend the interval with two ghost points x_{-1} and x_{N+2}.

We use central differences. We approximate the derivatives with differences, using the ghost points:

u''(x_j) ≈ ( u_{j-1} - 2u_j + u_{j+1} ) / h^2,  u'(0) ≈ ( u_1 - u_{-1} ) / 2h,  u'(1) ≈ ( u_{N+2} - u_N ) / 2h,

giving the system

( -u_{j-1} + 2u_j - u_{j+1} ) / h^2 = f_j,  0 ≤ j ≤ N+1,
( u_1 - u_{-1} ) / 2h = 0,
( u_{N+2} - u_N ) / 2h = 0.

Eliminating the ghost points. Use the boundary conditions to eliminate u_{-1} and u_{N+2}: ( u_1 - u_{-1} ) / 2h = 0 gives u_{-1} = u_1, and ( u_{N+2} - u_N ) / 2h = 0 gives u_{N+2} = u_N. Eliminating the ghost points in the j = 0 and j = N+1 equations gives the (N+2) x (N+2) system of equations

( -u_{j-1} + 2u_j - u_{j+1} ) / h^2 = f_j,  1 ≤ j ≤ N,
( 2u_0 - 2u_1 ) / h^2 = f_0,
( -2u_N + 2u_{N+1} ) / h^2 = f_{N+1}.

We write the system in matrix form. We can write A^h u^h = f^h, where A^h = (1/h^2) tridiag( -1, 2, -1 ), except that the first row is ( 2, -2, 0, ... ) and the last row is ( ..., 0, -2, 2 ). Note that A^h is (N+2) x (N+2), nonsymmetric, and the system involves the unknowns u_0 and u_{N+1} at the boundaries. A small sketch that assembles this matrix appears below.
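The sketch (dense numpy, for illustration only; a practical code would apply the stencil matrix-free):

```python
import numpy as np

def neumann_matrix(N, h):
    """The (N+2) x (N+2) system above: interior rows (-1, 2, -1)/h^2,
    boundary rows (2, -2)/h^2 and (-2, 2)/h^2 from the eliminated ghosts."""
    n = N + 2
    A = np.zeros((n, n))
    for j in range(1, n - 1):
        A[j, j-1:j+2] = np.array([-1.0, 2.0, -1.0]) / h**2
    A[0, 0], A[0, 1] = 2.0 / h**2, -2.0 / h**2
    A[-1, -2], A[-1, -1] = -2.0 / h**2, 2.0 / h**2
    return A
```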

We must consider a compatibility condition. The problem -u''(x) = f(x) for 0 < x < 1 with u'(0) = u'(1) = 0 is not well-posed! If u(x) is a solution, so is u(x) + c for any constant c. We cannot be certain a solution exists; if one does, it must satisfy

∫_0^1 -u''(x) dx = ∫_0^1 f(x) dx,  i.e.,  -[ u'(1) - u'(0) ] = ∫_0^1 f(x) dx,  i.e.,  0 = ∫_0^1 f(x) dx.

This integral compatibility condition is necessary! If f(x) doesn't satisfy it, there is no solution!

The well-posed system. The compatibility condition is necessary for a solution to exist. In general, it is also sufficient, which can be proven: -d^2/dx^2 is a well-behaved operator on the space of functions u(x) that have zero mean. Thus we may conclude that if f(x) satisfies the compatibility condition, this problem is well-posed:

-u''(x) = f(x), 0 < x < 1;  u'(0) = u'(1) = 0;  ∫_0^1 u(x) dx = 0.

The last condition says that of all the solutions u(x) + c we choose the one with zero mean.

The discrete problem is not well posed. Since all row sums of A^h are zero, the constant vector 1 is in NS(A^h). Putting A^h into row-echelon form shows that dim( NS(A^h) ) = 1, hence NS(A^h) = span(1). By the Fundamental Theorem of Linear Algebra, A^h u^h = f^h has a solution if and only if f^h ⊥ NS( (A^h)^T ). It is easy to show that NS( (A^h)^T ) = span( (1/2, 1, 1, ..., 1, 1/2)^T ). Thus, A^h u^h = f^h has a solution if and only if f^h is orthogonal to this vector, that is,

f_0/2 + f_1 + ... + f_N + f_{N+1}/2 = 0.

We have two issues to consider. Solvability: a solution exists iff f^h ⊥ NS( (A^h)^T ). Uniqueness: if u^h solves A^h u^h = f^h, so does u^h + c 1. Note that if A^h = (A^h)^T, then NS(A^h) = NS( (A^h)^T ) and solvability and uniqueness can be handled together. This is easily done: multiply the first and last equations by 1/2, giving the symmetric system shown next.

The new system is symmetric. We have the symmetric system A^h u^h = f^h with

A^h = (1/h^2) tridiag( -1, 2, -1 ), except first row ( 1, -1, 0, ... ) and last row ( ..., 0, -1, 1 ),
u^h = ( u_0, u_1, ..., u_N, u_{N+1} )^T,  f^h = ( f_0/2, f_1, ..., f_N, f_{N+1}/2 )^T.

Solvability is guaranteed by ensuring that f^h is orthogonal to the constant vector 1: ⟨f^h, 1⟩ = 0.

The well-posed discrete system. The (N+3) x (N+2) system is A^h u^h = f^h together with Σ_{i=0}^{N+1} u_i = 0; or, more simply,

A^h u^h = f^h,  ⟨u^h, 1⟩ = 0

(choose the zero-mean solution).

Multigrid for the Neumann problem. We must have the interval endpoints on all grids. Relaxation is performed at all points, including the endpoints:

v_0 ← v_1 + (h^2/2) f_0,
v_j ← ( v_{j-1} + v_{j+1} + h^2 f_j ) / 2,  1 ≤ j ≤ N,
v_{N+1} ← v_N + (h^2/2) f_{N+1}.

We add a global Gram-Schmidt step after relaxation on each level to enforce the zero-mean condition:

v ← v - ( ⟨v, 1⟩ / ⟨1, 1⟩ ) 1.
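The Gram-Schmidt step is one line of linear algebra; a sketch:

```python
import numpy as np

def project_zero_mean(v):
    """Subtract the component of v along the constant vector 1,
    enforcing the zero-mean condition <v, 1> = 0."""
    ones = np.ones_like(v)
    return v - (v @ ones) / (ones @ ones) * ones
```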

Interpolation must include the endpoints. We use linear interpolation I_{2h}^h defined by

v^h_{2j} = v^{2h}_j,  v^h_{2j+1} = ( v^{2h}_j + v^{2h}_{j+1} ) / 2,

so the endpoint values are transferred directly, and the in-between fine-grid points receive the average of their two coarse-grid neighbors.

Restriction also treats the endpoints. For restriction, we use I_h^{2h} = (1/2) ( I_{2h}^h )^T, yielding the values

f^{2h}_0 = ( 2 f^h_0 + f^h_1 ) / 4,
f^{2h}_j = ( f^h_{2j-1} + 2 f^h_{2j} + f^h_{2j+1} ) / 4,

and, at the right endpoint, f^{2h} = ( f^h_N + 2 f^h_{N+1} ) / 4.

The coarse-grid operator. We compute the coarse-grid operator using the Galerkin condition A^{2h} = I_h^{2h} A^h I_{2h}^h, applying it column by column to the coarse-grid unit vectors ε_j. The result has the same form as the fine-grid operator with spacing 2h:

A^{2h} = (1/(2h)^2) tridiag( -1, 2, -1 ), with first row ( 1, -1, 0, ... ) and last row ( ..., 0, -1, 1 ).

Coarse-grid solvability. Assuming f^h satisfies ⟨f^h, 1^h⟩ = 0, the solvability condition, we can show that theoretically the coarse-grid problem A^{2h} u^{2h} = I_h^{2h}( f^h - A^h v^h ) is also solvable. To be certain that numerical round-off does not perturb solvability, we incorporate a Gram-Schmidt-like step each time a new right-hand side f^{2h} is generated for the coarse grid:

f^{2h} ← f^{2h} - ( ⟨f^{2h}, 1^{2h}⟩ / ⟨1^{2h}, 1^{2h}⟩ ) 1^{2h}.

Neumann problem: an example. Consider the problem -u''(x) = 2x - 1, 0 < x < 1, u'(0) = u'(1) = 0, which has u(x) = x^2/2 - x^3/3 + c as a solution for any c (c = -1/12 gives the zero-mean solution).

grid size N   ||r||     average conv. factor   ||e||      number of cycles
31            6.30E-    0.079                  9.70E-05   9
63            .90E-     0.089                  .40E-05    10
127           .60E-     0.093                  5.90E-06   10
255           3.70E-    0.096                  .50E-06    10
511           5.70E-    0.100                  3.70E-07   10
1023          8.60E-    0.104                  9.0E-08    10
2047          .0E-      0.1                    .30E-08    10
4095          5.0E-     0.1                    5.70E-09   10

Outline: Nonlinear Problems; Neumann Boundary Conditions; Anisotropic Problems; Variable Mesh Problems; Variable Coefficient Problems; Algebraic Multigrid.

Anisotropic Problems. All problems considered thus far have had -1 as the off-diagonal matrix entries. We now consider two situations in which the matrix has two different constants on the off-diagonals. These situations arise when: the (2-d) differential equation has constant, but different, coefficients for the derivatives in the two coordinate directions; or the discretization has constant, but different, mesh spacings in the two coordinate directions.

We consider two types of anisotropy. Different coefficients on the derivatives: -u_xx - α u_yy = f, discretized on a uniform grid with spacing h = 1/N. Constant, but different, mesh spacings: h_x ≠ h_y, which produces the same kind of operator with α = ( h_x / h_y )^2.

Both problems lead to the same stencil:

A^h = (1/h^2) [ -α ; -1  2+2α  -1 ; -α ],

written here as the rows of the 5-point stencil: -α above and below, -1 to the left and right, and 2+2α at the center. Equivalently,

( -u_{i-1,k} + 2u_{i,k} - u_{i+1,k} ) / h^2 + α ( -u_{i,k-1} + 2u_{i,k} - u_{i,k+1} ) / h^2 = f_{i,k}.

Why standard multigrid fails. Note that A^h = (1/h^2) [ -α ; -1  2+2α  -1 ; -α ] has weak connections in the y-direction. MG convergence factors degrade as α gets small, with poor performance already at α = 0.1. Consider the limit α → 0:

A^h → (1/h^2) [ 0 ; -1  2  -1 ; 0 ].

This is a collection of disconnected 1-d problems! Point relaxation will smooth oscillatory errors in the x-direction (strong connections), but with no connections in the y-direction the errors there will generally be random, and no point relaxation will have the smoothing property in the y-direction.

We analyze weighted Jacobi. The eigenvalues of the weighted Jacobi iteration matrix for this problem are

λ_{i,l} = 1 - ( ω / (1 + α) ) ( sin^2( iπ/2N ) + α sin^2( lπ/2N ) ).

[Plots: the eigenvalues λ_{i,l} as functions of the wavenumbers i and l.]
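A quick numerical check of this formula (our own illustration): the smoothing factor, i.e., the largest |λ_{i,l}| over the modes that are oscillatory in at least one direction, approaches 1 as α → 0.

```python
import numpy as np

def jacobi_smoothing_factor(alpha, N, omega=2.0/3.0):
    """Largest |lambda_{i,l}| over modes with i or l in the oscillatory
    range [N/2, N), using the eigenvalue formula above."""
    s = np.sin(np.arange(1, N) * np.pi / (2 * N)) ** 2
    lam = 1.0 - (omega / (1.0 + alpha)) * (s[:, None] + alpha * s[None, :])
    osc = np.zeros_like(lam, dtype=bool)
    osc[N // 2 - 1:, :] = True   # i oscillatory
    osc[:, N // 2 - 1:] = True   # l oscillatory
    return np.abs(lam[osc]).max()

# jacobi_smoothing_factor(1.0, 64) is comfortably below 1;
# jacobi_smoothing_factor(0.001, 64) is nearly 1: relaxation stalls in y.
```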

Two strategies for anisotropy. Semicoarsening: because we expect MG-like convergence for the 1-d problems along lines of constant y, we should coarsen the grid in the x-direction, but not in the y-direction. Line relaxation: because the equations are strongly coupled in the x-direction, it may be advantageous to solve simultaneously for entire lines of unknowns in the x-direction (along lines of constant y).

Semicoarsening with point relaxation. Point relaxation on A^h = (1/h^2) [ -α ; -1  2+2α  -1 ; -α ] smooths in the x-direction. Coarsen by removing every other y-line (line of constant x); we do not coarsen in the y-direction along the remaining lines. Semicoarsening is not as fast as full coarsening: the number of points on Ω^{2h} is about half the number of points on Ω^h, instead of the usual one-fourth.

Interpolation with semicoarsening. We interpolate in the 1-dimensional way along each line of constant y. The coarse-grid correction equations are

v^h_{2i,k} ← v^h_{2i,k} + e^{2h}_{i,k},
v^h_{2i+1,k} ← v^h_{2i+1,k} + ( e^{2h}_{i,k} + e^{2h}_{i+1,k} ) / 2.

Line relaxation with full coarsening. The other approach to this problem is to do full coarsening, but to relax entire lines (constant y) of variables simultaneously. Write A^h in block form as

A^h = tridiag( -cI, D, -cI ),  where c = α/h^2 and D = (1/h^2) tridiag( -1, 2+2α, -1 ).

Line relaxation. One sweep of line relaxation consists of solving a tridiagonal system for each line of constant y. The kth such system has the form D v_k = g_k, where v_k is the kth subvector of v, with entries ( v_k )_j = v_{j,k}, and the kth right-hand-side subvector is

( g_k )_j = f_{j,k} + ( α/h^2 ) ( v_{j,k-1} + v_{j,k+1} ).

Because D is tridiagonal, the kth system can be solved very efficiently, as in the sketch below.
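The sketch uses the classical Thomas algorithm for the tridiagonal solves (the array layout, with a ring of zero Dirichlet boundary values around v, is our own choice):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system; a, b, c hold the sub-, main-, and
    super-diagonal coefficients, d the right-hand side."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for j in range(1, n):
        m = b[j] - a[j] * cp[j-1]
        cp[j] = c[j] / m
        dp[j] = (d[j] - a[j] * dp[j-1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for j in range(n - 2, -1, -1):
        x[j] = dp[j] - cp[j] * x[j+1]
    return x

def x_line_relaxation(v, f, h, alpha):
    """One sweep: for each line of constant y, solve D v_k = g_k with
    D = tridiag(-1, 2+2*alpha, -1)/h^2 and the y-neighbors on the RHS."""
    ny, nx = v.shape                  # v includes a zero boundary ring
    n = nx - 2
    for k in range(1, ny - 1):
        a = np.full(n, -1.0 / h**2); a[0] = 0.0
        b = np.full(n, (2.0 + 2.0 * alpha) / h**2)
        c = np.full(n, -1.0 / h**2); c[-1] = 0.0
        g = f[k, 1:-1] + (alpha / h**2) * (v[k-1, 1:-1] + v[k+1, 1:-1])
        v[k, 1:-1] = thomas_solve(a, b, c, g)
    return v
```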

Why line relaxation works. The eigenvalues of the weighted block Jacobi iteration matrix are

λ_{i,l} = 1 - ω ( sin^2( iπ/2N ) + α sin^2( lπ/2N ) ) / ( sin^2( iπ/2N ) + α ).

[Plots: the eigenvalues λ_{i,l} as functions of the wavenumbers i and l.]

Semicoarsening with line relaxation. We might not know the direction of weak coupling, or it might vary. Suppose we want a method that can handle either

A_1^h = (1/h^2) [ -α ; -1  2+2α  -1 ; -α ]   or   A_2^h = (1/h^2) [ -1 ; -α  2+2α  -α ; -1 ]

(strong coupling in x, or strong coupling in y). We could use semicoarsening in the x-direction to handle A_1^h and line relaxation in the y-direction to take care of A_2^h.

Semicoarsening with line relaxation, illustrated. The original grid is viewed as a stack of pencils: line relaxation is used to solve the problem along each pencil, and coarsening is done by deleting every other pencil.

An anisotropic example. Consider -u_xx - α u_yy = f with u = 0 on the boundaries of the unit square, and the stencil given by A^h = (1/h^2) [ -α ; -1  2+2α  -1 ; -α ]. Suppose f(x, y) = 2( y - y^2 ) + 2α( x - x^2 ), so that the exact solution is u(x, y) = ( y - y^2 )( x - x^2 ). Observe that if α is small, the x-direction dominates, while if α is large, the y-direction dominates.

What is smooth error? Consider α = 0.001 and suppose point Gauss-Seidel is applied to a random initial guess. [Plots: the error after 50 sweeps; the error along a line of constant x; the error along a line of constant y.]

We experiment with 3 methods: (1) standard V(2,1)-cycling with point Gauss-Seidel relaxation, full coarsening, and linear interpolation; (2) semicoarsening in the x-direction, where the coarse and fine grids have the same number of points in the y-direction, 1-d full weighting and linear interpolation are used in the x-direction, and there is no y-coupling in the intergrid transfers; (3) semicoarsening in the x-direction combined with line relaxation in the y-direction, again with 1-d full weighting and interpolation.

With semicoarsening, the operator must change. To account for the unequal mesh spacing, the residual and relaxation operators must use a modified stencil:

A^h = [ -α/h_y^2 ; -1/h_x^2   2/h_x^2 + 2α/h_y^2   -1/h_x^2 ; -α/h_y^2 ].

Note that as the grids become coarser, h_x grows while h_y remains constant.

How do the 3 methods work for various values of α? Asymptotic convergence factors:

scheme                 α = 1000   100    10     1      0.1    0.01   0.001   1E-04
V(2,1)-cycles          0.95       0.94   0.58   0.3    0.58   0.90   0.95    0.95
semicoarsening in x    0.94       0.99   0.98   0.93   0.7    0.8    0.07    0.07
semic. / line relax.   0.04       0.08   0.08   0.08   0.07   0.07   0.08    0.08

(Large α: the y-direction is strong; small α: the x-direction is strong.) Note: semicoarsening in x works well for α ≤ 0.001 but degrades noticeably even at α = 0.1.

A semicoarsening subtlety. Suppose α is small, so that semicoarsening in x is used. As we progress to coarser grids, h_x grows but h_y remains constant. If, on some coarse grid, 1/h_x^2 becomes comparable to α/h_y^2, the problem effectively becomes recoupled in the y-direction. Continued semicoarsening can then produce an artificial anisotropy, strong in the y-direction. When this occurs, it is best to stop semicoarsening and continue with full coarsening on any further coarse grids.

Outline: Nonlinear Problems; Neumann Boundary Conditions; Anisotropic Problems; Variable Mesh Problems; Variable Coefficient Problems; Algebraic Multigrid.

Variable Mesh Problems. Non-uniform grids are commonly used to accommodate irregularities in problem domains. Consider how we might approach the 1-d problem -u''(x) = f(x), 0 < x < 1, u(0) = u(1) = 0, posed on a non-uniform grid x_0 = 0 < x_1 < ... < x_N = 1.

We need some notation for the mesh spacing. Let N be a positive integer. We define the spacing intervals between x_j and x_{j+1}:

h_{j+1/2} = x_{j+1} - x_j,  j = 0, 1, ..., N-1.

We define the discrete differential operator. Using second-order finite differences (and plugging through a mess of algebra!) we obtain this discrete representation of the problem:

-α_j u_{j-1} + ( α_j + β_j ) u_j - β_j u_{j+1} = f_j,  1 ≤ j ≤ N-1,  u_0 = u_N = 0,

where α_j = 2 / ( h_{j-1/2} ( h_{j-1/2} + h_{j+1/2} ) ) and β_j = 2 / ( h_{j+1/2} ( h_{j-1/2} + h_{j+1/2} ) ).
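A small helper that computes these coefficients from the grid points (our own illustration):

```python
import numpy as np

def variable_mesh_coefficients(x):
    """alpha_j and beta_j for the nonuniform 3-point stencil above.
    x holds the grid points x_0 < x_1 < ... < x_N."""
    h = np.diff(x)              # h[j] = h_{j+1/2} = x_{j+1} - x_j
    hm, hp = h[:-1], h[1:]      # h_{j-1/2} and h_{j+1/2} at interior points
    alpha = 2.0 / (hm * (hm + hp))
    beta = 2.0 / (hp * (hm + hp))
    return alpha, beta          # row j of A: [-alpha_j, alpha_j + beta_j, -beta_j]
```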

We modify standard multigrid to accommodate the variable spacing. We choose every second fine-grid point as a coarse-grid point. We use linear interpolation, modified for the spacing: if v^h = I_{2h}^h v^{2h}, then

v^h_{2j} = v^{2h}_j,
v^h_{2j+1} = ( h_{2j+3/2} v^{2h}_j + h_{2j+1/2} v^{2h}_{j+1} ) / ( h_{2j+1/2} + h_{2j+3/2} ).

We use the variational properties to derive the restriction and coarse-grid operators: A^{2h} = I_h^{2h} A^h I_{2h}^h and I_h^{2h} = c ( I_{2h}^h )^T. This produces a stencil on Ω^{2h} that is similar, but not identical, to the fine-grid stencil. If the resulting system is scaled by the local spacing sum ( h_{j-1/2} + h_{j+1/2} ), then the Galerkin product is the same as the fine-grid stencil. For 2-d problems this approach can be generalized readily to logically rectangular grids. However, for irregular grids that are not logically rectangular, AMG is a better choice.

Outline: Nonlinear Problems; Neumann Boundary Conditions; Anisotropic Problems; Variable Mesh Problems; Variable Coefficient Problems; Algebraic Multigrid.

Variable Coefficient Problems. A common difficulty is the variable coefficient problem, given in 1-d by -( a(x) u'(x) )' = f(x), 0 < x < 1, u(0) = u(1) = 0, where a(x) is a positive function on [0, 1]. We seek to develop a conservative, or self-adjoint, method for discretizing this problem. Assume we have available to us the values of a(x) at the midpoints of the grid: a_{j+1/2} = a( x_{j+1/2} ).

We discretize using central differences. We can use second-order differences to approximate the derivatives. To use a grid spacing of h, we evaluate a(x) u'(x) at the points midway between the grid points:

( a(x) u'(x) )' |_{x_j} ≈ [ (a u')|_{x_{j+1/2}} - (a u')|_{x_{j-1/2}} ] / h + O(h^2).

We discretize using central differences, continued. To evaluate (a u')|_{x_{j+1/2}}, we sample a(x) at the point x_{j+1/2} and use second-order differences:

(a u')|_{x_{j+1/2}} ≈ a_{j+1/2} ( u_{j+1} - u_j ) / h,  where a_{j+1/2} = a( x_{j+1/2} ),
(a u')|_{x_{j-1/2}} ≈ a_{j-1/2} ( u_j - u_{j-1} ) / h.

The basic stencil is given. We combine the differences for u' and for (a u')' to obtain the operator

-( a(x_j) u'(x_j) )' ≈ [ a_{j-1/2} ( u_j - u_{j-1} ) - a_{j+1/2} ( u_{j+1} - u_j ) ] / h^2,

and the problem becomes

( -a_{j-1/2} u_{j-1} + ( a_{j-1/2} + a_{j+1/2} ) u_j - a_{j+1/2} u_{j+1} ) / h^2 = f_j,  1 ≤ j ≤ N-1,  u_0 = u_N = 0.
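A dense numpy sketch that assembles the interior operator from the midpoint values of a(x) (for illustration; a real code would use a sparse or stencil form):

```python
import numpy as np

def variable_coefficient_matrix(a_mid, h):
    """Interior operator for -(a u')' = f with u_0 = u_N = 0;
    a_mid[j] = a(x_{j+1/2}) for j = 0, ..., N-1."""
    am, ap = a_mid[:-1], a_mid[1:]    # a_{j-1/2}, a_{j+1/2} for j = 1..N-1
    n = len(am)
    A = np.zeros((n, n))
    idx = np.arange(n)
    A[idx, idx] = (am + ap) / h**2             # diagonal
    A[idx[1:], idx[:-1]] = -am[1:] / h**2      # subdiagonal
    A[idx[:-1], idx[1:]] = -ap[:-1] / h**2     # superdiagonal
    return A
```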

Coarsening the variable coefficient problem. A reasonable approach is to use a standard multigrid algorithm with linear interpolation, full weighting, and the coarse stencil

( -a^{2h}_{j-1/2} u^{2h}_{j-1} + ( a^{2h}_{j-1/2} + a^{2h}_{j+1/2} ) u^{2h}_j - a^{2h}_{j+1/2} u^{2h}_{j+1} ) / (2h)^2,

with the coarse coefficients averaged from the fine grid: a^{2h}_{j+1/2} = ( a^h_{2j+1/2} + a^h_{2j+3/2} ) / 2. The same stencil can be obtained via the Galerkin relation.

Standard multigrid degrades if a(x) is highly variable. It can be shown that the variable coefficient discretization of -( a(x) u' )' = f is equivalent to using standard multigrid with simple averaging on the Poisson problem -u''(x) = f posed on a certain variable mesh spacing. But simple averaging won't accurately represent smooth components if a mapped grid point is close to one of its neighbors but far from the other.

One remedy is to apply operator interpolation. Assume that relaxation does not change smooth error, so the residual is approximately zero. Applying A^h at x_{2j+1} yields

-a_{2j+1/2} e_{2j} + ( a_{2j+1/2} + a_{2j+3/2} ) e_{2j+1} - a_{2j+3/2} e_{2j+2} = 0.

Solving for e_{2j+1}:

e_{2j+1} = ( a_{2j+1/2} e_{2j} + a_{2j+3/2} e_{2j+2} ) / ( a_{2j+1/2} + a_{2j+3/2} ).

Thus, the operator-induced interpolation is

v^h_{2j} = v^{2h}_j,
v^h_{2j+1} = ( a_{2j+1/2} v^{2h}_j + a_{2j+3/2} v^{2h}_{j+1} ) / ( a_{2j+1/2} + a_{2j+3/2} ),

and, as usual, the restriction and coarse-grid operators are defined by the Galerkin relations A^{2h} = I_h^{2h} A^h I_{2h}^h and I_h^{2h} = c ( I_{2h}^h )^T.
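A sketch of this interpolation (assuming, for illustration, that v2 includes the boundary points and a_mid holds the fine-grid midpoint coefficients a_{j+1/2}):

```python
import numpy as np

def operator_interpolation(v2, a_mid):
    """v[2j] = v2[j]; v[2j+1] is the coefficient-weighted average above.
    len(a_mid) must be 2*len(v2) - 2 (one value per fine-grid interval)."""
    a_lo = a_mid[0:-1:2]        # a_{2j+1/2}
    a_hi = a_mid[1::2]          # a_{2j+3/2}
    v = np.zeros(2 * len(v2) - 1)
    v[0::2] = v2
    v[1::2] = (a_lo * v2[:-1] + a_hi * v2[1:]) / (a_lo + a_hi)
    return v
```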

A variable coefficient example. We use V(2,1)-cycles, full weighting, and linear interpolation, with a(x) = 1 + ρ sin(kπx) and a(x) = 1 + ρ rand(kπx). Asymptotic convergence factors:

ρ      a = 1 + ρ sin(kπx)    a = 1 + ρ rand(kπx): k=3   k=25   k=50    k=100   k=200   k=400
0      0.085                 0.085   0.085   0.085   0.085   0.085   0.085
0.25   0.084                 0.098   0.098   0.094   0.093   0.083   0.083
0.5    0.093                 0.85    0.94    0.96    0.95    0.87    0.73
0.75   0.9                   0.374   0.387   0.39    0.39    0.388   0.394
0.85   0.4                   0.497   0.5     0.54    0.54    0.56    0.47
0.95   0.9                   0.68    0.69    0.694   0.699   0.745   0.67

Outline: Nonlinear Problems; Neumann Boundary Conditions; Anisotropic Problems; Variable Mesh Problems; Variable Coefficient Problems; Algebraic Multigrid.

Algebraic multigrid: for unstructured grids. AMG automatically defines the coarse grid. AMG has two distinct phases: a setup phase, which defines the MG components, and a solution phase, which performs the MG cycles. The AMG approach is the opposite of geometric MG: fix relaxation (point Gauss-Seidel); choose the coarse grids and the prolongation P so that error not reduced by relaxation is in range(P); and define the other MG components so that coarse-grid correction eliminates error in range(P) (i.e., use the Galerkin principle). (In contrast, geometric MG fixes the coarse grids, then defines suitable operators and smoothers.)

AMG has two phases. Setup phase: select the coarse grids Ω^{m+1}, m = 1, 2, ...; define the interpolation operators I_{m+1}^m, m = 1, 2, ...; define the restriction and coarse-grid operators I_m^{m+1} = ( I_{m+1}^m )^T and A^{m+1} = I_m^{m+1} A^m I_{m+1}^m. Solve phase: standard multigrid operations, e.g., V-cycle, W-cycle, FMG, FAS, etc. Note: only the selection of coarse grids does not parallelize well using existing techniques!

AMG fundamental concept: smooth error = small residuals. Consider an iterative method with error recurrence e_{k+1} = ( I - Q^{-1} A ) e_k. Error that is slow to converge satisfies ( I - Q^{-1} A ) e ≈ e, so Q^{-1} A e ≈ 0, i.e., r ≈ 0. More precisely, it can be shown that smooth error satisfies

||r||_{D^{-1}} << ||e||_A.   (1)

AMG uses strong connections to determine the MG components. It is easy to show from (1) that smooth error satisfies

⟨Ae, e⟩ << ⟨De, e⟩.   (2)

Define "i is strongly connected to j" by -a_{ij} ≥ θ max_{k≠i} { -a_{ik} }, 0 < θ ≤ 1. For M-matrices, we have from (2)

Σ_{j≠i} ( -a_{ij} / a_{ii} ) ( ( e_i - e_j ) / e_i )^2 << 1,

implying that smooth error varies slowly in the direction of strong connections.

Some useful definitions. The set of strong connections of a variable u_i, that is, the variables upon whose values the value of u_i depends, is defined as S_i = { j : -a_{ij} ≥ θ max_{k≠i}( -a_{ik} ) }. The set of points strongly connected to a variable u_i is denoted S_i^T = { j : i ∈ S_j }. The set of coarse-grid variables is denoted C. The set of fine-grid variables is denoted F. The set of coarse-grid variables used to interpolate the value of the fine-grid variable u_i is denoted C_i.

Choosing the Coarse Grid. Two criteria: (C1) for each i ∈ F, each point j ∈ S_i should either be in C or should be strongly connected to at least one point in C_i; (C2) C should be a maximal subset with the property that no two C-points are strongly connected to each other. Satisfying both (C1) and (C2) is sometimes impossible. We use (C2) as a guide while enforcing (C1).

Selecting the coarse-grid points. The first C-point is selected (the point with the largest "value", a measure of how many points it strongly influences); the neighbors of that C-point become F-points; the next C-point is selected (after updating the values); further F-points are selected; and so on, as in the sketch below.
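A simplified dense-matrix sketch of this selection (the greedy first pass; production AMG codes use sparse data structures and a second pass to enforce criterion (C1)):

```python
import numpy as np

def strong_connections(A, theta=0.25):
    """S[i] = { j != i : -A[i,j] >= theta * max_k(-A[i,k]) }."""
    n = A.shape[0]
    S = []
    for i in range(n):
        off = -A[i].copy()
        off[i] = 0.0
        m = off.max()
        S.append(set(np.nonzero(off >= theta * m)[0]) if m > 0 else set())
    return S

def cf_splitting(S):
    """Greedy pass: repeatedly make the point that strongly influences the
    most undecided points a C-point; points depending on it become F-points."""
    n = len(S)
    ST = [set() for _ in range(n)]        # S^T: who strongly depends on i
    for i, si in enumerate(S):
        for j in si:
            ST[j].add(i)
    weight = np.array([len(t) for t in ST], dtype=float)
    undecided, C, F = set(range(n)), set(), set()
    while undecided:
        i = max(undecided, key=lambda p: weight[p])
        C.add(i); undecided.discard(i)
        for j in ST[i] & undecided:       # neighbors become F-points
            F.add(j); undecided.discard(j)
            for k in S[j] & undecided:    # update values of their connections
                weight[k] += 1
    return C, F
```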

Examples: the Laplacian operator discretized by 5-pt finite differences, 9-pt finite elements (quads), and 9-pt finite elements (stretched quads). [Figure: the coarse grids selected by the algorithm for each of the three stencils.]

Prolongation is based on smooth error and strong connections (from M-matrices). Smooth error is characterized by r_i ≈ 0, that is,

a_{ii} e_i + Σ_{j≠i} a_{ij} e_j ≈ 0,  1 ≤ i ≤ N.

Prolongation: ( Pe )_i = e_i for i ∈ C, and ( Pe )_i = Σ_{k∈C_i} ω_{ik} e_k for i ∈ F.

Prolongation is based on smooth error and strong connections, continued. Sets: C_i, the strongly connected C-points of i; D_i^s, the strongly connected F-points; D_i^w, the weakly connected points. The definition of smooth error gives

a_{ii} e_i ≈ - Σ_{j∈C_i} a_{ij} e_j - Σ_{j∈D_i^s} a_{ij} e_j - Σ_{j∈D_i^w} a_{ij} e_j

(the strong C-points, the strong F-points, and the weak points, respectively).

Finally, the prolongation weights are defined. In the smooth-error relation, use e_j = e_i for the weak connections j ∈ D_i^w. For the strong F-points j ∈ D_i^s, use

e_j ≈ ( Σ_{k∈C_i} a_{jk} e_k ) / ( Σ_{k∈C_i} a_{jk} ),

yielding the prolongation weights

ω_{ik} = - ( a_{ik} + Σ_{j∈D_i^s} ( a_{ij} a_{jk} / Σ_{m∈C_i} a_{jm} ) ) / ( a_{ii} + Σ_{n∈D_i^w} a_{in} ).

AMG setup costs: a bad rap. Many geometric MG methods also need to compute prolongation and coarse-grid operators. The only additional expense in the AMG setup phase is the coarse-grid selection algorithm. The AMG setup phase is therefore only about 10-25% more expensive than in geometric MG, and may be considerably less than that!

AMG Performance: Sometimes a Success Story. AMG performs extremely well on the model problem (Poisson's equation, regular grid): an optimal convergence factor (e.g., 0.14) and scalability with increasing problem size. AMG appears to be both scalable and efficient on diffusion problems on unstructured grids (e.g., 0.1-0.3). AMG handles anisotropic diffusion coefficients on irregular grids reasonably well. AMG handles anisotropic operators on structured and unstructured grids relatively well (e.g., 0.35).

So, what could go wrong? Strong F-F connections: the weights become dependent on each other. For the F-point i, the value e_j is interpolated from k_1 and k_2, and is needed to make the interpolation weights for approximating e_i; for the F-point j, the value e_i is interpolated from k_3 and k_4, and is needed to make the interpolation weights for approximating e_j. It's an implicit system! [Figure: two strongly connected F-points i and j, with C-points k_1, k_2 ∈ C_i and k_3, k_4 ∈ C_j.]

Is there a fix? A Gauss-Seidel-like iterative approach to weight definition is implemented; usually two passes suffice. But does it work? Frequently, it does. Convergence factors for the Laplacian on stretched quadrilaterals:

h_x = 10 h_y:   theta   Standard   Iterative
                0.25    0.47       0.4
                0.5     0.4        0.4

h_x = 100 h_y:  theta   Standard   Iterative
                0.25    0.83       0.8
                0.5     0.53       0.3

AMG for systems. How can we do AMG on systems such as

[ A_11  A_12 ; A_21  A_22 ] [ u ; v ] = [ f ; g ]?

Naive approach: a block approach (block Gauss-Seidel, using scalar AMG to solve for each unknown at each cycle):

u ← A_11^{-1} ( f - A_12 v ),  v ← A_22^{-1} ( g - A_21 u ).

A great idea! Except that it doesn't work! (Relaxation does not evenly smooth the errors in both unknowns.)

AMG for systems: a solution. To solve the system problem, allow interaction between the unknowns at all levels:

A^k = [ A_11^k  A_12^k ; A_21^k  A_22^k ],   I_{k+1}^k = [ I_u  0 ; 0  I_v ].

This is called the "unknown" approach. Results for 2-D elasticity on a uniform quadrilateral mesh:

mesh spacing:        0.125   0.0625   0.03125   0.015625
convergence factor:  0.      0.35     0.4       0.44

How's it perform (vol I)? Regular grids, plain old vanilla problems. The Laplace operator:

Stencil            Convergence per cycle   Complexity   Time per cycle   Setup times
5-pt               0.054                   .            0.9              .63
5-pt skew          0.067                   .            0.7              .5
9-pt (-1, 8)       0.078                   .30          0.6              .83
9-pt (-1, -4, 20)  0.09                    .30          0.6              .83

Anisotropic Laplacian -ε u_xx - u_yy:

ε:                  0.001   0.01    0.1     0.5     1       2       10      100     1000
convergence/cycle:  0.084   0.093   0.058   0.069   0.056   0.079   0.087   0.093   0.083

How's it perform (vol II)? Structured meshes, rectangular domains: the 5-point Laplacian on regular rectangular grids. [Plot: convergence factor (y-axis) against number of nodes (x-axis, up to 1,000,000).]

How's it perform (vol III)? Unstructured meshes, rectangular domains: the Laplacian on random unstructured grids (regular triangulations with 15-20% of the nodes randomly collapsed into neighboring nodes). [Plot: convergence factor (y-axis) against number of nodes (x-axis, up to 60000).]

How's it perform (vol IV)? Isotropic diffusion -∇·( d(x, y) ∇u ) on structured and unstructured grids. [Bar chart: convergence factors on four grids: N = 16641 (structured), N = 66049 (structured), N = 3755 (unstructured), N = 5458 (unstructured).] Problems used ("a" means parameter c = 10, "b" means c = 1000): 6a, 6b, 7a, 7b, 8a, 8b, 9a, 9b, where

6: d(x, y) = 1.0 + c x y
7: d(x, y) = 1.0 + c x^2 for x > 0.5, 1.0 otherwise
8: d(x, y) = c for max{ |x - 0.5|, |y - 0.5| } < 0.25, 1.0 otherwise
9: d(x, y) = c for ( x - 0.5 )^2 + ( y - 0.5 )^2 < 0.125, 1.0 otherwise

How's it perform (vol V)? The Laplacian operator on unstructured grids. [Plot: convergence factor (y-axis) against number of gridpoints (x-axis, up to 350000).]

Outline: Nonlinear Problems; Neumann Boundary Conditions; Anisotropic Problems; Variable Mesh Problems; Variable Coefficient Problems; Algebraic Multigrid.

Multigrid Rules! We conclude with a few observations. We have barely scratched the surface of the myriad ways that multigrid has been, and can be, employed. With diligence and care, multigrid can be made to handle many types of complications in a robust, efficient manner. Further extensions to multigrid methodology are being sought by many people working on many different problems.