A Multigrid Tutorial part two


1 A Multigrid Tutorial, part two. William L. Briggs, Department of Mathematics, University of Colorado at Denver. Van Emden Henson, Center for Applied Scientific Computing, Lawrence Livermore National Laboratory. Steve McCormick, Department of Applied Mathematics, University of Colorado at Boulder. This work was performed, in part, under the auspices of the United States Department of Energy by University of California Lawrence Livermore National Laboratory under contract number W-7405-Eng-48. UCRLVG3689 Pt
2 Outline: Nonlinear Problems; Neumann Boundary Conditions; Anisotropic Problems; Variable Mesh Problems; Variable Coefficient Problems; Algebraic Multigrid
3 Nonlinear Problems How should we approach the nonlinear system A(u) = f, and can we use multigrid to solve such a system? A fundamental relation we've relied on, the residual equation Au − Av = f − Av, i.e., Ae = r, does not hold, since, if A(u) is a nonlinear operator, A(u) − A(v) ≠ A(e).
4 The Nonlinear Residual Equation We still base our development around the residual equation, now the nonlinear residual equation: A(u) = f, so A(u) − A(v) = f − A(v), i.e., A(u) − A(v) = r. How can we use this equation as the basis for a solution method?
5 Let's consider Newton's Method The best known and most important method for solving nonlinear equations! We wish to solve F(x) = 0. Expand F in a Taylor series about x: F(x + s) = F(x) + s F'(x) + (s^2/2) F''(ξ). Dropping higher-order terms, if x + s is a solution, 0 = F(x) + s F'(x), so s = −F(x)/F'(x). Hence, we develop the iteration x ← x − F(x)/F'(x).
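The scalar iteration above can be sketched in a few lines of Python (a minimal sketch; the function names are ours, not from the tutorial):

```python
def newton(F, dF, x, tol=1e-12, max_iter=50):
    """Scalar Newton iteration: x <- x - F(x)/F'(x)."""
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            break
        x -= fx / dF(x)
    return x

# Example: solve x^2 - 2 = 0 starting from x = 1.
root = newton(lambda x: x*x - 2.0, lambda x: 2.0*x, 1.0)
```

With a reasonable starting guess the convergence is quadratic, which is why a good initial guess (see the FMG discussion later) matters so much.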
6 Newton's method for systems We wish to solve the system A(u) = f. In component form this is A_1(u_1, ..., u_N) = f_1, ..., A_N(u_1, ..., u_N) = f_N. Expanding A(v + e) in a Taylor series about v: A(v + e) = A(v) + J(v) e + higher-order terms.
7 Newton for systems (cont.) where J(v) is the Jacobian matrix with entries (J(v))_{ij} = ∂A_i/∂u_j. If u = v + e is a solution then, dropping the higher-order terms, f = A(v) + J(v) e, so e = J(v)^{-1} (f − A(v)), leading to the iteration v ← v + J(v)^{-1} (f − A(v)).
8 Newton's method in terms of the residual equation The nonlinear residual equation is A(v + e) − A(v) = r. Expanding A(v + e) in a two-term Taylor series about v: A(v) + J(v) e − A(v) = r, so J(v) e = r. Newton's method is thus: r = f − A(v); v ← v + J(v)^{-1} r.
9 How does multigrid fit in? One obvious method is to use multigrid to solve J(v)e = r at each iteration step. This method is called Newton-multigrid and can be very effective. However, we would like to use multigrid ideas to treat the nonlinearity directly. Hence, we need to specialize the multigrid components (relaxation, grid transfers, coarsening) for the nonlinear case.
10 What is nonlinear relaxation? Several of the common relaxation schemes have nonlinear counterparts. For A(u) = f, we describe the nonlinear Gauss-Seidel iteration: for each j = 1, 2, ..., N, set the j-th component of the residual to zero and solve for v_j; that is, solve (A(v))_j = f_j. Equivalently: for each j = 1, 2, ..., N, find s in R such that (A(v + s ε_j))_j = f_j, where ε_j is the canonical j-th unit basis vector.
11 How is nonlinear Gauss-Seidel done? Each (A(v))_j = f_j is a nonlinear scalar equation for v_j. We use the scalar Newton's method to solve! Example: −u''(x) + u(x) e^{u(x)} = f(x) may be discretized so that (A(v))_j = f_j is given by (2v_j − v_{j−1} − v_{j+1})/h^2 + v_j e^{v_j} = f_j. The Newton iteration for v_j is then v_j ← v_j − ((2v_j − v_{j−1} − v_{j+1})/h^2 + v_j e^{v_j} − f_j) / (2/h^2 + (1 + v_j) e^{v_j}).
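One way to code this relaxation for the discretized model problem above (a sketch with our own names; the boundary entries hold the homogeneous Dirichlet values and are left untouched):

```python
import math

def nonlinear_gauss_seidel(v, f, h, sweeps=1):
    """Nonlinear Gauss-Seidel for (2v_j - v_{j-1} - v_{j+1})/h^2 + v_j e^{v_j} = f_j.
    Each interior unknown gets one scalar Newton step per sweep."""
    n = len(v)
    for _ in range(sweeps):
        for j in range(1, n - 1):
            r = (2.0*v[j] - v[j-1] - v[j+1]) / h**2 + v[j]*math.exp(v[j]) - f[j]
            dr = 2.0 / h**2 + (1.0 + v[j]) * math.exp(v[j])
            v[j] -= r / dr
    return v
```

Repeated sweeps drive the discrete residual toward zero, exactly as point Gauss-Seidel does in the linear case.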
12 How do we do coarsening for nonlinear multigrid? Recall the nonlinear residual equation A(v + e) − A(v) = r. In multigrid, we obtain an approximate solution v^h on the fine grid, then solve the residual equation on the coarse grid. The residual equation on Ω^{2h} appears as A^{2h}(v^{2h} + e^{2h}) − A^{2h}(v^{2h}) = r^{2h}.
13 Look at the coarse residual equation We must evaluate the quantities on Ω^{2h} in A^{2h}(v^{2h} + e^{2h}) − A^{2h}(v^{2h}) = r^{2h}. Given v^h, a fine-grid approximation, we restrict the residual to the coarse grid: r^{2h} = I_h^{2h}(f^h − A^h(v^h)). For v^{2h} we restrict v^h: v^{2h} = I_h^{2h} v^h. Thus, A^{2h}(I_h^{2h} v^h + e^{2h}) = A^{2h}(I_h^{2h} v^h) + I_h^{2h}(f^h − A^h(v^h)).
14 We've obtained a coarse-grid equation of the form A^{2h}(u^{2h}) = f^{2h}. Consider the coarse-grid residual equation A^{2h}(I_h^{2h} v^h + e^{2h}) = A^{2h}(I_h^{2h} v^h) + I_h^{2h}(f^h − A^h(v^h)): the left side contains the coarse-grid unknown u^{2h} = I_h^{2h} v^h + e^{2h}, and all quantities on the right side are known. We solve A^{2h}(u^{2h}) = f^{2h} for u^{2h} and obtain e^{2h} = u^{2h} − I_h^{2h} v^h. We then apply the correction: v^h ← v^h + I_{2h}^h e^{2h}.
15 FAS, the Full Approximation Scheme, two-grid form Perform nonlinear relaxation on A^h(u^h) = f^h to obtain an approximation v^h. Restrict the approximation and its residual: v^{2h} = I_h^{2h} v^h, r^{2h} = I_h^{2h}(f^h − A^h(v^h)). Solve the coarse-grid residual problem A^{2h}(u^{2h}) = A^{2h}(v^{2h}) + r^{2h}. Extract the coarse-grid error e^{2h} = u^{2h} − v^{2h}. Interpolate and apply the correction: v^h ← v^h + I_{2h}^h e^{2h}.
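The steps above can be sketched as a self-contained FAS two-grid cycle for the 1-d model problem −u'' + u e^u = f (all helper names are ours; full weighting at interior points, linear interpolation, and the coarse "solve" is simply many nonlinear sweeps):

```python
import math

def smooth(v, f, h, sweeps):
    """Nonlinear Gauss-Seidel (one scalar Newton step per point); Dirichlet ends."""
    for _ in range(sweeps):
        for j in range(1, len(v) - 1):
            r = (2*v[j] - v[j-1] - v[j+1]) / h**2 + v[j]*math.exp(v[j]) - f[j]
            v[j] -= r / (2/h**2 + (1 + v[j])*math.exp(v[j]))
    return v

def apply_A(v, h):
    """A(v) at interior points, zero at the ends."""
    n = len(v)
    return [0.0 if j in (0, n-1) else
            (2*v[j] - v[j-1] - v[j+1])/h**2 + v[j]*math.exp(v[j])
            for j in range(n)]

def restrict(v):
    """Full weighting at interior coarse points; endpoints injected."""
    return [v[0]] + [0.25*v[2*j-1] + 0.5*v[2*j] + 0.25*v[2*j+1]
                     for j in range(1, len(v)//2)] + [v[-1]]

def interpolate(vc):
    """Linear interpolation to the fine grid."""
    vf = [0.0]*(2*len(vc) - 1)
    for j in range(len(vc)):
        vf[2*j] = vc[j]
    for j in range(len(vc) - 1):
        vf[2*j+1] = 0.5*(vc[j] + vc[j+1])
    return vf

def fas_two_grid(v, f, h, nu1=2, nu2=2):
    """One FAS cycle: smooth, form A2(u2) = A2(v2) + r2, solve it,
    correct with the coarse-grid error, smooth again."""
    v = smooth(v, f, h, nu1)
    Av = apply_A(v, h)
    r2 = restrict([f[j] - Av[j] for j in range(len(v))])
    v2 = restrict(v)
    H = 2*h
    A2v2 = apply_A(v2, H)
    f2 = [A2v2[j] + r2[j] for j in range(len(v2))]
    u2 = smooth(list(v2), f2, H, sweeps=200)        # coarse "solve"
    e = interpolate([u2[j] - v2[j] for j in range(len(v2))])
    v = [v[j] + e[j] for j in range(len(v))]
    return smooth(v, f, h, nu2)
```

In practice the coarse problem is itself treated recursively by FAS rather than solved by brute-force sweeps; this flat version only illustrates the data flow of one cycle.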
16 A few observations about FAS If A is a linear operator, then FAS reduces directly to the linear two-grid correction scheme. A fixed point of FAS is an exact solution to the fine-grid problem, and an exact solution to the fine-grid problem is a fixed point of the FAS iteration.
17 A few observations about FAS, continued The FAS coarse-grid equation can be written as A^{2h}(u^{2h}) = f^{2h} + τ^{2h}, where τ^{2h} is the so-called tau correction. In general, since τ^{2h} ≠ 0, the solution of the FAS coarse-grid equation is not the same as the solution of the original coarse-grid problem A^{2h}(u^{2h}) = f^{2h}. The tau correction may be viewed as a way to alter the coarse-grid equations to enhance their approximation properties.
18 Still more observations about FAS FAS may be viewed as an inner and outer iteration: the outer iteration is the coarse-grid correction, the inner iteration the relaxation method. A true multilevel FAS process is recursive, using FAS to solve the nonlinear Ω^{2h} problem using Ω^{4h}. Hence, FAS is generally employed in a V- or W-cycling scheme.
19 And yet more observations about FAS! For linear problems we use FMG to obtain a good initial guess on the fine grid. Convergence of nonlinear iterations depends critically on having a good initial guess. When FMG is used for nonlinear problems, the interpolant I_{2h}^h u^{2h} is generally accurate enough to be in the basin of attraction of the fine-grid solver. Thus, one FMG cycle, whether FAS, Newton, or Newton-multigrid is used on each level, should provide a solution accurate to the level of discretization, unless the nonlinearity is extremely strong.
20 Intergrid transfers for FAS Generally speaking, the standard operators (linear interpolation, full weighting) work effectively in FAS schemes. In the case of strongly nonlinear problems, the use of higher-order interpolation (e.g., cubic interpolation) may be beneficial. For an FMG scheme, where I_{2h}^h u^{2h} is the interpolation of a coarse-grid solution to become a fine-grid initial guess, higher-order interpolation is always advised.
21 What is A^{2h} in FAS? As in the linear case, there are two basic possibilities: A^{2h}(u^{2h}) is determined by discretizing the nonlinear operator A(u) in the same fashion as was employed to obtain A^h(u^h), except that the coarser mesh spacing is used; or A^{2h}(u^{2h}) is determined from the Galerkin condition A^{2h}(u^{2h}) = I_h^{2h} A^h(I_{2h}^h u^{2h}), where the action of the Galerkin product can be captured in an implementable formula. The first method is usually easier, and more common.
22 Nonlinear problems: an example Consider −Δu(x, y) + γ u(x, y) e^{u(x, y)} = f(x, y) on the unit square [0,1] x [0,1] with homogeneous Dirichlet boundary conditions and a regular Cartesian grid. Suppose the exact solution is u(x, y) = (x^2 − x^3) sin(3πy).
23 Discretization of nonlinear example The operator can be written (sloppily) as (A^h(u^h))_{ij} = (4u_{ij} − u_{i−1,j} − u_{i+1,j} − u_{i,j−1} − u_{i,j+1})/h^2 + γ u_{ij} e^{u_{ij}} = f_{ij}. The relaxation is given by u_{ij} ← u_{ij} − ((A^h(u^h))_{ij} − f_{ij}) / (4/h^2 + γ(1 + u_{ij}) e^{u_{ij}}).
24 FAS and Newton's method on −Δu + γ u e^u = f. [Table: convergence factors and number of FAS cycles, and convergence factors and number of Newton iterations, for several values of γ.]
25 Newton, Newton-MG, and FAS on −Δu + γ u e^u = f. Newton uses an exact solve; Newton-MG is inexact Newton with a fixed number of inner V-cycles on the Jacobian problem; the overall stopping criterion is a fixed tolerance on ||r||. [Table: outer iterations, inner iterations, and megaflops for Newton, Newton-MG with varying numbers of inner cycles, and FAS.]
26 Comparing FMG-FAS and FMG-Newton on −Δu + γ u e^u = f. We will do one FMG cycle using a single FAS V-cycle as the solver at each new level. We then follow that with as many FAS V-cycles as necessary to drive ||r|| below the stopping tolerance. Next, we will do one FMG cycle using a Newton-multigrid step at each new level (with a single linear V-cycle as the Jacobian solver). We then follow that with as many Newton-multigrid steps as necessary to drive ||r|| below the stopping tolerance.
27 Comparing FMG-FAS and FMG-Newton. [Table: residual norm ||r||, error norm ||e||, and cumulative megaflops after each cycle, for the FMG-FAS V-cycle sequence and for the FMG-Newton / Newton-MG sequence.]
28 Outline: Nonlinear Problems; Neumann Boundary Conditions; Anisotropic Problems; Variable Mesh Problems; Variable Coefficient Problems; Algebraic Multigrid
29 Neumann Boundary Conditions Consider the (1-d) problem −u''(x) = f(x), 0 < x < 1, u'(0) = u'(1) = 0. We discretize this on the interval [0,1] with grid spacing h = 1/(N+1), at points x_j = jh, j = 0, 1, ..., N+1. We extend the interval with two ghost points: x_{−1}, x_0, x_1, ..., x_N, x_{N+1}, x_{N+2}.
30 We use central differences We approximate the derivatives with differences, using the ghost points: u''(x_j) ≈ (u_{j−1} − 2u_j + u_{j+1})/h^2, u'(0) ≈ (u_1 − u_{−1})/(2h), u'(1) ≈ (u_{N+2} − u_N)/(2h). This gives the system (−u_{j−1} + 2u_j − u_{j+1})/h^2 = f_j, 0 ≤ j ≤ N+1, together with (u_1 − u_{−1})/(2h) = 0 and (u_{N+2} − u_N)/(2h) = 0.
31 Eliminating the ghost points Use the boundary conditions to eliminate u_{−1} and u_{N+2}: u_{−1} = u_1 and u_{N+2} = u_N. Eliminating the ghost points in the j = 0 and j = N+1 equations gives the (N+2) x (N+2) system of equations: (−u_{j−1} + 2u_j − u_{j+1})/h^2 = f_j, 1 ≤ j ≤ N; (2u_0 − 2u_1)/h^2 = f_0; (2u_{N+1} − 2u_N)/h^2 = f_{N+1}.
32 We write the system in matrix form We can write A^h u^h = f^h, where A^h = (1/h^2) * [ 2 −2 ; −1 2 −1 ; ... ; −1 2 −1 ; −2 2 ]. Note that A^h is (N+2) x (N+2), nonsymmetric, and the system involves the unknowns u_0 and u_{N+1} at the boundaries.
33 We must consider a compatibility condition The problem −u''(x) = f(x), for 0 < x < 1 and with u'(0) = u'(1) = 0, is not well-posed! If u(x) is a solution, so is u(x) + c for any constant c. We cannot be certain a solution exists. If one does, it must satisfy ∫_0^1 −u''(x) dx = ∫_0^1 f(x) dx, i.e., −[u'(1) − u'(0)] = ∫_0^1 f(x) dx, i.e., 0 = ∫_0^1 f(x) dx. This integral compatibility condition is necessary! If f(x) doesn't satisfy it, there is no solution!
34 The well-posed system The compatibility condition is necessary for a solution to exist. In general, it is also sufficient: it can be proven that −d^2/dx^2 is a well-behaved operator on the space of functions u(x) that have zero mean. Thus we may conclude that if f(x) satisfies the compatibility condition, this problem is well-posed: −u''(x) = f(x), 0 < x < 1; u'(0) = u'(1) = 0; ∫_0^1 u(x) dx = 0. The last condition says that of all solutions u(x) + c we choose the one with zero mean.
35 The discrete problem is not well-posed Since all row sums of A^h are zero, 1 ∈ NS(A^h). Putting A^h into row-echelon form shows that dim(NS(A^h)) = 1, hence NS(A^h) = span(1). By the Fundamental Theorem of Linear Algebra, A^h u^h = f^h has a solution if and only if f^h ⊥ NS((A^h)^T). It is easy to show that NS((A^h)^T) = span((1/2, 1, 1, ..., 1, 1/2)^T). Thus, A^h u^h = f^h has a solution if and only if f^h ⊥ (1/2, 1, 1, ..., 1, 1/2)^T, that is, f_0/2 + f_1 + ... + f_N + f_{N+1}/2 = 0.
36 We have two issues to consider Solvability: a solution exists iff f^h ⊥ NS((A^h)^T). Uniqueness: if u^h solves A^h u^h = f^h, so does u^h + c1. Note that if A^h = (A^h)^T, then NS(A^h) = NS((A^h)^T) and solvability and uniqueness can be handled together. This is easily done: multiply the first and last equations by 1/2, giving A^h = (1/h^2) * [ 1 −1 ; −1 2 −1 ; ... ; −1 2 −1 ; −1 1 ].
37 The new system is symmetric We have the symmetric system A^h u^h = f^h with unknowns (u_0, u_1, ..., u_N, u_{N+1})^T and right-hand side (f_0/2, f_1, ..., f_N, f_{N+1}/2)^T. Solvability is guaranteed by ensuring that f^h is orthogonal to the constant vector 1: <f^h, 1> = Σ_j f_j^h = 0.
38 The well-posed discrete system The (N+3) x (N+2) system is the symmetric system A^h u^h = f^h together with the constraint Σ_{j=0}^{N+1} u_j = 0 (choose the zero-mean solution); or, more simply, A^h u^h = f^h, <u^h, 1> = 0.
39 Multigrid for the Neumann Problem We must have the interval endpoints on all grids. Relaxation is performed at all points, including the endpoints: v_0 ← v_1 + (h^2/2) f_0; v_j ← (v_{j−1} + v_{j+1} + h^2 f_j)/2, 1 ≤ j ≤ N; v_{N+1} ← v_N + (h^2/2) f_{N+1}. We add a global Gram-Schmidt step after relaxation on each level to enforce the zero-mean condition: v^h ← v^h − (<v^h, 1^h>/<1^h, 1^h>) 1^h.
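The Gram-Schmidt step is just a projection against the constant vector; a minimal sketch (our own function name):

```python
def project_zero_mean(v):
    """v <- v - (<v,1>/<1,1>) 1, i.e. subtract the mean component."""
    mean = sum(v) / len(v)
    return [x - mean for x in v]
```

Applied after every relaxation sweep, this keeps the iterate in the zero-mean subspace where the problem is well-posed.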
40 Interpolation must include the endpoints We use linear interpolation: (I_{2h}^h v^{2h})_{2j} = v^{2h}_j and (I_{2h}^h v^{2h})_{2j+1} = (v^{2h}_j + v^{2h}_{j+1})/2, so each row of I_{2h}^h is either a single 1 (coincident points, including both endpoints) or the pair (1/2, 1/2) (in-between points).
41 Restriction also treats the endpoints For restriction, we use I_h^{2h} = (1/2)(I_{2h}^h)^T, yielding the values f^{2h}_0 = (2f^h_0 + f^h_1)/4; f^{2h}_j = (f^h_{2j−1} + 2f^h_{2j} + f^h_{2j+1})/4 at interior coarse points; and the mirror-image formula (f^h_N + 2f^h_{N+1})/4 at the right endpoint.
42 The coarse-grid operator We compute the coarse-grid operator using the Galerkin condition A^{2h} = I_h^{2h} A^h I_{2h}^h. Applying this product to the coarse-grid unit vectors ε_j shows that A^{2h} has the same form as A^h with h replaced by 2h: A^{2h} = (1/(2h)^2) * [ 1 −1 ; −1 2 −1 ; ... ; −1 1 ].
43 Coarse-grid solvability Assuming f^h satisfies <f^h, 1^h> = 0, the solvability condition, we can show that theoretically the coarse-grid problem A^{2h} u^{2h} = I_h^{2h}(f^h − A^h v^h) is also solvable. To be certain numerical round-off does not perturb solvability, we incorporate a Gram-Schmidt-like step each time a new right-hand side f^{2h} is generated for the coarse grid: f^{2h} ← f^{2h} − (<f^{2h}, 1^{2h}>/<1^{2h}, 1^{2h}>) 1^{2h}.
44 Neumann Problem: an example Consider the problem −u''(x) = 2x − 1, 0 < x < 1, u'(0) = u'(1) = 0, which has u(x) = x^2/2 − x^3/3 + c as a solution for any c (c = −1/12 gives the zero-mean solution). [Table: grid size N, final residual and error norms, average convergence factor, and number of cycles.]
45 Outline: Nonlinear Problems; Neumann Boundary Conditions; Anisotropic Problems; Variable Mesh Problems; Variable Coefficient Problems; Algebraic Multigrid
46 Anisotropic Problems All problems considered thus far have had −1 as the off-diagonal entries. We consider two situations in which the matrix has two different constants on the off-diagonals. These situations arise when: the (2-d) differential equation has constant, but different, coefficients for the derivatives in the coordinate directions; or the discretization has constant, but different, mesh spacing in the different coordinate directions.
47 We consider two types of anisotropy Different coefficients on the derivatives: −u_xx − α u_yy = f, discretized on a uniform grid with spacing h. Constant, but different, mesh spacings h_x and h_y, which lead to the same stencil with α = (h_x/h_y)^2.
48 Both problems lead to the same stencil The difference equations are (−u_{j−1,k} + 2u_{j,k} − u_{j+1,k})/h^2 + α(−u_{j,k−1} + 2u_{j,k} − u_{j,k+1})/h^2 = f_{j,k}, i.e., A^h = (1/h^2) * [ . −α . ; −1 2+2α −1 ; . −α . ].
49 Why standard multigrid fails Note that A^h has weak connections in the y-direction. MG convergence factors degrade as α gets small, with poor performance already at modest anisotropy. Consider α → 0: A^h → (1/h^2) * [ . 0 . ; −1 2 −1 ; . 0 . ]. This is a collection of disconnected 1-d problems! Point relaxation will smooth oscillatory errors in the x-direction (strong connections), but with no connections in the y-direction the errors in that direction will generally be random, and no point relaxation will have the smoothing property in the y-direction.
50 We analyze weighted Jacobi The eigenvalues of the weighted Jacobi iteration matrix for this problem are λ_{i,l} = 1 − (2ω/(1+α))(sin^2(iπ/2N) + α sin^2(lπ/2N)), 1 ≤ i, l ≤ N−1. For small α, λ_{i,l} ≈ 1 whenever i is small, regardless of l: relaxation leaves modes that are smooth in x essentially untouched, even those that are oscillatory in y.
51 Two strategies for anisotropy Semicoarsening: because we expect MG-like convergence for the 1-d problems along lines of constant y, we should coarsen the grid in the x-direction, but not in the y-direction. Line relaxation: because the equations are strongly coupled in the x-direction, it may be advantageous to solve simultaneously for entire lines of unknowns in the x-direction (along lines of constant y).
52 Semicoarsening with point relaxation Point relaxation on A^h = (1/h^2) * [ . −α . ; −1 2+2α −1 ; . −α . ] smooths in the x-direction. Coarsen by removing every other y-line; we do not coarsen along the remaining y-lines. Semicoarsening is not as fast as full coarsening: the number of points on Ω^{2h} is about half the number of points on Ω^h, instead of the usual one-fourth.
53 Interpolation with semicoarsening We interpolate in the 1-dimensional way along each line of constant y. The coarse-grid correction equations are v^h_{2j,k} ← v^h_{2j,k} + e^{2h}_{j,k} and v^h_{2j+1,k} ← v^h_{2j+1,k} + (e^{2h}_{j,k} + e^{2h}_{j+1,k})/2.
54 Line relaxation with full coarsening The other approach to this problem is to do full coarsening, but to relax entire lines (constant y) of variables simultaneously. Write A^h in block form as A^h = [ D −cI ; −cI D −cI ; ... ; −cI D ], where c = α/h^2 and D = (1/h^2) tridiag(−1, 2+2α, −1).
55 Line relaxation One sweep of line relaxation consists of solving a tridiagonal system for each line of constant y. The k-th such system has the form D v_k = g_k, where v_k is the k-th subvector of v^h with entries (v_k)_j = v_{j,k}, and the k-th right-hand-side subvector is (g_k)_j = f_{j,k} + (α/h^2)(v_{j,k−1} + v_{j,k+1}). Because D is tridiagonal, the k-th system can be solved very efficiently.
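Each line solve is a tridiagonal system, which the Thomas algorithm handles in O(N) work; a sketch (our own naming: sub-diagonal a, diagonal b, super-diagonal c):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system by forward elimination and back substitution.
    a[0] and c[-1] are unused padding entries."""
    n = len(d)
    cp = [0.0]*n
    dp = [0.0]*n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i]*cp[i-1]
        cp[i] = c[i]/m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i]*dp[i-1]) / m
    x = [0.0]*n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i]*x[i+1]
    return x
```

One sweep of line relaxation is then one such solve per line of constant y, with the neighboring lines contributing to the right-hand side as on the slide.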
56 Why line relaxation works The eigenvalues of the weighted block Jacobi iteration matrix are λ_{i,l} = 1 − ω (sin^2(iπ/2N) + α sin^2(lπ/2N)) / (sin^2(iπ/2N) + α/2), 1 ≤ i, l ≤ N−1. For any α, modes that are oscillatory in either direction are damped, so line relaxation retains the smoothing property.
57 Semicoarsening with line relaxation We might not know the direction of weak coupling, or it might vary. Suppose we want a method that can handle either A^h = (1/h^2) * [ . −α . ; −1 2+2α −1 ; . −α . ] or A^h = (1/h^2) * [ . −1 . ; −α 2+2α −α ; . −1 . ]. We could use semicoarsening in the x-direction to handle the first and line relaxation in the y-direction to take care of the second.
58 Semicoarsening with line relaxation [Figure: the original grid viewed as a stack of pencils.] Line relaxation is used to solve the problem along each pencil. Coarsening is done by deleting every other pencil.
59 An anisotropic example Consider −u_xx − α u_yy = f with u = 0 on the boundaries of the unit square, and stencil given by A^h = (1/h^2) * [ . −α . ; −1 2+2α −1 ; . −α . ]. Suppose f(x, y) = 2(y − y^2) + 2α(x − x^2), so the exact solution is given by u(x, y) = (y − y^2)(x − x^2). Observe that if α is small, the x-direction dominates, while if α is large, the y-direction dominates.
60 What is smooth error? Consider small α and suppose point Gauss-Seidel is applied to a random initial guess. After 50 sweeps the error is smooth in the strongly coupled x-direction but remains oscillatory in the y-direction. [Figure: error along a line of constant x; error along a line of constant y.]
61 We experiment with 3 methods: (1) standard V-cycling, with point Gauss-Seidel relaxation, full coarsening, and linear interpolation; (2) semicoarsening in the x-direction — coarse and fine grids have the same number of points in the y-direction, 1-d full weighting and linear interpolation are used in the x-direction, and there is no y-coupling in the intergrid transfers; (3) semicoarsening in the x-direction combined with line relaxation in the y-direction, with 1-d full weighting and interpolation.
62 With semicoarsening, the operator must change To account for unequal mesh spacing, the residual and relaxation operators must use a modified stencil: A = [ . −α/h_y^2 . ; −1/h_x^2 2/h_x^2 + 2α/h_y^2 −1/h_x^2 ; . −α/h_y^2 . ]. Note that as grids become coarser, h_x grows while h_y remains constant.
63 How do the 3 methods work for various values of α? [Table: asymptotic convergence factors for V-cycles, semicoarsening in x, and semicoarsening with line relaxation, for α ranging from y-direction strong to x-direction strong.] Note: semicoarsening in x works well when α is small but degrades noticeably as α approaches 1.
64 A semicoarsening subtlety Suppose α is small, so that semicoarsening in x is used. As we progress to coarser grids, 1/h_x^2 gets small while α/h_y^2 remains constant. If, on some coarse grid, 1/h_x^2 becomes comparable to α/h_y^2, the problem effectively becomes recoupled in the y-direction. Continued semicoarsening can then produce artificial anisotropy, strong in the y-direction. When this occurs, it is best to stop semicoarsening and continue with full coarsening on any further coarse grids.
65 Outline: Nonlinear Problems; Neumann Boundary Conditions; Anisotropic Problems; Variable Mesh Problems; Variable Coefficient Problems; Algebraic Multigrid
66 Variable Mesh Problems Nonuniform grids are commonly used to accommodate irregularities in problem domains. Consider how we might approach the 1-d problem −u''(x) = f(x), 0 < x < 1, u(0) = u(1) = 0, posed on the grid x_0 = 0 < x_1 < ... < x_j < x_{j+1} < ... < x_{N+1} = 1.
67 We need some notation for the mesh spacing Let N be a positive integer. We define the spacing interval between x_j and x_{j+1}: h_{j+1/2} = x_{j+1} − x_j, j = 0, 1, ..., N.
68 We define the discrete differential operator Using second-order finite differences (and plugging through a mess of algebra!) we obtain this discrete representation for the problem: −α_j u_{j−1} + (α_j + β_j) u_j − β_j u_{j+1} = f_j, 1 ≤ j ≤ N, u_0 = u_{N+1} = 0, where α_j = 2/(h_{j−1/2}(h_{j−1/2} + h_{j+1/2})) and β_j = 2/(h_{j+1/2}(h_{j−1/2} + h_{j+1/2})).
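The coefficients α_j and β_j can be checked directly: the stencil reproduces −u'' exactly for quadratic u on any spacing (a sketch with our own names):

```python
def variable_mesh_apply(x, u, j):
    """Apply -alpha_j*u_{j-1} + (alpha_j + beta_j)*u_j - beta_j*u_{j+1}
    on a nonuniform grid x, approximating -u''(x_j)."""
    hl = x[j] - x[j-1]              # h_{j-1/2}
    hr = x[j+1] - x[j]              # h_{j+1/2}
    alpha = 2.0 / (hl * (hl + hr))
    beta = 2.0 / (hr * (hl + hr))
    return -alpha*u[j-1] + (alpha + beta)*u[j] - beta*u[j+1]

# On u(x) = x^2 the result is -u'' = -2, whatever the spacing.
x = [0.0, 0.3, 0.4, 0.75, 1.0]
u = [t*t for t in x]
vals = [variable_mesh_apply(x, u, j) for j in range(1, 4)]
```

Expanding the stencil in a Taylor series shows the linear terms cancel by construction, which is why the result is exact for quadratics on arbitrary spacing.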
69 We modify standard multigrid to accommodate variable spacing We choose every second fine-grid point as a coarse-grid point. We use linear interpolation, modified for the spacing: if v^h = I_{2h}^h v^{2h}, then v^h_{2j} = v^{2h}_j and v^h_{2j+1} = (h_{2j+3/2} v^{2h}_j + h_{2j+1/2} v^{2h}_{j+1}) / (h_{2j+1/2} + h_{2j+3/2}).
70 We use the variational properties to derive restriction and the coarse-grid operator: A^{2h} = I_h^{2h} A^h I_{2h}^h, I_h^{2h} = c (I_{2h}^h)^T. This produces a stencil on Ω^{2h} that is similar, but not identical, to the fine-grid stencil. If the resulting system is scaled by the local spacing sum (h_{j−1/2} + h_{j+1/2}), then the Galerkin product is the same as the fine-grid stencil. For 2-d problems this approach can be generalized readily to logically rectangular grids. However, for irregular grids that are not logically rectangular, AMG is a better choice.
71 Outline: Nonlinear Problems; Neumann Boundary Conditions; Anisotropic Problems; Variable Mesh Problems; Variable Coefficient Problems; Algebraic Multigrid
72 Variable coefficient problems A common difficulty is the variable coefficient problem, given in 1-d by −(a(x) u'(x))' = f(x), 0 < x < 1, u(0) = u(1) = 0, where a(x) is a positive function on [0,1]. We seek to develop a conservative, or self-adjoint, method for discretizing this problem. Assume we have available to us the values of a(x) at the midpoints of the grid: a_{j+1/2} = a(x_{j+1/2}).
73 We discretize using central differences We can use second-order differences to approximate the derivatives. To use a grid spacing of h/2, we evaluate a(x)u'(x) at points midway between the grid points: (a(x)u'(x))'|_{x_j} ≈ ((au')|_{x_{j+1/2}} − (au')|_{x_{j−1/2}})/h + O(h^2).
74 We discretize using central differences To evaluate (au')|_{x_{j+1/2}} we must sample a(x) at the point x_{j+1/2} and use second-order differences: (au')|_{x_{j+1/2}} ≈ a_{j+1/2}(u_{j+1} − u_j)/h and (au')|_{x_{j−1/2}} ≈ a_{j−1/2}(u_j − u_{j−1})/h, where a_{j±1/2} = a(x_{j±1/2}).
75 The basic stencil is given We combine the differences for u' and for (au')' to obtain the operator −(a(x_j) u'(x_j))' ≈ −(a_{j+1/2}(u_{j+1} − u_j) − a_{j−1/2}(u_j − u_{j−1}))/h^2, and the problem becomes, for 1 ≤ j ≤ N: (−a_{j−1/2} u_{j−1} + (a_{j−1/2} + a_{j+1/2}) u_j − a_{j+1/2} u_{j+1})/h^2 = f_j, u_0 = u_{N+1} = 0.
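A sketch of the stencil application (our own names), checked against a(x) ≡ 1 and u = x(1 − x), for which −(au')' = 2:

```python
def var_coeff_apply(u, a_mid, h, j):
    """(-a_{j-1/2} u_{j-1} + (a_{j-1/2}+a_{j+1/2}) u_j - a_{j+1/2} u_{j+1}) / h^2,
    where a_mid[j] stores the midpoint value a_{j+1/2}."""
    return (-a_mid[j-1]*u[j-1]
            + (a_mid[j-1] + a_mid[j])*u[j]
            - a_mid[j]*u[j+1]) / h**2

n = 8
h = 1.0 / n
u = [x*(1.0 - x) for x in (i*h for i in range(n + 1))]
a_mid = [1.0]*n                     # a(x) = 1 at every midpoint
vals = [var_coeff_apply(u, a_mid, h, j) for j in range(1, n)]
```

With constant a the stencil reduces to the usual second difference, so applying it to this quadratic returns the constant 2.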
76 Coarsening the variable coefficient problem A reasonable approach is to use a standard multigrid algorithm with linear interpolation, full weighting, and the analogous coarse-grid stencil (1/(2h)^2)(−a^{2h}_{j−1/2} u_{j−1} + (a^{2h}_{j−1/2} + a^{2h}_{j+1/2}) u_j − a^{2h}_{j+1/2} u_{j+1}), where the coarse coefficients are obtained by averaging the neighboring fine-grid midpoint values, a^{2h}_{j+1/2} = (a^h_{2j+1/2} + a^h_{2j+3/2})/2. The same stencil can be obtained via the Galerkin relation.
77 Standard multigrid degrades if a(x) is highly variable It can be shown that the variable coefficient discretization of −(a(x)u')' = f is equivalent to a standard discretization of the Poisson problem −u'' = f on a certain variable mesh, so standard multigrid here behaves like multigrid with simple averaging on that variable mesh. But simple averaging won't accurately represent smooth components if a grid point is close to one neighbor but far from the other.
78 One remedy is to apply operator interpolation Assume that relaxation does not change smooth error, so the residual is approximately zero. Applying A^h at x_{j+1} yields −a_{j+1/2} e_j + (a_{j+1/2} + a_{j+3/2}) e_{j+1} − a_{j+3/2} e_{j+2} ≈ 0. Solving for e_{j+1}: e_{j+1} = (a_{j+1/2} e_j + a_{j+3/2} e_{j+2}) / (a_{j+1/2} + a_{j+3/2}).
79 Thus, the operator-induced interpolation is v^h_{2j} = v^{2h}_j and v^h_{2j+1} = (a_{2j+1/2} v^{2h}_j + a_{2j+3/2} v^{2h}_{j+1}) / (a_{2j+1/2} + a_{2j+3/2}). And, as usual, the restriction and coarse-grid operators are defined by the Galerkin relations A^{2h} = I_h^{2h} A^h I_{2h}^h, I_h^{2h} = c (I_{2h}^h)^T.
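The coefficient-weighted interpolation in code (a sketch with our own names; with constant a it reduces to ordinary linear interpolation):

```python
def operator_interpolate(vc, a_mid):
    """Operator-induced interpolation: fine point 2j+1 receives the
    a-weighted average of its coarse neighbors. a_mid[j] stores the
    fine-grid midpoint value a_{j+1/2}."""
    vf = [0.0]*(2*len(vc) - 1)
    for j in range(len(vc)):
        vf[2*j] = vc[j]                        # coincident points copied
    for j in range(len(vc) - 1):
        al, ar = a_mid[2*j], a_mid[2*j + 1]    # a_{2j+1/2}, a_{2j+3/2}
        vf[2*j + 1] = (al*vc[j] + ar*vc[j+1]) / (al + ar)
    return vf
```

Where a jumps sharply, the weights shift toward the side with the larger coefficient, which is exactly what the smooth-error argument on the previous slide demands.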
80 A variable coefficient example We use V-cycles, full weighting, and linear interpolation, with a smoothly varying coefficient a(x) built from ρ sin(kπx) and a randomly varying coefficient of comparable amplitude. [Table: convergence factors for a range of amplitudes ρ and wavenumbers k.]
81 Outline: Nonlinear Problems; Neumann Boundary Conditions; Anisotropic Problems; Variable Mesh Problems; Variable Coefficient Problems; Algebraic Multigrid
82 Algebraic multigrid: for unstructured grids AMG automatically defines the coarse grid. AMG has two distinct phases: a setup phase (define the MG components) and a solution phase (perform MG cycles). The AMG approach is the opposite of geometric MG: fix relaxation (point Gauss-Seidel); choose the coarse grids and prolongation P so that error not reduced by relaxation is in range(P); define the other MG components so that coarse-grid correction eliminates error in range(P) (i.e., use the Galerkin principle). (In contrast, geometric MG fixes the coarse grids, then defines suitable operators and smoothers.)
83 AMG has two phases: Setup phase: select coarse grids Ω^{m+1}, m = 1, 2, ...; define interpolation I_{m+1}^m, m = 1, 2, ...; define restriction and coarse-grid operators I_m^{m+1} = (I_{m+1}^m)^T and A^{m+1} = I_m^{m+1} A^m I_{m+1}^m. Solve phase: standard multigrid operations, e.g., V-cycle, W-cycle, FMG, FAS, etc. Note: only the selection of coarse grids does not parallelize well using existing techniques!
84 AMG fundamental concept: smooth error = small residuals Consider the error recurrence of an iterative method, e_{k+1} = (I − Q^{-1}A) e_k. Error that is slow to converge satisfies (I − Q^{-1}A) e ≈ e, i.e., Q^{-1}A e ≈ 0, i.e., r ≈ 0. More precisely, it can be shown that smooth error satisfies (1) ||r||_{D^{-1}} << ||e||_A.
85 AMG uses strong connections to determine the MG components It is easy to show from (1) that smooth error satisfies (2) <A e, e> << <D e, e>. Define: i is strongly connected to j if −a_{ij} ≥ θ max_{k≠i}(−a_{ik}), 0 < θ ≤ 1. For M-matrices, we have from (2) that Σ_{j≠i} (−a_{ij}/a_{ii}) ((e_i − e_j)/e_i)^2 << 1 (on average), implying that smooth error varies slowly in the direction of strong connections.
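The strong-connection test is purely algebraic, so it can be computed from the matrix entries alone; a sketch for a dense matrix stored as nested lists (our own helper name):

```python
def strong_neighbors(A, theta=0.25):
    """S_i = { j != i : -a_ij >= theta * max_k(-a_ik) }, the classical
    strong-connection test for M-matrix-like A (negative off-diagonals)."""
    n = len(A)
    S = []
    for i in range(n):
        off = [-A[i][k] for k in range(n) if k != i]
        m = max(off) if off else 0.0
        S.append({j for j in range(n) if j != i and -A[i][j] >= theta*m > 0})
    return S
```

For the 1-d Laplacian every off-diagonal neighbor is strong; anisotropy or coefficient jumps would prune the weak directions out of S_i.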
86 Some useful definitions The set of strong connections of a variable u_i, that is, the variables upon whose values the value of u_i depends, is defined as S_i = { j : −a_{ij} ≥ θ max_{k≠i}(−a_{ik}) }. The set of points strongly connected to a variable u_i is denoted S_i^T = { j : i ∈ S_j }. The set of coarse-grid variables is denoted C. The set of fine-grid variables is denoted F. The set of coarse-grid variables used to interpolate the value of the fine-grid variable u_i is denoted C_i.
87 Choosing the coarse grid Two criteria: (C1) for each i ∈ F, each point j ∈ S_i should either be in C or should be strongly connected to at least one point in C_i; (C2) C should be a maximal subset with the property that no two C-points are strongly connected to each other. Satisfying both (C1) and (C2) is sometimes impossible. We use (C2) as a guide while enforcing (C1).
88 Selecting the coarse-grid points A C-point is selected (the point with the largest measure, i.e., the one that strongly influences the most points). Neighbors of the C-point become F-points. The next C-point is selected (after updating the measures). Further F-points are selected, and so on.
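A first-pass sketch of this greedy selection (our own simplification of the classical coloring algorithm; S maps each point to the points it strongly depends on, ST to the points it strongly influences):

```python
def select_coarse(S, ST):
    """Greedy first-pass C/F splitting: repeatedly pick the undecided point
    that strongly influences the most points, make it C, and make the points
    it influences F, boosting the measure of their other strong neighbors."""
    n = len(S)
    lam = [len(ST[i]) for i in range(n)]     # initial measures
    state = ['U']*n                           # U = undecided
    while 'U' in state:
        i = max((j for j in range(n) if state[j] == 'U'), key=lambda j: lam[j])
        state[i] = 'C'
        for j in ST[i]:
            if state[j] == 'U':
                state[j] = 'F'
                for k in S[j]:                # new F-point favors its other
                    if state[k] == 'U':       # strong neighbors becoming C
                        lam[k] += 1
    return state
```

On a 1-d chain this reproduces the familiar alternating C/F pattern, i.e., standard coarsening by a factor of two.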
89 Examples: Laplacian operator. [Figure: coarse-point selections for 5-pt FD, 9-pt FE (quads), and 9-pt FE (stretched quads) discretizations.]
90 Prolongation is based on smooth error and strong connections (for M-matrices) Smooth error is characterized by a small residual: r_i ≈ 0, i.e., a_{ii} e_i ≈ −Σ_{j≠i} a_{ij} e_j, 1 ≤ i ≤ N. Prolongation: (Pe)_i = e_i if i ∈ C; (Pe)_i = Σ_{k∈C_i} ω_{ik} e_k if i ∈ F.
91 Prolongation is based on smooth error and strong connections (for M-matrices) Sets: C_i — strongly connected C-points; D_i^s — strongly connected F-points; D_i^w — weakly connected points. The definition of smooth error then gives a_{ii} e_i ≈ −Σ_{j∈C_i} a_{ij} e_j − Σ_{j∈D_i^s} a_{ij} e_j − Σ_{j∈D_i^w} a_{ij} e_j (strong C, strong F, weak points).
92 Finally, the prolongation weights are defined In the smooth-error relation, use e_j = e_i for the weak connections. For the strong F-points, use e_j = Σ_{k∈C_i} a_{jk} e_k / Σ_{k∈C_i} a_{jk}. This yields the prolongation weights ω_{ik} = −(a_{ik} + Σ_{j∈D_i^s} (a_{ij} a_{jk} / Σ_{m∈C_i} a_{jm})) / (a_{ii} + Σ_{n∈D_i^w} a_{in}).
93 AMG setup costs: a bad rap Many geometric MG methods also need to compute prolongation and coarse-grid operators. The only additional expense in the AMG setup phase is the coarse-grid selection algorithm. The AMG setup phase is therefore only modestly more expensive than in geometric MG, and may be considerably less than that!
94 AMG performance: sometimes a success story AMG performs extremely well on the model problem (Poisson's equation, regular grid): optimal convergence factors and scalability with increasing problem size. AMG appears to be both scalable and efficient on diffusion problems on unstructured grids. AMG handles anisotropic diffusion coefficients on irregular grids reasonably well. AMG handles anisotropic operators on structured and unstructured grids relatively well (convergence factors around 0.35).
95 So, what could go wrong? Strong F-F connections: the weights become dependent on each other. For point i, the value e_j is interpolated from points k_1, k_2 ∈ C_j and is needed to form the interpolation weights for approximating e_i. For point j, the value e_i is interpolated from points k_3, k_4 ∈ C_i and is needed to form the interpolation weights for approximating e_j. It's an implicit system!
96 Is there a fix? A Gauss-Seidel-like iterative approach to the weight definition is implemented; usually two passes suffice. But does it work? Frequently, it does. [Table: convergence factors for the Laplacian on stretched quadrilaterals, standard vs. iterative weight definition, for several stretching ratios h_x : h_y.]
More informationIndexing by Latent Semantic Analysis. Scott Deerwester Graduate Library School University of Chicago Chicago, IL 60637
Indexing by Latent Semantic Analysis Scott Deerwester Graduate Library School University of Chicago Chicago, IL 60637 Susan T. Dumais George W. Furnas Thomas K. Landauer Bell Communications Research 435
More informationON THE DISTRIBUTION OF SPACINGS BETWEEN ZEROS OF THE ZETA FUNCTION. A. M. Odlyzko AT&T Bell Laboratories Murray Hill, New Jersey ABSTRACT
ON THE DISTRIBUTION OF SPACINGS BETWEEN ZEROS OF THE ZETA FUNCTION A. M. Odlyzko AT&T Bell Laboratories Murray Hill, New Jersey ABSTRACT A numerical study of the distribution of spacings between zeros
More informationA Case Study in Approximate Linearization: The Acrobot Example
A Case Study in Approximate Linearization: The Acrobot Example Richard M. Murray Electronics Research Laboratory University of California Berkeley, CA 94720 John Hauser Department of EESystems University
More informationFigure 2.1: Center of mass of four points.
Chapter 2 Bézier curves are named after their inventor, Dr. Pierre Bézier. Bézier was an engineer with the Renault car company and set out in the early 196 s to develop a curve formulation which would
More informationCOSAMP: ITERATIVE SIGNAL RECOVERY FROM INCOMPLETE AND INACCURATE SAMPLES
COSAMP: ITERATIVE SIGNAL RECOVERY FROM INCOMPLETE AND INACCURATE SAMPLES D NEEDELL AND J A TROPP Abstract Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect
More informationHow to Use Expert Advice
NICOLÒ CESABIANCHI Università di Milano, Milan, Italy YOAV FREUND AT&T Labs, Florham Park, New Jersey DAVID HAUSSLER AND DAVID P. HELMBOLD University of California, Santa Cruz, Santa Cruz, California
More informationMesh Parameterization Methods and Their Applications
Foundations and Trends R in Computer Graphics and Vision Vol. 2, No 2 (2006) 105 171 c 2006 A. Sheffer, E. Praun and K. Rose DOI: 10.1561/0600000011 Mesh Parameterization Methods and Their Applications
More informationA Googlelike Model of Road Network Dynamics and its Application to Regulation and Control
A Googlelike Model of Road Network Dynamics and its Application to Regulation and Control Emanuele Crisostomi, Steve Kirkland, Robert Shorten August, 2010 Abstract Inspired by the ability of Markov chains
More informationAn Introduction to Regression Analysis
The Inaugural Coase Lecture An Introduction to Regression Analysis Alan O. Sykes * Regression analysis is a statistical tool for the investigation of relationships between variables. Usually, the investigator
More informationA Tutorial on Support Vector Machines for Pattern Recognition
c,, 1 43 () Kluwer Academic Publishers, Boston. Manufactured in The Netherlands. A Tutorial on Support Vector Machines for Pattern Recognition CHRISTOPHER J.C. BURGES Bell Laboratories, Lucent Technologies
More informationFrom Few to Many: Illumination Cone Models for Face Recognition Under Variable Lighting and Pose. Abstract
To Appear in the IEEE Trans. on Pattern Analysis and Machine Intelligence From Few to Many: Illumination Cone Models for Face Recognition Under Variable Lighting and Pose Athinodoros S. Georghiades Peter
More information