MULTIGRID COMPUTING

Why Multigrid Methods Are So Efficient

Originally introduced as a way to numerically solve elliptic boundary-value problems, multigrid methods, and their various multiscale descendants, have since been developed and applied to various problems in many disciplines. This introductory article provides the basic concepts and methods of analysis and outlines some of the difficulties of developing efficient multigrid algorithms.

IRAD YAVNEH
Technion - Israel Institute of Technology

Computing in Science & Engineering, November/December 2006. © 2006 IEEE. Copublished by the IEEE CS and the AIP.

Multigrid computational methods are well known for being the fastest numerical methods for solving elliptic boundary-value problems. Over the past 30 years, multigrid methods have earned a reputation as an efficient and versatile approach for other types of computational problems as well, including other types of partial differential equations and systems and some integral equations. In addition, researchers have successfully developed generalizations of the ideas underlying multigrid methods for problems in various disciplines. (See the Multigrid Methods Resources sidebar for more details.) This introductory article presents the fundamentals of multigrid methods, including explicit algorithms, and points out some of the main pitfalls using elementary model problems. This material is mostly intended for readers who have a practical interest in computational methods but little or no acquaintance with multigrid techniques. The article also provides some background for this special issue and other, more advanced publications.

Basic Concepts

Let's begin with a simple example based on a problem studied in a different context.

Global versus Local Processes

The hometown team has won the regional cup. The players are to receive medals, so it's up to the coach to line them up, equally spaced, along the goal line. Alas, the field is muddy. The coach, who abhors mud, tries to accomplish the task from afar.
He begins by numbering the players 0 to N and orders players 0 and N to stand by the left and right goal posts, respectively, which are a distance L apart. Now, the coach could, for example, order player i, i = 1, ..., N-1, to move to the point on the goal line that is at a distance iL/N from the left goal post. This would be a global process that would solve the problem directly. However, it would require each player to recognize the left-hand goal post, perform some fancy arithmetic, and gauge long distances accurately. Suspecting that his players aren't up to the task, the coach reasons that if each player were to stand at the center point between his two neighbors (with players 0 and N fixed at the goal posts), his task would be accomplished. "From the local rule, a global order will emerge," he philosophizes. With this in mind, the coach devises the following simple iterative local process. At each iteration, he blows his whistle, and player i, i = 1, ..., N-1,
moves to the point that's at the center between the positions of player i-1 and player i+1 just before the whistle was blown. These iterations are repeated until the errors in the players' positions are so small that the mayor, who will be pinning the medals, won't notice. This is the convergence criterion. Undeniably, the coach deserves a name; we'll call him Mr. Jacobi.

MULTIGRID METHODS RESOURCES

Radii Petrovich Fedorenko (1,2) and Nikolai Sergeevitch Bakhvalov (3) first conceived multigrid methods in the 1960s, but Achi Brandt first achieved their practical utility and efficiency for general problems in his pioneering work in the 1970s. (4-6) Classical texts on multigrid methods include an excellent collection of papers edited by Wolfgang Hackbusch and Ulrich Trottenberg, (7) Brandt's guide to multigrid methods, (8) and the classical book by Hackbusch. (9) Notable recent textbooks on multigrid include the introductory tutorial by William Briggs and his colleagues (10) and a comprehensive textbook by Trottenberg and his colleagues. (11) Many other excellent books and numerous papers are also available, mostly devoted to more specific aspects of multigrid techniques, both theoretical and practical.

References

1. R.P. Fedorenko, "A Relaxation Method for Solving Elliptic Difference Equations," USSR Computational Mathematics and Mathematical Physics, vol. 1, no. 4, 1961.
2. R.P. Fedorenko, "The Speed of Convergence of One Iterative Process," USSR Computational Mathematics and Mathematical Physics, vol. 4, no. 3, 1964.
3. N.S. Bakhvalov, "On the Convergence of a Relaxation Method with Natural Constraints on the Elliptic Operator," USSR Computational Mathematics and Mathematical Physics, vol. 6, no. 5, 1966.
4. A. Brandt, "Multi-Level Adaptive Technique (MLAT) for Fast Numerical Solution to Boundary Value Problems," Proc. 3rd Int'l Conf. Numerical Methods in Fluid Mechanics, H. Cabannes and R. Temam, eds., Lecture Notes in Physics 18, Springer, 1973.
5. A. Brandt, "Multi-Level Adaptive Techniques (MLAT): The Multigrid Method," research report RC 6026, IBM T.J. Watson Research Ctr., 1976.
6. A. Brandt, "Multi-Level Adaptive Solutions to Boundary-Value Problems," Mathematics of Computation, vol. 31, no. 138, 1977, pp. 333-390.
7. W. Hackbusch and U. Trottenberg, eds., Multigrid Methods, Springer-Verlag, 1982.
8. A. Brandt, 1984 Multigrid Guide with Applications to Fluid Dynamics, GMD-Studie 85, GMD-FIT, St. Augustin, West Germany, 1984.
9. W. Hackbusch, Multi-Grid Methods and Applications, Springer, 1985.
10. W.L. Briggs, V.E. Henson, and S.F. McCormick, A Multigrid Tutorial, 2nd ed., SIAM Press, 2000.
11. U. Trottenberg, C.W. Oosterlee, and A. Schüller, Multigrid, Academic Press, 2001.

Figure 1. Jacobi's player-alignment iteration. Red disks show the current position, and blue disks show the previous position, before the last whistle blow: (a) initial position, (b) one whistle blow, (c) slow convergence, (d) fast convergence, (e) overshoot, and (f) damped iteration.

Figure 1a shows the players' initial haphazard position, and Figure 1b the position after one iteration (whistle blow). (For clarity, we assume that the players aren't even on the goal line initially. This pessimistic view does not alter our analysis significantly. We can envision purely lateral movements by projecting the players' positions onto the goal line.) As I'll show, Jacobi's method is convergent; that is, the errors in the players' positions will eventually be as small as we wish to make them. However, this might take a long time. Suppose the players' positions are as shown in Figure 1c. In this case, the players move short distances at each iteration. Sadly, this slow crawl toward convergence will persist for the duration of the process. In contrast, convergence is obtained in a single iteration in the position shown in Figure 1d.

What is the essential property that makes the convergence slow in the position of Figure 1c, yet fast in Figure 1d? In Figure 1c, we have a large-scale error. But Jacobi's process is local, employing only small-scale information; that is, each player's new position is determined by his near neighborhood. Thus, to each player it seems (from his myopic viewpoint) that the error in position is small, when in fact it isn't. The point is that we cannot correct what we cannot detect, and a large-scale error can't be detected locally. In contrast, Figure 1d shows a small-scale error position, which can be effectively detected and corrected by using the local process.

Finally, consider the position in Figure 1e. It clearly has a small-scale error, but Jacobi's process doesn't reduce it effectively; rather, it introduces an overshoot. Not every local process is suitable. However, this problem is easily overcome without compromising the process's local nature by introducing a constant damping factor. That is, each player moves only a fraction of the way toward his designated target at each iteration; in Figure 1f, for example, the players' movements are damped by a factor 2/3.

Figure 2. Multiscale player alignment. Red disks show the current position, and blue disks show the previous position, before the last whistle blow: (a) large, (b) medium, and (c) small scales.

Multiscale Concept

The main idea behind multigrid methods is to use the simple local process but to do so at all scales of the problem. For convenience, assume N to be a power of two.
In our simple example, the coach begins with a large-scale view in which the problem consists only of the players numbered 0, N/2, and N. A whistle blow puts player N/2 halfway between his large-scale neighbors (see Figure 2a). Next, addressing the intermediate scale, players N/4 and 3N/4 are activated, and they move at the sound of the whistle to the midpoint between their mid-scale neighbors (see Figure 2b). Finally, the small scale is solved by Jacobi's iteration at the remaining locations (Figure 2c). Thus, in just three whistle blows (generally, log2(N)) and only N-1 individual moves, the problem is solved, even though the simple local process has been used. Because the players can move simultaneously at each iteration (in parallel, so to speak), the number of whistle blows is important in determining the overall time, and log2(N) is encouraging. For a million players, for example, the multiscale process would require a mere 20 whistle blows.

Researchers have successfully exploited the idea of applying a local process at different scales in many fields and applications. Our simple scenario displays the concept but hides the difficulties. In general, developing a multiscale solver for a given problem involves four main tasks: choosing an appropriate local process, choosing appropriate coarse (large-scale) variables, choosing appropriate methods for transferring information across scales, and developing appropriate equations (or processes) for the coarse variables. Depending on the application, each of these tasks can be simple, moderately difficult, or extremely challenging. This is partly the reason why multigrid continues to be an active and thriving research field. We can examine these four tasks for developing a multiscale solver by using a slightly less trivial problem. Before doing this, let's analyze Jacobi's player-alignment algorithm and give some mathematical substance to our intuitive notions on the dependence of its efficiency on the scale of the error.
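As an aside, the scale-by-scale procedure is easy to simulate. The sketch below is illustrative only (the function name and setup are not from the article): each "whistle blow" averages the active players at the current scale, and log2(N) blows suffice.

```python
import numpy as np

def multiscale_align(positions):
    """Align players by Jacobi averaging applied scale by scale.

    positions[0] and positions[-1] (the goal posts) stay fixed; every
    other entry is moved to the midpoint of its current-scale neighbors.
    len(positions) - 1 must be a power of two.
    """
    p = np.asarray(positions, dtype=float).copy()
    N = len(p) - 1
    stride = N // 2
    while stride >= 1:
        # One "whistle blow" per scale: each active player jumps to the
        # midpoint of its neighbors at distance `stride`.
        for i in range(stride, N, 2 * stride):
            p[i] = 0.5 * (p[i - stride] + p[i + stride])
        stride //= 2
    return p

# Players start in haphazard positions; the posts are at 0 and 1.
N = 8
rng = np.random.default_rng(0)
start = rng.random(N + 1)
start[0], start[N] = 0.0, 1.0
aligned = multiscale_align(start)
print(aligned)  # equally spaced: 0, 1/8, 2/8, ..., 1
```

Three passes of the loop (log2(8) whistle blows) move each interior player exactly once, yet produce the exact equally spaced solution.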
Figure 3. Three eigenvectors. We associate scale with the wavelength, which is inversely proportional to k: (a) v^(1), (b) v^(3), and (c) v^(6).

Analyzing Jacobi's Player-Alignment Algorithm

First, observe that Jacobi's iteration acts similarly and independently in the directions parallel and perpendicular to the goal line. Therefore, it suffices to consider the convergence with respect to just one of these directions. Accordingly, let the column vector e^(0) = [e_1^(0), ..., e_{N-1}^(0)]^T denote the players' initial errors, either in the direction perpendicular or parallel to the goal line. We can define the error as the player's present coordinate, subtracted from the correct coordinate (to which he should eventually converge). For convenience, we don't include players 0 and N in this vector, but we keep in mind that e_0^(0) = e_N^(0) = 0. For n = 1, 2, ..., the new error in the location of player i after iteration n is given by the average of the errors of his two neighbors just before the iteration, namely,

    e_i^(n) = (e_{i-1}^(n-1) + e_{i+1}^(n-1)) / 2.    (1)

We can write this in vector-matrix form as

    e^(n) = R e^(n-1),    (2)

where R is an (N-1)-by-(N-1) tridiagonal matrix with one-half on the upper and lower diagonals and 0 on the main diagonal. For example, for N = 6, we have

    R = (1/2) | 0 1 0 0 0 |
              | 1 0 1 0 0 |
              | 0 1 0 1 0 |
              | 0 0 1 0 1 |
              | 0 0 0 1 0 |.    (3)

The behavior of the iteration and its relation to scale is most clearly observed for an error, e, which is an eigenvector of R, satisfying Re = λe for some (scalar) eigenvalue λ. The matrix R has N-1 eigenvalues; for k = 1, ..., N-1, they are given by

    λ^(k) = cos(kπ/N).    (4)

The N-1 corresponding eigenvectors are given by

    v^(k) = [sin(kπ/N), sin(2kπ/N), ..., sin((N-1)kπ/N)]^T.    (5)

(To verify this, use the trigonometric identity

    sin((i+1)kπ/N) + sin((i-1)kπ/N) = 2 cos(kπ/N) sin(ikπ/N)

to show that R v^(k) = λ^(k) v^(k).) Figure 3 shows the first, third, and sixth eigenvectors. We associate scale with the wavelength, which is inversely proportional to k. Thus, small k corresponds to large-scale eigenvectors, and vice versa.
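The eigenpairs of Equations 4 and 5 can be confirmed numerically in a few lines (a sketch, not from the article):

```python
import numpy as np

# Confirm Equations 4 and 5: R v^(k) = lambda^(k) v^(k), with
# lambda^(k) = cos(k*pi/N) and v^(k)_i = sin(i*k*pi/N).
N = 16
R = 0.5 * (np.eye(N - 1, k=1) + np.eye(N - 1, k=-1))  # the matrix of Equation 3

i = np.arange(1, N)
max_dev = 0.0
for k in range(1, N):
    v = np.sin(i * k * np.pi / N)       # eigenvector v^(k)
    lam = np.cos(k * np.pi / N)         # eigenvalue lambda^(k)
    max_dev = max(max_dev, np.abs(R @ v - lam * v).max())
print("largest deviation over all eigenpairs:", max_dev)
```

The deviation is at the level of floating-point roundoff, as the trigonometric identity predicts.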
The eigenvectors v^(k) are linearly independent, spanning the space R^{N-1}, so we can write any initial error as a linear combination of the eigenvectors,

    e^(0) = Σ_{k=1}^{N-1} c_k v^(k),    (6)

for some real coefficients c_k. Repeated multiplication by the iteration matrix, R, yields the error after n iterations:

    e^(n) = R^n e^(0) = R^n Σ_{k=1}^{N-1} c_k v^(k)
          = Σ_{k=1}^{N-1} c_k R^n v^(k) = Σ_{k=1}^{N-1} c_k (λ^(k))^n v^(k)
          = Σ_{k=1}^{N-1} c_k cos^n(kπ/N) v^(k).    (7)

Because |λ^(k)| = |cos(kπ/N)| < 1 for k = 1, ..., N-1,
we conclude that Jacobi's method converges, because (λ^(k))^n tends to zero as n tends to infinity. It's illuminating to examine the rate of convergence for different values of k. For k/N << 1, we have

    (λ^(k))^n = cos^n(kπ/N) ≈ e^{-n(kπ/N)²/2},    (8)

where we can verify the approximation by using a Taylor series expansion. We find that the coefficients of eigenvectors corresponding to small k values change only slightly in each iteration (see Figure 1c). Indeed, we see here that n = O(N²) iterations are required to reduce such an error by an order of magnitude. For a million players, for example, hundreds of billions of whistle blows are required, compared to just 20 using the multiscale approach. For k = N/2, λ^(k) = cos(kπ/N) = 0, and convergence is immediate (see Figure 1d). As k approaches N, the scale of the error becomes even smaller, and cos(kπ/N) approaches -1, which is consistent with the overshoot we saw in Figure 1e; that is, at each iteration, the error mainly changes sign, with hardly any amplitude change.

Figure 4. The smoothing property of the damped Jacobi relaxation: (a) initial position, (b) after four relaxations, and (c) after eight relaxations.

Damped Jacobi Iteration's Smoothing Factor

To overcome the overshoot problem, we introduce a damping factor. If we damp the players' movements by a factor 2/3, we obtain

    e_i^(n) = (1/3) e_i^(n-1) + (2/3) (e_{i-1}^(n-1) + e_{i+1}^(n-1)) / 2,    (9)

and in vector-matrix form,

    e^(n) = R_d e^(n-1),  R_d = (1/3) I + (2/3) R,    (10)

with R_d denoting the damped Jacobi iteration matrix and I the identity matrix. The eigenvectors clearly remain the same, but the eigenvalues of R_d are given by

    λ_d^(k) = 1/3 + (2/3) λ^(k) = 1/3 + (2/3) cos(kπ/N).    (11)

We can now quantify the claim that Jacobi's iteration, damped by a factor 2/3, is efficient in reducing small-scale errors. To do this, let's (somewhat arbitrarily at this point) split the spectrum down the middle, roughly, and declare all eigenvectors with k < N/2 smooth (large scale), and eigenvectors with N/2 ≤ k < N rough (small scale).
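The split between smooth and rough eigenvalues of Equation 11 is easy to inspect numerically (an illustrative sketch, not from the article):

```python
import numpy as np

# Equation 11: eigenvalues of the damped Jacobi matrix R_d = I/3 + 2R/3.
N = 64
k = np.arange(1, N)
lam_d = 1/3 + (2/3) * np.cos(k * np.pi / N)

rough = k >= N // 2                   # small-scale (rough) modes
print(np.abs(lam_d[rough]).max())     # 1/3: the smoothing factor
print(np.abs(lam_d[~rough]).max())    # close to 1: smooth modes barely decay
```

The rough-mode maximum is exactly 1/3 (attained at k = N/2), independently of N, while the smoothest mode decays at a rate of only about 1 - O(1/N²).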
Observe that, as we vary k from N/2 to N, spanning the rough eigenvectors, cos(kπ/N) varies from 0 to (nearly) -1, and therefore λ_d^(k) varies from 1/3 to (nearly) -1/3. The fact that |λ_d^(k)| ≤ 1/3 for all the rough eigenvectors implies that the size of the coefficient of each rough eigenvector is reduced in each iteration to at most 1/3 its size prior to the iteration, independently of N. We say that the iteration's smoothing factor is 1/3. If we begin with an initial error that's some haphazard combination of the eigenvectors, then after just a few iterations, the coefficients of all the rough eigenvectors will be greatly reduced. Thus, the damped Jacobi iteration (usually called relaxation) is efficient at smoothing the error, although, as we've seen, it is inefficient at reducing the error once it has been smoothed. Figure 4 exhibits these properties. Relaxation, one observes, irons out the wrinkles but leaves the fat.

1D Model Problem

The idea of using a local rule to obtain a global order is, of course, ancient. Perhaps the most common example in mathematics is differential equations, which have a global solution determined by a local rule on the derivatives (plus boundary conditions). To advance our discussion, we require an example that's slightly less trivial than player alignment. Consider the 1D boundary-value problem:
    Lu ≡ -d²u/dx² = f(x),  x in (0, 1),
    u(0) = u_l,  u(1) = u_r,    (12)

where f is a given function, u_l and u_r are the boundary conditions, and u is the solution we're seeking. We adopt the common numerical approach of finite difference discretization. A uniform mesh is defined on the interval [0, 1], with mesh size h = 1/N and N+1 mesh points given by x_i = ih for i = 0, 1, ..., N. Our vector of unknowns, the discrete approximation to u, is denoted as u^h = [u_1^h, ..., u_{N-1}^h]^T, where we exclude the boundary values for convenience but keep in mind that u_0^h = u_l, u_N^h = u_r. The right-hand side vector is denoted by f^h = [f_1, ..., f_{N-1}]^T, with f_i = f(x_i). (Alternatively, we could use some local averaging of f.) The standard second-order accurate finite difference discretization of the second derivative now yields a system of equations for u^h, which we can write as

    (L^h u^h)_i ≡ (-u_{i-1}^h + 2u_i^h - u_{i+1}^h) / h² = f_i,  i = 1, ..., N-1,
    u_0^h = u_l,  u_N^h = u_r.    (13)

We can also write this system more concisely as L^h u^h = f^h. The system is tridiagonal and easy to solve by standard methods, but it's useful for examining the multigrid solver's aforementioned tasks. Observe the similarity to the player-alignment problem: if we set f ≡ 0, the solution u is obviously a straight line connecting the point (0, u_l) with (1, u_r). The discrete solution is then a set of values along this line, each equal to the average of its two neighbors. We can easily adapt Jacobi's algorithm to Equation 13. Let u^(0) denote an initial approximation to u^h that satisfies the boundary conditions. Then, at the nth iteration, we set u_i^(n), i = 1, ..., N-1, to satisfy the ith equation in the system in Equation 13, with the neighboring values taken from the (n-1)st iteration:

    u_i^(n) = (u_{i-1}^(n-1) + u_{i+1}^(n-1)) / 2 + (h²/2) f_i,  i = 1, ..., N-1.    (14)

The convergence behavior of Jacobi's algorithm is identical to its behavior in the player-alignment problem. Denote the error in our approximation after the nth iteration by

    e^(n) = u^h - u^(n),    (15)

with e_0^(n) = e_N^(n) = 0 because all our approximations satisfy the boundary conditions.
Now, subtract Equation 14 from the equation u_i^h = (u_{i-1}^h + u_{i+1}^h)/2 + (h²/2) f_i, which we get by isolating u_i^h in Equation 13. This yields

    e_i^(n) = (e_{i-1}^(n-1) + e_{i+1}^(n-1)) / 2,    (16)

which is exactly like Equation 1. Similarly, damped Jacobi relaxation, with a damping factor of 2/3, is defined by

    u_i^(n) = (1/3) u_i^(n-1) + (2/3) [ (u_{i-1}^(n-1) + u_{i+1}^(n-1)) / 2 + (h²/2) f_i ],    (17)

and its error propagation properties are identical to those of the player-alignment problem in Equations 9 through 11.

Naïve Multiscale Algorithm

We next naïvely extend the multiscale algorithm of the player-alignment problem to the 1D model problem. The main point of this exercise is to show that in order to know what to do at the large scales, we must first propagate appropriate information from the small scales to the large scales. For convenience, we still assume N is a power of two. Similarly to the player-alignment process, we begin by performing Jacobi's iteration for the problem defined at the largest scale, which consists of the variables [u_0, u_{N/2}, u_N]. The mesh size at this scale is 0.5, and u_0 and u_N are prescribed by the boundary conditions. The Jacobi step thus fixes u_{N/2}:

    u_{N/2} = (u_0 + u_N) / 2 + (0.5²/2) f_{N/2}.

Next, based on the values of the largest scale, we perform a Jacobi iteration at the second-largest scale, obtaining u_{N/4} and u_{3N/4}. We continue until we have determined all variables, just as in the player-alignment problem. This algorithm is obviously computationally efficient. Unfortunately, it fails to give the correct answer (unless f is trivial) because we aren't solving the right equations on the coarse grids. More precisely, the right-hand sides are wrong.
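For concreteness, the damped Jacobi sweep of Equation 17 can be sketched in a few lines (illustrative Python, not from the article); running it from a random start also exhibits the smoothing behavior of Figure 4, measured here by the largest jump between neighboring values:

```python
import numpy as np

def damped_jacobi(u, f, h, sweeps, omega=2/3):
    """Equation 17: damped Jacobi relaxation for -u'' = f with fixed
    boundary values u[0], u[-1]. Returns the relaxed approximation."""
    u = u.copy()
    for _ in range(sweeps):
        # Jacobi: the right-hand side is evaluated entirely at the old values.
        target = 0.5 * (u[:-2] + u[2:]) + 0.5 * h**2 * f[1:-1]
        u[1:-1] = (1 - omega) * u[1:-1] + omega * target
    return u

# Smoothing demo: a rough random error around the f = 0 solution u = 0.
N = 64
h = 1.0 / N
rng = np.random.default_rng(1)
u = rng.random(N + 1)
u[0] = u[N] = 0.0
f = np.zeros(N + 1)

for sweeps in (0, 4, 8):
    e = damped_jacobi(u, f, h, sweeps)
    print(sweeps, "sweeps: max neighbor jump =", np.abs(np.diff(e)).max())
```

The jumps shrink quickly, while the overall error decays only slowly: the wrinkles are ironed out, the fat remains.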
Figure 5. Aliasing. Red disks correspond to the coarse grid: (a) v^(3) and (b) v^(13).

Aliasing

Figure 5a shows a smooth eigenvector (k = 3) sampled on a grid with N = 16 and also on a twice-coarser grid, N = 8, consisting of the fine grid's even-indexed mesh points. This repeats in Figure 5b, but for a rough eigenvector (k = 13 on the finer grid). Evidently, the two are indistinguishable on the coarse grid, which demonstrates the well-known phenomenon of aliasing. The coarse grid affords roughly half as many eigenvectors as the fine grid, so some pairs of fine-grid eigenvectors must alias. Specifically, by Equation 5, the even-indexed terms in eigenvectors v^(k) and v^(N-k) are equal to the negative of each other. Thus, if f contains some roughness (that is, if, written as a linear combination of eigenvectors, f has rough eigenvectors with nonzero coefficients), it won't be represented correctly on the coarse grid. As a somewhat extreme example, if f is a linear combination of v^(k) and v^(N-k) for some k, with weights of equal magnitude but opposite sign, it will vanish when sampled on the coarser grid.

The definition of the smoothing factor, which was based on a partitioning of the spectrum into smooth and rough parts, foreshadowed our use of a twice-coarser grid. Smooth eigenvectors are those that can be represented accurately on the twice-coarser grid, whereas rough eigenvectors alias with smooth ones. We can conclude that constructing correct equations for coarse grids requires information from the fine grid. For the 1D model problem, we can get around the aliasing problem by successive local averaging. However, in more general problems, we can't do this without spending a substantial computational effort, which would defeat the purpose of the multiscale approach. We must come to grips with the fact that coarse grids are useful only for representing smooth functions. This means that we must take care in choosing our coarse-grid variables.
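The aliasing relation between v^(k) and v^(N-k) can be checked directly (a small sketch, not from the article): on the even-indexed (coarse-grid) points, the two eigenvectors cancel exactly.

```python
import numpy as np

# Aliasing (Figure 5): on the coarse grid (even-indexed points of an
# N = 16 grid), v^(3) and v^(16-3) = v^(13) differ only in sign.
N, k = 16, 3
i = np.arange(1, N)
v_smooth = np.sin(i * k * np.pi / N)          # v^(3)
v_rough  = np.sin(i * (N - k) * np.pi / N)    # v^(13)

even = i % 2 == 0                             # coarse-grid sample points
print(v_smooth[even] + v_rough[even])         # ~zero: they cancel exactly
```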
Taxonomy of Errors

We've seen that damped Jacobi relaxation smoothes the error. In this context, it is important to distinguish between two distinct types of error. The discretization error, given at mesh point i by u(x_i) - u_i^h, is the difference between the solution of the differential equation (sampled on the grid) and that of the discrete system. This error is determined by the problem, the discretization, and the mesh size. Iteration has no effect on this error, of course, because it only affects our approximation to u^h. Only the algebraic error, which is the difference between the discrete solution and our current approximation to it (Equation 15), is smoothed. Our plan, therefore, is to first apply damped Jacobi relaxation, which smoothes the algebraic error, and then construct a linear system for this error vector and approximate it on a coarser grid. This can continue recursively, such that, on each coarser grid, we solve approximate equations for the next-finer grid's algebraic error.

The motivation for using the algebraic error as our coarse-grid variable is twofold. Because we're limited in our ability to compute an accurate approximation to u^h on the coarse grid due to aliasing, we need to resort to iterative improvement. We should therefore use the coarse grid to approximately correct the fine-grid algebraic error, rather than to dictate the fine-grid solution. Moreover, relaxation has smoothed the algebraic error, and thus we can approximate it accurately on the coarse grid. Let ũ^h denote our current approximation to u^h after performing several damped Jacobi relaxations, and let e^h = u^h - ũ^h denote the corresponding algebraic error, which is known to be smooth (see Figure 4). Subtract from Equation 13 the trivial relation (L^h ũ^h)_i = (L^h ũ^h)_i, i = 1, ..., N-1, with ũ_0^h = u_l, ũ_N^h = u_r, to obtain a system for the algebraic error,
    (L^h e^h)_i = f_i - (L^h ũ^h)_i ≡ r_i,  i = 1, ..., N-1,
    e_0^h = 0,  e_N^h = 0,

where r_i is the ith residual. This is the problem we must approximate on the coarser grid. We can write it concisely as L^h e^h = r^h. Once we compute a good approximation to e^h, we then add it to ũ^h, and thus improve our approximation to u^h.

Multigrid Algorithm

To transfer information between grids, we need suitable operators. Consider just two grids: the fine grid, with mesh size h and with N-1 mesh points excluding boundary points, and the coarse grid, with mesh size 2h and N/2 - 1 mesh points. We use a restriction operator, denoted I_h^{2h}, for fine-to-coarse data transfer. Common choices are injection, defined for a fine-grid vector g^h by

    (I_h^{2h} g^h)_i = g_{2i}^h,  i = 1, ..., N/2 - 1,

that is, the coarse-grid value is simply given by the fine-grid value at the corresponding location, or full weighting, which is defined by

    (I_h^{2h} g^h)_i = (g_{2i-1}^h + 2 g_{2i}^h + g_{2i+1}^h) / 4,  i = 1, ..., N/2 - 1,

where we apply a local averaging. For coarse-to-fine data transfer, we use a prolongation operator, denoted by I_{2h}^h. A common choice is the usual linear interpolation, defined for a coarse-grid vector g^{2h} by

    (I_{2h}^h g^{2h})_i = g_{i/2}^{2h},                               if i is even,
                        = (g_{(i+1)/2}^{2h} + g_{(i-1)/2}^{2h}) / 2,  if i is odd,

for i = 1, ..., N-1. Additionally, we must define an operator on the coarse grid, denoted L^{2h}, that approximates the fine grid's L^h. A common choice is to simply discretize the differential equation on the coarse grid using a scheme similar to the one applied on the fine grid. Armed with these operators, and using recursion to solve the coarse-grid problem approximately, we can now define the multigrid V-Cycle (see Figure 6). We require two parameters, commonly denoted by ν1 and ν2, that designate, respectively, the number of damped Jacobi relaxations performed before and after we apply the coarse-grid correction. The multigrid algorithm is defined as

Algorithm V-Cycle: u^h = VCycle(u^h, h, N, f^h, ν1, ν2).
1. If N == 2, solve L^h u^h = f^h and return u^h.
2. u^h = Relax(u^h, h, N, f^h, ν1); relax ν1 times.
3. r^h = f^h - L^h u^h; compute residual vector.
4. f^{2h} = I_h^{2h} r^h; restrict residual.
5. u^{2h} = 0; set u^{2h} to zero, including boundary conditions.
6. u^{2h} = VCycle(u^{2h}, 2h, N/2, f^{2h}, ν1, ν2); treat coarse-grid problem recursively.
7. u^h = u^h + I_{2h}^h u^{2h}; interpolate and add correction.
8. u^h = Relax(u^h, h, N, f^h, ν2); relax ν2 times.
9. Return u^h.

Figure 6. A schematic description of the V-Cycle. The algorithm proceeds from left to right and top (finest grid) to bottom (coarsest grid) and back up again. On each grid but the coarsest, we relax ν1 times before transferring (restricting) to the next-coarser grid and ν2 times after interpolating and adding the correction.

Let's examine the algorithm line by line. In line 1, we check whether we've reached the coarsest grid, N = 2. If so, we solve the problem. Because there's only a single variable there, one Jacobi iteration without damping does the job. In line 2, ν1 damped Jacobi iterations are applied. We compute the residual in line 3, and restrict it to the coarse grid in line 4. We now need to tackle the coarse-grid problem and obtain the coarse-grid approximation to the fine-grid error. Noting that the coarse-grid problem is similar to the fine-grid one, we apply a recursive call in line 6. Note that the boundary conditions on the coarse grid are zero (line 5) because there's no error in the fine-grid boundary values, which are prescribed. Also, the
initial approximation to u^{2h} must be zero. After returning from the recursion, we interpolate and add the correction to the fine-grid approximation (line 7). In line 8, we relax the fine-grid equations ν2 times, and the resulting approximation to the solution is returned in line 9. Generally, the V-Cycle doesn't solve the problem exactly, and it needs to be applied iteratively. However, as I'll demonstrate later, each such iteration typically reduces the algebraic error dramatically.

2D Model Problem

The multiscale algorithm isn't restricted to one dimension, of course. Consider the 2D boundary-value problem:

    Lu ≡ -(∂²u/∂x² + ∂²u/∂y²) = f(x, y),  (x, y) in Ω,
    u = b(x, y),  (x, y) on ∂Ω,    (18)

where f and b are given functions and u is the solution we're seeking. For simplicity, our domain is the unit square, Ω = (0, 1)², with ∂Ω denoting the boundary of Ω. This is the well-known Poisson problem with Dirichlet boundary conditions. To discretize the problem, we define a uniform grid with mesh size h and mesh points given by (x_i, y_j) = (ih, jh), i, j = 0, 1, ..., N, where h = 1/N. Our vector of unknowns, the discrete approximation to u, is again denoted u^h, b^h denotes the discrete boundary conditions, and f^h is the discrete right-hand side function. The standard second-order-accurate finite difference discretization now yields a system of equations for u^h, which we can write as

    (L^h u^h)_{i,j} ≡ (-u_{i-1,j}^h - u_{i+1,j}^h - u_{i,j-1}^h - u_{i,j+1}^h + 4u_{i,j}^h) / h² = f_{i,j},  i, j = 1, ..., N-1,
    u_{i,j}^h = b_{i,j},  i = 0 or i = N or j = 0 or j = N,    (19)

or more concisely as L^h u^h = f^h. We can easily adapt Jacobi's algorithm to the 2D problem: at each iteration, we change our approximation to u_{i,j}, i, j = 1, ..., N-1, to satisfy the (i, j)th equation in the system in Equation 19, with the neighboring values taken from the previous iteration. We can generalize our smoothing analysis of damped Jacobi relaxation to this case. By minimizing the smoothing factor with respect to the damping, we then find that the best achievable smoothing factor is 0.6, obtained with a damping factor of 0.8.
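The quoted optimum can be reproduced with a brute-force scan (an illustrative sketch, not from the article). For the 5-point Laplacian, damped Jacobi acts on a Fourier mode with frequencies (t1, t2) by the factor 1 - ω + ω(cos t1 + cos t2)/2, and the smoothing factor is the largest magnitude of this over the rough modes, max(t1, t2) ≥ π/2:

```python
import numpy as np

# Scan the damping factor omega and measure the worst-case amplification
# of rough Fourier modes under damped Jacobi for the 5-point Laplacian.
t = np.linspace(1e-3, np.pi - 1e-3, 400)
t1, t2 = np.meshgrid(t, t)
lam = (np.cos(t1) + np.cos(t2)) / 2          # undamped Jacobi symbol
rough = np.maximum(t1, t2) >= np.pi / 2      # small-scale modes

def smoothing_factor(omega):
    return np.abs(1 - omega + omega * lam[rough]).max()

omegas = np.linspace(0.5, 1.0, 501)
mus = [smoothing_factor(w) for w in omegas]
best = int(np.argmin(mus))
print(omegas[best], mus[best])   # close to 0.8 and 0.6
```

The minimum balances the extreme rough modes: at ω = 0.8 both 1 - ω/2 and 2ω - 1 equal 0.6.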
To apply the multigrid V-Cycle to the 2D model problem, we still need to define our coarse grids, along with appropriate intergrid transfer operators. We coarsen naturally by eliminating all the mesh points with either i or j odd, such that roughly one-fourth of the mesh points remain. For restriction, we can once again use simple injection, defined for a fine-grid vector g^h by

    (I_h^{2h} g^h)_{i,j} = g_{2i,2j}^h,  i, j = 1, ..., N/2 - 1,

that is, the coarse-grid value is simply given by the fine-grid value at the corresponding location, or full weighting, defined by

    (I_h^{2h} g^h)_{i,j} = (1/16) [ g_{2i-1,2j-1}^h + g_{2i-1,2j+1}^h + g_{2i+1,2j-1}^h + g_{2i+1,2j+1}^h
                                    + 2 (g_{2i,2j-1}^h + g_{2i,2j+1}^h + g_{2i-1,2j}^h + g_{2i+1,2j}^h)
                                    + 4 g_{2i,2j}^h ],  i, j = 1, ..., N/2 - 1,

where a local averaging is applied. For coarse-to-fine data transfer, a common choice is bilinear interpolation, defined for a coarse-grid vector g^{2h} by

    (I_{2h}^h g^{2h})_{i,j} =
        g_{i/2,j/2}^{2h},                                                  if i is even and j is even,
        (g_{(i+1)/2,j/2}^{2h} + g_{(i-1)/2,j/2}^{2h}) / 2,                 if i is odd and j is even,
        (g_{i/2,(j+1)/2}^{2h} + g_{i/2,(j-1)/2}^{2h}) / 2,                 if i is even and j is odd,
        (g_{(i+1)/2,(j+1)/2}^{2h} + g_{(i-1)/2,(j+1)/2}^{2h}
         + g_{(i+1)/2,(j-1)/2}^{2h} + g_{(i-1)/2,(j-1)/2}^{2h}) / 4,       if i is odd and j is odd,

for i, j = 1, ..., N-1.
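Putting the pieces together, a compact 1D V-Cycle (damped Jacobi smoothing, full-weighting restriction, linear interpolation) might look like the sketch below. This is illustrative Python, not the article's code; the test problem -u'' = π² sin(πx) with zero boundary values is an assumption chosen because its exact solution, sin(πx), is known.

```python
import numpy as np

def relax(u, f, h, sweeps, omega=2/3):
    # Damped Jacobi for (L^h u)_i = (-u[i-1] + 2u[i] - u[i+1]) / h^2 = f_i,
    # boundary values held fixed.
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * (
            0.5 * (u[:-2] + u[2:]) + 0.5 * h**2 * f[1:-1])
    return u

def v_cycle(u, f, h, nu1=1, nu2=2):
    N = len(u) - 1
    if N == 2:                       # coarsest grid: one exact solve
        u[1] = 0.5 * (u[0] + u[2]) + 0.5 * h**2 * f[1]
        return u
    u = relax(u, f, h, nu1)
    r = np.zeros_like(u)             # residual r^h = f^h - L^h u^h
    r[1:-1] = f[1:-1] - (-u[:-2] + 2*u[1:-1] - u[2:]) / h**2
    # Full-weighting restriction to the twice-coarser grid.
    fc = np.zeros(N // 2 + 1)
    fc[1:-1] = 0.25 * (r[1:-2:2] + 2*r[2:-1:2] + r[3::2])
    # Coarse-grid correction with zero initial guess and boundary values.
    ec = v_cycle(np.zeros(N // 2 + 1), fc, 2*h, nu1, nu2)
    # Linear interpolation of the correction, then add.
    e = np.zeros_like(u)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    return relax(u, f, h, nu2)

# Solve -u'' = pi^2 sin(pi x), whose exact solution is u = sin(pi x).
N = 128
x = np.linspace(0.0, 1.0, N + 1)
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(N + 1)
for cycle in range(8):
    u = v_cycle(u, f, 1.0 / N)
    err = np.abs(u - np.sin(np.pi * x)).max()
    print(cycle + 1, err)
```

The printed error drops by roughly an order of magnitude per cycle until it stalls at the discretization-error level, O(h²), exactly the behavior described in the numerical tests that follow.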
Numerical Tests

To demonstrate the multigrid algorithm's effectiveness, we test it on the 1D and 2D model problems with N = 128 and compare its convergence behavior to that of simple Jacobi relaxation. We apply V-Cycles with ν1 = 1, ν2 = 2, denoted V(1,2), to a problem with a known solution, beginning with a random initial guess for the solution. (It's easy to show that the asymptotic convergence behavior is independent of the particular solution.) We use (bi)linear interpolation and full-weighting restriction and plot the fine-grid algebraic error's root mean square (RMS) versus the number of iterations. Figure 7 shows the results for the 1D and 2D model problems. The V-Cycle's computational cost is similar to that of a few Jacobi iterations, as I will show in the next section. Hence, for fairness, we consider every 10 Jacobi relaxations as a single iteration in the comparison, so that 100 iterations marked in the plot actually correspond to 1,000 Jacobi relaxations. Thus, a V-Cycle in these plots is no more costly than a single Jacobi iteration. We can see that each V-Cycle reduces the error by roughly an order of magnitude, whereas Jacobi relaxation converges slowly. The smoothing analysis can approximately predict the fast multigrid convergence: on the fine grid, the reduction factor of rough errors per each relaxation sweep is bounded from above by the smoothing factor, whereas the smooth errors are nearly eliminated altogether by the coarse-grid correction.

Computational Complexity

The number of operations performed in the V-Cycle on each grid is clearly proportional to the number of mesh points, because this holds for each subprocess: relaxation, residual computation, restriction, and prolongation. Because the number of variables at each grid is approximately a constant fraction of that of the next finer grid (one-half in 1D and one-fourth in 2D), the total number of operations per V-Cycle is greater than the number of operations performed on just the finest grid only by a constant factor.
In 1D, this factor is bounded by 1 + 1/2 + 1/4 + ... = 2, and in 2D by 1 + 1/4 + 1/16 + ... = 4/3. The amount of computation required for a single Jacobi relaxation on the finest grid (or a calculation of the residual, which is approximately the same) is called a work unit (WU). Our V-Cycle requires three WUs for relaxation on the finest grid and one more for computing the residual. The prolongation plus the restriction operations together require no more than one WU. Thus, the fine-grid work is roughly five WUs per V-Cycle. Therefore, the entire work is approximately 10 WUs per V-Cycle in 1D and seven in 2D.

Figure 7. Convergence of the algebraic error (error norm versus iterations): multigrid V(1,2) cycles versus Jacobi iterations comprised of 10 relaxations each, with N = 128: (a) 1D and (b) 2D.

Figure 8. A schematic description of the full multigrid algorithm. The algorithm proceeds from left to right and top (finest grid) to bottom (coarsest grid). Initially, we transfer the problem down to the coarsest grid. We solve the problem on the coarsest grid and then interpolate this solution to the second-coarsest grid and perform a V-Cycle. These two steps are repeated recursively to finer and finer grids, ending with the finest grid.