
Chapter 3: One-Step Methods

3.1 Introduction

The explicit and implicit Euler methods for solving the scalar IVP

    y' = f(t, y),    y(0) = y_0,                                               (3.1.1)

have an O(h) global error, which is too low to make them of much practical value. With such a low order of accuracy, they will be susceptible to round-off error accumulation. Additionally, the region of absolute stability of the explicit Euler method is too small. Thus, we seek higher-order methods, which should provide greater accuracy than either the explicit or implicit Euler method for the same step size. Unfortunately, there is generally a trade-off between accuracy and stability, and we will typically obtain one at the expense of the other.

Since we were successful in using Taylor's series to derive a method, let us proceed along the same lines, this time retaining terms of O(h^k):

    y(t_n) = y(t_{n-1}) + h y'(t_{n-1}) + (h^2/2) y''(t_{n-1}) + ... + (h^k/k!) y^(k)(t_{n-1}) + O(h^{k+1}).   (3.1.2)

Clearly, methods of this type will be explicit. Using (3.1.1),

    y'(t_{n-1}) = f(t_{n-1}, y(t_{n-1})).                                      (3.1.3a)

Differentiating (3.1.1),

    y''(t_{n-1}) = [f_t + f_y y']_(t_{n-1}, y(t_{n-1})) = [f_t + f_y f]_(t_{n-1}, y(t_{n-1})).   (3.1.3b)

Continuing in the same manner,

    y'''(t_{n-1}) = [f_tt + 2 f_ty f + f_t f_y + f_yy f^2 + f_y^2 f]_(t_{n-1}, y(t_{n-1})),   (3.1.3c)

etc. Specific methods are obtained by truncating the Taylor's series at different values of k. For example, if k = 2 we get the method

    y_n = y_{n-1} + h f(t_{n-1}, y_{n-1}) + (h^2/2) [f_t + f_y f]_(t_{n-1}, y_{n-1}).   (3.1.4a)

From the Taylor's series expansion (3.1.2), the local error of this method is

    d_n = (h^3/6) y'''(ξ_n).                                                   (3.1.4b)

Thus, we succeeded in raising the order of the method. Unfortunately, methods of this type are of little practical value because the partial derivatives are difficult to evaluate for realistic problems. Any software would also have to be problem dependent.

By way of suggesting an alternative, consider the special case of (3.1.1) when f is only a function of t, i.e.,

    y' = f(t),    y(0) = y_0.

This problem, which is of little interest, can be solved by quadrature to yield

    y(t) = y_0 + ∫_0^t f(τ) dτ.

We can easily construct high-order approximate methods for this problem by using numerical integration. Thus, for example, the simple left-rectangular rule would lead to Euler's method. The midpoint rule with a step size of h would give us

    y(h) = y_0 + h f(h/2) + O(h^3).

Thus, by shifting the evaluation point to the center of the interval, we obtain a higher-order approximation. Neglecting the local error term and generalizing the method to the interval t_{n-1} < t <= t_n yields

    y_n = y_{n-1} + h f(t_{n-1} + h/2).
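The order-two Taylor's series method (3.1.4a) is easy to exercise when the partial derivatives are available. A minimal sketch (Python; the function names are ours, and y' = y is chosen because its partials f_t = 0, f_y = 1 keep the check transparent):

```python
import math

def taylor2(f, ft, fy, t0, y0, h, nsteps):
    """Order-two Taylor's series method (3.1.4a); the caller supplies
    the partial derivatives f_t and f_y along with f itself."""
    t, y = t0, y0
    for _ in range(nsteps):
        fv = f(t, y)
        # y_n = y_{n-1} + h f + (h^2/2)(f_t + f_y f)
        y += h * fv + 0.5 * h * h * (ft(t, y) + fy(t, y) * fv)
        t += h
    return y

# y' = y, y(0) = 1, integrated to t = 1; exact value is e
e1 = abs(taylor2(lambda t, y: y, lambda t, y: 0.0, lambda t, y: 1.0,
                 0.0, 1.0, 0.1, 10) - math.e)
e2 = abs(taylor2(lambda t, y: y, lambda t, y: 0.0, lambda t, y: 1.0,
                 0.0, 1.0, 0.05, 20) - math.e)
# global error is O(h^2), so halving h should divide the error by about 4
```

The observed error ratio near 4 is consistent with the O(h^3) local error (3.1.4b) accumulating to an O(h^2) global error.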

Runge [] sought to extend this idea to true differential equations having the form of (3.1.1). Thus, we might consider

    y_n = y_{n-1} + h f(t_n - h/2, y_{n-1/2})

as an extension of the simple midpoint rule to (3.1.1). The question of how to define the numerical solution y_{n-1/2} at the center of the interval remains unanswered. A simple possibility that immediately comes to mind is to evaluate it by Euler's method. This gives

    y_{n-1/2} = y_{n-1} + (h/2) f(t_{n-1}, y_{n-1});

however, we must verify that this approximation provides an improved order of accuracy. After all, Euler's method has an O(h^2) local error and not an O(h^3) error. Let's try to verify that the combined scheme does indeed have an O(h^3) local error by considering the slightly more general scheme

    y_n = y_{n-1} + h (b_1 k_1 + b_2 k_2),                                     (3.1.5a)

where

    k_1 = f(t_{n-1}, y_{n-1}),                                                 (3.1.5b)
    k_2 = f(t_{n-1} + c h, y_{n-1} + h a k_1).                                 (3.1.5c)

Schemes of this form are an example of Runge-Kutta methods. We see that the proposed midpoint scheme is recovered by selecting b_1 = 0, b_2 = 1, c = 1/2, and a = 1/2. We also see that the method does not require any partial derivatives of f(t, y). Instead, the (potential) high-order accuracy is obtained by evaluating f(t, y) at an additional time. The coefficients a, b_1, b_2, and c will be determined so that a Taylor's series expansion of (3.1.5), using the exact ODE solution, matches the Taylor's series expansion (3.1.2, 3.1.3) of the exact ODE solution to as high a power of h as possible. To this end, recall the formula for the Taylor's series of a function of two variables:

    F(t + μ, y + ν) = F(t, y) + [μ F_t + ν F_y]_(t, y) + (1/2) [μ^2 F_tt + 2 μ ν F_ty + ν^2 F_yy]_(t, y) + ...   (3.1.6)

The expansion of (3.1.5) requires substitution of the exact solution y(t) into the formula and the use of (3.1.6) to construct an expansion about (t_{n-1}, y(t_{n-1})). The only term that requires any effort is k_2, which, upon insertion of the exact ODE solution, has the form

    k_2 = f(t_{n-1} + c h, y(t_{n-1}) + h a f(t_{n-1}, y(t_{n-1}))).

To construct an expansion, we use (3.1.6) with F(t, y) = f(t, y), t = t_{n-1}, y = y(t_{n-1}), μ = c h, and ν = h a f(t_{n-1}, y(t_{n-1})). This yields

    k_2 = f + c h f_t + h a f f_y + (1/2) [(c h)^2 f_tt + 2 a c h^2 f f_ty + (h a)^2 f^2 f_yy] + O(h^3).

All arguments of f and its derivatives are (t_{n-1}, y(t_{n-1})). We have suppressed these to simplify writing the expression. Substituting the above expansion into (3.1.5a), while using (3.1.5b) with the exact ODE solution replacing y_{n-1}, yields

    y(t_n) = y(t_{n-1}) + h [b_1 f + b_2 (f + c h f_t + h a f f_y + O(h^2))].  (3.1.7)

Similarly, substituting (3.1.3) into (3.1.2), the Taylor's series expansion of the exact solution is

    y(t_n) = y(t_{n-1}) + h f + (h^2/2)(f_t + f f_y) + O(h^3).                 (3.1.8)

All that remains is a comparison of terms of the two expansions (3.1.7) and (3.1.8). The constant terms agree. The O(h) terms will agree provided that

    b_1 + b_2 = 1.                                                             (3.1.9a)

The O(h^2) terms of the two expansions will match if

    c b_2 = a b_2 = 1/2.                                                       (3.1.9b)

A simple analysis would reveal that higher-order terms in (3.1.7) and (3.1.8) cannot be matched. Thus, we have three equations (3.1.9) to determine the four parameters a, b_1, b_2, and c. Hence, there is a one-parameter family of methods, and we'll examine two specific choices.
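Any member of the one-parameter family can be exercised numerically. A minimal sketch (Python; names are ours), parameterizing (3.1.5) by b_2 with b_1 = 1 - b_2 and c = a = 1/(2 b_2) as required by (3.1.9):

```python
import math

def rk2_family(f, t0, y0, h, nsteps, b2):
    """A member of the one-parameter family (3.1.5) satisfying (3.1.9):
    b1 = 1 - b2 and c = a = 1/(2*b2)."""
    b1, c = 1.0 - b2, 1.0 / (2.0 * b2)
    t, y = t0, y0
    for _ in range(nsteps):
        k1 = f(t, y)
        k2 = f(t + c * h, y + c * h * k1)   # a = c
        y += h * (b1 * k1 + b2 * k2)
        t += h
    return y
```

On y' = y, y(0) = 1, every member of the family exhibits the same second-order convergence, since each reduces to the amplification factor 1 + h + h^2/2 on this linear problem.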

1. Select b_2 = 1; then a = c = 1/2 and b_1 = 0. Using (3.1.5), this Runge-Kutta formula is

    y_n = y_{n-1} + h k_2,                                                     (3.1.10a)

with

    k_1 = f(t_{n-1}, y_{n-1}),    k_2 = f(t_{n-1} + h/2, y_{n-1} + h k_1 / 2).   (3.1.10b)

Eliminating k_1 and k_2, we can write (3.1.10) as

    y_n = y_{n-1} + h f(t_{n-1} + h/2, y_{n-1} + h f(t_{n-1}, y_{n-1})/2),     (3.1.11a)

or

    ŷ_{n-1/2} = y_{n-1} + (h/2) f(t_{n-1}, y_{n-1}),                           (3.1.11b)
    y_n = y_{n-1} + h f(t_{n-1} + h/2, ŷ_{n-1/2}).                             (3.1.11c)

This is the midpoint rule integration formula that we discussed earlier. The circumflex on ŷ_{n-1/2} indicates that it is an intermediate rather than a final solution. As shown in Figure 3.1.1, we can regard the two-stage process (3.1.11b,c) as the result of two explicit Euler steps. The intermediate solution ŷ_{n-1/2} is computed at t_{n-1} + h/2 in the first (predictor) step, and this value is used to generate an approximate slope f(t_{n-1} + h/2, ŷ_{n-1/2}) for use in the second (corrector) Euler step. According to Gear [5], this method has been called the Euler-Cauchy, improved polygon, Heun, or modified Euler method. Since there seems to be some disagreement about its name, and because of its similarity to midpoint rule integration, we'll call it the midpoint rule predictor-corrector.

2. Select b_2 = 1/2; then a = c = 1 and b_1 = 1/2. According to (3.1.5), this Runge-Kutta formula is

    y_n = y_{n-1} + (h/2)(k_1 + k_2),                                          (3.1.12a)

with

    k_1 = f(t_{n-1}, y_{n-1}),    k_2 = f(t_{n-1} + h, y_{n-1} + h k_1).       (3.1.12b)

Again, eliminating k_1 and k_2,

    y_n = y_{n-1} + (h/2) [f(t_{n-1}, y_{n-1}) + f(t_n, y_{n-1} + h f(t_{n-1}, y_{n-1}))].   (3.1.13a)

This too can be written as a two-stage formula:

    ŷ_n = y_{n-1} + h f(t_{n-1}, y_{n-1}),                                     (3.1.13b)
    y_n = y_{n-1} + (h/2) [f(t_{n-1}, y_{n-1}) + f(t_n, ŷ_n)].                 (3.1.13c)

The formula (3.1.13a) is reminiscent of trapezoidal rule integration. The combined formula (3.1.13b,c) can, once again, be interpreted as a predictor-corrector method. Thus, as shown in Figure 3.1.2, the explicit Euler method is used to predict a solution at t_n, and the trapezoidal rule is used to correct it there. We'll call (3.1.12, 3.1.13) the trapezoidal rule predictor-corrector; however, it is also known as the improved tangent, improved polygon, modified Euler, or Euler-Cauchy method [5].

Using the definition of consistency, we see that the Taylor's series method (3.1.4) and the Runge-Kutta methods (3.1.11) and (3.1.13) are consistent to order two, since their local errors are all O(h^3) (hence, their local discretization errors are O(h^2)).

Problems

1. Solve the IVP

    y' = f(t, y) = y - t^2 + 1,    y(0) = 1/2,

on 0 < t <= 2 using the explicit Euler method and the midpoint rule. Use several step sizes and compare the error at t = 2 as a function of the number of evaluations of f(t, y). The midpoint rule has twice the number of function evaluations of the Euler method but is higher order. Which method is preferred?
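An equal-work comparison of the sort the problem asks for can be sketched as follows (Python; names are ours, and the IVP is the one reconstructed above, whose exact solution is y(t) = (t+1)^2 - e^t/2):

```python
import math

def euler(f, t0, y0, h, nsteps):
    """Explicit Euler: one f-evaluation per step."""
    t, y = t0, y0
    for _ in range(nsteps):
        y += h * f(t, y)
        t += h
    return y

def midpoint_pc(f, t0, y0, h, nsteps):
    """Midpoint rule predictor-corrector (3.1.11b,c): two f-evaluations per step."""
    t, y = t0, y0
    for _ in range(nsteps):
        y_half = y + 0.5 * h * f(t, y)        # predictor: explicit Euler
        y += h * f(t + 0.5 * h, y_half)       # corrector: midpoint slope
        t += h
    return y

f = lambda t, y: y - t * t + 1
exact = 9.0 - 0.5 * math.exp(2.0)             # y(2) for y(0) = 1/2
# equal work: 200 evaluations of f for each method
err_euler = abs(euler(f, 0.0, 0.5, 0.01, 200) - exact)
err_mid = abs(midpoint_pc(f, 0.0, 0.5, 0.02, 100) - exact)
```

Even at twice the step size, the second-order method wins decisively for this smooth problem, which is the intended moral of the exercise.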

[Figure 3.1.1: Midpoint rule predictor-corrector (3.1.11b,c) for one time step.]

[Figure 3.1.2: Trapezoidal rule predictor-corrector (3.1.13b,c) for one time step.]

3.2 Explicit Runge-Kutta Methods

We would like to generalize the second-order Runge-Kutta formulas considered in Section 3.1 to higher order. As usual, we will apply them to the scalar IVP (3.1.1). Runge-Kutta methods belong to a class called one-step methods that only require information about the solution at time t_{n-1} to calculate it at t_n. This being the case, it's possible to write

them in the general form

    y_n = y_{n-1} + h Φ(t_{n-1}, y_{n-1}, h).                                  (3.2.1)

This representation is too abstract, and we'll typically consider an s-stage Runge-Kutta formula for the numerical solution of the IVP (3.1.1) in the form

    y_n = y_{n-1} + h Σ_{i=1}^s b_i k_i,                                       (3.2.2a)

where

    k_i = f(t_{n-1} + c_i h, y_{n-1} + h Σ_{j=1}^s a_ij k_j),    i = 1, ..., s.   (3.2.2b)

These formulas are conveniently expressed as a tableau, or "Butcher diagram,"

    c_1 | a_11  a_12  ...  a_1s
    c_2 | a_21  a_22  ...  a_2s
     .  |  .     .           .
    c_s | a_s1  a_s2  ...  a_ss
        | b_1   b_2   ...  b_s

or, more compactly, as

    c | A
      | b^T

We can also write (3.2.2) in the form

    y_n = y_{n-1} + h Σ_{i=1}^s b_i f(t_{n-1} + c_i h, Y_i),                   (3.2.3a)

where

    Y_i = y_{n-1} + h Σ_{j=1}^s a_ij f(t_{n-1} + c_j h, Y_j),    i = 1, ..., s.   (3.2.3b)

In this form, Y_i, i = 1, ..., s, are approximations of the solution at t = t_{n-1} + c_i h that typically do not have as high an order of accuracy as the final solution y_n. An explicit Runge-Kutta formula results when a_ij = 0 for j >= i. Historically, all Runge-Kutta formulas were explicit; however, implicit formulas are very useful for stiff systems and problems where solutions oscillate rapidly. We'll study explicit methods in this section and take up implicit methods in the next. Runge-Kutta formulas are derived in the same manner as the second-order methods of Section 3.1. Thus, we

1. expand the exact solution of the ODE in a Taylor's series about, e.g., t_{n-1},

2. substitute the exact solution of the ODE into the Runge-Kutta formula and expand the result in a Taylor's series about, e.g., t_{n-1}, and

3. match the two Taylor's series expansions to as high an order as possible.

The coefficients are usually not uniquely determined by this process; thus, there are families of methods having a given order. A Runge-Kutta method that is consistent to order k (or simply of order k) will match the terms of order h^k in both series. Clearly, the algebra involved in obtaining these formulas increases combinatorially with increasing order. A symbolic manipulation system, such as MAPLE or MATHEMATICA, can be used to reduce the complexity. Fortunately, the derivation is adequately demonstrated by the second-order methods presented in Section 3.1, and, for the most part, we will not need to present detailed derivations of higher-order methods.

There are three one-parameter families of three-stage, third-order explicit Runge-Kutta methods [3, 6]. However, the most popular explicit methods are of order four. Their tableau has the general form

    0   |
    c_2 | a_21
    c_3 | a_31  a_32
    c_4 | a_41  a_42  a_43
        | b_1   b_2   b_3   b_4

The Taylor's series produce eleven equations for the thirteen nonzero parameters listed above. The classical Runge-Kutta method has the following form:

    y_n = y_{n-1} + (h/6)(k_1 + 2 k_2 + 2 k_3 + k_4),                          (3.2.4a)

where

    k_1 = f(t_{n-1}, y_{n-1}),                                                 (3.2.4b)
    k_2 = f(t_{n-1} + h/2, y_{n-1} + h k_1 / 2),                               (3.2.4c)

    k_3 = f(t_{n-1} + h/2, y_{n-1} + h k_2 / 2),                               (3.2.4d)
    k_4 = f(t_{n-1} + h, y_{n-1} + h k_3).                                     (3.2.4e)

Some observations about this method follow.

1. The local error of (3.2.4) is O(h^5). In order to get an a priori estimate of the local error, we would have to subtract the two Taylor's series representations of the solution. This is very tedious and typically does not yield a useful result. Runge-Kutta methods do not yield simple a priori error estimates.

2. Four function evaluations are required per time step.

3. In the (unlikely) case when f is a function of t only, (3.2.4) reduces to

    y_n = y_{n-1} + (h/6) [f(t_{n-1}) + 4 f(t_{n-1/2}) + f(t_n)],

which is the same as Simpson's rule integration.

Our limited experience with Runge-Kutta methods would suggest that the number of function evaluations increases linearly with the order of the method. Unfortunately, Butcher [8] showed that this is not the case. Some key results are summarized in Table 3.2.1.

    Order, k        1  2  3  4  5  6  7  8
    Min. Fn. Evals  1  2  3  4  6  7  9  11

    Table 3.2.1: Minimum number of function evaluations for explicit Runge-Kutta methods of various orders.

The popularity of the four-stage, fourth-order Runge-Kutta methods is now clear. From Table 3.2.1, we see that a fifth-order Runge-Kutta method requires an additional two function evaluations per step. Additionally, Butcher [8] showed that, for s >= 5, an explicit s-stage Runge-Kutta method will have an order of at most s - 1.

Although Runge-Kutta formulas are tedious to derive, we can make a few general observations. An order-one formula must be exact when the solution of the ODE is a linear polynomial. Were this not true, it wouldn't annihilate the constant and linear

terms in a Taylor's series expansion of the exact ODE solution and, hence, could not have the requisite O(h^2) local error to be first-order accurate. Thus, the Runge-Kutta method should produce exact solutions of the differential equations y' = 0 and y' = 1. The constant-solution condition is satisfied identically by construction of the Runge-Kutta formulas. Using (3.2.3a), the latter (linear-solution) condition with y(t) = t and f(t, y) = 1 implies

    t_n = t_{n-1} + h Σ_{i=1}^s b_i,

or

    Σ_{i=1}^s b_i = 1.                                                         (3.2.5a)

If we also require the intermediate solutions Y_i to be first order, then the use of (3.2.3b) with Y_i = t_{n-1} + c_i h gives

    c_i = Σ_{j=1}^s a_ij,    i = 1, ..., s.                                    (3.2.5b)

This condition does not have to be satisfied for low-order Runge-Kutta methods [6]; however, its satisfaction simplifies the task of obtaining order conditions for higher-order methods. Methods that satisfy (3.2.5b) also treat autonomous and non-autonomous systems in a symmetric manner (Problem 1). We can continue this process to higher orders. Thus, the Runge-Kutta method will be of order p if it is exact when the differential equation and solution are

    y' = (t - t_{n-1})^{l-1},    y(t) = (t - t_{n-1})^l / l,    l = 1, ..., p.

(The use of t - t_{n-1} as a variable simplifies the algebraic manipulations.) Substituting these solutions into (3.2.3a) implies that

    h^l / l = h Σ_{i=1}^s b_i (c_i h)^{l-1},

or

    Σ_{i=1}^s b_i c_i^{l-1} = 1/l,    l = 1, ..., p.                           (3.2.5c)
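The conditions just derived can be checked in exact rational arithmetic for the classical four-stage method (3.2.4), and a convergence test confirms its fourth order. A minimal sketch (Python; names are ours):

```python
import math
from fractions import Fraction as Fr

# Classical four-stage tableau (3.2.4)
A = [[Fr(0), Fr(0), Fr(0), Fr(0)],
     [Fr(1, 2), Fr(0), Fr(0), Fr(0)],
     [Fr(0), Fr(1, 2), Fr(0), Fr(0)],
     [Fr(0), Fr(0), Fr(1), Fr(0)]]
b = [Fr(1, 6), Fr(1, 3), Fr(1, 3), Fr(1, 6)]
c = [Fr(0), Fr(1, 2), Fr(1, 2), Fr(1)]

# (3.2.5b): c_i equals the ith row sum of A
row_ok = all(c[i] == sum(A[i]) for i in range(4))
# (3.2.5c): sum_i b_i c_i^(l-1) = 1/l for l = 1, ..., 4
quad_ok = all(sum(bi * ci ** (l - 1) for bi, ci in zip(b, c)) == Fr(1, l)
              for l in range(1, 5))

def rk4(f, t0, y0, h, nsteps):
    """March (3.2.4) over nsteps steps of size h."""
    t, y = t0, y0
    for _ in range(nsteps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

e1 = abs(rk4(lambda t, y: y, 0.0, 1.0, 0.1, 10) - math.e)
e2 = abs(rk4(lambda t, y: y, 0.0, 1.0, 0.05, 20) - math.e)
# fourth order: halving h should divide the error by about 16
```

The exact-arithmetic checks avoid any floating-point ambiguity in verifying the algebraic conditions, while the run on y' = y exhibits the expected O(h^4) global convergence.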

Conditions (3.2.5c) are necessary for a method to be of order p, but they may not be sufficient. Note that there is no dependence on the coefficients a_ij, i, j = 1, ..., s, in formulas (3.2.5a,c). This is because our strategy of examining simple differential equations does not match all possible terms in a Taylor's series expansion of the solution. This, as noted, is a tedious operation. Butcher developed a method of simplifying the work by constructing rooted trees that present the order conditions in a graphical way. They are discussed in many texts (e.g., [, 6]); however, they are still complex, and we will not pursue them here. Instead, we'll develop additional necessary order conditions by considering the simple ODE

    y' = y.

Replacing f(t, y) in (3.2.3) by y yields

    y_n = y_{n-1} + h Σ_{i=1}^s b_i Y_i,
    Y_i = y_{n-1} + h Σ_{j=1}^s a_ij Y_j,    i = 1, ..., s.

It's simpler to use vector notation:

    y_n = y_{n-1} + h b^T Y,    Y = y_{n-1} l + h A Y,

where

    Y = [Y_1, Y_2, ..., Y_s]^T,                                                (3.2.6a)

    A = [ a_11  a_12  ...  a_1s
          a_21  a_22  ...  a_2s
           .     .           .
          a_s1  a_s2  ...  a_ss ],                                             (3.2.6b)

    l = [1, 1, ..., 1]^T,                                                      (3.2.6c)

and

    b = [b_1, b_2, ..., b_s]^T.                                                (3.2.6d)

Eliminating Y, we have

    Y = y_{n-1} (I - h A)^{-1} l

and

    y_n = y_{n-1} + h y_{n-1} b^T (I - h A)^{-1} l.

Assuming that y_{n-1} is exact, the exact solution of this test equation is y_n = e^h y_{n-1}. Expanding this solution and (I - h A)^{-1} in series,

    1 + h + ... + h^k/k! + ... = 1 + h b^T (I + h A + ... + h^k A^k + ...) l.

Equating like powers of h yields the order condition

    b^T A^{k-1} l = 1/k!,    k = 1, ..., p.                                    (3.2.7)

We recognize that this condition with k = 1 is identical to (3.2.5a). Letting c = [c_1, c_2, ..., c_s]^T, we may write (3.2.5c) with l = 2 in the form b^T c = 1/2. The vector form of (3.2.5b) is A l = c. Thus, b^T A l = 1/2, which is the same as (3.2.7) with k = 2. Beyond k = 2, the order conditions (3.2.5c) and (3.2.7) are independent.

Although conditions (3.2.5) and (3.2.7) are only necessary for a method to be of order p, they are sufficient in many cases. The actual number of conditions for a Runge-Kutta method of order p is presented in Table 3.2.2 [6]. These results assume that (3.2.5b) has been satisfied.

    Order, p       1  2  3  4  5   6   7   8
    No. of Conds.  1  2  4  8  17  37  85  200

    Table 3.2.2: The number of conditions for a Runge-Kutta method of order p [6].

Theorem 3.2.1. The necessary and sufficient conditions for a Runge-Kutta method (3.2.3) to be of second order are (3.2.5c), l = 1, 2, and (3.2.7), k = 2. If (3.2.5b) is satisfied, then (3.2.5c), l = 1, 2, are necessary and sufficient for second-order accuracy.

Proof. We require numerous Taylor's series expansions. To begin, we expand f(t_{n-1} + c_i h, Y_i) using (3.1.6) to obtain

    f(t_{n-1} + c_i h, Y_i) = f + f_t c_i h + f_y (Y_i - y(t_{n-1})) + (1/2) [f_tt (c_i h)^2 + 2 f_ty (c_i h)(Y_i - y(t_{n-1})) +

    f_yy (Y_i - y(t_{n-1}))^2 ] + O(h^3).

All arguments of f and its derivatives are at (t_{n-1}, y(t_{n-1})). They have been suppressed for simplicity. Substituting the exact ODE solution and the above expression into (3.2.3a) yields

    y(t_n) = y(t_{n-1}) + h Σ_{i=1}^s b_i [f + f_t c_i h + f_y (Y_i - y(t_{n-1})) + O(h^2)].

The expansion of Y_i - y(t_{n-1}) will, fortunately, only require the leading term; thus, using (3.2.3b),

    Y_i - y(t_{n-1}) = h Σ_{j=1}^s a_ij f + O(h^2).

Hence, we have

    y(t_n) = y(t_{n-1}) + h Σ_{i=1}^s b_i [f + f_t c_i h + h f f_y Σ_{j=1}^s a_ij + O(h^2)].

Equating terms of this series with the Taylor's series (3.1.8) of the exact solution yields (3.2.5c) with l = 1, (3.2.5c) with l = 2, and (3.2.7) with k = 2. We have demonstrated the equivalence of these conditions when (3.2.5b) is satisfied.

Remark. The results of Theorem 3.2.1 and conditions (3.2.5) and (3.2.7) apply to both explicit and implicit methods.

Let us conclude this section with a brief discussion of the absolute stability of explicit methods. We will present a more detailed analysis in Section 3.4; however, the present material will serve to motivate the need for implicit methods. Thus, consider an s-stage explicit Runge-Kutta method applied to the test equation

    y' = λ y.                                                                  (3.2.8)

Using (3.2.8) in (3.2.3), with the simplification that a_ij = 0, j >= i, for explicit methods, yields

    y_n = y_{n-1} + z Σ_{i=1}^s b_i Y_i = y_{n-1} + z b^T Y,                   (3.2.9a)

where

    Y_i = y_{n-1} + z Σ_{j=1}^{i-1} a_ij Y_j,    i = 1, ..., s,                (3.2.9b)

and

    z = λ h.                                                                   (3.2.9c)

The vector form of (3.2.9) is

    Y = y_{n-1} l + z A Y.                                                     (3.2.9d)

Using this to eliminate Y in (3.2.9a), we have

    y_n = y_{n-1} [1 + z b^T (I - z A)^{-1} l].

Expanding the inverse,

    y_n = y_{n-1} [1 + z b^T (I + z A + ... + z^k A^k + ...) l].

Using (3.2.7),

    y_n = R(z) y_{n-1},                                                        (3.2.10a)

where

    R(z) = 1 + z + z^2/2 + ... + z^p/p! + Σ_{j=p+1}^∞ z^j b^T A^{j-1} l.

The matrix A is strictly lower triangular for an s-stage explicit Runge-Kutta method; thus, A^{j-1} = 0 for j > s. Therefore,

    R(z) = 1 + z + z^2/2 + ... + z^p/p! + Σ_{j=p+1}^s z^j b^T A^{j-1} l.       (3.2.10b)

In particular, for explicit s-stage methods with p = s <= 4, we have

    R(z) = 1 + z + z^2/2 + ... + z^p/p!,    s = p <= 4.                        (3.2.10c)

The exact solution of the test equation (3.2.8) is

    y(t_n) = e^{λh} y(t_{n-1});

thus, as expected, a pth-order Runge-Kutta formula approximates a Taylor's series expansion of the exact solution through terms of order p. Using the definition of absolute stability and (3.2.10), the region of absolute stability of an explicit Runge-Kutta method is given by

    |R(z)| = |1 + z + z^2/2 + ... + z^p/p! + Σ_{j=p+1}^s z^j b^T A^{j-1} l| <= 1.   (3.2.11a)

In particular,

    |R(z)| = |1 + z + z^2/2 + ... + z^p/p!| <= 1,    s = p <= 4.               (3.2.11b)

Since no Runge-Kutta coefficients appear in (3.2.11b), we have the following interesting result.

Lemma 3.2.1. All p-stage explicit Runge-Kutta methods of order p <= 4 have the same region of absolute stability.

Since |e^{iθ}| = 1 for all θ, we can determine the boundary of the absolute-stability regions (3.2.11a,b) by solving the nonlinear equation

    R(z) = e^{iθ}.                                                             (3.2.12)

Clearly, (3.2.12) implies that |y_n / y_{n-1}| = 1. For p = 1 (i.e., for Euler's method), the boundary of the absolute-stability region is determined from

    1 + z = e^{iθ},

which can easily be recognized as the familiar unit circle centered at z = -1 + 0i. For real values of z, the intervals of absolute stability for methods with p = s <= 4 are shown in Table 3.2.3. Absolute stability regions for complex values of z are illustrated for the same methods in Figure 3.2.1. Methods are stable within the closed regions shown. The regions of absolute stability grow with increasing p. When p = 3, 4, they also extend slightly into the right half of the complex z-plane.

Problems

1. Instead of solving the IVP (3.1.1), many software systems treat an autonomous ODE y' = f(y). Non-autonomous ODEs can be written as autonomous systems

    Order, p   Interval of Absolute Stability
    1          (-2, 0)
    2          (-2, 0)
    3          (-2.51, 0)
    4          (-2.78, 0)

    Table 3.2.3: Intervals of absolute stability for p-stage explicit Runge-Kutta methods of order p = 1, 2, 3, 4.

[Figure 3.2.1: Regions of absolute stability for p-stage explicit Runge-Kutta methods of order p = 1, 2, 3, 4 (interiors of the smaller closed curves to the larger ones).]

by letting t be a dependent variable satisfying the ODE t' = 1. A Runge-Kutta method for an autonomous ODE can be obtained from, e.g., (3.2.3) by dropping the time terms, i.e.,

    y_n = y_{n-1} + h Σ_{i=1}^s b_i f(Y_i),

with

    Y_i = y_{n-1} + h Σ_{j=1}^s a_ij f(Y_j),    i = 1, ..., s.

The Runge-Kutta evaluation points c_i, i = 1, ..., s, do not appear in this form. Show that the Runge-Kutta formulas (3.2.3) and the one above will handle autonomous and non-autonomous systems in the same manner when (3.2.5b) is satisfied.

3.3 Implicit Runge-Kutta Methods

We'll begin this section with a negative result that will motivate the need for implicit methods.

Lemma 3.3.1. No explicit Runge-Kutta method can have an unbounded region of absolute stability.

Proof. Using (3.2.10), the region of absolute stability of an explicit Runge-Kutta method satisfies

    |y_n / y_{n-1}| = |R(z)| <= 1,    z = λ h,

where R(z) is a polynomial of degree s, the number of stages of the method. Since R(z) is a polynomial, |R(z)| -> ∞ as |z| -> ∞ and, thus, the stability region is bounded.

Hence, once again, we turn to implicit methods as a means of enlarging the region of absolute stability. Necessary order conditions for s-stage implicit Runge-Kutta methods are given by (3.2.5c, 3.2.7) (with sufficient conditions given in Hairer et al. [6]). A condition on the maximum possible order follows.

Theorem 3.3.1. The maximum order of an implicit s-stage Runge-Kutta method is 2s.

Proof. cf. Butcher [7].

The derivations of implicit Runge-Kutta methods follow those for explicit methods. We'll derive the simplest method and then give a few more examples.

Example 3.3.1. Consider the implicit one-stage method obtained from (3.2.3) with s = 1 as

    y_n = y_{n-1} + h b_1 f(t_{n-1} + c_1 h, Y_1),                             (3.3.1a)
    Y_1 = y_{n-1} + h a_11 f(t_{n-1} + c_1 h, Y_1).                            (3.3.1b)

To determine the coefficients c_1, b_1, and a_11, we substitute the exact ODE solution into (3.3.1a,b) and expand (3.3.1a) in a Taylor's series:

    y(t_n) = y(t_{n-1}) + h b_1 [f + c_1 h f_t + f_y (Y_1 - y(t_{n-1})) + O(h^2)],

where f := f(t_{n-1}, y(t_{n-1})), etc. Expanding (3.3.1b) in a Taylor's series and substituting the result into the above expression yields

    y(t_n) = y(t_{n-1}) + h b_1 [f + c_1 h f_t + h a_11 f f_y + O(h^2)].

Comparing the terms of the above series with the Taylor's series of the exact solution,

    y(t_n) = y(t_{n-1}) + h f + (h^2/2)(f_t + f f_y) + O(h^3),

yields

    b_1 = 1,    a_11 = c_1 = 1/2.

Substituting these coefficients into (3.3.1), we find the method to be an implicit midpoint rule:

    y_n = y_{n-1} + h f(t_{n-1} + h/2, Y_1),                                   (3.3.2a)
    Y_1 = y_{n-1} + (h/2) f(t_{n-1} + h/2, Y_1).                               (3.3.2b)

The tableau for this method is

    1/2 | 1/2
        |  1

The formula has similarities to the midpoint rule predictor-corrector (3.1.11); however, there are important differences. Here, the backward Euler method (rather than the forward Euler method) may be regarded as furnishing a predictor (3.3.2b), with the midpoint rule providing the corrector (3.3.2a). However, formulas (3.3.2a) and (3.3.2b) are coupled and must be solved simultaneously rather than sequentially.

Example 3.3.2. The two-stage method of maximal order four presented in the following tableau was developed by Hammer and Hollingsworth [8]:

    (3 - √3)/6 | 1/4             (3 - 2√3)/12
    (3 + √3)/6 | (3 + 2√3)/12    1/4
               | 1/2             1/2

This method is derived in Gear [5].

Example 3.3.3. Let us examine the region of absolute stability of the implicit midpoint rule (3.3.2). Thus, applying (3.3.2) to the test equation (3.2.8), we find

    Y_1 = y_{n-1} + (hλ/2) Y_1

and

    y_n = y_{n-1} + hλ Y_1.

Solving for Y_1,

    Y_1 = y_{n-1} / (1 - hλ/2),

and eliminating it in order to explicitly determine y_n,

    y_n = [(1 + hλ/2) / (1 - hλ/2)] y_{n-1}.

Thus, the region of absolute stability is interior to the curve

    (1 + z/2) / (1 - z/2) = e^{iθ},    z = λh.

Solving for z,

    z = 2(e^{iθ} - 1)/(e^{iθ} + 1) = 2(e^{iθ/2} - e^{-iθ/2})/(e^{iθ/2} + e^{-iθ/2}) = 2i tan(θ/2).

[Figure 3.3.1: Region of absolute stability for the implicit midpoint rule (3.3.2).]

Since z is imaginary, the boundary is the imaginary axis, and the implicit midpoint rule is absolutely stable in the entire left half of the complex z-plane (Figure 3.3.1).

Let us generalize the absolute stability analysis presented in Example 3.3.3 before considering additional methods. This analysis will be helpful since we will be interested in developing methods with very large regions of absolute stability. Thus, we apply the general method (3.2.3) to the test equation (3.2.8) to obtain

    y_n = y_{n-1} + z b^T Y,                                                   (3.3.3a)

where Y, l, A, and b are defined by (3.2.6), z = λh, and

    (I - z A) Y = y_{n-1} l.                                                   (3.3.3b)

Eliminating Y in (3.3.3a) by using (3.3.3b), we find

    y_n = R(z) y_{n-1},                                                        (3.3.4a)

where

    R(z) = 1 + z b^T (I - z A)^{-1} l.                                         (3.3.4b)

The region of absolute stability is the set of all complex z where |R(z)| <= 1. While R(z) is a polynomial for an explicit method, it is a rational function for an implicit method.
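Formula (3.3.4b) can be evaluated directly for any tableau. A minimal sketch (Python; names are ours, and the small dense solve assumes well-conditioned examples):

```python
def solve(M, rhs):
    """Solve M x = rhs for a small complex matrix by Gaussian elimination
    (no pivoting; adequate for the well-conditioned examples below)."""
    n = len(rhs)
    M = [row[:] for row in M]
    x = rhs[:]
    for k in range(n):
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= m * M[k][j]
            x[i] -= m * x[k]
    for i in range(n - 1, -1, -1):
        x[i] = (x[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def R_general(z, A, b):
    """Stability function R(z) = 1 + z b^T (I - zA)^{-1} l of (3.3.4b)."""
    n = len(b)
    M = [[(1.0 if i == j else 0.0) - z * A[i][j] for j in range(n)]
         for i in range(n)]
    Y = solve(M, [1.0] * n)
    return 1 + z * sum(b[i] * Y[i] for i in range(n))
```

For the implicit midpoint rule (A = [[1/2]], b = [1]) this reproduces (1 + z/2)/(1 - z/2), and for backward Euler (A = [[1]], b = [1]) it gives 1/(1 - z).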

Hence, the region of absolute stability can be unbounded. As shown in Section 3.2, a method of order p will satisfy

    R(z) = e^z + O(z^{p+1}).

Rational-function approximations of the exponential are called Pade approximations.

Definition 3.3.1. The (j, k) Pade approximation R_jk(z) is the maximum-order approximation of e^z having the form

    R_jk(z) = P_k(z) / Q_j(z) = (p_0 + p_1 z + ... + p_k z^k) / (q_0 + q_1 z + ... + q_j z^j),   (3.3.5a)

where P_k and Q_j have no common factors,

    Q_j(0) = q_0 = 1,                                                          (3.3.5b)

and

    R_jk(z) = e^z + O(z^{k+j+1}).                                              (3.3.5c)

With R_jk normalized by (3.3.5b), there are k + j + 1 undetermined parameters in (3.3.5a) that can be determined by matching the first k + j + 1 terms in the Taylor's series expansion of e^z. Thus, the error of the approximation should be O(z^{k+j+1}). Using (3.3.5c), we have

    Σ_{i=0}^{k+j} z^i / i! = [Σ_{i=0}^k p_i z^i] / [Σ_{i=0}^j q_i z^i] + O(z^{k+j+1}).   (3.3.6)

Equating the coefficients of like powers of z determines the parameters p_i, i = 0, ..., k, and q_i, i = 1, ..., j.

Example 3.3.4. Find the (1,1) Pade approximation of e^z. Setting j = 1 and k = 1 in (3.3.6) gives

    (1 + z + z^2/2)(1 + q_1 z) = p_0 + p_1 z + O(z^3).

Equating the coefficients of z^i, i = 0, 1, 2, gives

    p_0 = 1,    1 + q_1 = p_1,    1/2 + q_1 = 0.

Thus,

    p_0 = 1,    q_1 = -1/2,    p_1 = 1/2.

Using (3.3.5), the (1,1) Pade approximation is

    R_11(z) = (1 + z/2) / (1 - z/2).

Additionally, e^z = R_11(z) + O(z^3).

Some other Pade approximations are presented in Table 3.3.1. We recognize that the (0,1) approximation corresponds to Euler's method, the (1,0) approximation corresponds to the backward Euler method, and the (1,1) approximation corresponds to the midpoint rule. (The (1,1) approximation also corresponds to the trapezoidal rule.) Methods corresponding to the (s, s) diagonal Pade approximations are Butcher's maximum-order implicit Runge-Kutta methods (Theorem 3.3.1).

    j \ k   0                    1                               2
    0       1                    1 + z                           1 + z + z^2/2
    1       1/(1 - z)            (1 + z/2)/(1 - z/2)             (1 + 2z/3 + z^2/6)/(1 - z/3)
    2       1/(1 - z + z^2/2)    (1 + z/3)/(1 - 2z/3 + z^2/6)    (1 + z/2 + z^2/12)/(1 - z/2 + z^2/12)

    Table 3.3.1: Some Pade approximations of e^z.

Theorem 3.3.2. There is one and only one 2s-order, s-stage implicit Runge-Kutta formula, and it corresponds to the (s, s) Pade approximation.

Proof. cf. Butcher [7].

We'll be able to construct several implicit Runge-Kutta methods having unbounded absolute-stability regions. We'll want to characterize these methods according to their behavior as |z| -> ∞, and this requires some additional notions of stability.

Definition 3.3.2. A numerical method is A-stable if its region of absolute stability includes the entire left-half plane Re(hλ) <= 0.
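The orders of approximation claimed for the entries of Table 3.3.1 can be spot-checked numerically. A minimal sketch (Python; names are ours):

```python
import math

def R11(z):
    """(1,1) Pade approximation from Example 3.3.4."""
    return (1 + z / 2) / (1 - z / 2)

def R22(z):
    """(2,2) entry of Table 3.3.1."""
    return (1 + z / 2 + z * z / 12) / (1 - z / 2 + z * z / 12)

# e^z - R11 = O(z^3) and e^z - R22 = O(z^5), so halving z should divide
# the errors by about 8 and 32, respectively
e11 = [abs(R11(z) - math.exp(z)) for z in (0.1, 0.05)]
e22 = [abs(R22(z) - math.exp(z)) for z in (0.1, 0.05)]
```

The observed error ratios near 2^3 and 2^5 confirm the O(z^{k+j+1}) error of (3.3.5c) for these two entries.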

The relationship between A-stability and the Pade approximations is established by the following theorem.

Theorem 3.3.3. Methods that lead to a diagonal or one of the first two sub-diagonals of the Pade table for e^z are A-stable.

Proof. The proof appears in Ehle [3].

Without introducing additional properties of Pade approximations, we'll make some observations using the results of Table 3.3.1.

1. We have shown that the regions of absolute stability of the backward Euler method and the midpoint rule include the entire left half of the hλ plane; hence, they are A-stable.

2. The coefficients of the highest-order terms of P_s(z) and Q_s(z) are the same for diagonal Pade approximations R_ss(z); hence, |R_ss(z)| -> 1 as |z| -> ∞, and these methods are A-stable (Table 3.3.1).

3. For the sub-diagonal (1,0) and (2,1) Pade approximations, |R(z)| -> 0 as |z| -> ∞, and these methods will also be A-stable.

It is quite difficult to find high-order A-stable methods. Implicit Runge-Kutta methods provide the most viable approach. Examining Table 3.3.1, we see that we can introduce another stability notion.

Definition 3.3.3. A numerical method is L-stable if it is A-stable and if |R(z)| -> 0 as |z| -> ∞.

The backward Euler method and, more generally, methods corresponding to sub-diagonal Pade approximations in the first two bands are L-stable ([7], Section IV.4). L-stable methods are preferred for stiff problems where Re(λ) << 0, but methods where |R(z)| -> 1 are more suitable when Re(λ) <= 0 but |Im(λ)| >> 0, i.e., when solutions oscillate rapidly.

Explicit Runge-Kutta methods are easily solved, but implicit methods will require an iterative solution. Since implicit methods will generally be used for stiff systems,

Newton's method will be preferred to functional iteration. To emphasize the difficulty, we'll illustrate Runge-Kutta methods of the form (3.2.3) for vector IVPs

    y' = f(t, y),    y(0) = y_0,                                               (3.3.7)

where y, f, etc. are m-vectors. The application of (3.2.3) to vector systems just requires the use of vector arithmetic; thus,

    Y_i = y_{n-1} + h Σ_{j=1}^s a_ij f(t_{n-1} + c_j h, Y_j),    i = 1, ..., s,   (3.3.8a)
    y_n = y_{n-1} + h Σ_{i=1}^s b_i f(t_{n-1} + c_i h, Y_i).                   (3.3.8b)

Once again, y_n, etc. are m-vectors. To use Newton's method, we write the nonlinear system (3.3.8a) in the form

    F_i(Y_1, Y_2, ..., Y_s) = Y_i - y_{n-1} - h Σ_{j=1}^s a_ij f(t_{n-1} + c_j h, Y_j) = 0,    i = 1, ..., s,   (3.3.9a)

and get

    [ I - h a_11 J_1^(ν)   -h a_12 J_2^(ν)   ...   -h a_1s J_s^(ν)  ] [ ΔY_1^(ν) ]      [ F_1^(ν) ]
    [ -h a_21 J_1^(ν)      I - h a_22 J_2^(ν) ...  -h a_2s J_s^(ν)  ] [ ΔY_2^(ν) ]  = - [ F_2^(ν) ]
    [    .                     .                       .            ] [    .     ]      [    .    ]
    [ -h a_s1 J_1^(ν)      -h a_s2 J_2^(ν)   ...  I - h a_ss J_s^(ν)] [ ΔY_s^(ν) ]      [ F_s^(ν) ]
                                                                               (3.3.9b)

    Y_i^(ν+1) = Y_i^(ν) + ΔY_i^(ν),    i = 1, ..., s,    ν = 0, 1, ...,        (3.3.9c)

where

    J_j^(ν) = f_y(t_{n-1} + c_j h, Y_j^(ν)),    F_j^(ν) = F_j(Y_1^(ν), Y_2^(ν), ..., Y_s^(ν)),    j = 1, ..., s.   (3.3.9d)

For an s-stage Runge-Kutta method applied to an m-dimensional system (3.3.7), the Jacobian in (3.3.9b) has dimension sm x sm. This will be expensive for high-order methods and high-dimensional ODEs and will only be competitive with, e.g., implicit

multistep methods (Chapter 5) under special conditions. Some simplifications are possible, and these can reduce the work. For example, we can approximate all of the Jacobians as

    J = f_y(t_{n-1}, y_{n-1}).                                                 (3.3.10a)

In this case, we can even shorten the notation by introducing the Kronecker or direct product of two matrices as

    A ⊗ J = [ a_11 J  a_12 J  ...  a_1s J
              a_21 J  a_22 J  ...  a_2s J
                .       .            .
              a_s1 J  a_s2 J  ...  a_ss J ].                                   (3.3.10b)

Then, (3.3.9b) can be written concisely as

    (I - h A ⊗ J) ΔY^(ν) = -F^(ν),                                             (3.3.10c)

where A was given by (3.2.6b) and

    ΔY^(ν) = [ΔY_1^(ν), ΔY_2^(ν), ..., ΔY_s^(ν)]^T,    F^(ν) = [F_1^(ν), F_2^(ν), ..., F_s^(ν)]^T.   (3.3.10d)

The approximation of the Jacobian does not change the accuracy of the computed solution, only the convergence rate of the iteration. As long as convergence remains good, the same Jacobian can be used for several time steps and need only be re-evaluated when convergence of the Newton iteration slows.

Even with this simplification, with m ranging into the thousands, the solution of (3.3.10) is clearly expensive, and other ways of reducing the computational cost are necessary. Diagonally implicit Runge-Kutta (DIRK) methods offer one possibility. A DIRK method is one where a_ij = 0, i < j, and at least one a_ii != 0, i, j = 1, ..., s. If, in addition, a_11 = a_22 = ... = a_ss = a, the technique is known as a singly diagonally implicit Runge-Kutta (SDIRK) method. Thus, the coefficient matrix of an SDIRK method has

the form

    A = [ a
          a_21  a
           .          .
          a_s1  ...  a_{s,s-1}  a ].                                           (3.3.11)

Thus, with the approximation (3.3.10a), the system Jacobian in (3.3.10c) is

    I - h A ⊗ J = [ I - h a J
                    -h a_21 J   I - h a J
                       .                     .
                    -h a_s1 J   ...         I - h a J ].

The Newton system (3.3.10c) is lower block triangular and can be solved by forward substitution. Thus, the first block of (3.3.10c) is solved for ΔY_1^(ν). Knowing Y_1, the second equation is solved for ΔY_2^(ν), etc. The Jacobian J is the same for all stages; thus, the diagonal blocks need only be factored once by Gaussian elimination, and forward and backward substitution may be used for each solution.

The implicit midpoint rule (3.3.2) is a one-stage, second-order DIRK method. We'll examine a two-stage DIRK method momentarily, but first we note that the maximum order of an s-stage DIRK method is s + 1 [].

Example 3.3.5. A two-stage DIRK formula has the tableau

    c_1 | a_11
    c_2 | a_21  a_22
        | b_1   b_2

and it could be of third order. According to Theorem 3.2.1, the conditions for second-order accuracy are (3.2.5c) with l = 1, 2 when (3.2.5b) is satisfied, i.e.,

    b_1 + b_2 = 1,    c_1 = a_11,    c_2 = a_21 + a_22,    b_1 c_1 + b_2 c_2 = 1/2.

(As noted earlier, satisfaction of (3.2.5b) is not necessary, but it simplifies the algebraic manipulations.) We might guess that the remaining conditions necessary for third-order accuracy are (3.2.5c) with l = 3 and (3.2.7) with k = 3, i.e.,

    b_1 c_1^2 + b_2 c_2^2 = 1/3

and

    b^T A^2 l = b^T A c = b_1 a_11 c_1 + b_2 (a_21 c_1 + a_22 c_2) = 1/6,

28 where (3..5b) was used to simplify the last expression. After some eort, this system of six equations in seven unknowns can be solved to yield c = ; 3c b = c ; = b = = ; c 3 ; 6c c ; c c ; c a = c a = =6 ; b c b (c ; c ) a = c ; a : As written, the solution is parameterized by c. Choosing c ==3 gives Using (3..3), the method is /3 /3 / / 3/4 /4 Y = y n; + h 3 f(t n; + h 3 Y ) Y = y n; + h [f(t n; + h 3 Y )+f(t n Y )] y n = y n; + h 4 [3f(t n; + h 3 Y )+f(t n Y )]: We can check by constructing a Taylor's series that this method is indeed third order. Hairer et al. [6], Section II., additionally show that our necessary conditions for thirdorder accuracy are also sucient in this case. The computation of Y can be recognized as the backward Euler method for one-third of the time step h. The computation of Y and y n are not recognizable in terms of simple quadrature rules. Since the method is third-order, its local error is O(h 4 ). We can also construct an SDIRK method by insisting that a = a. Enforcing this condition and using the previous relations gives two methods having the tableau where ; ; = = = ( p 3 ): The method with = ( + = p 3)= is A-stable while the other method has a bounded stability region. Thus, this would be the method of choice. 8
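The A-stable choice is easy to check numerically. Here is a minimal sketch, assuming the standard two-stage SDIRK tableau with γ = (3 + √3)/6 (c₁ = γ, c₂ = 1 − γ, a₂₁ = 1 − 2γ, a₁₁ = a₂₂ = γ, b₁ = b₂ = 1/2) and the linear test equation y' = λy, for which the implicit stages solve in closed form:

```python
import math

# Two-stage SDIRK with gamma = (3 + sqrt(3))/6 (an assumed, standard tableau):
#   gamma     | gamma
#   1 - gamma | 1 - 2*gamma   gamma
#   ----------+--------------------
#             |     1/2        1/2
gamma = (3.0 + math.sqrt(3.0)) / 6.0

def sdirk23_step(lam, y, h):
    """One step for y' = lam*y; the stages are solved exactly."""
    den = 1.0 - h * gamma * lam
    Y1 = y / den
    Y2 = (y + h * (1.0 - 2.0 * gamma) * lam * Y1) / den
    return y + 0.5 * h * lam * (Y1 + Y2)

def integrate(lam, h, T=1.0):
    y = 1.0
    for _ in range(int(round(T / h))):
        y = sdirk23_step(lam, y, h)
    return y

# The method is third order: halving h should cut the error by about 2^3 = 8.
e1 = abs(integrate(-1.0, 0.05) - math.exp(-1.0))
e2 = abs(integrate(-1.0, 0.025) - math.exp(-1.0))
```

A single step with hλ = −10⁶ stays bounded below one in magnitude, consistent with A-stability (though the amplification does not tend to zero, so the method is not L-stable).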

29 Let us conclude this Section by noting a relationship between implicit Runge-Kutta and collocation methods. With u(t) a polynomial of degree s in t for t t n;,acollocation method for the IVP y = f(t y) y(t n; )=y n; (3.3.a) consists of solving u(t n; )=y n; (3.3.b) u (t n; + c i h)=f(t n; + c i h u(t n; + c i h)) i = ::: s (3.3.c) where c i, i = ::: s, are non-negative parameters. Thus, the collocation method consists of satisfying the ODE exactly at s points. The solution u(t n; + h) maybeused as the initial condition y n for the next time step. Usually, the collocation points t n; + c i h are such that c i [ ], i = ::: s,but this need not be the case [6,, ]. Generally, the c i, i = ::: s, are distinct and we shall assume that this is the case here. (The coecients need not be distinct when the approximation u(t) interpolates some solution derivatives, e.g., as with Hermite interpolation.) t t n;, by a Lagrange interpolating polynomial of degree s ;, we have u (t) = j= Approximating u (t), k j L j ( t ; t n; ) (3.3.3a) h where L j () = sy i= i6=j ; c i c j ; c i (3.3.3b) = t ; t n; : (3.3.3c) h The polynomials L j (), j = ::: s, are a product of s ; linear factors and are, hence, of degree s ;. They satisfy L j (c i )= ji j i = ::: s (3.3.3d) 9

30 where ji is the Kronecker delta. Using (3.3.3a), we see that u (t) satises the interpolation conditions u (t n; + c i h)=k i i = ::: s: (3.3.3e) Transforming variables in (3.3.3a) using (3.3.3c) u(t n; + h)=y n; + h Z u (t n; + h)d: (3.3.4) By construction, (3.3.4) satises (3.3.b). Substituting (3.3.3e) and (3.3.4) into (3.3.c), we have k i = f(t n; + c i h y n; + h Z c i k j L j ()d): j= This formula is identical to the typical Runge-Kutta formula (3..b) provided that a ij = Z c i L j ()d: (3.3.5a) Similarly, using (3.3.3a) in (3.3.4) and evaluating the result at = yields u(t n; + h) =y n = y n; + h This formula is identical to (3..a) provided that Z k j L j ()d: j= b j = Z L j ()d: (3.3.5b) This view of a Runge-Kutta method as a collocation method is useful in many situations. Let us illustrate one result. Theorem A Runge-Kutta method with distinct c i, i = ::: s, and of order at least s is a collocation method satisfying (3.3.), (3.3.5) if and only if it satises the order conditions j= a ij c q; j = cq i i q = ::: s: (3.3.6) q Remark. The order conditions (3.3.5) are related to the previous conditions (3..5c, 3..7) (cf. [6], Section II.7). 3
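The identification (3.3.5a,b) is easy to exercise numerically. The sketch below (NumPy is an assumption; the two-stage Gauss points are used as input) integrates the Lagrange basis polynomials and recovers the classical two-stage Gauss tableau:

```python
import numpy as np

def collocation_tableau(c):
    """Runge-Kutta coefficients from collocation points c via (3.3.5):
    a_ij = int_0^{c_i} L_j(tau) dtau,   b_j = int_0^1 L_j(tau) dtau."""
    s = len(c)
    A = np.zeros((s, s))
    b = np.zeros(s)
    for j in range(s):
        others = [c[i] for i in range(s) if i != j]
        num = np.poly(others)                          # monic poly with roots c_i, i != j
        den = np.prod([c[j] - ci for ci in others])    # normalization: L_j(c_j) = 1
        P = np.polyint(num / den)                      # antiderivative with P(0) = 0
        b[j] = np.polyval(P, 1.0)
        for i in range(s):
            A[i, j] = np.polyval(P, c[i])
    return A, b

# Two-stage Gauss points (roots of the shifted Legendre polynomial).
s3 = np.sqrt(3.0)
c = [0.5 - s3/6, 0.5 + s3/6]
A, b = collocation_tableau(c)
```

For these points the routine reproduces b = (1/2, 1/2) and A = [[1/4, 1/4 − √3/6], [1/4 + √3/6, 1/4]], the Hammer-Hollingsworth (two-stage Gauss) tableau.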

31 Proof. We use the Lagrange interpolating polynomial (3.3.3) to represent any polynomial P () of degree s ; as P () = j= P (c j )L j (): Regarding P () as u (t n + h), integrate to obtain u(t n; + c i h) ; y n; = Z c i P ()d = P (c j ) Z c i j= L j ()d i = ::: s: Assuming that (3.3.5a) is satised, we have Z c i P ()d = j= a ij P (c j ) i = ::: s: Now choose P () = q;, q = ::: s, to obtain (3.3.6). The proof of the converse follows the same arguments (cf. [6], Section II.7). Now, we might ask if there is an optimal way of selecting the collocation points. Appropriate strategies would select them so that accuracy and/or stability are maximized. Let's handle accuracy rst. The following theorems discuss relevant accuracy issues. Theorem (Alekseev and Grobner) Let x, y, and z satisfy x (t z()) = f(t x(t z())) x( z()) = z() (3.3.7a) y (t) =f(t y(t)) y() = y (3.3.7b) with f y (t y) C, t>. Then, z (t) =f(t z(t)) + g(t z(t)) z() = y (3.3.7c) z(t) ; y(t) = Z z()) g( (3.3.7d) Remark. Formula (3.3.7d) is often called the nonlinear variation of parameters. Remark 3. The parameter identies the time that the initial conditions are applied in (3.3.7a). A prime, as usual, denotes t dierentiation. Remark 4. Observe that y(t) =x(t y ). 3

32 Proof. cf. Hairer et al. [6], Section I.4, and Problem. Theorem makes it easy for us to associate the collocation error with a quadrature error as indicated below. Theorem Consider the quadrature rule where Z t n tn; F (t)dt = h Z F (t n; + h)d = h i= b i F (t n; + c i h)+e p (3.3.8a) E p = Ch p+ F (p) ( n ) n (t n; t n ) (3.3.8b) F C p (t n; t n ),andc is a constant. Then the collocation method (3.3.) has order p. Proof. Consider the identity u = f(t u)+[u ; f(t u)] and use Theorem on [t n; t n ] with z(t) =u(t) andg(t u) =u ; f(t u) to obtain u(t n ) ; y(t n )= Z t n x u (t n u())[u () ; f( u())]d: tn; Replace this integral by the quadrature rule (3.3.8) to obtain u(t n ) ; y(t n )=h i= b i x u (t n t n; + c i h u(t n; + c i h))[u (t n; + c i h); f(t n; + c i h u(t n; + c i h))] + E p : All terms in the summation vanish upon use of the collocation equations (3.3.) thus, ju(t n ) ; y(t n )j = je p jjcjh p+ max [tn; p x u(t n u())[u () ; f( u)]j: It remains to show that the derivatives in the above expression are bounded as h!. We'll omit this detail which is proven in Hairer et al. [6], Section II.7. Thus, and the collocation method (3.3.) is of order p. jy(t n ) ; u(t n )j ^Ch p+ (3.3.9) 3

33 At last, our task is clear. We should select the collocation points c i, i = ::: s, to maximize the order p of the quadrature rule (3.3.8). We'll review some of the details describing the derivation of (3.3.8). Additional material appears in most elementary numerical analysis texts [4]. Let ^F () =F (t n; + h) and approximate it by a Lagrange interpolating polynomial of degree s ; to obtain ^F () = j= ^F (c j )L j ()+ M s() s! ^F (s) () ( ) (3.3.a) where M s () = sy i= ( ; c i ): (3.3.b) (Dierentiation in (3.3.a) is with respect to, not t.) Integrate (3.3.a) and use (3.3.5b) to obtain Z ^F ()d = j= b j ^F (cj )+ ^E s (3.3.a) where ^E s = s! Z M s () ^F (s) (())d = s! Z sy i= ( ; c i ) ^F (s) (())d: (3.3.b) In Newton-Cotes quadrature rules, such as the trapezoidal and Simpson's rules, the evaluation points c i, i = ::: s, are specied a priori. With Gaussian quadrature, however, the points are selected to maximize the order of the rule. This can be done by expanding ^F (s) (()) in a Taylor's series and selecting the c i, i = ::: s, to annihilate as many terms as possible. Alternatively, and equivalently, the quadrature rule can be designed to integrate polynomials exactly to as high a degree as possible. The actual series expansion is complicated by the fact that ^F (s) is evaluated at () in (3.3.b). Isaacson and Keller [9] provide additional details on this matter however, we'll sidestep the subtleties by assuming that all derivatives of () are bounded so that ^F (s) has an expansion in powers of of the form ^F (s) () = + + :::+ r; r; + O( r ): 33

d    P_d(x)
0    1
1    x
2    x^2 - 1/3
3    x^3 - 3x/5
4    x^4 - 6x^2/7 + 3/35
5    x^5 - 10x^3/9 + 5x/21

Table 3.3.: Legendre polynomials P_d(x) of degree d in [0, 5] on -1 <= x <= 1 (scaled to be monic).

The first r terms of this series will be annihilated by (3.3.b) if M_s(ξ) is orthogonal to polynomials of degree r - 1, i.e., if

    ∫_0^1 M_s(ξ) ξ^{q-1} dξ = 0,   q = 1, ..., r.   (3.3.)

Under these conditions, were we to transform the integrals in (3.3.) and (3.3.) back to t dependence using (3.3.3c), we would obtain the error of (3.3.8b) with p = s + r. With the s coefficients c_i, i = 1, ..., s, we would expect the maximum value of r to be s. According to Theorem 3.3.6, this choice would lead to a collocation method of order 2s, i.e., a method having p = r + s = 2s and an O(h^{2s+1}) local error. These are Butcher's maximal order formulas (Theorem 3.3.) corresponding to the diagonal Pade approximations.

The maximum-order coefficients identified above are the roots of the s-th-degree Legendre polynomial scaled to the interval (0, 1). The first six Legendre polynomials are listed in Table 3.3.. Additional polynomials and their roots appear in Abramowitz and Stegun [], Chapter .

Example. According to Table 3.3., the roots of P_2(x) are x = ±1/√3 on [-1, 1]. Mapping these to [0, 1] by the linear transformation ξ = (1 + x)/2, we obtain the collocation points for the maximal-order two-stage method as

    c_1 = (1/2)(1 - 1/√3),   c_2 = (1/2)(1 + 1/√3).
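The mapping from Legendre roots to collocation points is a one-liner; a sketch, assuming NumPy's `leggauss` for the roots of P_s:

```python
import numpy as np

def gauss_points(s):
    """Gauss collocation points: roots of the degree-s Legendre polynomial
    on [-1, 1], mapped to [0, 1] by xi = (1 + x)/2."""
    x, _ = np.polynomial.legendre.leggauss(s)
    return np.sort((1.0 + x) / 2.0)

c2 = gauss_points(2)   # the two-stage points of the example
c3 = gauss_points(3)   # three stages: the middle point is 1/2 by symmetry
```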

Since this is our first experience with these techniques, let us verify our results by a direct evaluation of (3.3.) using (3.3.b); thus,

    ∫_0^1 (ξ - c_1)(ξ - c_2) dξ = 1/3 - (c_1 + c_2)/2 + c_1 c_2 = 0.

Integrating

    ∫_0^1 ξ (ξ - c_1)(ξ - c_2) dξ = 1/4 - (c_1 + c_2)/3 + c_1 c_2 / 2 = 0.

These may easily be solved to confirm the collocation points obtained by using the roots of P_2(x). In this case, we recognize c_1 and c_2 as the evaluation points of the Hammer-Hollingsworth formula encountered earlier.

With the collocation points c_i, i = 1, ..., s, determined, the coefficients a_ij and b_j, i, j = 1, ..., s, may be determined from (3.3.5a,b). These maximal order collocation formulas are A-stable since they correspond to diagonal Pade approximations (Theorem 3.3.3).

We may not want to impose the maximal order conditions in order to obtain, e.g., better stability and computational properties. With Radau quadrature, we fix one of the coefficients at an endpoint; thus, we set either c_1 = 0 or c_s = 1. The choice c_1 = 0 leads to methods with bounded regions of absolute stability. Thus, the methods of choice have c_s = 1. They correspond to the subdiagonal Pade approximations and are, hence, A- and L-stable (Theorem 3.3.3). They have orders of p = 2s - 1 ([7], Section IV.5). Such excellent stability and accuracy properties make these methods very popular for solving stiff systems. The Radau polynomial of degree s on -1 <= x <= 1 is

    R_s(x) = P_s(x) - P_{s-1}(x).

The roots of R_s transformed to [0, 1] (using ξ = (1 + x)/2) are the c_i, i = 1, ..., s. All values of c_i, i = 1, ..., s, are on (0, 1] with, as designed, c_s = 1. The one-stage Radau method is the backward Euler method. The tableau of the two-stage Radau method is (Problem )

    1/3 | 5/12  -1/12
     1  |  3/4    1/4
    ----+------------
        |  3/4    1/4

We'll conclude this Section with a discussion of singly implicit Runge-Kutta (SIRK) methods. These methods are of order s, which is less than the Legendre (2s), Radau (2s - 1), and DIRK (s + 1) techniques. They still have excellent A- and L-stability properties and, perhaps, offer a computational advantage. A SIRK method is one where the coefficient matrix A has a single s-fold real eigenvalue λ. These collocation methods were originally developed by Butcher [9] and have been subsequently extended [5, , 6, ].

Collocating, as described, leads to the system (3.3.). The intermediate solutions Y_i, i = 1, ..., s, have the vector form specified by (3..9d) with the elements of A given by (3.3.5a). Multiplying (3..9d) by a nonsingular matrix T^{-1}, we obtain

    T^{-1} Y = y_{n-1} T^{-1} l + h T^{-1} A T T^{-1} f

where Y, l, A, and f are, respectively, given by (3..6a-c) and

    f = [f(t_{n-1} + c_1 h), f(t_{n-1} + c_2 h), ..., f(t_{n-1} + c_s h)]^T.   (3.3.3)

Let

    Ŷ = T^{-1} Y,   l̂ = T^{-1} l,   Â = T^{-1} A T,   f̂ = T^{-1} f.   (3.3.4)

Butcher [9] chose the collocation points c_i = λ ξ_i, i = 1, ..., s, where ξ_i is the i-th root of the s-th-degree Laguerre polynomial L_s(t) and λ is chosen so that the numerical method has favorable stability properties. Butcher also selected T to have elements

    T_ij = L_{i-1}(ξ_j).

Then Â takes the simple form (3.3.5).

Thus, Â is lower bidiagonal with the single eigenvalue λ. The linearized system (3.3.9) is easily solved in the transformed variables. (A similar transformation also works with Radau methods [7].) Butcher [9] and Burrage [5] show that it is possible to find A-stable SIRK methods for s <= 8. These methods are also L-stable with the exception of the seven-stage method.

Problems

1. Verify that (3.3.7d) is correct when f(t, y) = ay with a a constant.

2. Consider the method

    y_n = y_{n-1} + h[(1 - θ) f(t_{n-1}, y_{n-1}) + θ f(t_n, y_n)]

with θ in [0, 1]. The method corresponds to the Euler method when θ = 0, the trapezoidal rule when θ = 1/2, and the backward Euler method when θ = 1.

2.1. Write the Runge-Kutta tableau for this method.

2.2. For what values of θ is the method A-stable? Justify your answer.

3. Radau and Lobatto quadrature rules have evaluation points at one or both endpoints of the interval of integration, respectively. Consider the two two-stage Runge-Kutta methods based on collocation at Radau points. In one, the collocation point c_1 = 0 and in the other the collocation point c_2 = 1. In each case, the other collocation point (c_2 for the first method and c_1 for the second method) is to be determined so that the resulting method has as high an order of accuracy as possible.

3.1. Determine the parameters a_ij, b_j, and c_i, i, j = 1, 2, for the two collocation methods and identify their orders of accuracy.

3.2. To which elements of the Pade table do these methods correspond?

3.3. Determine the regions of absolute stability of these methods. Are the methods A- and/or L-stable?

38 3.4 Convergence, Stability, Error Estimation The concepts of convergence, stability, anda priori error estimation introduced in Chapter readily extend to a general class of (explicit or implicit) one-step methods having the form y n = y n; + h(t n; y n; h): (3.4.a) Again, consider the scalar IVP y = f(t y) y() = y (3.4.b) and, to begin, we'll show that one-step methods are stable when satises a Lipschitz condition on y. Theorem If (t y h) satises a Lipschitz condition on y then the one-step method (3.4.a) is stable. Proof. The analysis follows the lines of Theorem... Let y n and z n satisfy method (3.4.) and z n = z n; + h(t n; z n; h) z = y + (3.4.) respectively. Subtracting (3.4.) from (3.4.) y n ; z n = y n; ; z n; + h[(t n; y n; h) ; (t n; z n; h)]: Using the Lipschitz condition jy n ; z n j( + hl)jy n; ; z n; j: Iterating the above inequality leads to jy n ; z n j( + hl) n jy ; z j: Using (..) jy n ; z n je nhl j je LT k since nh T and j j. 38

39 Example The function satises a Lipschitz condition whenever f does. Consider, for example, the explicit midpoint rule which has the form of (3.4.a) with (t y h) =f(t + h= y+ hf(t y)=): Then, j(t y h) ; (t z h)j = jf(t + h= y+ hf(t y)=) ; f(t + h= z+ hf(t z)=)j Using the Lipschitz condition on f j(t y h) ; (t z h)j Ljy + hf(t y)= ; z ; hf(t z)=j or or j(t y h) ; (t z h)j L[jy ; zj +(h=)jf(t y) ; f(t z)j] j(t y h) ; (t z h)j L( + hl=)jy ; zj: Thus, we can take the Lipschitz constant for to be L( + ^hl=) for h ( ^h]. In addition to a Lipschitz condition, convergence of the one-step method (3.4.a) requires consistency. Recall (Denition..3), that consistency implies that the local discretization error lim h! n =. Consistency is particularly simple for a one-step method. Lemma The one-step method (3.4.a) is consistent with the ODE y = f(t y) if (t y ) = f(t y): (3.4.3) Proof. The local discretization error of (3.4.a) satises Letting h tend to zero n = y(t n) ; y(t n; ) h ; (t n; y(t n; ) h): lim n = y (t n; ) ; (t n; y(t n; ) ): h! Using the ODE to replace y yields the result. 39
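A small numerical sketch of the lemma and the example (the test problem y' = y cos t, with solution y = e^{sin t}, is an assumption chosen for illustration): for the explicit midpoint rule, Φ(t, y, 0) reduces to f(t, y) exactly, and consistency plus the Lipschitz condition yields the expected second-order convergence:

```python
import numpy as np

def f(t, y):
    return y * np.cos(t)            # assumed smooth test ODE: y' = y cos t

def phi(t, y, h):
    """Increment function of the explicit midpoint rule."""
    return f(t + h/2, y + 0.5 * h * f(t, y))

# Consistency (the lemma): Phi(t, y, 0) = f(t, y) identically.
consistency_gap = abs(phi(0.3, 2.0, 0.0) - f(0.3, 2.0))

# Consistency + stability => convergence; the midpoint rule is second order,
# so halving h should cut the global error by about 4.
def integrate(h, T=1.0):
    t, y = 0.0, 1.0
    for _ in range(int(round(T / h))):
        y += h * phi(t, y, h)
        t += h
    return y

exact = np.exp(np.sin(1.0))         # y(1) for y' = y cos t, y(0) = 1
e1 = abs(integrate(0.02) - exact)
e2 = abs(integrate(0.01) - exact)
```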

40 Theorem Let (t y h) be a continuous function of t, y, and h on t T, ; < y <, and h ^h, respectively, and satisfy a Lipschitz condition on y. Then the one-step method (3.4.a) converges to the solution of (3.4.b) if and only if it is consistent. Proof. Let z(t) satisfy the IVP z = (t z ) z() = y (3.4.4) and let z n, n, satisfy z n = z n; + h(t n; z n; h) n z = y : (3.4.5) Using the mean value theorem and (3.4.4) z(t n ) ; z(t n; )=hz (t n; + h n )=h(t n; + h n z(t n; + h n ) ) (3.4.6) where n ( ). Let e n = z(t n ) ; z n (3.4.7) and subtract (3.4.5) from (3.4.6) to obtain e n = e n; + h[(t n; + h n z(t n; + h n ) ) ; (t n; z n; h)]: Adding and subtracting similar terms e n = e n; + h[(t n; + h n z(t n; + h n ) ) ; (t n; z(t n; ) ) + (t n; z(t n; ) h) ; (t n; z n; h) + (t n; z(t n; ) ) ; (t n; z(t n; ) h)]: (3.4.8a) Using the Lipschitz condition j(t n; z(t n; ) h) ; (t n; z n; h)j Lje n j: (3.4.8b) Since (t y h) C, it is uniformly continuous on the compact set t [ T], y = z(t), h [ ^h] thus, (h) = max j(t n; z(t n; ) ) ; (t n; z(t n; ) h)j = O(h): t[ T ] (3.4.8c) 4

41 Similarly, (h) = max j(t n; + h n z(t n; + h n ) ) ; (t n; z(t n; ) )j = O(h): t[ T ] (3.4.8d) Substituting (3.4.8b,c,d) into (3.4.8a) je n jje n; j + h[lje n; j + (h)+(h)]: (3.4.9) Equation (3.4.9) is a rst order dierence inequality with constant (independent of n) coecients having the general form je n jaje n; j + B (3.4.a) where, in this case, A =+hl (3.4.b) B = h[(h)+(h)]: (3.4.c) The solution of (3.4.a) is je n ja n je j + A n ; B n : A ; Since e =,we have ( + hl) n ; je n j h[(h)+(h)] hl or, using (..) e LT ; je n j [(h)+(h)]: L Both (h) and (h) approach zero as h! therefore, lim z n = z(t n ): h! n! Nh=T Thus, z n converges to z(t n ), where z(t) is the solution of (3.4.4). If the one-step method satises the consistency condition (3.4.3), then z(t) =y(t). Thus, y n converges to y(t n ), n. This establishes suciency of the consistency condition for convergence. 4

42 In order to show that consistency is necessary for convergence, assume that the onestep method (3.4.a) converges to the solution of the IVP (3.4.b). Then, y n! y(t n ) for all t [ T] as h! and N!. Now, z n, dened by (3.4.5), is identical to y n, so z n must also converge to y(t n ). Additionally, we have proven that z n converges to the solution z(t) of the IVP (3.4.4). Uniqueness of the solutions of (3.4.4) and (3.4.b) imply that z(t) = y(t). This is impossible unless the consistency condition (3.4.3) is satised. Global error bounds for general one-step methods (3.4.) have the same form that we saw in Chapter for Euler's method. Thus, a method of order p will converge globally as O(h p ). Theorem Let satisfy the conditions of Theorem 3.4. and let the one-step method be of order p. Then, the global error e n = y(t n ) ; y n is bounded by je n j Chp L (elt ; ): (3.4.) Proof. Since the one-step method is of order p, there exists a positive constant C such that the local error d n satises jd n jch p+ : The remainder of the proof follows the lines of Theorem.... Prove Theorem Problems 3.5 Implementation: Error and Step Size Control We would like to design software that automatically adjusts the step size so that some measure of the error, ideally the global error, is less than a prescribed tolerance. While automatic variation of the step size is easy with one-step methods, it is very dicult to compute global error measures. A priori bounds, such as (3.4.), tend to be too conservative and, hence, use very small step sizes (cf. [6], Section II.3). Other more accurate procedures (cf. [5], pp. 3-4) tend to be computationally expensive. Controlling a 4

measure of the local (or local discretization) error, on the other hand, is fairly straightforward, and this is the approach that we shall study in this section.

A pseudo-code segment illustrating the structure of a one-step method

    y_n = y_{n-1} + h Φ(t_{n-1}, y_{n-1}, h)   (3.5.a)

that performs a single integration step of the vector IVP

    y' = f(t, y),   y(0) = y_0   (3.5.b)

is shown in Figure 3.5.. On input, y contains an approximation of the solution at time t. On output, t is replaced by t + h and y contains the computed approximate solution at t + h. The step size must be defined on input, but may be modified each time the computed error measure fails to satisfy the prescribed tolerance ε.

procedure onestep (f: vector function; ε: real; var t, h: real; var y: vector)
begin
    repeat
        Integrate (3.5.b) from t to t + h using (3.5.a)
        Compute error measure at t + h
        if error measure > ε then
            Calculate a new step size h
    until error measure <= ε
    t := t + h
    Suggest a step size h for the next step
end

Figure 3.5.: Pseudo-code segment of a one-step numerical method with error control and automatic step size adjustment.

In addition to supplying a one-step method, the procedure presented in Figure 3.5. will require routines to compute an error measure and to vary the step size. We'll concentrate on the error measure first.

Example. Let us calculate an estimate of the local discretization error of the midpoint rule predictor-corrector. We do this by subtracting the Taylor series expansion of the exact solution (3.., 3..3) from the expansion of the Runge-Kutta formula (3..7) with a_21 = c_2 = 1/2, b_1 = 0, and b_2 = 1. The result is

    d_n = (h^3/24) [(f_tt + 2 f f_ty + f^2 f_yy) + 4 f_y (f_t + f f_y)]_(t_{n-1}, y(t_{n-1})) + O(h^4).

Clearly this is too complicated to be used as a practical error estimation scheme.

Two practical approaches to estimating the local and local discretization errors of Runge-Kutta methods are (i) Richardson's extrapolation (or step doubling) and (ii) embedding. We'll study Richardson's extrapolation first. For simplicity, consider a scalar one-step method of order p having the following form and local error

    y_n = y_{n-1} + h Φ(t_{n-1}, y_{n-1}, h),   (3.5.a)

    d_n = C_n h^{p+1} + O(h^{p+2}).   (3.5.b)

The coefficient C_n may depend on t_{n-1} and y(t_{n-1}) but is independent of h. Typically, C_n is proportional to y^{(p+1)}(t_{n-1}). Of course, the ODE solution must have derivatives of order p + 1 for this formula to exist.

Let y_n^h be the solution obtained from (3.5.a) using a step size h. Calculate a second solution y_n^{h/2} at t = t_n using two steps with a step size h/2 and an "initial condition" of y_{n-1} at t_{n-1}. (We'll refer to the solution computed at t_{n-1/2} = t_{n-1} + h/2 as y_{n-1/2}^{h/2}.) Assuming that the error after two steps of size h/2 is twice that after one step (i.e., C_{n-1/2} ≈ C_n), the local errors of both solutions are

    y_n^h - y(t_n) = C_n h^{p+1} + O(h^{p+2})

and

    y_n^{h/2} - y(t_n) = 2 C_n (h/2)^{p+1} + O(h^{p+2}).

Subtracting the two solutions to eliminate the exact solution gives

    y_n^h - y_n^{h/2} = C_n h^{p+1} (1 - 2^{-p}) + O(h^{p+2}).

Neglecting the O(h^{p+2}) term, we estimate the local error in the solution of (3.5.a) as

    |d_n| ≈ |C_n| h^{p+1} = |y_n^h - y_n^{h/2}| / (1 - 2^{-p}).   (3.5.3a)

Computation of the error estimate requires 2s - 1 additional function evaluations (to compute y_{n-1/2}^{h/2} and y_n^{h/2}) for an s-stage Runge-Kutta method. If s ≈ p, this is approximately 2p extra function evaluations (for scalar problems). The cost for m-dimensional vector problems is approximately 2pm function evaluations per step. Richardson's extrapolation is particularly expensive when used with implicit methods because the change of step size requires another Jacobian evaluation and (possible) factorization. It may, however, be useful with DIRK methods because of their lower triangular coefficient matrices.

It's possible to estimate the error of the solution y_n^{h/2} as

    |d_n^{h/2}| ≈ 2 |C_n| (h/2)^{p+1} = |y_n^h - y_n^{h/2}| / (2^p - 1).   (3.5.3b)

Proceeding in this manner seems better than accepting y_n^h as the solution; however, it is a bit risky since we do not have an estimate of the error of the intermediate solution y_{n-1/2}^{h/2}.

Finally, the local error estimate (3.5.3a) or (3.5.3b) may be added to y_n^h or y_n^{h/2}, respectively, to obtain a higher-order method. For example, using (3.5.3b),

    y(t_n) = y_n^{h/2} + (y_n^{h/2} - y_n^h)/(2^p - 1) + O(h^{p+2}).

Thus, we could accept

    ŷ_n^{h/2} = y_n^{h/2} + (y_n^{h/2} - y_n^h)/(2^p - 1)

as an O(h^{p+2}) approximation of y(t_n). This technique, called local extrapolation, is also a bit risky since we do not have an error estimate of ŷ_n^{h/2}. We'll return to this topic in Chapter 4.

Embedding, the second popular means of estimating local (or local discretization) errors, involves using two one-step methods having different orders. Thus, consider calculating two solutions using the p-th and (p+1)-st order methods

    y_n^p = y_{n-1} + h Φ_p(t_{n-1}, y_{n-1}, h),   d_n^p = C_n^p h^{p+1}   (3.5.4a)

and

    y_n^{p+1} = y_{n-1} + h Φ_{p+1}(t_{n-1}, y_{n-1}, h),   d_n^{p+1} = C_n^{p+1} h^{p+2}.   (3.5.4b)

(The superscripts on y_n and d_n are added to distinguish solutions of different order.) The local error of the p-th order solution is

    |d_n^p| = |y_n^p - y(t_n)| = |y_n^p - y_n^{p+1} + y_n^{p+1} - y(t_n)|.
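The step-doubling estimate (3.5.3a) is easy to exercise. A sketch follows (the particulars are assumptions for illustration: the classical fourth-order Runge-Kutta method, p = 4, and the test equation y' = y, for which the true local error of one step from y = 1 is exp(h) minus the computed value):

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h*k1/2)
    k3 = f(t + h/2, y + h*k2/2)
    k4 = f(t + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

f = lambda t, y: y
p, h = 4, 0.1

y_h = rk4_step(f, 0.0, 1.0, h)            # one step of size h
y_mid = rk4_step(f, 0.0, 1.0, h/2)
y_half = rk4_step(f, h/2, y_mid, h/2)     # two steps of size h/2

estimate = abs(y_h - y_half) / (1.0 - 2.0**(-p))   # (3.5.3a)
true_err = abs(math.exp(h) - y_h)                  # actual local error of y_h
```

For this smooth problem the estimate tracks the true local error closely, as the asymptotic argument above predicts.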

Using the triangle inequality,

    |d_n^p| <= |y_n^p - y_n^{p+1}| + |y_n^{p+1} - y(t_n)|.

The last term on the right is the local error of the order p + 1 method (3.5.4b) and is O(h^{p+2}); thus,

    |d_n^p| <= |y_n^p - y_n^{p+1}| + |d_n^{p+1}|.

The higher-order error term on the right may be neglected to get an error estimate of the form

    |d_n^p| ≈ |y_n^p - y_n^{p+1}|.   (3.5.5)

Embedding, like Richardson's extrapolation, is also an expensive way of estimating errors. If the number of Runge-Kutta stages s ≈ p, then embedding requires approximately m(p + 1) additional function evaluations per step for a system of m ODEs. The number of function evaluations can be substantially reduced by embedding the p-th order method within an (s+1)-stage method of order p + 1. For explicit Runge-Kutta methods, the tableau of the (s+1)-stage method would have the form

    0       |
    c_2     | a_21
    c_3     | a_31      a_32
    ...     | ...
    c_{s+1} | a_{s+1,1} a_{s+1,2} ... a_{s+1,s}
    --------+-----------------------------------
            | b̂_1       b̂_2       ... b̂_s   b̂_{s+1}

(Zeros on and above the diagonal of A are not shown.) Assuming that the p-th order Runge-Kutta method has s stages, it would be required to have the form

    0   |
    c_2 | a_21
    c_3 | a_31  a_32
    ... | ...
    c_s | a_s1  a_s2  ...  a_{s,s-1}
    ----+----------------------------
        | b_1   b_2   ...  b_s

With this form, only one additional function evaluation is needed to estimate the error in the (lower) p-th order method. However, the derivation of such formula pairs is not simple since the order conditions are nonlinear. Additionally, it may be impossible to obtain a (p+1)-st order method by adding a single stage to an s-stage method. Formulas, nevertheless, exist.

Example. The forward Euler method is embedded in the trapezoidal rule predictor-corrector method. The tableaux for these methods are

    0 |          0 |
    --+---       1 | 1
      | 1        --+--------
                   | 1/2  1/2

These formulas are

    k_1 = f(t_{n-1}, y_{n-1}),
    k_2 = f(t_{n-1} + h, y_{n-1} + h k_1),
    y_n^1 = y_{n-1} + h k_1,
    y_n^2 = y_{n-1} + (h/2)(k_1 + k_2).

Example. There is a three-stage, second-order method embedded in the classical fourth-order Runge-Kutta method. Their tableaux are

    0   |                        0   |
    1/2 | 1/2                    1/2 | 1/2
    1/2 | 0    1/2               1/2 | 0    1/2
    1   | 0    0    1            ----+-----------
    ----+------------------          | 0    0    1
        | 1/6  1/3  1/3  1/6

These formulas are

    k_1 = f(t_{n-1}, y_{n-1}),
    k_2 = f(t_{n-1} + h/2, y_{n-1} + h k_1/2),
    k_3 = f(t_{n-1} + h/2, y_{n-1} + h k_2/2),
    k_4 = f(t_{n-1} + h, y_{n-1} + h k_3),
    y_n^2 = y_{n-1} + h k_3

    y_n^4 = y_{n-1} + (h/6)(k_1 + 2 k_2 + 2 k_3 + k_4).

Example. Fehlberg [4] constructed pairs of explicit Runge-Kutta formulas for non-stiff problems. His fourth- and fifth-order formula pair is

    0     |
    1/4   | 1/4
    3/8   | 3/32       9/32
    12/13 | 1932/2197  -7200/2197  7296/2197
    1     | 439/216    -8          3680/513    -845/4104
    1/2   | -8/27      2           -3544/2565  1859/4104    -11/40
    ------+--------------------------------------------------------------
          | 25/216     0           1408/2565   2197/4104    -1/5     0
      ^   | 16/135     0           6656/12825  28561/56430  -9/50    2/55

The ^ denotes the coefficients of the higher, fifth-order formula. Thus, after determining k_i, i = 1, ..., 6, the solutions are calculated as

    y_n^4 = y_{n-1} + h [ (25/216) k_1 + (1408/2565) k_3 + (2197/4104) k_4 - (1/5) k_5 ]

and

    y_n^5 = y_{n-1} + h [ (16/135) k_1 + (6656/12825) k_3 + (28561/56430) k_4 - (9/50) k_5 + (2/55) k_6 ].

Hairer et al. [6], Section II.4, give several Fehlberg formulas. Their fourth- and fifth-order pair is slightly different than the one presented here.

Example. Dormand and Prince [] develop another fourth- and fifth-order pair that has been designed to minimize the error coefficient of the higher-order method so that it may be used with local extrapolation. Its tableau follows.

49 ; ; ; ; ; ; ; ^ ; Having procedures for estimating local (or local discretization) errors, we need to develop practical methods of using them to control step sizes This will involve the selection of an appropriate (i) error measure, (ii) error test, and (iii) renement strategy. As indicated in Figure 3.5., we will concentrate on step changing algorithms without changing the order of the method. Techniques that automatically vary the order of the method with the step size are more dicult and are not generally used with Runge-Kutta methods (cf., however, Moore and Flaherty []). For vector IVPs (3.5.b), we will measure the \size" of the solution or error estimate by using avector norm. Many such metrics are possible. Some that suit our needs are. the maximum norm ky(t)k = max im jy i(t)j (3.5.6a). the L or sum norm ky(t)k = mx jy i (t)j (3.5.6b) i= 3. and the L or Euclidean norm " mx i= # = ky(t)k = jy i (t)j : (3.5.6c) 49

50 The two most common error tests are control of the absolute and relative errors. An absolute error test would specify that the chosen measure of the local error be less than a prescribed tolerance thus, k d ~ n k A where the ~ signies the local error estimate rather than the actual error. Using a relative error test, we would control the error measure relative tothe magnitude of the solution, e.g., k d ~ n k R ky n k: It is also common to base an error test on a combination of an absolute and a relative tolerance, i.e., k d ~ n k R ky n k + A : When some components of the solution are more important than others it may be appropriate to use a weighted norm with y i (t) in (3.5.6) replaced by y i (t)=w i, where w =[w w ::: w m ] T (3.5.7a) is a vector of positive weights. As an example, consider the weighted maximum norm of the local error estimate k ~ d n k w = max im ~d n i w i where ~ d n i denotes the local error estimate of the i th component ofd n. Use of a weighted test such as (3.5.7b) k ~ d n k w (3.5.7c) adds exibility to the software. Users may assign weights prior to the integration in proportion to the importance of a variable. The weighted norm may also be used to simulate a variety of standard tests. Thus, for example, an absolute error test would be obtained by setting w i =, i = ::: m, and = A. A mixed error test where the integration step is accepted if the local error estimate of the ithode does not exceed R jy n i j + A 5

51 may be specied by using the maximum norm and selecting =max( A R ) and Present Runge-Kutta software controls: w i =( R jy n i j + A )=:. the local error k ~ d n k w (3.5.8a). the local error perunitstep k ~ d n k w h (3.5.8b) 3. or the indirect (extrapolated) local error per unit step k ~ d n k w Ch (3.5.8c) where C is a constant depending on the method. The latter two formulas are attempts to control a measure of the global error. Let us describe a step size selection process for controlling the local error per unit step in a p th order Runge-Kutta method. Suppose that we have just completed an integration from t n; to t n. We have computed an estimate of the local error d ~ n using either Richardson's extrapolation or order embedding. We compare k d ~ n k w with the prescribed tolerance and. if k d ~ n k w >we reject the step and repeat the integration with a smaller step size,. otherwise we accept the step and suggest a step size for the subsequent step. In either case, k ~ d n k w h C n h p : 5

Ideally, we would like to compute a step size h_OPT so that

    C_n h_OPT^p = ε.

Eliminating the coefficient C_n between the two equations,

    (h_OPT / h)^p = ε h / ||d̃_n||_w

or

    h_OPT = h [ ε h / ||d̃_n||_w ]^{1/p}.   (3.5.9a)

The error estimates are based upon an asymptotic analysis and are, thus, not completely reliable. Therefore, it is best to include safety factors such as

    h_OPT = h min{ MAX, max[ MIN, s ( ε h / ||d̃_n||_w )^{1/p} ] }.   (3.5.9b)

The factors MAX and MIN limit the maximum step size increase and decrease, respectively, while s tends to make step size changes more conservative. Possible choices of the parameters are MAX = 5, MIN = 0.1, and s = 0.9. Step size control based on either (3.5.8a) or (3.5.8c) works similarly. In general, the user must also provide a maximum step size h_MAX so that the code does not miss interesting features in the solution.

Selection of the initial step size is typically left to the user. This can be somewhat problematical and several automatic initial step size procedures are under investigation. One automatic procedure that seems to be reasonably robust is to select the initial step size as

    h = [ ε / ( 1/T^p̄ + ||f(0, y(0))||^p̄ ) ]^{1/p̄}

where T is the final time and p̄ = p + 1 for local error control and p̄ = p for local error per unit step control.

Example ([6], Section II.4). We report results when several explicit fourth-order Runge-Kutta codes were applied to

    y_1' = 2 t y_1 log(max(y_2, 10^{-3})),    y_1(0) = 1,
    y_2' = -2 t y_2 log(max(y_1, 10^{-3})),   y_2(0) = e.
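The acceptance test and the safety-factored selection rule (3.5.9b) might be sketched as follows (a minimal sketch: the parameter values s = 0.9, MIN = 0.1, MAX = 5 follow the suggested choices, `err` denotes the weighted local error estimate and is assumed positive):

```python
def new_step(h, err, eps, p, safety=0.9, fmin=0.1, fmax=5.0):
    """Accept/reject a step and suggest the next step size when controlling
    the local error per unit step of a p-th order method.
    err is the (positive) weighted local error estimate ||d_n||_w."""
    accepted = err / h <= eps
    factor = safety * (eps * h / err) ** (1.0 / p)      # (3.5.9a) with safety s
    return h * min(fmax, max(fmin, factor)), accepted

# err exactly on the asymptotic target: only the safety factor acts.
h1, ok1 = new_step(0.1, 1e-4 * 0.1, 1e-4, 4)
# tiny err: growth is clipped at 5x; huge err: rejected and cut to 0.1x.
h2, ok2 = new_step(0.1, 1e-20, 1e-4, 4)
h3, ok3 = new_step(0.1, 1.0, 1e-4, 4)
```

Clipping the update between MIN and MAX keeps a single bad (or optimistic) error estimate from changing the step size drastically, which is the point of the safety factors in (3.5.9b).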

Table 3.5.: Butcher's seven-stage sixth-order explicit Runge-Kutta method.

The exact solution of this problem is

    y_1(t) = e^{sin t^2},   y_2(t) = e^{cos t^2}.

Hairer et al. [6] solved the problem on 0 <= t <= 5 using tolerances ranging from 10^{-7} to 10^{-3}. The results presented in Figure 3.5. compare the base-10 logarithms of the maximum global error and the number of function evaluations. The several methods that are not identified in Figure 3.5. are the more traditional formulas, including the classical Runge-Kutta method (solid line). All of these are listed in Hairer et al. [6], Section II.. "Fehlberg's method" is the fourth- and fifth-order pair given in the Fehlberg example above. The "Dormand-Prince" method is the fourth- and fifth-order pair of the Dormand-Prince example above. "Butcher's method" is the sixth-order seven-stage formula shown in Table 3.5.. It is the only formula that is beyond fourth or fifth order. Results in the lower graph of Figure 3.5. use local extrapolation; thus, the higher-order solution of the pair is kept, even though it has no local error estimate.

Of all the methods shown in Figure 3.5., the Dormand-Prince and Fehlberg methods appear to have the greatest accuracy for a given cost. The higher-order Butcher formula gains appeal as accuracy increases. The Dormand-Prince method has a distinct advantage relative to the Fehlberg method when local extrapolation is used. As noted, the Dormand-Prince method was designed for this purpose. For this problem, which has a smooth

Figure 3.5.: Accuracy vs. effort for several explicit Runge-Kutta methods [6].
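To make the test problem of this example concrete, here is a short fixed-step integration with the classical fourth-order Runge-Kutta method (the solid-line formula in the comparison). This is only a sketch, not one of the adaptive codes compared in the figure; it assumes the right-hand side is y₁′ = 2t y₁ log(max(y₂, 10⁻³)), y₂′ = −2t y₂ log(max(y₁, 10⁻³)), which is consistent with the stated exact solution y₁ = e^{sin t²}, y₂ = e^{cos t²}.

```python
import math

def f(t, y):
    """Right-hand side of the test problem ([6], Section II.4)."""
    y1, y2 = y
    return (2.0 * t * y1 * math.log(max(y2, 1e-3)),
            -2.0 * t * y2 * math.log(max(y1, 1e-3)))

def rk4_step(t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, tuple(yi + h / 2 * ki for yi, ki in zip(y, k1)))
    k3 = f(t + h / 2, tuple(yi + h / 2 * ki for yi, ki in zip(y, k2)))
    k4 = f(t + h, tuple(yi + h * ki for yi, ki in zip(y, k3)))
    return tuple(yi + h / 6 * (a + 2 * b + 2 * c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

# Integrate on 0 <= t <= 5 with a fixed step and compare against the
# exact solution y1 = exp(sin t^2), y2 = exp(cos t^2).
n, T = 5000, 5.0
h, t, y = T / n, 0.0, (1.0, math.e)
for _ in range(n):
    y = rk4_step(t, y, h)
    t += h
err = max(abs(y[0] - math.exp(math.sin(T * T))),
          abs(y[1] - math.exp(math.cos(T * T))))
```

The maximum global error err comes out far below the coarsest tolerances in the comparison; halving h should reduce it by roughly a factor of 16, consistent with fourth-order accuracy.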
