ASYMPTOTIC MEAN-SQUARE STABILITY OF TWO-STEP METHODS FOR STOCHASTIC ORDINARY DIFFERENTIAL EQUATIONS


ASYMPTOTIC MEAN-SQUARE STABILITY OF TWO-STEP METHODS FOR STOCHASTIC ORDINARY DIFFERENTIAL EQUATIONS

E. BUCKWAR¹, R. HORVÁTH-BOKOR² AND R. WINKLER¹

¹ Department of Mathematics, Humboldt-Universität zu Berlin, Unter den Linden, Berlin, Germany. ² Department of Mathematics and Computer Science, University of Veszprém, Egyetem utca, Veszprém, Hungary.

Abstract. We deal with linear multi-step methods for SDEs and study when the numerical approximation shares asymptotic properties, in the mean-square sense, of the exact solution. As in deterministic numerical analysis we use a linear time-invariant test equation and perform a linear stability analysis. Standard approaches, used either to analyse deterministic multi-step methods or stochastic one-step methods, do not carry over to stochastic multi-step schemes. In order to obtain sufficient conditions for asymptotic mean-square stability of stochastic linear two-step Maruyama methods we construct and apply Lyapunov-type functionals. In particular we study the asymptotic mean-square stability of stochastic counterparts of the two-step Adams-Bashforth and Adams-Moulton methods, the Milne-Simpson method and the BDF method.

AMS subject classification: 60H35, 65C30, 65L06, 65L20.

Key words: Stochastic linear two-step Maruyama methods, mean-square asymptotic stability, linear stability analysis, Lyapunov functionals.

1 Introduction

Our aim is to study when a numerical approximation generated by a stochastic linear two-step scheme shares asymptotic properties of the exact solution of an SDE of the form

(1.1)  dX(t) = f(t, X(t)) dt + G(t, X(t)) dW(t),  t ∈ J,  X(t_0) = X_0,

where J = [t_0, ∞), f : J × ℝ^n → ℝ^n, G : J × ℝ^n → ℝ^{n×m}. Later we also consider complex-valued functions f, G, X. The driving process W is an m-dimensional Wiener process on the given probability space (Ω, F, P) with a filtration (F_t)_{t∈J}.
The initial value X_0 is a given F_{t_0}-measurable random variable (it can of course be deterministic data), independent of the Wiener process and with finite second moments. (This research was supported by the DFG grant and the DFG Research Center "Mathematics for Key Technologies" in Berlin.) We assume that there exists a pathwise unique strong solution X = {X(t), t ∈ J} of Equation (1.1) and we indicate the
dependence of this solution upon the initial data by writing X(t) ≡ X(t; t_0, X_0). The numerical methods to be considered are generally drift-implicit linear two-step Maruyama methods with constant step-size h, which for (1.1) take the form

(1.2)  α_2 X_{i+1} + α_1 X_i + α_0 X_{i−1} = h[β_2 f(t_{i+1}, X_{i+1}) + β_1 f(t_i, X_i) + β_0 f(t_{i−1}, X_{i−1})] + γ_1 G(t_i, X_i) ΔW_i + γ_0 G(t_{i−1}, X_{i−1}) ΔW_{i−1},

for i = 1, 2, ..., where t_i = i·h, i = 0, 1, ..., and ΔW_i = W(t_{i+1}) − W(t_i). For normalization we set α_2 = 1. We require given initial values X_0, X_1 ∈ L²(Ω, ℝ^n) that are F_{t_0}- and F_{t_1}-measurable, respectively. We emphasize that an explicit discretization is used for the diffusion term. For β_2 = 0 the stochastic multi-step scheme (1.2) is explicit, otherwise it is drift-implicit. See also [3, 4, 7, 8, 9, 13, 14, 17, 18]. Given a reference solution X(t; t_0, X_0) of (1.1), the concept of asymptotic mean-square stability in the sense of Lyapunov concerns the question whether or not solutions X^{D_0}(t) = X(t; t_0, X_0 + D_0) of (1.1) exist and approach the reference solution as t tends to infinity. The distance between X(t) and X^{D_0}(t) is measured in the mean-square sense, i.e. in L²(Ω), and the terms D_0 ∈ L²(Ω) are small perturbations of the initial value X_0. Already in the deterministic case it is a difficult problem to decide, in general, when numerical approximations share the asymptotic stability properties of the exact solution. Including stochastic components in the problem does not simplify the analysis. In deterministic numerical analysis, the first step in this direction is a linear stability analysis, where one applies the method of interest to a linear test equation and discusses the asymptotic behaviour of the resulting recurrence equation (see e.g. [10]). Well-known notions like A-stability of numerical methods refer to the stability analysis of linear test equations.
In this article we would like to contribute to the linear stability analysis of stochastic numerical methods, and thus we choose as a test equation the linear scalar complex stochastic differential equation

(1.3)  dX(t) = λX(t) dt + µX(t) dW(t),  t ≥ 0,  X(0) = X_0,  λ, µ, X_0 ∈ ℂ,

with the complex geometric Brownian motion as exact solution. In complex arithmetic we denote by η̄ the complex conjugate of a complex scalar η ∈ ℂ. The method (1.2) applied to (1.3) takes the following form:

(1.4)  X_{i+1} + α_1 X_i + α_0 X_{i−1} = h[β_2 λ X_{i+1} + β_1 λ X_i + β_0 λ X_{i−1}] + γ_1 µ X_i ΔW_i + γ_0 µ X_{i−1} ΔW_{i−1},  i ≥ 1,

where α_2 = 1. Subsequently we will assume that the parameters in (1.4) are chosen such that the resulting scheme is mean-square convergent (see [8]). Then the coefficients of the stochastic terms have to satisfy γ_1 = α_2 = 1 and γ_0 = α_1 + α_2. We will in particular discuss stochastic versions of the explicit and implicit
Adams methods, i.e. the Adams-Bashforth and the Adams-Moulton method, respectively, the Milne-Simpson method and the BDF method. Section 2 contains definitions of the notions of stability discussed in this article. In Section 3 we discuss the difficulties that arise when standard approaches for performing a linear stability analysis, either for stochastic one-step methods or for deterministic multi-step schemes, are applied to stochastic multi-step methods. In Section 4 we give a Lyapunov-type theorem ensuring asymptotic mean-square stability of the zero solution of a class of stochastic difference equations under sufficient conditions on the parameters. The method of construction of Lyapunov functionals [15] is briefly sketched. Then we construct appropriate Lyapunov functionals in four different ways and thus obtain four sets of sufficient conditions on the parameters. In Section 5 results for stochastic linear two-step methods, in particular the explicit and implicit Adams methods, the Milne-Simpson method and the BDF method, are presented. Section 6 summarizes our findings and points out open problems.

2 Basic notions

We will be concerned with mean-square stability of the solution of Equation (1.1) with respect to perturbations D_0 in the initial data X_0. We here recall various definitions, which are based on those given in [11].

Definition 2.1. The reference solution X of the SDE (1.1) is termed

1. mean-square stable, if for each ε > 0 there exists a δ > 0 such that the solution X^{D_0}(t) exists for all t ≥ t_0 and E|X^{D_0}(t) − X(t)|² < ε whenever t ≥ t_0 and E|D_0|² < δ;

2. asymptotically mean-square stable, if it is mean-square stable and if there exists a δ > 0 such that E|X^{D_0}(t) − X(t)|² → 0 for t → ∞ whenever E|D_0|² < δ.

It is well known for which parameters λ, µ ∈ ℂ the solutions

(2.1)  X(t) = X_0 e^{(λ − ½µ²)t + µW(t)}

of the linear test equation (1.3) approach zero in the mean-square sense. The following result can be found e.g.
in [1, pp. ], [11, 12, 20, 21]. Its proof uses the fact that E e^{2µW(t) − 2µ²t} = 1.

Theorem 2.1. The zero solution of (1.3) is asymptotically mean-square stable if

(2.2)  Re(λ) < −½|µ|².
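The stability condition (2.2) can be checked directly from the closed-form second moment: from (2.1) one obtains E|X(t)|² = E|X_0|² e^{(2Re(λ)+|µ|²)t}. The following short sketch (our own illustration, not from the text; the function name is hypothetical) evaluates this moment and shows the decay/growth dichotomy.

```python
import math

def second_moment(t, lam, mu, ex0_sq=1.0):
    """E|X(t)|^2 for the test equation dX = lam*X dt + mu*X dW, using
    E|X(t)|^2 = E|X_0|^2 * exp((2*Re(lam) + |mu|^2) * t)."""
    return ex0_sq * math.exp((2.0 * lam.real + abs(mu) ** 2) * t)

# Stable: Re(lam) = -2 < -|mu|^2/2 = -0.5, so the moment decays toward 0.
print(second_moment(10.0, complex(-2.0, 1.0), 1.0))
# Unstable: Re(lam) = -0.3 > -0.5, so the moment grows.
print(second_moment(10.0, complex(-0.3, 0.0), 1.0))
```

At the boundary Re(λ) = −½|µ|² the exponent vanishes and the second moment stays constant, which is exactly the line y² = −2z discussed in Section 5.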
Now we formulate analogous definitions for the discrete recurrence equation (1.2) with solution {X_i}_{i=0}^∞. We denote by {X_i}_{i=0}^∞ = {X_i(X_0, X_1)}_{i=0}^∞ the reference solution and by {X_i^{D_0,D_1}}_{i=0}^∞ = {X_i(X_0 + D_0, X_1 + D_1)}_{i=0}^∞ a solution of (1.2) where the initial values have been perturbed.

Definition 2.2. The reference solution {X_i}_{i=0}^∞ of (1.2) is said to be

1. mean-square stable if, for each ε > 0, there exists a value δ > 0 such that, whenever E(|D_0|² + |D_1|²) < δ, E|X_i^{D_0,D_1} − X_i|² < ε for i = 1, 2, ...;

2. asymptotically mean-square stable if it is mean-square stable and if there exists a value δ > 0 such that, whenever E(|D_0|² + |D_1|²) < δ, E|X_i^{D_0,D_1} − X_i|² → 0 as i → ∞.

Recall that applying a convergent stochastic two-step method (1.2) to our test equation (1.3) results in the stochastic difference equation (1.4) with γ_1 = 1, γ_0 = 1 + α_1. To simplify our subsequent analysis we rewrite (1.4) as

(2.3)  X_{i+1} = a X_i + c X_{i−1} + b X_i ξ_i + d X_{i−1} ξ_{i−1},

(2.4)  a = (−α_1 + λh β_1)/(1 − λh β_2),  c = (−α_0 + λh β_0)/(1 − λh β_2),

(2.5)  b = µh^{1/2}/(1 − λh β_2),  d = µh^{1/2}(1 + α_1)/(1 − λh β_2),

where ξ_{i−1} = h^{−1/2} ΔW_{i−1} and ξ_i = h^{−1/2} ΔW_i are N(0, 1) Gaussian random variables, independent of each other. Obviously this recurrence equation admits the zero solution {X_i}_{i=0}^∞ = {0}_{i=0}^∞, which will be the reference solution in the subsequent stability analysis. We would like to add a remark concerning the choice of the linear test equation (1.3). The scalar linear test equation (1.3) is less significant for SDEs than the corresponding scalar linear test equation is for ODEs. Generally, it is not possible to decouple systems of linear equations of the form dX(t) = AX(t) dt + Σ_{r=1}^m B_r X(t) dW_r(t), where X is a vector-valued stochastic process, into scalar equations, since the eigenspaces of the matrices A, B_1, ..., B_m may differ.
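To make the reduction (2.3)-(2.5) concrete, the following sketch (our own illustration; real parameters assumed, function names hypothetical) computes a, c, b, d from the scheme coefficients with γ_1 = 1, γ_0 = 1 + α_1, and estimates E|X_n|² for the recurrence by Monte Carlo simulation.

```python
import random

def recurrence_coeffs(alpha1, alpha0, beta2, beta1, beta0, z, y):
    """Coefficients a, c, b, d of (2.3) from the scheme parameters and
    z = h*lambda, y = sqrt(h)*mu, following (2.4)-(2.5)."""
    den = 1.0 - z * beta2
    a = (-alpha1 + z * beta1) / den
    c = (-alpha0 + z * beta0) / den
    b = y / den
    d = y * (1.0 + alpha1) / den
    return a, c, b, d

def simulate_msq(a, c, b, d, n=200, paths=2000, seed=1):
    """Monte Carlo estimate of E|X_n|^2 for the recurrence
    X_{i+1} = a X_i + c X_{i-1} + b X_i xi_i + d X_{i-1} xi_{i-1},
    started at X_0 = X_1 = 1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(paths):
        xm, x = 1.0, 1.0            # X_{i-1}, X_i
        xim = rng.gauss(0.0, 1.0)   # xi_{i-1}
        for _ in range(n):
            xi = rng.gauss(0.0, 1.0)
            xm, x = x, a * x + c * xm + b * x * xi + d * xm * xim
            xim = xi
        total += x * x
    return total / paths
```

For instance, the two-step Adams-Bashforth scheme (α_1 = −1, α_0 = 0, β_2 = 0, β_1 = 3/2, β_0 = −1/2) with z = −0.5, y = 0.5 gives a = c = 0.25, b = 0.5, d = 0, and the simulated second moment decays rapidly to zero.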
The results for the scalar test equation are thus only significant for linear systems where the matrices A, B_1, ..., B_m are simultaneously diagonalizable with constant transformations. We refer to [19] for a linear stability analysis of one-step methods applied to
systems of linear equations. As a first step in the area of linear stability analysis for stochastic multi-step schemes we restrict our attention to scalar linear test equations (see Section 6).

3 Review of standard approaches for a linear stability analysis

Several approaches for a linear stability analysis exist, either for stochastic one-step methods or for deterministic multi-step methods. However, it turns out that these standard methods cannot easily be extended to the case of stochastic multi-step methods. We here describe the difficulties arising with the standard approaches.

3.1 The approach for stochastic linear one-step schemes

In [1] the author considers the stochastic θ-method and investigates its asymptotic mean-square stability properties. The method, applied to the test equation (1.3), has the form

X_{i+1} = X_i + θ hλ X_{i+1} + (1 − θ) hλ X_i + h^{1/2} µ X_i ξ_i,

where θ ∈ [0, 1] is a fixed parameter. Rewritten as a one-step recurrence it reads

(3.1)  X_{i+1} = (ã + b̃ ξ_i) X_i,  where  ã = (1 + (1 − θ)λh)/(1 − θλh),  b̃ = µh^{1/2}/(1 − θλh).

Squaring the modulus of both sides of Equation (3.1) and taking the expectation, one obtains an exact one-step recurrence for E|X_i|², i.e.,

(3.2)  E|X_{i+1}|² = (|ã|² + |b̃|²) E|X_i|²,

which allows a direct derivation of conditions for asymptotic mean-square stability of the zero solution. For comparison we now apply this approach to the stochastic multi-step method. Squaring the modulus of both sides of (2.3) yields

|X_{i+1}|² = |a + bξ_i|²|X_i|² + |c + dξ_{i−1}|²|X_{i−1}|² + 2Re{(a + bξ_i)X_i (c̄ + d̄ξ_{i−1})X̄_{i−1}},

and the last term is not so easily resolved. Either one resorts to inequalities, such as 2AB ≤ A² + B², or one follows the recurrence further down. The latter approach provides, for λ, µ ∈ ℝ, an exact recurrence of the form

E|X_{i+1}|² = (a² + b²) E|X_i|² + (c² + d²) E|X_{i−1}|² + 2a(ac + bd) Σ_{j=0}^{i−2} c^j E|X_{i−1−j}|² + 2a c^{i−1} E[(c + dξ_0)X_0 X_1].
In any case one does not immediately obtain conditions for asymptotic mean-square stability of the zero solution, as in the case of the one-step recurrence (3.2).
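The one-step case really is this simple: the zero solution of the θ-method recurrence is asymptotically mean-square stable precisely when the growth factor |ã|² + |b̃|² in (3.2) is below one. A minimal sketch (the function name is ours):

```python
def theta_msq_factor(theta, lam, mu, h):
    """One-step growth factor |a~|^2 + |b~|^2 of the exact recurrence
    E|X_{i+1}|^2 = (|a~|^2 + |b~|^2) E|X_i|^2 for the stochastic
    theta-method applied to the test equation (1.3)."""
    den = 1.0 - theta * h * lam
    a_t = (1.0 + (1.0 - theta) * h * lam) / den
    b_t = (h ** 0.5) * mu / den
    return abs(a_t) ** 2 + abs(b_t) ** 2
```

For example, with λ = −2, µ = 1, h = 0.1 the trapezoidal choice θ = 1/2 gives a factor below one (stable), while the explicit choice θ = 0 with λ = 0.1 gives a factor above one (unstable).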
3.2 The approach for deterministic two-step schemes

When the schemes (1.2) are applied to deterministic ordinary differential equations they reduce to well-known deterministic linear two-step schemes. With µ = 0 one recovers the deterministic linear test equation x′(t) = λx(t), t > 0, and obtains the recurrence (2.3) with b = d = 0, i.e.

(3.3)  X_{i+1} = a X_i + c X_{i−1},  i = 1, 2, ...,

with coefficients a, c from (2.4). The recurrence (3.3) may be considered as a scalar, linear, homogeneous difference equation with constant coefficients. Its (complex) solutions form a two-dimensional linear space spanned by

X_i = c_1 ζ_1^i + c_2 ζ_2^i  or  X_i = c_1 ζ_3^i + c_2 i ζ_3^i,  i = 0, 1, ...,

if either ζ_1, ζ_2 are the two distinct roots or ζ_3 is the double root of the characteristic polynomial

(3.4)  ψ(ζ) = ζ² − aζ − c

of the difference equation. Therefore, all solutions of the difference equation (3.3) approach zero for i → ∞ if and only if the roots of the characteristic polynomial (3.4) lie inside the unit circle of the complex plane. Equivalently, this result is obtained by reformulating the two-step recurrence (3.3) as a one-step recurrence in a two-dimensional space by setting

(3.5)  (X_{i+1}, X_i)ᵀ = A (X_i, X_{i−1})ᵀ,  i = 1, 2, ...,  where  A := [[a, c], [1, 0]],

and looking at the eigenvalues of the companion matrix A. These are exactly the roots of the characteristic polynomial (3.4). Again as a comparison we apply the above techniques to the stochastic recurrence (2.3). Writing (2.3) in a form analogous to (3.3) as

X_{i+1} = (a + b ξ_i)X_i + (c + d ξ_{i−1})X_{i−1},  i = 1, 2, ...,

it is obvious that the coefficients of this difference equation, or equivalently of the companion matrix when reformulating (2.3) analogously to (3.5), depend on the random values ξ_i, ξ_{i−1} and vary from step to step.
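The deterministic root condition is easy to check numerically. The sketch below (our own; the function name is hypothetical) computes the roots of ψ(ζ) = ζ² − aζ − c and tests whether both lie strictly inside the unit circle; for real a, c this is equivalent to the conditions |c| < 1 and |a| < 1 − c that reappear in (4.19).

```python
def companion_stable(a, c, tol=1e-12):
    """True iff both roots of psi(zeta) = zeta^2 - a*zeta - c lie
    strictly inside the unit circle, i.e. all solutions of the
    deterministic recurrence X_{i+1} = a X_i + c X_{i-1} tend to zero."""
    disc = complex(a * a + 4.0 * c) ** 0.5  # complex sqrt handles a^2 + 4c < 0
    r1 = (a + disc) / 2.0
    r2 = (a - disc) / 2.0
    return max(abs(r1), abs(r2)) < 1.0 - tol
```

Both real roots (e.g. a = c = 1/4) and complex conjugate roots (e.g. a = 0, c = −0.9) are covered by the same expression, since the discriminant is taken as a complex square root.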
One then faces the difficulty that a stability investigation necessarily depends on products of random step-dependent companion matrices Π_{i=1}^n [[a_i, c_i], [1, 0]], n = 1, 2, .... These products have to be bounded for all n to ensure stability. Of course, coefficients or companion matrices which vary from step to step also appear in stability investigations of deterministic variable-stepsize, variable-coefficient
multistep schemes. The stability analysis of these schemes is sophisticated. It makes use of the fact that the coefficients of the difference equation depend continuously on the step-sizes and on the ratios of the step-sizes, which are assumed to be bounded. Due to the random terms in the coefficients of the stochastic difference equation, one cannot rely on similar assumptions in the stochastic case.

4 Lyapunov functionals for stochastic difference equations

In this section we present an approach for performing a linear stability analysis of stochastic multi-step methods. With the aid of a Lyapunov-type theorem we obtain sufficient conditions on the parameters in (2.3) to ensure asymptotic mean-square stability of the zero solution of the recurrence equation. Related methods have been used in the case of stochastic delay differential equations in [2]. In [15] a general method is described to construct appropriate Lyapunov functionals, and in fact it is possible to construct different Lyapunov functionals giving different sets of sufficient conditions. We will present several constructions of appropriate Lyapunov functionals and the resulting conditions. We will subsequently discuss stability of the zero solution of (2.3) and thus of (1.4). Slightly abusing notation, the initial values X_0, X_1 of (1.4) will take the role of the perturbations D_0, D_1 in Definition 2.2. For ease of reading we also omit the superscript on the perturbed solution of the recurrence equation. The following theorem is taken from [15, Theorem 1], where it was stated and proved for the general case of asymptotic stability in the p-th mean, p > 0. As the proof is short, we repeat it in our notation for the convenience of the reader.

Theorem 4.1. Suppose X_i ≡ X_i(X_0, X_1) is a solution of (2.3) (and thus of (1.4)). Assume that there exist a positive real-valued functional V(i, X_{i−1}, X_i) and positive constants c_1 and c_2 such that
(4.1)  E V(1, X_0, X_1) ≤ c_1 max(E|X_0|², E|X_1|²),

(4.2)  E[V(i + 1, X_i, X_{i+1}) − V(i, X_{i−1}, X_i)] ≤ −c_2 E|X_i|²,

for all i ∈ ℕ, i ≥ 1. Then the zero solution of (2.3) (and thus of (1.4)) is asymptotically mean-square stable, that is,

(4.3)  lim_{i→∞} E|X_i|² = 0.

Proof. From condition (4.2) we obtain

E V(i + 1, X_i, X_{i+1}) − E V(1, X_0, X_1) = Σ_{j=1}^i E[V(j + 1, X_j, X_{j+1}) − V(j, X_{j−1}, X_j)] ≤ −c_2 Σ_{j=1}^i E|X_j|².

Thus

Σ_{j=1}^i E|X_j|² ≤ (1/c_2)(E V(1, X_0, X_1) − E V(i + 1, X_i, X_{i+1})).
An application of (4.1) yields

Σ_{j=1}^i E|X_j|² ≤ (1/c_2) E V(1, X_0, X_1) ≤ (c_1/c_2) max(E|X_0|², E|X_1|²),

and therefore

(4.4)  E|X_i|² ≤ (c_1/c_2) max(E|X_0|², E|X_1|²).

Now, for every δ_1 > 0 there exists δ = δ_1 c_2/c_1 such that E|X_i|² ≤ δ_1 whenever max(E|X_0|², E|X_1|²) < δ. In addition, it follows that Σ_{j=1}^∞ E|X_j|² < ∞. Hence lim_{j→∞} E|X_j|² = 0. Thus the zero solution of (2.3) (and thus of (1.4)) is asymptotically mean-square stable and the theorem is proved.

Remark 4.1. Theorem 4.1 can also be formulated for a Lyapunov functional V which additionally depends on random variables, V(i, X_{i−1}, X_i, ξ_{i−1}, ξ_i). The proof is identical.

Lyapunov-type theorems like Theorem 4.1 are strong results. The remaining problem is to find an appropriate Lyapunov function or functional to apply to specific problems and to obtain conditions on the problem parameters which can be checked easily. In [15] the authors develop a general method to construct Lyapunov functionals for stochastic difference equations and illustrate their method with examples. This method consists of constructing a Lyapunov functional V as the sum of functionals Ṽ + V̂, where one starts with a first good guess Ṽ and finds the second functional V̂ as a correction term. Sometimes several iterations of this process may be necessary. The main point is to obtain the first good guess Ṽ, for which the authors in [15] have also provided a formal procedure based on an auxiliary simplified difference equation and a Lyapunov functional v for that equation. We follow this procedure in our subsequent analysis. There are several possibilities for the choice of an auxiliary difference equation. One can distinguish between the auxiliary equation being a deterministic or stochastic one-step method (see Subsection 4.1, where we discuss three variants of a first functional Ṽ), or a deterministic two-step scheme (see Subsection 4.2).
4.1 The auxiliary difference equation as a one-step method

First guess: Ṽ(i, X_{i−1}, X_i) := |X_i|²

We start with the first guess

(4.5)  Ṽ(i, X_{i−1}, X_i) := |X_i|²,  i = 1, 2, ...,

which is a Lyapunov functional for the simplified recursion

(4.6)  X̃_{i+1} := a X̃_i,  X̃_0 = X_0,
as well as for

(4.7)  X̃_{i+1} := (a + bξ_i) X̃_i,  X̃_0 = X_0.

This first guess Ṽ satisfies the conditions of Theorem 4.1 for (4.6) or (4.7) if |a| < 1 or |a|² + |b|² < 1, respectively. Now we apply the functional Ṽ to the original recursion (2.3) and check condition (4.2). We compute, for i = 1, 2, ...,

E ΔṼ_i := E(Ṽ(i + 1, X_i, X_{i+1}) − Ṽ(i, X_{i−1}, X_i)) = E(|X_{i+1}|² − |X_i|²)
= E|aX_i + cX_{i−1} + bX_iξ_i + dX_{i−1}ξ_{i−1}|² − E|X_i|²
= E|aX_i + cX_{i−1}|² (=: Q_1) + E|bX_iξ_i + dX_{i−1}ξ_{i−1}|² (=: Q_2)
+ 2Re{E[(aX_i + cX_{i−1})(b̄X̄_iξ_i + d̄X̄_{i−1}ξ_{i−1})]} (=: Q_3) − E|X_i|²
= Q_1 + Q_2 + Q_3 − E|X_i|².

Estimating the individual terms we obtain

Q_1 = E|aX_i + cX_{i−1}|² = E[|a|²|X_i|² + |c|²|X_{i−1}|² + 2Re{aX_i c̄X̄_{i−1}}]
≤ E[|a|²|X_i|² + |c|²|X_{i−1}|² + 2|a||c||X_i||X_{i−1}|]
≤ E[|a|²|X_i|² + |c|²|X_{i−1}|² + |a||c|(|X_i|² + |X_{i−1}|²)]
= (|a| + |c|)(|a| E|X_i|² + |c| E|X_{i−1}|²),

Q_2 = E|bX_iξ_i + dX_{i−1}ξ_{i−1}|² = E[|b|²|X_i|²ξ_i² + |d|²|X_{i−1}|²ξ_{i−1}²] = |b|² E|X_i|² + |d|² E|X_{i−1}|²,

Q_3 = 2Re{E[aX_i d̄X̄_{i−1}ξ_{i−1}]} ≤ 2|a||d| E|X_i X_{i−1}ξ_{i−1}| ≤ |a||d|(E|X_i|² + E[|X_{i−1}|²ξ_{i−1}²]) = |a||d|(E|X_i|² + E|X_{i−1}|²).

Summarizing the terms we have

E ΔṼ_i = Q_1 + Q_2 + Q_3 − E|X_i|²
≤ ((|a| + |c|)|a| + |b|² + |a||d| − 1) E|X_i|² + ((|a| + |c|)|c| + |d|² + |a||d|) E|X_{i−1}|²
=: K E|X_i|² + M E|X_{i−1}|².

In the next step we add a correction V̂ to Ṽ to deal with the term M E|X_{i−1}|² on the right-hand side of the above inequality. This is done by setting

(4.8)  V̂(i, X_{i−1}, X_i) := M |X_{i−1}|²,
since then we have

E ΔV̂_i := E V̂(i + 1, X_i, X_{i+1}) − E V̂(i, X_{i−1}, X_i) = M E|X_i|² − M E|X_{i−1}|².

Altogether, for V := Ṽ + V̂ one obtains

(4.9)  E ΔV_i = E ΔṼ_i + E ΔV̂_i ≤ (K + M) E|X_i|².

Further, we check the initial condition (4.1) for V = Ṽ + V̂. This condition is always satisfied, due to

E V(1, X_0, X_1) = E|X_1|² + M E|X_0|² ≤ (1 + M) max(E|X_0|², E|X_1|²).

Hence V is a discrete Lyapunov functional for (2.3), satisfying conditions (4.1), (4.2), if K + M < 0, i.e.

(4.10)  (|a| + |c|)² + |b|² + |d|² + 2|a||d| < 1.  (Cond.1)

This condition is sufficient, but in general not necessary, to guarantee the asymptotic mean-square stability of (2.3).

First guess: Ṽ(i, X_{i−1}, X_i) := |X_i + c X_{i−1}|²

We write

(4.11)  X_{i+1} = a X_i + c X_{i−1} + b X_i ξ_i + d X_{i−1} ξ_{i−1} = (a + c)X_i − c X_i + c X_{i−1} + b X_i ξ_i + d X_{i−1} ξ_{i−1},

and distinguish

(4.12)  X̃_{i+1} := (a + c) X̃_i,  X̃_0 = X_0,

as the auxiliary difference equation, with the Lyapunov function v(y) = y² (if |a + c| < 1). According to [15] the first functional Ṽ must then be chosen in the form Ṽ(i, X_{i−1}, X_i) = |X_i + c X_{i−1}|². We apply this functional Ṽ to the original recursion (2.3) and check condition (4.2). Using the representation (4.11) we compute, for i = 1, 2, ...,

E ΔṼ = E[Ṽ(i + 1, X_i, X_{i+1}) − Ṽ(i, X_{i−1}, X_i)] = E[|X_{i+1} + cX_i|² − |X_i + cX_{i−1}|²]
= E[|(a + c)X_i + cX_{i−1} + bX_iξ_i + dX_{i−1}ξ_{i−1}|² − |X_i + cX_{i−1}|²]
= E[|(a + c)X_i + cX_{i−1}|² − |X_i + cX_{i−1}|²] (=: Q_4)
+ 2Re{E[((a + c)X_i + cX_{i−1})(b̄X̄_iξ_i + d̄X̄_{i−1}ξ_{i−1})]} (=: Q_5)
+ E|bX_iξ_i + dX_{i−1}ξ_{i−1}|² (=: Q_2).
The term Q_2 is estimated exactly as before. Similarly, we estimate

Q_4 = (|a + c|² − 1)E|X_i|² + 2Re E[(a + c − 1)X_i c̄X̄_{i−1}]
≤ (|a + c|² − 1)E|X_i|² + |a + c − 1||c|(E|X_i|² + E|X_{i−1}|²),

Q_5 = 2Re{E[(a + c)X_i d̄ξ_{i−1}X̄_{i−1}]} ≤ |a + c||d|(E|X_i|² + E|X_{i−1}|²).

Summarizing we arrive at

E ΔṼ = Q_4 + Q_2 + Q_5
≤ (|a + c|² − 1 + |a + c − 1||c| + |b|² + |a + c||d|) E|X_i|² + (|a + c||d| + |d|² + |a + c − 1||c|) E|X_{i−1}|²
=: K E|X_i|² + M E|X_{i−1}|².

The correction V̂ has to be taken again in the form (4.8), and then the same estimate (4.9) holds for V = Ṽ + V̂, where now

K + M = |a + c|² − 1 + 2|a + c − 1||c| + |b|² + |d|² + 2|a + c||d|.

Further, the initial condition (4.1) is satisfied for V = Ṽ + V̂, since

E V(1, X_0, X_1) = E[|X_1 + cX_0|² + M|X_0|²] ≤ (2(1 + |c|²) + M) max(E|X_1|², E|X_0|²).

Thus V = Ṽ + V̂ is a discrete Lyapunov functional for (2.3), satisfying conditions (4.1), (4.2), if K + M < 0, i.e. if

(4.13)  |a + c|² + |b|² + |d|² + 2|a + c||d| + 2|a + c − 1||c| < 1.  (Cond.2)

This condition is sufficient but not necessary to guarantee that the zero solution of (2.3) is asymptotically mean-square stable.

First guess: Ṽ(i, X_{i−1}, X_i, ξ_{i−1}, ξ_i) := |X_i + cX_{i−1} + dξ_{i−1}X_{i−1}|²

Now we write

(4.14)  X_{i+1} = a X_i + c X_{i−1} + b X_i ξ_i + d X_{i−1} ξ_{i−1} = (a + c)X_i − cX_i + cX_{i−1} + (b + d)X_i ξ_i − dX_i ξ_i + dX_{i−1} ξ_{i−1},

and distinguish

(4.15)  X̃_{i+1} := (a + c + (b + d) ξ_i) X̃_i,  X̃_0 = X_0,
as the auxiliary difference equation, with the Lyapunov function v(y) = y² (if |a + c|² + |b + d|² < 1). According to [15] the first functional Ṽ must then be chosen in the form Ṽ(i, X_{i−1}, X_i, ξ_{i−1}, ξ_i) = |X_i + cX_{i−1} + dξ_{i−1}X_{i−1}|². In this case the chosen Lyapunov functional Ṽ also depends on a random value (cf. Remark 4.1). We apply this functional Ṽ to the original recursion (2.3) and check condition (4.2). Using the representation (4.14) we compute, for i = 1, 2, ...,

E ΔṼ = E[Ṽ(i + 1, X_i, X_{i+1}, ξ_i, ξ_{i+1}) − Ṽ(i, X_{i−1}, X_i, ξ_{i−1}, ξ_i)]
= E[|X_{i+1} + (c + dξ_i)X_i|² − |X_i + (c + dξ_{i−1})X_{i−1}|²]
= E[|(a + c)X_i + cX_{i−1} + (b + d)X_iξ_i + dX_{i−1}ξ_{i−1}|² − |X_i + (c + dξ_{i−1})X_{i−1}|²]
= E[|(a + c)X_i + cX_{i−1}|² − |X_i + cX_{i−1}|²] (=: Q_4)
+ E|(b + d)X_iξ_i + dX_{i−1}ξ_{i−1}|² − E|dX_{i−1}ξ_{i−1}|² (=: Q_6)
+ 2Re E[((a + c)X_i + cX_{i−1})((b̄ + d̄)X̄_iξ_i + d̄X̄_{i−1}ξ_{i−1})] (=: Q_7)
− 2Re E[(X_i + cX_{i−1})(d̄X̄_{i−1}ξ_{i−1})] (=: Q_8).

The term Q_4 is estimated exactly as before. Similarly, we estimate

Q_6 = |b + d|² E|X_i|² + |d|² E|X_{i−1}|² − |d|² E|X_{i−1}|² = |b + d|² E|X_i|²,

Q_7 − Q_8 = 2Re{E[(a + c − 1)X_i d̄ξ_{i−1}X̄_{i−1}]} ≤ |a + c − 1||d|(E|X_i|² + E|X_{i−1}|²).

Together this yields

E ΔṼ = Q_4 + Q_6 + Q_7 − Q_8
≤ (|a + c|² − 1)E|X_i|² + |a + c − 1||c|(E|X_i|² + E|X_{i−1}|²) + |b + d|²E|X_i|² + |a + c − 1||d|(E|X_i|² + E|X_{i−1}|²)
= (|a + c|² − 1 + |b + d|² + (|c| + |d|)|a + c − 1|) E|X_i|² + (|c| + |d|)|a + c − 1| E|X_{i−1}|²
=: K E|X_i|² + M E|X_{i−1}|².

Again we take the correction V̂ in the form (4.8), although with the different constant M, such that (4.9) holds for V = Ṽ + V̂. Finally, we check the initial condition (4.1):

E V(1, X_0, X_1, ξ_0, ξ_1) = E[|X_1 + (c + dξ_0)X_0|² + M|X_0|²]
≤ 2E|X_1|² + 2E|(c + dξ_0)X_0|² + M E|X_0|²
≤ (2(1 + |c|² + |d|²) + M) max(E|X_1|², E|X_0|²).
We conclude that V = Ṽ + V̂ is a discrete Lyapunov functional for (2.3), satisfying conditions (4.1), (4.2), if K + M < 0, i.e., if

(4.16)  |a + c|² + |b + d|² + 2(|c| + |d|)|a + c − 1| < 1.  (Cond.3)

This condition is sufficient but not necessary to guarantee that the zero solution of (2.3) is asymptotically mean-square stable.

4.2 The auxiliary difference equation as a deterministic two-step method

In this section we consider the auxiliary difference equation in the form of a deterministic two-step scheme written as (3.5). To avoid the subsequent calculations becoming overly technical, we assume here that the parameters λ, µ of the considered test equation (1.3) are real-valued. We start with the deterministic part of the equation (2.3),

(4.17)  y_{i+1} = a y_i + c y_{i−1},

and, upon setting Y_i = (y_i, y_{i−1})ᵀ, rewrite equation (4.17) as a one-step recursion in ℝ²:

(4.18)  Y_{i+1} = A Y_i,  where  A = [[a, c], [1, 0]].

The next step is to determine a Lyapunov function v for the auxiliary problem (4.18). A function v : ℝ² → ℝ₊ is a Lyapunov function for (4.18) if its incremental values Δv_i := v(Y_{i+1}) − v(Y_i) satisfy Δv_i ≤ −c_0 ||Y_i||², where ||·|| is a norm on ℝ² and c_0 is a positive constant. The ansatz v(Y) = Yᵀ Q Y with a positive definite matrix Q yields positive values of v and

Δv_i = Y_{i+1}ᵀ Q Y_{i+1} − Y_iᵀ Q Y_i = Y_iᵀ [Aᵀ Q A − Q] Y_i,

such that v is a Lyapunov function if the matrix Aᵀ Q A − Q is negative definite. To find a matrix Q with these properties we start from an arbitrary positive definite matrix P and solve the Lyapunov matrix equation Aᵀ Q A − Q = −P. For simplicity we choose a diagonal matrix P = diag(p_11, p_22), where the positive parameters p_11, p_22 can be chosen arbitrarily. Then the elements of the matrix Q = (q_jk)_{j,k=1,2} can be calculated as (supposing that c ≠ 1, a² ≠ (1 − c)²)

(q_11, q_12) = p_ac (1 − c, ac),  q_22 = p_22 + c² q_11,  where  p_ac = (p_11 + p_22)/((1 + c)((1 − c)² − a²)).
If the conditions

(4.19)  |c| < 1  and  |a| < 1 − c

hold, the matrix Q is positive definite with q_11, q_22, p_ac > 0. Then we have a Lyapunov function v for the auxiliary problem (4.18), given as v(Y) = Yᵀ Q Y. Following [15], the first functional Ṽ must be chosen in the form Ṽ(i, X_{i−1}, X_i) = X_iᵀ Q X_i, where X_i = (X_i, X_{i−1})ᵀ.
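The closed-form solution of the Lyapunov matrix equation can be verified directly: substituting Q back into AᵀQA − Q must return −P. A small sketch (our own notation; function names hypothetical):

```python
def lyapunov_Q(a, c, p11, p22):
    """Solve A^T Q A - Q = -diag(p11, p22) for the companion matrix
    A = [[a, c], [1, 0]] via the closed-form expressions:
    p_ac = (p11 + p22) / ((1 + c)((1 - c)^2 - a^2)),
    q11 = p_ac*(1 - c), q12 = p_ac*a*c, q22 = p22 + c^2*q11."""
    p_ac = (p11 + p22) / ((1.0 + c) * ((1.0 - c) ** 2 - a * a))
    q11 = p_ac * (1.0 - c)
    q12 = p_ac * a * c
    q22 = p22 + c * c * q11
    return q11, q12, q22

def lyapunov_residual(a, c, p11, p22):
    """Entries (1,1), (1,2), (2,2) of A^T Q A - Q for the Q above;
    a correct solution gives (-p11, 0, -p22)."""
    q11, q12, q22 = lyapunov_Q(a, c, p11, p22)
    r11 = a * a * q11 + 2.0 * a * q12 + q22 - q11
    r12 = a * c * q11 + (c - 1.0) * q12
    r22 = c * c * q11 - q22
    return r11, r12, r22
```

For parameters satisfying (4.19), e.g. a = 0.5, c = 0.25, the residual is (−p_11, 0, −p_22) and the resulting Q is positive definite (q_11 > 0 and det Q > 0).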
After calculating E ΔṼ_i = E[X_{i+1}ᵀ Q X_{i+1} − X_iᵀ Q X_i], we can determine the correction functional V̂ and thus V = Ṽ + V̂. We obtain

E ΔṼ_i = −p_11 E X_i² − p_22 E X_{i−1}² + Q_9 + Q_10,

where

Q_9 = q_11 E(bX_iξ_i + dX_{i−1}ξ_{i−1})²,  Q_10 = 2(q_12 + q_11 a) E[X_i (dX_{i−1}ξ_{i−1})].

Estimating these terms as

Q_9 = p_ac (1 − c)(b² E X_i² + d² E X_{i−1}²),  Q_10 ≤ p_ac |ad| (E X_i² + E X_{i−1}²),

we obtain

(4.20)  E ΔṼ_i ≤ K E X_i² + M E X_{i−1}²,

(4.21)  K = −(p_11 − γ_1),  γ_1 = p_ac (b²(1 − c) + |ad|),  M = −(p_22 − γ_2),  γ_2 = p_ac (d²(1 − c) + |ad|).

The correction V̂ can again be taken in the form (4.8) with the above value of M, such that (4.9) holds for V = Ṽ + V̂ if K + M < 0. Finally, we check the initial condition (4.1). With X_1 = (X_1, X_0)ᵀ we obtain

V(1, X_0, X_1) = X_1ᵀ Q X_1 + M X_0² = q_11 X_1² + 2q_12 X_0 X_1 + q_22 X_0² + M X_0²
≤ q_11 X_1² + |q_12|(X_0² + X_1²) + q_22 X_0² + M X_0²
≤ (q_11 + 2|q_12| + q_22 + M) max(X_0², X_1²) = ((1 + c²) q_11 + 2|q_12| + γ_2) max(X_0², X_1²).

Thus condition (4.1) from Theorem 4.1 is satisfied with the positive constant (1 + c²) q_11 + 2|q_12| + γ_2. We conclude that V = Ṽ + V̂ is a discrete Lyapunov functional for (2.3), satisfying conditions (4.1), (4.2), if (4.19) and K + M < 0 hold. The last inequality means

γ_1 + γ_2 < p_11 + p_22  ⟺  (p_11 + p_22)(2|ad| + (b² + d²)(1 − c))/((1 + c)((1 − c)² − a²)) < p_11 + p_22
⟺  (2|ad| + (b² + d²)(1 − c))/((1 + c)((1 − c)² − a²)) < 1.

Summarizing, we get the following set of conditions:

(4.22)  |c| < 1 and |a| < 1 − c,  (2|ad| + (b² + d²)(1 − c))/((1 + c)((1 − c)² − a²)) < 1.  (Cond.4)

This condition is sufficient but not necessary to guarantee that the zero solution of (2.3) is asymptotically mean-square stable.
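The four sufficient conditions can be packaged as simple predicates on (a, b, c, d). The sketch below (our own illustration; real parameters assumed for Cond.4) implements (4.10), (4.13), (4.16) and (4.22).

```python
def cond1(a, b, c, d):
    """(Cond.1): (|a| + |c|)^2 + |b|^2 + |d|^2 + 2|a||d| < 1."""
    return (abs(a) + abs(c)) ** 2 + abs(b) ** 2 + abs(d) ** 2 + 2 * abs(a) * abs(d) < 1

def cond2(a, b, c, d):
    """(Cond.2): |a+c|^2 + |b|^2 + |d|^2 + 2|a+c||d| + 2|a+c-1||c| < 1."""
    s = abs(a + c)
    return s ** 2 + abs(b) ** 2 + abs(d) ** 2 + 2 * s * abs(d) + 2 * abs(a + c - 1) * abs(c) < 1

def cond3(a, b, c, d):
    """(Cond.3): |a+c|^2 + |b+d|^2 + 2(|c| + |d|)|a+c-1| < 1."""
    return abs(a + c) ** 2 + abs(b + d) ** 2 + 2 * (abs(c) + abs(d)) * abs(a + c - 1) < 1

def cond4(a, b, c, d):
    """(Cond.4), real parameters: |c| < 1, |a| < 1 - c, and
    (2|a*d| + (b^2 + d^2)(1 - c)) / ((1 + c)((1 - c)^2 - a^2)) < 1."""
    if not (abs(c) < 1 and abs(a) < 1 - c):
        return False
    num = 2 * abs(a * d) + (b * b + d * d) * (1 - c)
    den = (1 + c) * ((1 - c) ** 2 - a * a)
    return num / den < 1
```

As a quick sanity check, the Adams-Bashforth parameters for z = −0.5, y = 0.5 (a = c = 0.25, b = 0.5, d = 0) satisfy all four conditions, while a = 1, b = 1, c = d = 0 satisfies none of the assertions of Cond.1.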
5 Regions of guaranteed mean-square absolute stability

The analysis in Section 4 yields the sufficient conditions (4.10), (4.13), (4.16) and (4.22) for asymptotic mean-square stability of the zero solution of the recurrence (2.3) in terms of the parameters a, b, c, d. Using the relations (2.4) and (2.5), these conditions can be expressed in terms of the coefficients α_2, α_1, α_0, β_2, β_1, β_0 (and γ_1, γ_0) of the two-step schemes (1.2) and the parameters

z = hλ,  y = h^{1/2}µ,  z, y ∈ ℂ,  h > 0,

representing the parameters of the test equation and the applied step-size. To illustrate these results we discuss the explicit and implicit Adams methods, the Milne-Simpson method and the BDF method. In Table 5.1 we give the values of their coefficients; in Table 5.2 we give the resulting coefficients a, b, c, d of the recurrence (2.3) in terms of z = hλ, y = h^{1/2}µ.

Method            α_2   α_1    α_0   β_2    β_1    β_0     γ_1   γ_0
Adams-Bashforth   1     −1     0     0      3/2    −1/2    1     0
Adams-Moulton     1     −1     0     5/12   8/12   −1/12   1     0
Milne-Simpson     1     0      −1    1/3    4/3    1/3     1     1
BDF               1     −4/3   1/3   2/3    0      0       1     −1/3

Table 5.1: Table of coefficients of two-step schemes

Method            a                          c                          b                  d
Adams-Bashforth   1 + (3/2)z                 −(1/2)z                    y                  0
Adams-Moulton     (1 + (2/3)z)/(1 − (5/12)z) −(1/12)z/(1 − (5/12)z)     y/(1 − (5/12)z)    0
Milne-Simpson     (4/3)z/(1 − (1/3)z)        (1 + (1/3)z)/(1 − (1/3)z)  y/(1 − (1/3)z)     y/(1 − (1/3)z)
BDF               (4/3)/(1 − (2/3)z)         −(1/3)/(1 − (2/3)z)        y/(1 − (2/3)z)     −(1/3)y/(1 − (2/3)z)

Table 5.2: Table of the parameters a, c, b, d of two-step schemes

We restrict the subsequent discussion to the case of real-valued parameters λ, µ ∈ ℝ, resp. z, y ∈ ℝ, to be able to visualize stability regions. We will draw these
regions in the (z, y²)-plane. For given parameters λ, µ in equation (1.3), varying the step-size h in the numerical schemes corresponds to moving along a ray that passes through the origin and (λ, µ²). We compare the regions where the exact solution is asymptotically mean-square stable to those where the sufficient conditions for asymptotic mean-square stability of the schemes are fulfilled. From (2.2) we have that the exact solution (2.1) is asymptotically mean-square stable if λ < −½µ², or, upon multiplying by 2h and rearranging, if y² < −2z. The boundary of the region of asymptotic stability of the exact solution is thus given by the line y² = −2z. Each of the inequalities in the examples below is considered as a condition on y² in relation to z, and the borders of the resulting regions are given in Figures 1 to 4. For parameters (z, y²) = h(λ, µ²) below these lines the zero solutions of the test equation and of the numerical schemes, respectively, are asymptotically mean-square stable. For the test equation, condition (2.2) is sufficient and necessary; thus above the corresponding line the zero solution of the test equation is unstable. For the numerical schemes we have only sufficient conditions and therefore we cannot make statements about the regions above the corresponding lines.

Example 5.1. Sufficient conditions for the two-step Adams-Bashforth-Maruyama scheme: Inserting the expressions for the Adams-Bashforth-Maruyama scheme, a = 1 + (3/2)z, c = −(1/2)z, b = y, d = 0, from Table 5.2 into the sufficient condition (4.10) yields

Cond.1:  (|a| + |c|)² + |b|² + |d|² + 2|a||d| < 1  ⟺  (|1 + (3/2)z| + ½|z|)² + y² < 1  ⟺  y² < 1 − (|1 + (3/2)z| + ½|z|)².

Because d = 0 for the explicit Adams scheme, the sufficient condition (4.16) coincides with (4.13). We compute

Cond.2,3:  |a + c|² + |b|² + |d|² + 2|a + c||d| + 2|a + c − 1||c| < 1  ⟺  (1 + z)² + y² + z² < 1  ⟺  y² < 1 − (1 + z)² − z².

Finally, we compute for the sufficient condition (4.22)
Cond.4:  |c| < 1,  |a| < 1 − c,  (2|ad| + (b² + d²)(1 − c))/((1 + c)((1 − c)² − a²)) < 1
⟺  z ∈ (−1, 0),  y²(1 + ½z) < (1 − ½z)((1 + ½z)² − (1 + (3/2)z)²)
⟺  z ∈ (−1, 0),  y² < (z³ − z² − 2z)/(1 + ½z).
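The Cond.4 bound for the Adams-Bashforth-Maruyama scheme can be evaluated explicitly and compared with the exact stability boundary y² = −2z; as expected for a merely sufficient condition, the guaranteed region lies below the exact one. A sketch (the function name is ours):

```python
def ab_cond4_bound(z):
    """Right-hand side of the Cond.4 bound for the two-step
    Adams-Bashforth-Maruyama scheme: for z in (-1, 0) the zero solution
    is guaranteed asymptotically mean-square stable whenever
    y^2 < (z^3 - z^2 - 2z) / (1 + z/2)."""
    assert -1.0 < z < 0.0, "the Cond.4 root conditions restrict z to (-1, 0)"
    return (z ** 3 - z ** 2 - 2.0 * z) / (1.0 + z / 2.0)
```

For example, at z = −0.5 the bound gives y² < 5/6 ≈ 0.833, strictly below the exact boundary −2z = 1, so the guaranteed region is a proper subset of the exact mean-square stability region along this ray.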
More informationHow to Use Expert Advice
NICOLÒ CESABIANCHI Università di Milano, Milan, Italy YOAV FREUND AT&T Labs, Florham Park, New Jersey DAVID HAUSSLER AND DAVID P. HELMBOLD University of California, Santa Cruz, Santa Cruz, California
More informationLevel sets and extrema of random processes and fields. JeanMarc Azaïs Mario Wschebor
Level sets and extrema of random processes and fields. JeanMarc Azaïs Mario Wschebor Université de Toulouse UPS CNRS : Institut de Mathématiques Laboratoire de Statistiques et Probabilités 118 Route de
More informationTHE PROBLEM OF finding localized energy solutions
600 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 45, NO. 3, MARCH 1997 Sparse Signal Reconstruction from Limited Data Using FOCUSS: A Reweighted Minimum Norm Algorithm Irina F. Gorodnitsky, Member, IEEE,
More informationRegular Languages are Testable with a Constant Number of Queries
Regular Languages are Testable with a Constant Number of Queries Noga Alon Michael Krivelevich Ilan Newman Mario Szegedy Abstract We continue the study of combinatorial property testing, initiated by Goldreich,
More informationLecture notes on the Stefan problem
Lecture notes on the Stefan problem Daniele Andreucci Dipartimento di Metodi e Modelli Matematici Università di Roma La Sapienza via Antonio Scarpa 16 161 Roma, Italy andreucci@dmmm.uniroma1.it Introduction
More informationChapter 5. Banach Spaces
9 Chapter 5 Banach Spaces Many linear equations may be formulated in terms of a suitable linear operator acting on a Banach space. In this chapter, we study Banach spaces and linear operators acting on
More informationONEDIMENSIONAL RANDOM WALKS 1. SIMPLE RANDOM WALK
ONEDIMENSIONAL RANDOM WALKS 1. SIMPLE RANDOM WALK Definition 1. A random walk on the integers with step distribution F and initial state x is a sequence S n of random variables whose increments are independent,
More informationINTERIOR POINT POLYNOMIAL TIME METHODS IN CONVEX PROGRAMMING
GEORGIA INSTITUTE OF TECHNOLOGY SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING LECTURE NOTES INTERIOR POINT POLYNOMIAL TIME METHODS IN CONVEX PROGRAMMING ISYE 8813 Arkadi Nemirovski On sabbatical leave from
More informationA Course on Number Theory. Peter J. Cameron
A Course on Number Theory Peter J. Cameron ii Preface These are the notes of the course MTH6128, Number Theory, which I taught at Queen Mary, University of London, in the spring semester of 2009. There
More informationRECENTLY, there has been a great deal of interest in
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 47, NO. 1, JANUARY 1999 187 An Affine Scaling Methodology for Best Basis Selection Bhaskar D. Rao, Senior Member, IEEE, Kenneth KreutzDelgado, Senior Member,
More informationA Case Study in Approximate Linearization: The Acrobot Example
A Case Study in Approximate Linearization: The Acrobot Example Richard M. Murray Electronics Research Laboratory University of California Berkeley, CA 94720 John Hauser Department of EESystems University
More informationFast Greeks by algorithmic differentiation
The Journal of Computational Finance (3 35) Volume 14/Number 3, Spring 2011 Fast Greeks by algorithmic differentiation Luca Capriotti Quantitative Strategies, Investment Banking Division, Credit Suisse
More informationWMR Control Via Dynamic Feedback Linearization: Design, Implementation, and Experimental Validation
IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, VOL. 10, NO. 6, NOVEMBER 2002 835 WMR Control Via Dynamic Feedback Linearization: Design, Implementation, and Experimental Validation Giuseppe Oriolo, Member,
More informationThe Backpropagation Algorithm
7 The Backpropagation Algorithm 7. Learning as gradient descent We saw in the last chapter that multilayered networks are capable of computing a wider range of Boolean functions than networks with a single
More informationCOSAMP: ITERATIVE SIGNAL RECOVERY FROM INCOMPLETE AND INACCURATE SAMPLES
COSAMP: ITERATIVE SIGNAL RECOVERY FROM INCOMPLETE AND INACCURATE SAMPLES D NEEDELL AND J A TROPP Abstract Compressive sampling offers a new paradigm for acquiring signals that are compressible with respect
More informationMatthias Beck Gerald Marchesi Dennis Pixton Lucas Sabalka
Matthias Beck Gerald Marchesi Dennis Pixton Lucas Sabalka Version.5 Matthias Beck A First Course in Complex Analysis Version.5 Gerald Marchesi Department of Mathematics Department of Mathematical Sciences
More informationWHICH SCORING RULE MAXIMIZES CONDORCET EFFICIENCY? 1. Introduction
WHICH SCORING RULE MAXIMIZES CONDORCET EFFICIENCY? DAVIDE P. CERVONE, WILLIAM V. GEHRLEIN, AND WILLIAM S. ZWICKER Abstract. Consider an election in which each of the n voters casts a vote consisting of
More informationChoosing Multiple Parameters for Support Vector Machines
Machine Learning, 46, 131 159, 2002 c 2002 Kluwer Academic Publishers. Manufactured in The Netherlands. Choosing Multiple Parameters for Support Vector Machines OLIVIER CHAPELLE LIP6, Paris, France olivier.chapelle@lip6.fr
More informationAn Introduction to the NavierStokes InitialBoundary Value Problem
An Introduction to the NavierStokes InitialBoundary Value Problem Giovanni P. Galdi Department of Mechanical Engineering University of Pittsburgh, USA Rechts auf zwei hohen Felsen befinden sich Schlösser,
More informationAn Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral Operators 1
Journal of Research of the National Bureau of Standards Vol. 45, No. 4, October 95 Research Paper 233 An Iteration Method for the Solution of the Eigenvalue Problem of Linear Differential and Integral
More informationFeedback Control of a Nonholonomic Carlike Robot
Feedback Control of a Nonholonomic Carlike Robot A. De Luca G. Oriolo C. Samson This is the fourth chapter of the book: Robot Motion Planning and Control JeanPaul Laumond (Editor) Laboratoire d Analye
More information