Starting algorithms for partitioned Runge-Kutta methods: the pair Lobatto IIIA-IIIB


Monografías de la Real Academia de Ciencias de Zaragoza.

Starting algorithms for partitioned Runge-Kutta methods: the pair Lobatto IIIA-IIIB

I. Higueras and T. Roldán
Departamento de Matemática e Informática. Universidad Pública de Navarra. Pamplona, Spain.

Abstract

Partitioned Runge-Kutta (PRK) methods are suitable for the numerical integration of separable Hamiltonian systems. For implicit PRK methods, the computational effort may be dominated by the cost of solving the nonlinear systems. For these nonlinear systems, good starting values minimize both the risk of a failed iteration and the effort required for convergence. In this paper we consider the Lobatto IIIA-IIIB pair. Applying the theory developed by the authors for PRK methods, we obtain optimum predictors for this pair. Numerical experiments show the efficiency of these predictors.

Key words and expressions: Starting algorithms, stage value predictors, partitioned Runge-Kutta methods, Hamiltonian systems.

MSC: L, L, L8, H.

1 Introduction

We consider partitioned differential equations of the form

    y' = f(y, z),    y(x_0) = y_0,
    z' = g(y, z),    z(x_0) = z_0,                                    (1)

where f: R^l x R^m -> R^l and g: R^l x R^m -> R^m are sufficiently smooth functions. An example of this type of systems are Hamiltonian systems

    p' = - ∂H/∂q,                                                     (2)
    q' =   ∂H/∂p,                                                     (3)
where the Hamiltonian function H has the special structure H(p, q) = T(p) + V(q). In this case, the Hamiltonian system (2)-(3) is of the form

    p' = f(q),
    q' = g(p),

with f = -∇_q V and g = ∇_p T.

For the numerical integration of (1) we consider partitioned Runge-Kutta (PRK) methods. The idea of PRK methods is to take two different Runge-Kutta (RK) methods, (A, b) and (Â, b̂), and to treat the variables y with the first method, (A, b), and the variables z with the second one, (Â, b̂). In this way, the numerical solution from (t_n, y_n, z_n) to (t_{n+1}, y_{n+1}, z_{n+1}) with the s-stage PRK method (A, b, Â, b̂) is given by

    y_{n+1} = y_n + h (b^t ⊗ I_l) f(Y_{n+1}, Z_{n+1}),
    z_{n+1} = z_n + h (b̂^t ⊗ I_m) g(Y_{n+1}, Z_{n+1}),

where the internal stage vectors Y_{n+1}, Z_{n+1} are obtained from the system

    Y_{n+1} = e ⊗ y_n + h (A ⊗ I_l) f(Y_{n+1}, Z_{n+1}),              (4)
    Z_{n+1} = e ⊗ z_n + h (Â ⊗ I_m) g(Y_{n+1}, Z_{n+1}).              (5)

As usual, we have denoted by ⊗ the Kronecker product, and e = (1, ..., 1)^t.

For implicit PRK methods, the computational effort may be dominated by the cost of solving the nonlinear systems (4)-(5). These nonlinear systems are solved with an iterative method that requires starting values Y^(0)_{n+1}, Z^(0)_{n+1}, and it is well known that these values must be as accurate as possible: otherwise the number of iterations in each step may be too high or, even worse, the convergence may fail. A common way to proceed is to take as starting values the last computed numerical solution, i.e.,

    Y^(0)_{n+1} = e ⊗ y_n,    Z^(0)_{n+1} = e ⊗ z_n.

We will refer to these as the trivial predictors. However, it is also possible to involve the last computed internal stages, Y_n, Z_n, and the approximations y_n, z_n, and to consider starting algorithms of the form

    Y^(0)_{n+1} = b ⊗ y_n + (B ⊗ I_l) Y_n,                            (6)
    Z^(0)_{n+1} = c ⊗ z_n + (C ⊗ I_m) Z_n,                            (7)

where b, c ∈ R^s, and B and C are s x s matrices which have to be determined. Several kinds of starting algorithms for RK methods, also called stage value predictors, have been studied by different authors ([], [], [4], [7], [4], [3], [], [], [], []). The
starting algorithms considered for example in [], [] or [7] are of the form (6)-(7), whereas the ones considered for example in [], [4] or [] also involve function evaluations of the internal stages.

Although in this paper we only deal with Hamiltonian systems without restrictions (ODEs), we are also interested in Hamiltonian systems with restrictions (DAEs). For ODE systems we can use any of the stage value predictors mentioned in the previous paragraph, but this is not the case for DAEs: predictors involving function evaluations cannot be used for this type of systems ([], [8]). This is the reason why we use predictors of the form (6)-(7). These predictors have already been studied for PRK methods in [9].

In this paper we consider the PRK method Lobatto IIIA-IIIB, which has been proved to be a suitable one for Hamiltonian systems ([], [8]), and we construct good starting algorithms for it. This is done in Section 2. The numerical experiments of Section 3 show the efficiency of these starting algorithms.

2 Starting Algorithms for Lobatto IIIA-IIIB Methods

We consider the PRK methods (A, b, Â, b̂), where (A, b) is the Lobatto IIIA method and (Â, b̂) is the Lobatto IIIB method. We will refer to this PRK method as the Lobatto IIIA-IIIB method. Observe that these methods share the same nodes, that is to say, c = ĉ. Furthermore, the s-stage Lobatto IIIA method satisfies C(s), and thus

    A c^{q-1} = c^q / q,    q = 1, ..., s,

whereas the s-stage Lobatto IIIB method satisfies C(s-2), and therefore

    Â ĉ^{q-1} = ĉ^q / q,    q = 1, ..., s-2.

Figure 1: The pair Lobatto IIIA-IIIB for s = 3. [Figure not reproduced in this copy.]

Figure 2: The pair Lobatto IIIA-IIIB for s = 4. [Figure not reproduced in this copy.]
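These simplifying assumptions are easy to check numerically. The sketch below is our own check, using the standard 3-stage Lobatto IIIA and IIIB tableaux as tabulated in the literature (not taken from this paper's figures); it verifies C(3) for IIIA and C(1), but not C(2), for IIIB:

```python
import numpy as np

c = np.array([0.0, 0.5, 1.0])                 # shared nodes for s = 3
A = np.array([[0.0,  0.0,  0.0 ],
              [5/24, 1/3, -1/24],
              [1/6,  2/3,  1/6 ]])            # Lobatto IIIA
Ahat = np.array([[1/6, -1/6, 0.0],
                 [1/6,  1/3, 0.0],
                 [1/6,  5/6, 0.0]])           # Lobatto IIIB

# C(3) for IIIA:  A c^{q-1} = c^q / q  for q = 1, 2, 3
for q in (1, 2, 3):
    assert np.allclose(A @ c**(q - 1), c**q / q)

# C(1) for IIIB (s - 2 = 1):  Ahat e = c, while C(2) already fails
assert np.allclose(Ahat @ np.ones(3), c)
assert not np.allclose(Ahat @ c, c**2 / 2)
print("C(3) holds for IIIA; C(1), but not C(2), holds for IIIB")
```

Note that the first stage of IIIA is explicit (zero first row of A), while the last column of the IIIB matrix vanishes; both features reappear in the discussion of the predictors below.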
In [9] the general order conditions for predictors of the form (6)-(7) have been obtained. Fortunately, these conditions are reduced considerably if the PRK method satisfies some simplifying assumptions, as occurs with the Lobatto IIIA-IIIB methods.

For s ≥ 3, the s-stage Lobatto IIIA and IIIB methods satisfy at least C(3) and C(1) respectively. In this case, the general order conditions in [9] for the variable y are reduced to the following ones:

    Consistency:  b + B e = e,
    Order 1:      B c = e + r c,
    Order 2:      B A c = e b^t c + r A (e + r c),                    (8)
    Order 3:      B A c^2 = e b^t c^2 + r A (e + r c)^2,
                  B A Â c = e b^t Â c + r A (e b̂^t c + r Â (e + r c)),

whereas the order conditions for the variable z are reduced to

    Consistency:  c + C e = e,
    Order 1:      C c = e + r c,
    Order 2:      C Â c = e b̂^t c + r Â (e + r c),                    (9)
    Order 3:      C Â c^2 = e b̂^t c^2 + r Â (e + r c)^2,
                  C Â^2 c = e b̂^t Â c + r Â (e b̂^t c + r Â (e + r c)).

Here powers of vectors are taken componentwise. The parameter r = h_{n+1}/h_n, where h_{n+1} denotes the current stepsize, h_{n+1} = t_{n+1} - t_n, and h_n denotes the last stepsize, h_n = t_n - t_{n-1}.

For s = 3, if we take into account the number of parameters available and the number of order conditions, we get that the maximum order achieved is 2 for both variables, y and z. It is still possible to impose one of the two conditions of order 3, but not both of them.

For s ≥ 4, the s-stage Lobatto IIIA and IIIB methods satisfy at least C(4) and C(2) respectively. Both of them satisfy b^t c = b̂^t c = 1/2. In this case, in (8) and (9) the two conditions of order 3 are equivalent. Hence for s = 4, it is possible to achieve order 3 for both variables, y and z.

In order to simplify the implementation, it is interesting to initialize the variables y and z with the same coefficients. This situation corresponds to the case b = c and B = C in (6)-(7).
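Under our reading of the reduced joint conditions for s = 3 (consistency, order 1, and the two order-2 conditions), each row of the pair (b, B) is determined by a small linear system once the ratio r is fixed. The sketch below is our own code (the function name and the numerical approach are ours); it solves those 4 x 4 systems numerically rather than reproducing the paper's closed-form coefficients:

```python
import numpy as np

c = np.array([0.0, 0.5, 1.0])
A = np.array([[0.0, 0.0, 0.0], [5/24, 1/3, -1/24], [1/6, 2/3, 1/6]])
Ahat = np.array([[1/6, -1/6, 0.0], [1/6, 1/3, 0.0], [1/6, 5/6, 0.0]])
w = np.array([1/6, 2/3, 1/6])        # weights b = b-hat for the Lobatto pair

def joint_predictor(r):
    """Solve consistency + order-1 + both order-2 conditions row by row."""
    e = np.ones(3)
    Ac, Ahc = A @ c, Ahat @ c
    sA, sAh = A @ (e + r * c), Ahat @ (e + r * c)
    b0, B = np.zeros(3), np.zeros((3, 3))
    for i in range(3):
        M = np.array([np.r_[1.0, e],       # consistency: b_i + (Be)_i = 1
                      np.r_[0.0, c],       # order 1: (Bc)_i = 1 + r c_i
                      np.r_[0.0, Ac],      # order 2, y variable
                      np.r_[0.0, Ahc]])    # order 2, z variable
        rhs = np.array([1.0, 1.0 + r * c[i],
                        w @ c + r * sA[i], w @ c + r * sAh[i]])
        sol = np.linalg.solve(M, rhs)
        b0[i], B[i] = sol[0], sol[1:]
    return b0, B

b0, B = joint_predictor(1.0)             # constant stepsize, r = 1
e = np.ones(3)
assert np.allclose(b0 + B @ e, e)        # consistency holds
assert np.allclose(B @ c, e + c)         # order 1 holds for r = 1
```

Since each 4 x 4 matrix is nonsingular for these tableaux, the solve confirms numerically that the joint predictor of order 2 is unique for every r.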
In that case, for s = 3, up to order 2, the joint order conditions (8) and (9) are reduced to the following ones:

    Consistency:  b + B e = e,
    Order 1:      B c = e + r c,
    Order 2:      B A c = e b^t c + r A (e + r c),
                  B Â c = e b̂^t c + r Â (e + r c).

Now it is easy to conclude that there exists a unique joint predictor of order 2 for the case s = 3. This predictor is given by coefficients

    b, B,                                                             (10)

whose entries are polynomials in r. [The explicit entries of (10) are illegible in this copy; only fragments such as r(3 + r), r(1 + 3r), r(1 + r) and 1 + 3r + r^2 survive.]

For s = 4, up to order 3, the joint order conditions (8) and (9) are reduced to the following ones:

    Consistency:  b + B e = e,
    Order 1:      B c = e + r c,
    Order 2:      B A c = e b^t c + r A (e + r c),
    Order 3:      B A c^2 = e b^t c^2 + r A (e + r c)^2,
                  B Â c^2 = e b̂^t c^2 + r Â (e + r c)^2.

We immediately obtain that for s = 4 there exists a unique joint predictor of order 3. This predictor is given by coefficients

    b = (b_1, b_2, b_3, b_4)^t,                                       (11)

        ( b_11  b_12  b_13  b_14 )
    B = ( b_21  b_22  b_23  b_24 )                                    (12)
        ( b_31  b_32  b_33  b_34 )
        ( b_41  b_42  b_43  b_44 ),
where each entry b_i of (11) and b_ij of (12) is a polynomial of degree three in r. [The explicit expressions for these entries are illegible in this copy.]

For Lobatto IIIA, the first stage of the variable y is explicit. As Y_{n+1,1} = y_n and Y_{n,s} = y_n, from (10) and (11)-(12) we get Y^(0)_{n+1,1} = y_n, and therefore, as expected, the first stage of the variable y is not initialized. Observe that this is not the case for the Lobatto IIIB method: in this case the first stage is implicit, and the coefficients in (10) and (11)-(12) are necessary to obtain a predictor.

3 Numerical Experiments

In this section we test the predictor constructed in the previous section for the 3-stage pair Lobatto IIIA-IIIB. We have selected an academic problem and the well-known restricted three-body problem. In both cases we have proceeded in the same way. The stopping criterion for the Newton iterations is

    || Y^(k) - Y^(k-1) || <= TOL.
The problems have been integrated for different values of TOL and for different stepsizes. In Tables 1-4 we show the average number of iterations per step needed when the trivial predictor is used and when the optimum predictor given by (10) is used. For each value of TOL and h, the values on the left correspond to the trivial predictor, whereas the values on the right correspond to the optimum predictor given by (10).

3.1 Problem 1

The problem is a scalar partitioned system of the form (1),

    y' = f(t, y, z),    y(0) = y_0,
    z' = g(t, y, z),    z(0) = z_0,                                   (13)

whose exact solution is y(t) = sin t + t, z(t) = cos t - t. [The right-hand sides and initial values of (13) are illegible in this copy.] We have integrated this problem for t ∈ [0, ...].

Table 1: Iterations per step, Problem 1 (left: trivial predictor; right: optimum predictor). [Numerical entries illegible in this copy.]

It can be seen that, except in one case, the number of iterations is lower with the predictor constructed in this paper. The exception is for h = ...E-3 and TOL = ...E-7, where the number of iterations coincides. We remark that in this case the numerical solution obtained with the optimum predictor is more accurate.

3.2 The restricted three-body problem

We have considered the restricted three-body problem from [13],

    x'  = v_x,    x(0) = x_0,
    y'  = v_y,    y(0) = y_0,
    z'  = v_z,    z(0) = z_0,
    v_x' =  2 v_y + x - µ̄ (x + µ)/r_1^3 - µ (x - µ̄)/r_2^3,    v_x(0) = v_x0,
    v_y' = -2 v_x + y - µ̄ y/r_1^3 - µ y/r_2^3,                 v_y(0) = v_y0,
    v_z' = - µ̄ z/r_1^3 - µ z/r_2^3,                            v_z(0) = v_z0,     (14)
where µ̄ = 1 - µ, and

    r_1^2 = (x + µ)^2 + y^2 + z^2,    r_2^2 = (x - µ̄)^2 + y^2 + z^2.

We have integrated this problem for t ∈ [0, ...] with different values of the parameter µ and different initial conditions.

Case I. We take µ = .8 and the initial values (.4, ..., ..., ..., ..., ...). The numerical results are shown in Table 2.

Table 2: Iterations per step, µ = .8 (left: trivial predictor; right: optimum predictor). [Numerical entries illegible in this copy.]

Case II. We take µ = .9 and the initial values (.4, ..., ..., ..., .99, ...). The numerical results are shown in Table 3.

Table 3: Iterations per step, µ = .9. [Numerical entries illegible in this copy.]

Case III. We take µ = .9994 and the initial values (.74, ..., ..., ..., .43, ...). The numerical results are shown in Table 4.

Table 4: Iterations per step, µ = .9994. [Numerical entries illegible in this copy.]
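For reference, the right-hand side (14), in the standard rotating-frame form of Murray & Dermott [13], can be coded directly. The sketch below is our own (the function name and the value of µ in the check are illustrative); the equilibrium at the Lagrange point L4 gives a quick consistency check of the signs:

```python
import numpy as np

def crtbp_rhs(u, mu):
    """u = (x, y, z, vx, vy, vz); du/dt for the restricted three-body problem."""
    x, y, z, vx, vy, vz = u
    mu_bar = 1.0 - mu
    r1 = np.sqrt((x + mu)**2 + y**2 + z**2)       # distance to first primary
    r2 = np.sqrt((x - mu_bar)**2 + y**2 + z**2)   # distance to second primary
    ax =  2.0*vy + x - mu_bar*(x + mu)/r1**3 - mu*(x - mu_bar)/r2**3
    ay = -2.0*vx + y - mu_bar*y/r1**3 - mu*y/r2**3
    az = -mu_bar*z/r1**3 - mu*z/r2**3
    return np.array([vx, vy, vz, ax, ay, az])

# At L4 = (1/2 - mu, sqrt(3)/2, 0), with zero velocity, all derivatives vanish.
mu = 0.3
u_L4 = np.array([0.5 - mu, np.sqrt(3.0)/2.0, 0.0, 0.0, 0.0, 0.0])
assert np.allclose(crtbp_rhs(u_L4, mu), 0.0, atol=1e-12)
```

A function like this is what the PRK integrator would call as g (and, after the obvious partitioning of positions and velocities, as f) in the experiments of this section.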
In Tables 2-4 we again see that the number of iterations needed when we use the predictors constructed in this paper is lower than the number needed when the trivial predictor is used. We remark that the reduction in the number of iterations when solving the nonlinear systems implies a reduction in the total CPU time needed for the numerical integration, and hence it may be essential when long time integrations are done.

Acknowledgments

This research has been supported by the Ministerio de Ciencia y Tecnología, Project BFM88.

References

[1] Calvo, M., Laburta, P. and Montijano, J. I., 2003: Starting algorithms for Gauss Runge-Kutta methods for Hamiltonian systems, Computers and Mathematics with Applications.

[2] Calvo, M., Laburta, P. and Montijano, J. I., 2003: Two-step high order starting values for implicit Runge-Kutta methods, Advances in Computational Mathematics.

[3] Calvo, M. P.: High order starting iterates for implicit Runge-Kutta methods. An improvement for variable-step symplectic integrators, IMA Journal of Numerical Analysis.

[4] González Pinto, S., Montijano, J. I. and Rodríguez, S. P.: On the starting algorithms for fully implicit Runge-Kutta methods, BIT.

[5] González Pinto, S., Montijano, J. I. and Rodríguez, S. P., 2003: Stabilized starting algorithms for collocation Runge-Kutta methods, Computers and Mathematics with Applications.

[6] González Pinto, S., Montijano, J. I. and Rodríguez, S. P., 2003: Variable order starting algorithms for implicit Runge-Kutta methods on stiff problems, Applied Numerical Mathematics 44.

[7] Higueras, I. and Roldán, T.: Starting algorithms for some DIRK methods, Numerical Algorithms.

[8] Higueras, I. and Roldán, T., 2003: IRK methods for index 2 and 3 DAEs: Starting algorithms, BIT 43.
[9] Higueras, I. and Roldán, T., 2003: Stage value predictors for additive Runge-Kutta methods, submitted for publication.

[10] Jay, L., 1996: Symplectic partitioned Runge-Kutta methods for constrained Hamiltonian systems, SIAM Journal on Numerical Analysis 33.

[11] Jay, L., 1998: Structure preservation for constrained dynamics with super partitioned additive Runge-Kutta methods, SIAM Journal on Scientific Computing.

[12] Laburta, M., 1997: Starting algorithms for IRK methods, Journal of Computational and Applied Mathematics 83.

[13] Murray, C. D. and Dermott, S. F., 1999: Solar System Dynamics, Cambridge University Press, Cambridge.

[14] Olsson, H. and Söderlind, G., 1999: Stage value predictors and efficient Newton iterations in implicit Runge-Kutta methods, SIAM Journal on Scientific Computing.

[15] Roldán, T. and Higueras, I., 1999: IRK methods for DAE: Starting algorithms, Journal of Computational and Applied Mathematics.

[16] Sand, J., 1992: Methods for starting iteration schemes for implicit Runge-Kutta formulae, Institute of Mathematics and its Applications Conference Series, New Series vol. 39, Oxford University Press, New York.

[17] Sanz-Serna, J. and Calvo, M., 1994: Numerical Hamiltonian Problems, Chapman & Hall, London.

[18] Sun, G., 1993: Symplectic partitioned Runge-Kutta methods, Journal of Computational Mathematics.