Implementation of the Bulirsch Stöer extrapolation method
Sujit Kirpekar
Department of Mechanical Engineering
December 2003

1 Introduction

This paper outlines the practical implementation of extrapolation methods. It is often claimed that extrapolation methods are the best known way to obtain high-accuracy solutions to ordinary differential equations (ODEs) with minimal computational effort. In particular, we implement and then review the performance of the Bulirsch-Stöer algorithm. We implement the method in fine detail, paying particular attention to the control of the step size and of the order (obtained from higher and higher degrees of interpolation). We test the performance of the method on a simple ODE and draw useful conclusions about the step- and order-control strategy of the algorithm.

2 Asymptotic error expansions

We have the following theorem due to Gragg (1965):

Theorem. Let f ∈ F_{N+1}(a, b), and let y(t) be the exact solution of the initial value problem

    y' = f(t, y),   y(t_0) = y_0,   t_0 ∈ [a, b].    (1)
For t ∈ R_h = {t_0 + ih | i = 0, 1, 2, ...}, let u_n be defined by the explicitly started (multi-step) midpoint rule,

    u(t_0) = u_0
    u(t_0 + h) = u_0 + h f(t_0, u_0)    (2)
    u_{n+1} = u_{n-1} + 2h f(t_n, u_n)

Then u_n has an asymptotic expansion of the form

    u_n = y(t_n) + \sum_{k=1}^{N} h^{2k} ( e_k(t_n) + (-1)^{(t_n - t_0)/h} ẽ_k(t_n) ) + h^{2N+2} E_{N+1}(h, t_n)    (3)

valid for t_n ∈ [a, b] and all h = (t_n - t_0)/n, n = 1, 2, .... The functions e_k(t) and ẽ_k(t) are independent of h, and the remainder term for fixed t_n remains bounded for all h = (t_n - t_0)/n, n = 1, 2, ....

From the above theorem, we notice that the midpoint rule has an asymptotic expansion in h^2 and not h. Thus, by systematically extrapolating errors away, one can improve the order of accuracy of the solution by h^2 in each iteration. From above, the error in the first approximation is

    π(t_n) = u_n - y(t_n) = h^2 [ e_1(t_n) + (-1)^{(t_n - t_0)/h} ẽ_1(t_n) ]    (4)

In view of the term (-1)^{(t_n - t_0)/h} ẽ_1(t_n), the error shows oscillatory behaviour:

    π(t_{n+1}) = h^2 [ e_1(t_{n+1}) + (-1)^{(t_{n+1} - t_0)/h} ẽ_1(t_{n+1}) ]
               = h^2 [ e_1(t_{n+1}) - (-1)^{(t_n - t_0)/h} ẽ_1(t_{n+1}) ]    (5)

The oscillations may be removed by a trick due to Gragg (1965), as follows. Define

    v(t_n) = v_n = (1/2) [ u_{n-1} + u_n + h f(t_n, u_n) ]    (6)

By Eqn. 2 we know u_{n+1} = u_{n-1} + 2h f(t_n, u_n), hence also

    v(t_n) = v_n = (1/2) [ u_n + u_{n+1} - h f(t_n, u_n) ]    (7)

and adding Eqn. 6 and Eqn. 7 gives us
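As a quick numerical check of Gragg's theorem, the following Python sketch (illustrative code, not from the paper) applies the explicitly started midpoint rule of Eqn. 2 to y' = -y, y(0) = 1, and confirms the O(h^2) leading term: halving h cuts the error at t = 1 by roughly a factor of four.

```python
# Sketch: verify the O(h^2) leading error term of the explicitly started
# midpoint rule on y' = -y, whose exact solution is exp(-t).
import math

def midpoint_rule(f, t0, y0, h, n):
    """March n steps of size h: u_{m+1} = u_{m-1} + 2h f(t_m, u_m)."""
    u_prev = y0                        # u_0
    u = y0 + h * f(t0, y0)             # u_1 from the Euler starting step
    for m in range(1, n):
        u_prev, u = u, u_prev + 2.0 * h * f(t0 + m * h, u)
    return u                           # approximation to y(t0 + n h)

f = lambda t, y: -y
errs = []
for n in (32, 64, 128):                # even n, so the oscillatory term is consistent
    errs.append(abs(midpoint_rule(f, 0.0, 1.0, 1.0 / n, n) - math.exp(-1.0)))

# Successive error ratios should be close to 4 (= 2^2).
print(errs[0] / errs[1], errs[1] / errs[2])
```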
    v_n = (1/4) u_{n-1} + (1/2) u_n + (1/4) u_{n+1}    (8)

and by use of Eqn. 3 one can obtain the expansion for v(t_n),

    v(t_n) = (1/2) { y(t_n) + (1/2)[y(t_n + h) + y(t_n - h)]    (9)
             + \sum_{k=1}^{N} h^{2k} ( e_k(t_n) + (1/2)[e_k(t_n + h) + e_k(t_n - h)]
             + (-1)^{(t_n - t_0)/h} ( ẽ_k(t_n) - (1/2)[ẽ_k(t_n + h) + ẽ_k(t_n - h)] ) ) } + O(h^{2N+2})

Expanding y(t_n ± h) and e_k, ẽ_k in Taylor series in h, one finally obtains for v(t_n) an expansion of the form

    v(t_n) = y(t_n) + h^2 [ e_1(t_n) + (1/4) y''(t_n) ]    (10)
             + \sum_{k=2}^{N} h^{2k} ( e_k(t_n) + (-1)^{(t_n - t_0)/h} ẽ_k(t_n) ) + O(h^{2N+2})

in which the leading error term no longer contains an oscillatory factor.

3 The modified midpoint rule

The procedure described above naturally leads us to use the modified midpoint rule, described by Eqn. 2 and Eqn. 6 combined. We use this procedure to advance the solution from u_N at t to u_{N+1} at t + H by taking a set of n steps, each of size h. The method can be summarized as

    h = H/n    (11)
    z_0 = u_N
    z_1 = u_N + h f(t, u_N)    (12)
    z_{m+1} = z_{m-1} + 2h f(t + mh, z_m),   m = 1, 2, ..., n - 1
    u_{N+1} = (1/2) [ z_n + z_{n-1} + h f(t + H, z_n) ]
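A minimal sketch of the modified midpoint step of Eqns. 11-12, with Gragg's smoothing formula as the final step (the function name and test problem are illustrative, not the paper's code):

```python
# Sketch: one big step of size H via n substeps of the modified midpoint
# rule, finishing with the smoothing step u_{N+1} = (z_n + z_{n-1} + h f)/2.
import math

def modified_midpoint(f, t, uN, H, n):
    h = H / n
    z_prev = uN                        # z_0
    z = uN + h * f(t, uN)              # z_1
    for m in range(1, n):
        z_prev, z = z, z_prev + 2.0 * h * f(t + m * h, z)
    return 0.5 * (z + z_prev + h * f(t + H, z))

# y' = -y over one big step H = 0.5 with n = 8 substeps; compare to exp(-0.5).
approx = modified_midpoint(lambda t, y: -y, 0.0, 1.0, 0.5, 8)
err = abs(approx - math.exp(-0.5))
print(err)                             # small, with leading term O((H/n)^2)
```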
Here the z's are intermediate approximations which march along in steps of h, while u_{N+1} is the final approximation to y(t + H).

4 The Bulirsch-Stöer method

The basic idea of the Bulirsch-Stöer method is the same as that of any other extrapolation method. We compute the solution u_{N+1} at t + H using the modified midpoint method for various values of n. Once we know a few such values, we fit the solution to an analytic form and evaluate it at h = 0. This idea is nothing but Richardson's deferred approach to the limit. The sequence of separate attempts of the modified midpoint rule is made with increasing values of n. Bulirsch and Stöer originally proposed

    n = 2, 4, 6, 8, 12, 16, 24, 32, 48, 64, ...,   n_j = 2 n_{j-2}

and Deuflhard suggests that

    n = 2, 4, 6, 8, 10, 12, 14, ...,   n_j = 2j

is more efficient. For extrapolation we use the Aitken-Neville algorithm of polynomial interpolation, which is concisely described by the following tableau:

    T_{11}
    T_{21}  T_{22}
    T_{31}  T_{32}  T_{33}
    ...
                            T_{kk}

The first column of the tableau is formed by use of the modified midpoint rule for increasing numbers of substeps,

    T_{j,1}(h) = u^{n_j}_{N+1}(t)    (13)

and successive columns can be filled by using the recurrence

    T_{j,k}(h) = [ (h^2 - h_{j-k+1}^2) T_{j,k-1}(h) - (h^2 - h_j^2) T_{j-1,k-1}(h) ] / ( h_j^2 - h_{j-k+1}^2 )    (14)
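The tableau can be sketched as follows, using the Deuflhard sequence n_j = 2j and the standard Aitken-Neville recurrence in h^2 for the extrapolated value at h = 0; all names are illustrative, not the paper's code:

```python
# Sketch: build the extrapolation tableau. The first column holds
# modified-midpoint results T_{j,1}; further columns use
# T_{j,k} = T_{j,k-1} + (T_{j,k-1} - T_{j-1,k-1}) / ((n_j/n_{j-k+1})^2 - 1).
import math

def modified_midpoint(f, t, uN, H, n):
    h = H / n
    z_prev, z = uN, uN + h * f(t, uN)
    for m in range(1, n):
        z_prev, z = z, z_prev + 2.0 * h * f(t + m * h, z)
    return 0.5 * (z + z_prev + h * f(t + H, z))

def extrapolation_tableau(f, t, uN, H, seq):
    T = []                                           # lower-triangular rows
    for j, nj in enumerate(seq):
        row = [modified_midpoint(f, t, uN, H, nj)]   # T_{j,1}
        for k in range(1, j + 1):
            ratio = (nj / seq[j - k]) ** 2 - 1.0     # (n_j / n_{j-k+1})^2 - 1
            row.append(row[k - 1] + (row[k - 1] - T[j - 1][k - 1]) / ratio)
        T.append(row)
    return T

seq = [2, 4, 6, 8, 10]                               # Deuflhard sequence
T = extrapolation_tableau(lambda t, y: -y, 0.0, 1.0, 0.5, seq)
tableau_err = abs(T[-1][-1] - math.exp(-0.5))
print(tableau_err)                                   # T_55 is extremely accurate
```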
and we are really interested in the extrapolated value at h = 0, hence

    T_{j,k}(0) = [ h_{j-k+1}^2 T_{j,k-1} - h_j^2 T_{j-1,k-1} ] / ( h_{j-k+1}^2 - h_j^2 )
               = T_{j,k-1} + ( T_{j,k-1} - T_{j-1,k-1} ) / [ (n_j / n_{j-k+1})^2 - 1 ]    (15)

After each successive n is tried, polynomial interpolation is attempted. Adjacent entries of the tableau are used for error estimates, and if the error is more than the tolerance, the next n in the sequence is attempted.

5 Step-size and order control

To implement error control, we define an error estimate of the form

    ε_{k+1,k} = | T_{k+1,k} - T_{k+1,k+1} |    (16)

and enforce the requirement that

    ε_{k+1,k} < ε    (17)

where ε is the user-specified tolerance. (We note that using error estimates from the diagonal entries only, as described by ε_{k+1,k} = |T_{k+1,k+1} - T_{k,k}|, has been demonstrated to be too conservative; relaxing this criterion usually speeds up computations noticeably in non-critical applications.) Error estimates from the Aitken-Neville algorithm can be shown to be of O(H^{2k+1}); this is subsequently verified in a numerical experiment in this paper. Thus, for an even sequence of n, the order of the method is 2k + 1, and a simple estimate of a new step size would be

    H_k = H ( ε / ε_{k+1,k} )^{1/(2k+1)}    (18)

subject to conditions of maximum and minimum change from the previous value.

The question now remains in which column k we aim to achieve convergence. Suppose A_k is the work to obtain row k of the tableau, so that A_{k+1} is the work to obtain column k. Defining the work in terms of the number of function evaluations f(t, u) (the modified midpoint method with n substeps requires n + 1 right-hand-side evaluations), we use the following recurrence relations:

    A_1 = n_1 + 1
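The step-size estimate of Eqn. 18 can be sketched directly. The clipping bounds on the change from the previous step are assumptions for illustration, since the paper does not state its exact limits:

```python
# Sketch: propose the next big step from the column-k error estimate,
# assuming the error scales like H^(2k+1). shrink/grow limits are
# illustrative, not the paper's values.
def new_step(H, eps, err_k, k, shrink=0.1, grow=5.0):
    Hk = H * (eps / err_k) ** (1.0 / (2 * k + 1))
    return min(max(Hk, shrink * H), grow * H)   # clip the change in H

# Column k = 2 (order 5): a large error asks for a smaller step,
# a tiny error allows a larger one (here clipped at grow * H).
print(new_step(0.1, 1e-6, 1e-4, 2))    # step shrinks
print(new_step(0.1, 1e-6, 1e-10, 2))   # step grows, clipped to 0.5
```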
    A_{k+1} = A_k + n_{k+1}

The work per unit step is equal to A_{k+1}/H_k, which we non-dimensionalize by H and write as

    W_k = (A_{k+1} / H_k) H = A_{k+1} ( ε_{k+1,k} / ε )^{1/(2k+1)}    (19)

A vector of the quantities W_k is recorded during the integration, and the optimal column index q is then defined by

    W_q = min{ W_1, W_2, ..., W_{k_f} }

where k_f is the final column in which the error criterion was satisfied. The q thus determined defines the step size to be used as the next large step, so that we can expect to get convergence in the optimal column q.

After some numerical experimentation, we propose two important refinements:

1. If the current H is too small, then k_f will be too small and the order q will remain too small. It may be desirable to increase H and select a higher order.
2. If the current H is too big, the required tolerance may never be achieved and we may need to decrease H.

To deal with these issues, we compute the following table along with the integration:

    α(k, q) = ε^{ (A_{k+1} - A_{q+1}) / [ (2k+1)(A_{q+1} - A_1 + 1) ] }   for k < q

The idea is that if we attain convergence in the qth column of the extrapolation table, we aim to increase the step size to H_{q+1}. Since we do not have a value for H_{q+1}, we estimate it as

    H_{q+1} = H_q α(q, q+1)

However, this increase in step size is efficient only if the work per unit step reduces, which requires

    A_{q+1} α(q, q+1) > A_{q+2}
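The work accounting and the choice of the optimal column q can be sketched as follows; the error-estimate values fed in are hypothetical, chosen only to exercise the selection logic:

```python
# Sketch: cumulative work A_1 = n_1 + 1, A_{k+1} = A_k + n_{k+1}, and the
# scaled work per unit step W_k = A_{k+1} (err_k / eps)^(1/(2k+1)).
def work_per_unit_step(seq, errs, eps):
    A = [seq[0] + 1]                       # A_1
    for n in seq[1:]:
        A.append(A[-1] + n)                # A_{k+1} = A_k + n_{k+1}
    W = []
    for k, err in enumerate(errs, start=1):
        Hk_over_H = (eps / err) ** (1.0 / (2 * k + 1))
        W.append(A[k] / Hk_over_H)         # = A_{k+1} * H / H_k
    return W

seq = [2, 4, 6, 8, 10]                     # Deuflhard sequence
errs = [1e-2, 1e-4, 1e-7, 1e-9]            # hypothetical column error estimates
W = work_per_unit_step(seq, errs, 1e-8)
q = W.index(min(W)) + 1                    # optimal column index
print(q)
```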
The problem of step-size reduction can be handled by computing step-size estimates H_k = H α(k, q) along with the integration. If any H_k is too small, we abandon the current step and restart using H_k. The criterion for smallness is taken as

    H_k α(k, q+1) < H

and this check is restricted to

    max(1, q-1) ≤ k ≤ min(k_max, q+1)

6 Numerical experiments

1. To test our code, we use a simple non-linear ODE,

    y'(t, y) = -200 t y^2,   -3 ≤ t ≤ 3,   y(-3) = 1/901    (20)

which has the exact solution

    y(t) = 1 / (1 + 100 t^2)

We first verify the order of accuracy of the modified midpoint rule, as we derived in Eqn. 10. The results are shown below in Fig. 1. It is seen that the error does scale as O(h^2), and thus our basic implementation is verified.

2. Next we test the convergence of our interpolation scheme. Our claim from Section 5 is: use of kth-column interpolation results in an error estimate of O(H^{2k+1}). This is successfully verified by our code and shown in Fig. 2.

3. Next we test our algorithm for its choice of step size. The basic idea is that, for a fixed order of interpolation, the algorithm should choose a step size such that the error remains within the tolerance. We note that the error in the first step depends on the size of the first step and on the column of the interpolation we choose. Also, there may be
cases when we are never able to achieve convergence to the specified tolerance because of too low an order of interpolation. Taking these effects into account, we successfully show that our algorithm functions well to keep the error below the required tolerance. This is shown in Fig. 3, where the algorithm is forced to choose a step size for a fixed order of interpolation. The user-specified tolerance was held fixed for this experiment.

4. Our next experiment is with different sequences of steps to be taken during the modified midpoint rule. We experiment with the following three:

    n = [2 4 8 16 32 64 ...]    Romberg sequence    (21)
    n = [2 4 6 8 12 16 ...]     Bulirsch sequence
    n = [2 4 6 8 10 12 ...]     Deuflhard sequence

We test the above three sequences with a tolerance of 10^-3, and report that the Deuflhard sequence performed the best (3693 function evaluations) compared to the Romberg (379) and Bulirsch (5487) sequences.

5. Finally, we run our code with full freedom, allowing the algorithm to choose both order and step size, and report the cost of the method as a function of tolerance. A typical solution, with the program's adopted strategy, is plotted in Fig. 4. We notice that the size of the interpolation table is usually not allowed to increase beyond 5; the algorithm does not increase the value of q for reasons of cost (work evaluated in W). Also, the step size increases progressively as the solution becomes more and more flat (the magnitude of the function's derivatives decreases). This is a good sign, but the oscillations we see in the step size are not: they probably arise because we tend to be slightly ambitious in estimating the step size and then need to reduce it immediately to keep the error in control.

6. Our final figure (Fig. 5) displays the cost of our method compared to a standard Runge-Kutta-Fehlberg method.
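Putting the pieces together, a fixed-order, fixed-step sketch of the method on the test problem of Eqn. 20 (illustrative code, without the adaptive order and step-size control of Section 5, and not the paper's program):

```python
# Sketch: march y' = -200 t y^2, y(-3) = 1/901 from t = -3 to t = 3 with
# fixed big steps H, extrapolating modified-midpoint results over the
# Deuflhard sequence, and compare with the exact y(t) = 1/(1 + 100 t^2).
def modified_midpoint(f, t, uN, H, n):
    h = H / n
    z_prev, z = uN, uN + h * f(t, uN)
    for m in range(1, n):
        z_prev, z = z, z_prev + 2.0 * h * f(t + m * h, z)
    return 0.5 * (z + z_prev + h * f(t + H, z))

def bs_step(f, t, y, H, seq):
    T = []
    for j, nj in enumerate(seq):
        row = [modified_midpoint(f, t, y, H, nj)]
        for k in range(1, j + 1):
            r = (nj / seq[j - k]) ** 2 - 1.0
            row.append(row[k - 1] + (row[k - 1] - T[j - 1][k - 1]) / r)
        T.append(row)
    return T[-1][-1]                      # extrapolated diagonal entry

f = lambda t, y: -200.0 * t * y * y
t, y, H = -3.0, 1.0 / 901.0, 0.05
seq = [2, 4, 6, 8, 10]
for _ in range(120):                      # 120 * 0.05 = 6.0, reaching t = 3
    y = bs_step(f, t, y, H, seq)
    t += H
final_err = abs(y - 1.0 / (1.0 + 100.0 * t * t))
print(final_err)
```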
For comparison, we use Merson's 4-5 method, in which we use the simple strategy of doubling the step size if the error is bounded by ε/10 for two consecutive steps, and rescaling the current step by 0.5 for a failed step. The figure clearly shows that extrapolation methods are more efficient at tighter tolerances, but more expensive at looser tolerances. Merson's method
is actually a four-stage and a five-stage RK pair, with both methods of order four (hence "4-5" is actually misleading). The error estimate is correct for linear ODEs, but is taken as an approximation for non-linear ODEs like the one we are working with. The Butcher array is

    0    |
    1/3  | 1/3
    1/3  | 1/6   1/6
    1/2  | 1/8   0     3/8
    1    | 1/2   0    -3/2   2
    -----+------------------------------
    b    | 1/6   0     0     2/3   1/6
    b̂    | 1/10  0     3/10  2/5   1/5

7 Conclusion

From our results, we conclude that extrapolation methods are indeed efficient. For higher orders of accuracy, the cost (in terms of the number of function evaluations) of our method is smaller than that of traditional Runge-Kutta methods. We have also demonstrated that our algorithm is successful in adjusting both the step size and the order (i.e. the column of the interpolation polynomial). The algorithm successfully displays the quality of taking smaller steps when the function derivative is large and larger steps when the function derivative is smaller. However, the technique presented above may not be suitable for ODEs whose right-hand sides are evaluated by interpolation or from a table lookup. Also, the technique would be unsuitable for ODEs that contain a singular point inside the region of integration. All codes used in this paper are available at
References

[1] Deuflhard, P., "Order and stepsize control in extrapolation methods", Numerische Mathematik, vol. 41 (1983), pp. 399-422.
[2] Deuflhard, P., "Recent progress in extrapolation methods for ordinary differential equations", SIAM Review, vol. 27 (1985), pp. 505-535.
[3] Gragg, W. B., "Repeated extrapolation to the limit in the numerical solution of ordinary differential equations", SIAM J. Numer. Anal. (1965).
[4] Press, W. H., et al., Numerical Recipes in Fortran 77: The Art of Scientific Computing, Cambridge University Press.
[5] Stoer, J. and Bulirsch, R., Introduction to Numerical Analysis, Springer-Verlag, 1980.

8 Figures

Figure 1: Convergence of the modified midpoint rule
Figure 2: Convergence of the Aitken-Neville algorithm
Figure 3: Step-size control aspect of our algorithm
Figure 4: Typical solution using our algorithm
Figure 5: Cost of our method compared to Merson's 4-5 RKF method
arxiv:hep-th/0507236v1 25 Jul 2005
Non perturbative series for the calculation of one loop integrals at finite temperature Paolo Amore arxiv:hep-th/050736v 5 Jul 005 Facultad de Ciencias, Universidad de Colima, Bernal Diaz del Castillo
Recall that two vectors in are perpendicular or orthogonal provided that their dot
Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal
Integration. Topic: Trapezoidal Rule. Major: General Engineering. Author: Autar Kaw, Charlie Barker. http://numericalmethods.eng.usf.
Integration Topic: Trapezoidal Rule Major: General Engineering Author: Autar Kaw, Charlie Barker 1 What is Integration Integration: The process of measuring the area under a function plotted on a graph.
AC 2012-4561: MATHEMATICAL MODELING AND SIMULATION US- ING LABVIEW AND LABVIEW MATHSCRIPT
AC 2012-4561: MATHEMATICAL MODELING AND SIMULATION US- ING LABVIEW AND LABVIEW MATHSCRIPT Dr. Nikunja Swain, South Carolina State University Nikunja Swain is a professor in the College of Science, Mathematics,
Current Standard: Mathematical Concepts and Applications Shape, Space, and Measurement- Primary
Shape, Space, and Measurement- Primary A student shall apply concepts of shape, space, and measurement to solve problems involving two- and three-dimensional shapes by demonstrating an understanding of:
November 16, 2015. Interpolation, Extrapolation & Polynomial Approximation
Interpolation, Extrapolation & Polynomial Approximation November 16, 2015 Introduction In many cases we know the values of a function f (x) at a set of points x 1, x 2,..., x N, but we don t have the analytic
Linear Programming in Matrix Form
Linear Programming in Matrix Form Appendix B We first introduce matrix concepts in linear programming by developing a variation of the simplex method called the revised simplex method. This algorithm,
Elementary Differential Equations and Boundary Value Problems. 10th Edition International Student Version
Brochure More information from http://www.researchandmarkets.com/reports/3148843/ Elementary Differential Equations and Boundary Value Problems. 10th Edition International Student Version Description:
Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients. y + p(t) y + q(t) y = g(t), g(t) 0.
Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients We will now turn our attention to nonhomogeneous second order linear equations, equations with the standard
Linear and quadratic Taylor polynomials for functions of several variables.
ams/econ 11b supplementary notes ucsc Linear quadratic Taylor polynomials for functions of several variables. c 010, Yonatan Katznelson Finding the extreme (minimum or maximum) values of a function, is
Solving ODEs in Matlab. BP205 M.Tremont 1.30.2009
Solving ODEs in Matlab BP205 M.Tremont 1.30.2009 - Outline - I. Defining an ODE function in an M-file II. III. IV. Solving first-order ODEs Solving systems of first-order ODEs Solving higher order ODEs
2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system
1. Systems of linear equations We are interested in the solutions to systems of linear equations. A linear equation is of the form 3x 5y + 2z + w = 3. The key thing is that we don t multiply the variables
Zeros of Polynomial Functions
Zeros of Polynomial Functions The Rational Zero Theorem If f (x) = a n x n + a n-1 x n-1 + + a 1 x + a 0 has integer coefficients and p/q (where p/q is reduced) is a rational zero, then p is a factor of
Chapter 17. Orthogonal Matrices and Symmetries of Space
Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length
3.2. Solving quadratic equations. Introduction. Prerequisites. Learning Outcomes. Learning Style
Solving quadratic equations 3.2 Introduction A quadratic equation is one which can be written in the form ax 2 + bx + c = 0 where a, b and c are numbers and x is the unknown whose value(s) we wish to find.
