Ordinary differential equations

Week 13: Monday, Apr 23

Consider ordinary differential equations of the form

$$y' = f(t, y) \tag{1}$$

together with the initial condition $y(0) = y_0$. These are equivalent to integral equations of the form

$$y(t) = y_0 + \int_0^t f(s, y(s)) \, ds. \tag{2}$$

Only the simplest sorts of differential equations can be solved in closed form, so our goal for the immediate future is to solve these ODEs numerically. For the most part, we will focus on equations of the form (1), even though many equations of interest are not posed in this form. We do this, in part, because we can always convert higher-order differential equations to first-order form by adding variables. For example,

$$m y'' + b y' + k y = f(t)$$

becomes

$$\frac{d}{dt} \begin{bmatrix} y \\ v \end{bmatrix} = \begin{bmatrix} v \\ \dfrac{f(t)}{m} - \dfrac{b}{m} v - \dfrac{k}{m} y \end{bmatrix}.$$

Thus, while there are methods that work directly with higher-order differential equations (and these are sometimes preferable), we can always solve high-order equations by first converting them to first-order form by introducing auxiliary variables. We can also always convert equations of the form $y' = f(t, y)$ into autonomous systems of the form $u' = g(u)$ by defining an auxiliary equation for the evolution of time:

$$u = \begin{bmatrix} y \\ t \end{bmatrix}, \qquad g(u) = \begin{bmatrix} f(t, y) \\ 1 \end{bmatrix}.$$
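To make the first-order reduction concrete, here is a minimal sketch in Python. The coefficient values and the zero forcing are illustrative assumptions of mine, not from the notes.

```python
import numpy as np

# Convert m*y'' + b*y' + k*y = f(t) to the first-order system
# u' = g(t, u), where u = [y, v] and v = y'. The coefficients below
# and the zero forcing are placeholder choices for illustration.
m, b, k = 1.0, 0.5, 4.0

def forcing(t):
    return 0.0  # unforced oscillator for this example

def g(t, u):
    y, v = u
    return np.array([v, (forcing(t) - b * v - k * y) / m])

# Example: evaluate the right-hand side at t = 0 with y = 1, v = 0.
print(g(0.0, np.array([1.0, 0.0])))  # -> [0., -4.]
```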

Basic methods

Maybe the simplest numerical method for solving ordinary differential equations is Euler's method. Euler's method can be derived from a finite difference approximation of the derivative, from Hermite interpolation, from Taylor series, from numerical quadrature using the left-hand rule, or by the method of undetermined coefficients. For the moment, we will start with a derivation by Taylor series. If $y(t)$ is known and $y' = f(t, y)$, then

$$y(t + h) = y(t) + h f(t, y(t)) + O(h^2).$$

Now, drop the $O(h^2)$ term to get a recurrence for $y_k \approx y(t_k)$:

$$y_{k+1} = y_k + h_k f(t_k, y_k), \qquad \text{where } t_{k+1} = t_k + h_k.$$

We can derive another method based on first-order Taylor expansion about the point $t + h$ rather than $t$:

$$y(t) = y(t + h) - h f(t + h, y(t + h)) + O(h^2).$$

If we drop the $O(h^2)$ term, we get the approximation $y(t_k) \approx y_k$ where

$$y_{k+1} = y_k + h f(t_{k+1}, y_{k+1}).$$

This is the backward Euler method, sometimes also called implicit Euler. Notice that in the backward Euler step, the unknown $y_{k+1}$ appears on both sides of the equation, and in general we will need a nonlinear equation solver to take a step.

Another basic method is the trapezoidal rule. Let's think about this one by quadrature. If we apply the trapezoidal rule to (2), we have

$$y(t + h) = y(t) + \int_t^{t+h} f(s, y(s)) \, ds = y(t) + \frac{h}{2} \left( f(t, y(t)) + f(t + h, y(t + h)) \right) + O(h^3).$$

If we drop the $O(h^3)$ term, we have

$$y_{k+1} = y_k + \frac{h}{2} \left( f(t_k, y_k) + f(t_{k+1}, y_{k+1}) \right).$$

Like the backward Euler rule, the trapezoidal rule is implicit: in order to compute $y_{k+1}$, we have to solve a nonlinear equation.
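Here is a minimal sketch of one step of the explicit and implicit Euler methods. Using scipy's general-purpose fsolve as the nonlinear solver, with a forward Euler predictor as the initial guess, is my illustrative choice, not something prescribed by the notes; Newton's method with an analytic Jacobian would be more typical in serious codes.

```python
import numpy as np
from scipy.optimize import fsolve

def forward_euler_step(f, t, y, h):
    # Explicit update: y_{k+1} = y_k + h f(t_k, y_k).
    return y + h * f(t, y)

def backward_euler_step(f, t, y, h):
    # Implicit update: solve y_new = y + h f(t+h, y_new) for y_new.
    residual = lambda y_new: y_new - y - h * f(t + h, y_new)
    guess = y + h * f(t, y)  # forward Euler predictor as initial guess
    return fsolve(residual, guess)

# Example on y' = -y^3 from y(0) = 1 with step h = 0.1.
f = lambda t, y: -y**3
print(forward_euler_step(f, 0.0, np.array([1.0]), 0.1))
print(backward_euler_step(f, 0.0, np.array([1.0]), 0.1))
```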

[Figure 1: Regions of absolute stability for Euler (left) and backward Euler (right), plotted in the $h\lambda$ plane with axes $\mathrm{Re}(h\lambda)$ and $\mathrm{Im}(h\lambda)$. For values of $h\lambda$ in the colored region, the numerical method produces decaying solutions to the test problem $y' = \lambda y$.]

Stability regions

Consider what happens when we apply Euler and backward Euler to a simple linear test problem $y' = \lambda y$ with a fixed step size $h$. Note that the solutions $y(t) = y_0 \exp(\lambda t)$ decay to zero whenever $\mathrm{Re}(\lambda) < 0$. This is a qualitative property we might like our numerical methods to reproduce. Euler's method yields

$$y_{k+1} = (1 + h\lambda) y_k,$$

which gives decaying solutions only when $|1 + h\lambda| < 1$. The set of values $h\lambda$ where Euler produces a decaying solution is called the region of absolute stability for the method. This region is shown in Figure 1.

Backward Euler produces the iterates

$$y_{k+1} = (1 - h\lambda)^{-1} y_k.$$

Therefore, the discrete solutions decay whenever $|(1 - h\lambda)^{-1}| < 1$ or, equivalently, whenever $|1 - h\lambda| > 1$. Thus, the region of absolute stability includes the entire left half plane $\mathrm{Re}(h\lambda) < 0$ (see Figure 1), and so backward Euler produces a decaying solution when $\mathrm{Re}(\lambda) < 0$, no matter how large or small $h$ is. This property is known as A-stability.
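As a quick check of these regions, the following sketch (with illustrative values of $\lambda$ and $h$ that I chose, not from the notes) applies both recurrences to the test problem with $h\lambda = -2.5$, which lies outside Euler's stability region but inside backward Euler's.

```python
# Apply both one-step recurrences to y' = lam*y with h*lam = -2.5:
# |1 + h*lam| = 1.5 > 1, so forward Euler grows; backward Euler decays.
lam, h, steps = -10.0, 0.25, 20

y_fe, y_be = 1.0, 1.0
for _ in range(steps):
    y_fe = (1 + h * lam) * y_fe    # forward Euler: amplification factor -1.5
    y_be = y_be / (1 - h * lam)    # backward Euler: amplification factor 1/3.5

print(abs(y_fe))   # large (about 1.5**20): oscillating, growing iterates
print(abs(y_be))   # tiny: decays, matching the true solution's behavior
```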

Euler and trapezoidal rules

So far, we have introduced three methods for solving ordinary differential equations: forward Euler, backward Euler, and the trapezoidal rule:

$$y_{n+1} = y_n + h f(t_n, y_n) \qquad \text{(Euler)}$$
$$y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}) \qquad \text{(Backward Euler)}$$
$$y_{n+1} = y_n + \frac{h}{2} \left( f(t_n, y_n) + f(t_{n+1}, y_{n+1}) \right) \qquad \text{(Trapezoidal)}$$

Each of these methods is consistent with the ordinary differential equation $y' = f(t, y)$. That is, if we plug solutions of the exact equation into the numerical method, we get a small local error. For example, for forward Euler we have consistency of order 1,

$$N_h y(t_{n+1}) = \frac{y(t_{n+1}) - y(t_n)}{h_n} - f(t_n, y(t_n)) = O(h_n),$$

and for the trapezoidal rule we have second-order consistency,

$$N_h y(t_{n+1}) = \frac{y(t_{n+1}) - y(t_n)}{h_n} - \frac{1}{2} \left( f(t_n, y(t_n)) + f(t_{n+1}, y(t_{n+1})) \right) = O(h_n^2).$$
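A short experiment consistent with these orders (an illustration I added, not from the notes): for the linear test problem $y' = \lambda y$ the implicit trapezoidal update has the closed form $y_{n+1} = \frac{1 + h\lambda/2}{1 - h\lambda/2} y_n$, so no nonlinear solve is needed, and we can watch the final-time errors shrink like $O(h)$ and $O(h^2)$ as $h$ is halved.

```python
import numpy as np

# Final-time errors of Euler and trapezoidal on y' = lam*y, y(0) = 1,
# integrated to T = 1; for this linear problem the trapezoidal update
# has a closed form, so no nonlinear solver is needed.
lam, T = -2.0, 1.0

def final_errors(n):
    h = T / n
    y_eu = y_tr = 1.0
    for _ in range(n):
        y_eu = (1 + h * lam) * y_eu
        y_tr = (1 + h * lam / 2) / (1 - h * lam / 2) * y_tr
    exact = np.exp(lam * T)
    return abs(y_eu - exact), abs(y_tr - exact)

for n in (10, 20, 40, 80):
    print(n, *final_errors(n))  # halving h roughly halves the Euler error
                                # and quarters the trapezoidal error
```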

Consistency + 0-stability = convergence

Each of the numerical methods we have described can be written in the form $N_h y_h = 0$, where $y_h$ denotes the numerical solution and $N_h$ is a (nonlinear) difference operator. If the method is consistent of order $p$, then the true solution gives a small residual error as $h$ goes to zero: $N_h y = O(h^p)$. As we have seen in the past, however, small residual error is not the same as small forward error. In order to establish convergence, therefore, we need one more property. Formally, a method is zero-stable if there are constants $h_0$ and $K$ such that for any mesh functions $x_h$ and $z_h$ on an interval $[0, T]$ with $h \leq h_0$,

$$\|x_h - z_h\|_\infty \leq K \left\{ \|x_0 - z_0\| + \max_j \|N_h x_h(t_j) - N_h z_h(t_j)\| \right\}.$$

Zero stability essentially says that the difference operators $N_h$ can't become ever more singular as $h \to 0$: they are invertible, and the inverses are bounded by $K$. If a method is consistent of order $p$ and zero stable, then the error at step $n$ satisfies

$$\|y(t_n) - y_h(t_n)\| \leq K \max_j \|N_h y(t_j)\| = O(h^p).$$

The proof is simply a substitution of $y$ and $y_h$ into the definition of zero stability, using $N_h y_h = 0$ and the fact that the initial conditions agree. The only tricky part, in general, is to show that the method is zero stable. Let's at least do this for forward Euler, to see how it's done, but you certainly won't be required to describe the details of this calculation on an exam!

We assume without loss of generality that the system is autonomous ($y' = f(y)$). We also assume that $f$ is Lipschitz continuous; that is, there is some $L$ so that for any $x$ and $z$,

$$\|f(x) - f(z)\| \leq L \|x - z\|.$$

It turns out that Lipschitz continuity of $f$ plays an important role not only in the numerical analysis of ODEs, but also in the theory of existence and uniqueness of ODEs: if $f$ is not Lipschitz, then there might not be a unique solution to the ODE. The standard example of this is

$$u' = 2 \, \mathrm{sign}(u) \sqrt{|u|},$$

which has solutions $u = \pm t^2$ that both satisfy the ODE with initial condition $u(0) = 0$.

We can rearrange our description of $N_h$ to get

$$x_{n+1} = x_n + h f(x_n) + h N_h[x](t_n),$$
$$z_{n+1} = z_n + h f(z_n) + h N_h[z](t_n).$$

Subtract the two equations and take norms to get

$$\|x_{n+1} - z_{n+1}\| \leq \|x_n - z_n\| + h \|f(x_n) - f(z_n)\| + h \|N_h[x](t_n) - N_h[z](t_n)\|.$$

Define $d_n = \|x_n - z_n\|$ and $\theta = \max_j \|N_h[x](t_j) - N_h[z](t_j)\|$. Note that by Lipschitz continuity, $\|f(x_n) - f(z_n)\| \leq L d_n$; therefore,

$$d_{n+1} \leq (1 + hL) d_n + h\theta.$$

Let's look at the first few steps of this recurrence inequality:

$$d_1 \leq (1 + hL) d_0 + h\theta$$
$$d_2 \leq (1 + hL)^2 d_0 + \left[ (1 + hL) + 1 \right] h\theta$$
$$d_3 \leq (1 + hL)^3 d_0 + \left[ (1 + hL)^2 + (1 + hL) + 1 \right] h\theta$$

In general, we have

$$d_n \leq (1 + hL)^n d_0 + \left[ \sum_{j=0}^{n-1} (1 + hL)^j \right] h\theta
      = (1 + hL)^n d_0 + \left[ \frac{(1 + hL)^n - 1}{(1 + hL) - 1} \right] h\theta
      = (1 + hL)^n d_0 + L^{-1} \left[ (1 + hL)^n - 1 \right] \theta.$$

Now note that $(1 + hL)^n \leq \exp(Lnh) = \exp(L(t_n - t_0)) \leq \exp(LT)$, where $T$ is the length of the time interval we consider. Therefore,

$$d_n \leq \exp(LT) \, d_0 + \frac{\exp(LT) - 1}{L} \max_j \|N_h[x](t_j) - N_h[z](t_j)\|.$$

While you need not remember the entire argument, there are a few points that you should take away from this exercise (see also the sketch after this list):

1. The basic analysis technique is the same one we used when talking about iterative methods for solving nonlinear equations: take two equations of the same form, subtract them, and write a recurrence for the size of the difference.

2. The Lipschitz continuity of $f$ plays an important role. In particular, if $LT$ is large, $\exp(LT)$ may be enormous, enough so that our theory demands very small time steps to get good error bounds.
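Related to point 2: the non-Lipschitz example $u' = 2\,\mathrm{sign}(u)\sqrt{|u|}$ from the uniqueness discussion also has a concrete numerical consequence, sketched below (my own illustration, not from the notes). Forward Euler started exactly at $u(0) = 0$ evaluates $f(0) = 0$ at every step, so it reproduces only the $u = 0$ solution and never finds $u = t^2$.

```python
import numpy as np

# Forward Euler on the non-Lipschitz problem u' = 2*sign(u)*sqrt(|u|),
# u(0) = 0, which has (at least) two exact solutions: u = 0 and u = t^2.
f = lambda u: 2.0 * np.sign(u) * np.sqrt(abs(u))

h, u = 0.01, 0.0
for _ in range(100):    # integrate to t = 1
    u = u + h * f(u)

print(u)     # 0.0: the computed solution stays on the u = 0 branch
print(1.0)   # the other exact solution, u = t^2, equals 1 at t = 1
```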

As it turns out, in practice we will usually give up on global accuracy bounds obtained by analyzing the Lipschitz constant. Instead, we will use the same sort of local error estimates that we described when talking about quadrature: look at the difference between two methods that are solving the same equation with different accuracy, and use the difference of the numerical solutions as a proxy for the error. We will discuss this strategy, along with more sophisticated Runge-Kutta and multistep methods, next lecture.
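As a preview of that strategy, here is a minimal sketch (an illustration under assumed choices, not the notes' prescribed scheme): pair a forward Euler step with a second-order Heun step from the same state, and use their difference as a proxy for the local error of the lower-order step.

```python
import numpy as np

# Error estimation by comparing two methods of different accuracy:
# a forward Euler step (first order) and a Heun step (second order).
def euler_step_with_estimate(f, t, y, h):
    k1 = f(t, y)
    y_euler = y + h * k1                        # first-order prediction
    k2 = f(t + h, y_euler)
    y_heun = y + (h / 2) * (k1 + k2)            # second-order prediction
    err_est = np.linalg.norm(y_heun - y_euler)  # proxy for local error
    return y_heun, err_est

# Usage on y' = -y: such an estimate could drive step-size selection.
f = lambda t, y: -y
y_new, est = euler_step_with_estimate(f, 0.0, np.array([1.0]), 0.1)
print(y_new, est)
```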