Central-Difference Formulas


If the function $f(x)$ can be evaluated at values that lie to the left and right of $x$, then the best two-point formula will involve abscissas that are chosen symmetrically on both sides of $x$.

Theorem 6.1 (Centered Formula of Order $O(h^2)$). Assume that $f \in C^3[a,b]$ and that $x-h, x, x+h \in [a,b]$. Then

(3)   $f'(x) \approx \frac{f(x+h) - f(x-h)}{2h}$.

Furthermore, there exists a number $c = c(x) \in [a,b]$ such that

(4)   $f'(x) = \frac{f(x+h) - f(x-h)}{2h} + E_{\mathrm{trunc}}(f,h)$,

where

$E_{\mathrm{trunc}}(f,h) = \frac{-h^2 f^{(3)}(c)}{6} = O(h^2)$.

The term $E(f,h)$ is called the truncation error.

Proof. Start with the second-degree Taylor expansions $f(x) = P_2(x) + E_2(x)$, about $x$, for $f(x+h)$ and $f(x-h)$:

(5)   $f(x+h) = f(x) + f'(x)h + \frac{f^{(2)}(x)h^2}{2!} + \frac{f^{(3)}(c_1)h^3}{3!}$

and

(6)   $f(x-h) = f(x) - f'(x)h + \frac{f^{(2)}(x)h^2}{2!} - \frac{f^{(3)}(c_2)h^3}{3!}$.

After (6) is subtracted from (5), the result is

(7)   $f(x+h) - f(x-h) = 2f'(x)h + \frac{\left(f^{(3)}(c_1) + f^{(3)}(c_2)\right)h^3}{3!}$.

Since $f^{(3)}(x)$ is continuous, the intermediate value theorem can be used to find a value $c$ so that

(8)   $\frac{f^{(3)}(c_1) + f^{(3)}(c_2)}{2} = f^{(3)}(c)$.

This can be substituted into (7) and the terms rearranged to yield

(9)   $f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{f^{(3)}(c)h^2}{6}$.

The first term on the right side of (9) is the central-difference formula (3), the second term is the truncation error, and the proof is complete.

Suppose that the value of the third derivative $f^{(3)}(c)$ does not change too rapidly; then the truncation error in (4) goes to zero in the same manner as $h^2$, which is expressed by using the notation $O(h^2)$.
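
As a quick illustration (a Python sketch of ours, not the book's MATLAB material), formula (3) can be coded in a few lines and tested on $f(x) = \cos(x)$ at $x = 0.8$: halving $h$ should cut the error by roughly a factor of four, which is the $O(h^2)$ behavior just proved.

```python
# Illustrative Python sketch (not the book's MATLAB code): formula (3),
# f'(x) ~ (f(x+h) - f(x-h)) / (2h), tested on f(x) = cos(x) at x = 0.8.
import math

def central_diff(f, x, h):
    """Two-point central-difference approximation of f'(x), formula (3)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

exact = -math.sin(0.8)                      # true derivative of cos at 0.8
for h in (0.1, 0.05, 0.025):
    err = abs(central_diff(math.cos, 0.8, h) - exact)
    # Halving h should reduce the error by about a factor of 4 (O(h^2)).
    print(f"h = {h:<6}  |error| = {err:.3e}")
```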

When computer calculations are used, it is not desirable to choose $h$ too small. For this reason it is useful to have a formula for approximating $f'(x)$ that has a truncation error term of the order $O(h^4)$.

Theorem 6.2 (Centered Formula of Order $O(h^4)$). Assume that $f \in C^5[a,b]$ and that $x-2h, x-h, x, x+h, x+2h \in [a,b]$. Then

(10)   $f'(x) \approx \frac{-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)}{12h}$.

Furthermore, there exists a number $c = c(x) \in [a,b]$ such that

(11)   $f'(x) = \frac{-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)}{12h} + E_{\mathrm{trunc}}(f,h)$,

where

$E_{\mathrm{trunc}}(f,h) = \frac{h^4 f^{(5)}(c)}{30} = O(h^4)$.

Proof. One way to derive formula (10) is as follows. Start with the difference between the fourth-degree Taylor expansions $f(x) = P_4(x) + E_4(x)$, about $x$, of $f(x+h)$ and $f(x-h)$:

(12)   $f(x+h) - f(x-h) = 2f'(x)h + \frac{2f^{(3)}(x)h^3}{3!} + \frac{2f^{(5)}(c_1)h^5}{5!}$.

Then use the step size $2h$, instead of $h$, and write down the following approximation:

(13)   $f(x+2h) - f(x-2h) = 4f'(x)h + \frac{16f^{(3)}(x)h^3}{3!} + \frac{64f^{(5)}(c_2)h^5}{5!}$.

Next multiply the terms in equation (12) by 8 and subtract (13) from it. The terms involving $f^{(3)}(x)$ will be eliminated and we get

(14)   $-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h) = 12f'(x)h + \frac{\left(16f^{(5)}(c_1) - 64f^{(5)}(c_2)\right)h^5}{120}$.

If $f^{(5)}(x)$ has one sign and if its magnitude does not change rapidly, we can find a value $c$ that lies in $[x-2h, x+2h]$ so that

(15)   $16f^{(5)}(c_1) - 64f^{(5)}(c_2) = -48f^{(5)}(c)$.

After (15) is substituted into (14) and the result is solved for $f'(x)$, we obtain

(16)   $f'(x) = \frac{-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)}{12h} + \frac{f^{(5)}(c)h^4}{30}$.

The first term on the right side of (16) is the central-difference formula (10) and the second term is the truncation error; the theorem is proved.

Suppose that $f^{(5)}(c)$ is bounded for $c \in [a,b]$; then the truncation error in (11) goes to zero in the same manner as $h^4$, which is expressed with the notation $O(h^4)$.
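
As an informal check (again our own Python sketch, with an arbitrarily chosen test point), the observed error of formula (10) can be compared against the truncation estimate $h^4 |f^{(5)}(x)|/30$ for $f(x) = \sin(x)$, whose fifth derivative is $\cos(x)$.

```python
# Python sketch (assumed example, not from the text): the observed error of the
# O(h^4) formula (10) should track the truncation estimate h^4 * |f^(5)| / 30.
import math

def central_diff4(f, x, h):
    """Five-point central-difference approximation of f'(x), formula (10)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12.0 * h)

x = 1.2                                      # arbitrary test point
exact = math.cos(x)                          # f'(x) for f = sin
for h in (0.2, 0.1, 0.05):
    observed = abs(central_diff4(math.sin, x, h) - exact)
    predicted = h**4 * abs(math.cos(x)) / 30.0   # |f^(5)(x)| = |cos(x)| for sin
    print(f"h = {h:<5} observed = {observed:.2e}  predicted = {predicted:.2e}")
```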

Now we can make a comparison of the two formulas (3) and (10). Suppose that $f(x)$ has five continuous derivatives and that $|f^{(3)}(c)|$ and $|f^{(5)}(c)|$ are about the same. Then the truncation error for the fourth-order formula (10) is $O(h^4)$ and will go to zero faster than the truncation error $O(h^2)$ for the second-order formula (3). This permits the use of a larger step size.

Example 6.2. Let $f(x) = \cos(x)$.
(a) Use formulas (3) and (10) with step sizes $h = 0.1$, $0.01$, $0.001$, and $0.0001$, and calculate approximations for $f'(0.8)$. Carry nine decimal places in all the calculations.
(b) Compare with the true value $f'(0.8) = -\sin(0.8)$.

(a) Using formula (3) with $h = 0.01$, we get

$f'(0.8) \approx \frac{f(0.81) - f(0.79)}{0.02} \approx \frac{0.689498433 - 0.703845316}{0.02} \approx -0.717344150$.

Using formula (10) with $h = 0.01$, we get

$f'(0.8) \approx \frac{-f(0.82) + 8f(0.81) - 8f(0.79) + f(0.78)}{0.12} \approx \frac{-0.682221207 + 8(0.689498433) - 8(0.703845316) + 0.710913538}{0.12} \approx -0.717356108$.

(b) The error in approximation for formulas (3) and (10) turns out to be 0.000011941 and 0.000000017, respectively. In this example, formula (10) gives a better approximation to $f'(0.8)$ than formula (3) when $h = 0.01$. The error analysis will illuminate this example and show why this happened. The other calculations are summarized in Table 6.2.
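
The arithmetic of Example 6.2 is easy to reproduce. The Python sketch below (ours, not the book's MATLAB) uses a helper r9 to mimic the text's convention of carrying nine decimal places, and its printed values agree with the numbers above.

```python
# Python sketch (assumed): Example 6.2 with f(x) = cos(x), x = 0.8, h = 0.01,
# rounding every function value to nine decimal places as the text does.
import math

r9 = lambda v: round(v, 9)                   # "carry nine decimal places"
f = lambda t: r9(math.cos(t))
x, h = 0.8, 0.01
true = -math.sin(x)                          # f'(0.8) = -sin(0.8)

d2 = r9((f(x + h) - f(x - h)) / (2*h))                                   # formula (3)
d4 = r9((-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h))   # formula (10)

print(f"formula (3):  {d2:.9f}   |error| = {abs(true - d2):.9f}")   # -0.717344150, 0.000011941
print(f"formula (10): {d4:.9f}   |error| = {abs(true - d4):.9f}")   # -0.717356108, 0.000000017
```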

6.2 Numerical Differentiation Formulas

More Central-Difference Formulas

The formulas for $f'(x_0)$ in the preceding section required that the function can be computed at abscissas that lie on both sides of $x$, and they were referred to as central-difference formulas. Taylor series can be used to obtain central-difference formulas for the higher derivatives. The popular choices are those of order $O(h^2)$ and $O(h^4)$ and are given in Tables 6.3 and 6.4. In these tables we use the convention that $f_k = f(x_0 + kh)$ for $k = -3, -2, -1, 0, 1, 2, 3$.

Table 6.3  Central-Difference Formulas of Order $O(h^2)$

$f'(x_0) \approx \frac{f_1 - f_{-1}}{2h}$

$f''(x_0) \approx \frac{f_1 - 2f_0 + f_{-1}}{h^2}$

$f^{(3)}(x_0) \approx \frac{f_2 - 2f_1 + 2f_{-1} - f_{-2}}{2h^3}$

$f^{(4)}(x_0) \approx \frac{f_2 - 4f_1 + 6f_0 - 4f_{-1} + f_{-2}}{h^4}$

Table 6.4  Central-Difference Formulas of Order $O(h^4)$

$f'(x_0) \approx \frac{-f_2 + 8f_1 - 8f_{-1} + f_{-2}}{12h}$

$f''(x_0) \approx \frac{-f_2 + 16f_1 - 30f_0 + 16f_{-1} - f_{-2}}{12h^2}$

$f^{(3)}(x_0) \approx \frac{-f_3 + 8f_2 - 13f_1 + 13f_{-1} - 8f_{-2} + f_{-3}}{8h^3}$

$f^{(4)}(x_0) \approx \frac{-f_3 + 12f_2 - 39f_1 + 56f_0 - 39f_{-1} + 12f_2 - f_{-3}}{6h^4}$
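
The entries of Table 6.3 translate directly into code. In the Python sketch below (ours; the offsets/coefficients encoding is just one convenient choice), each $O(h^2)$ stencil is stored as offsets and coefficients; with $f(x) = e^x$ every derivative equals $e^x$, so all four approximations should land close to $e$.

```python
# Hedged Python sketch: the O(h^2) stencils of Table 6.3 as coefficient lists
# over the samples f_k = f(x + k*h); divisors are the h-powers from the table.
import math

# (offsets, coefficients, divisor as a function of h) for derivatives 1..4
TABLE_6_3 = {
    1: ([-1, 1],            [-1, 1],            lambda h: 2*h),
    2: ([-1, 0, 1],         [1, -2, 1],         lambda h: h**2),
    3: ([-2, -1, 1, 2],     [-1, 2, -2, 1],     lambda h: 2*h**3),
    4: ([-2, -1, 0, 1, 2],  [1, -4, 6, -4, 1],  lambda h: h**4),
}

def central_derivative(f, x, h, order):
    offsets, coeffs, div = TABLE_6_3[order]
    return sum(c * f(x + k*h) for k, c in zip(offsets, coeffs)) / div(h)

# Every derivative of exp is exp, so each approximation should be near e.
for n in (1, 2, 3, 4):
    approx = central_derivative(math.exp, 1.0, 0.05, n)
    print(f"order {n}: {approx:.6f}  (exact {math.e:.6f})")
```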

For illustration, we will derive the formula for $f''(x)$ of order $O(h^2)$ in Table 6.3. Start with the Taylor expansions

(1)   $f(x+h) = f(x) + hf'(x) + \frac{h^2 f''(x)}{2} + \frac{h^3 f^{(3)}(x)}{6} + \frac{h^4 f^{(4)}(x)}{24} + \cdots$

and

(2)   $f(x-h) = f(x) - hf'(x) + \frac{h^2 f''(x)}{2} - \frac{h^3 f^{(3)}(x)}{6} + \frac{h^4 f^{(4)}(x)}{24} - \cdots$.

Adding equations (1) and (2) will eliminate the terms involving the odd derivatives $f'(x), f^{(3)}(x), f^{(5)}(x), \ldots$:

(3)   $f(x+h) + f(x-h) = 2f(x) + \frac{2h^2 f''(x)}{2} + \frac{2h^4 f^{(4)}(x)}{24} + \cdots$.

Solving equation (3) for $f''(x)$ yields

(4)   $f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} - \frac{2h^2 f^{(4)}(x)}{4!} - \frac{2h^4 f^{(6)}(x)}{6!} - \cdots - \frac{2h^{2k-2} f^{(2k)}(x)}{(2k)!} - \cdots$.

If the series in (4) is truncated at the fourth derivative, there exists a value $c$ that lies in $[x-h, x+h]$, so that

(5)   $f''(x_0) = \frac{f_1 - 2f_0 + f_{-1}}{h^2} - \frac{h^2 f^{(4)}(c)}{12}$.

This gives us the desired formula for approximating $f''(x)$:

(6)   $f''(x_0) \approx \frac{f_1 - 2f_0 + f_{-1}}{h^2}$.

Example 6.4. Let $f(x) = \cos(x)$.
(a) Use formula (6) with $h = 0.1$, $0.01$, and $0.001$ and find approximations to $f''(0.8)$. Carry nine decimal places in all calculations.
(b) Compare with the true value $f''(0.8) = -\cos(0.8)$.

(a) The calculation for $h = 0.01$ is

$f''(0.8) \approx \frac{f(0.81) - 2f(0.80) + f(0.79)}{0.0001} \approx \frac{0.689498433 - 2(0.696706709) + 0.703845316}{0.0001} \approx -0.696690000$.

(b) The error in this approximation is 0.000016709. The other calculations are summarized in Table 6.5. The error analysis will illuminate this example and show why $h = 0.01$ was best.

Table 6.5  Numerical Approximations to $f''(x)$ for Example 6.4

  Step size     Approximation by formula (6)     Error using formula (6)
  h = 0.1           -0.696126300                     0.000580409
  h = 0.01          -0.696690000                     0.000016709
  h = 0.001         -0.696000000                     0.000706709
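
The step-size behavior recorded in Table 6.5 can be regenerated with the short Python sketch below (ours), which applies formula (6) at each step size with function values rounded to nine decimals.

```python
# Python sketch (assumed): formula (6) for f''(0.8) with f(x) = cos(x),
# reproducing the behavior in Table 6.5; true value is f''(0.8) = -cos(0.8).
import math

r9 = lambda v: round(v, 9)                   # carry nine decimal places
f = lambda t: r9(math.cos(t))
true = -math.cos(0.8)

for h in (0.1, 0.01, 0.001):
    approx = r9((f(0.8 + h) - 2*f(0.8) + f(0.8 - h)) / h**2)   # formula (6)
    print(f"h = {h:<6} approx = {approx:.9f}  |error| = {abs(true - approx):.9f}")
# The error drops from h = 0.1 to h = 0.01 but grows again at h = 0.001:
# round-off starts to dominate, which the error analysis below explains.
```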

Error Analysis

Let $f_k = y_k + e_k$, where $e_k$ is the error in computing $f(x_k)$, including noise in measurement and round-off error. Then formula (6) can be written

(7)   $f''(x_0) = \frac{y_1 - 2y_0 + y_{-1}}{h^2} + E(f, h)$.

The error term $E(f, h)$ for the numerical derivative (7) will have a part due to round-off error and a part due to truncation error:

(8)   $E(f, h) = \frac{e_1 - 2e_0 + e_{-1}}{h^2} - \frac{h^2 f^{(4)}(c)}{12}$.

If it is assumed that each error $e_k$ is of the magnitude $\epsilon$, with signs that accumulate errors, and that $|f^{(4)}(x)| \le M$, then we get the following error bound:

(9)   $|E(f, h)| \le \frac{4\epsilon}{h^2} + \frac{Mh^2}{12}$.

If $h$ is small, then the contribution $4\epsilon/h^2$ due to round-off error is large. When $h$ is large, the contribution $Mh^2/12$ is large. The optimal step size will minimize the quantity

(10)   $g(h) = \frac{4\epsilon}{h^2} + \frac{Mh^2}{12}$.

Setting $g'(h) = 0$ results in $-8\epsilon/h^3 + Mh/6 = 0$, which yields the equation $h^4 = 48\epsilon/M$, from which we obtain the optimal value

(11)   $h = \left(\frac{48\epsilon}{M}\right)^{1/4}$.

When formula (11) is applied to Example 6.4, use the bound $|f^{(4)}(x)| = |\cos(x)| \le 1 = M$ and the value $\epsilon = 0.5 \times 10^{-9}$. The optimal step size is $h = (24 \times 10^{-9}/1)^{1/4} = 0.01244666$, and we see that $h = 0.01$ was closest to the optimal value.
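
For concreteness, the optimum in (11) can also be checked numerically. The Python sketch below (ours) evaluates the closed form with the Example 6.4 values $\epsilon = 0.5 \times 10^{-9}$ and $M = 1$ and compares it against a crude grid minimization of $g(h)$; the two agree to within the grid spacing.

```python
# Python sketch (assumed): minimize the bound g(h) = 4*eps/h**2 + M*h**2/12
# of equation (10) and compare with the closed-form optimum of equation (11).
eps, M = 0.5e-9, 1.0                      # values used for Example 6.4

h_closed = (48 * eps / M) ** 0.25
print(f"closed-form optimum h = {h_closed:.9f}")      # about 0.01244666, as in the text

# Crude grid search over step sizes from 1e-4 up to about 1.6.
g = lambda h: 4*eps / h**2 + M * h**2 / 12
grid = [1e-4 * 1.05**k for k in range(200)]
print(f"grid minimum        h = {min(grid, key=g):.9f}")
```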

Since the portion of the error due to round-off is inversely proportional to the square of $h$, this term grows when $h$ gets small. This is sometimes referred to as the step-size dilemma. One partial solution to this problem is to use a formula of higher order so that a larger value of $h$ will produce the desired accuracy. The formula for $f''(x_0)$ of order $O(h^4)$ in Table 6.4 is

(12)   $f''(x_0) = \frac{-f_2 + 16f_1 - 30f_0 + 16f_{-1} - f_{-2}}{12h^2} + E(f, h)$.

The error term for (12) has the form

(13)   $E(f, h) = \frac{16\epsilon}{3h^2} + \frac{h^4 f^{(6)}(c)}{90}$,

where $c$ lies in the interval $[x-2h, x+2h]$. A bound for $|E(f, h)|$ is

(14)   $|E(f, h)| \le \frac{16\epsilon}{3h^2} + \frac{h^4 M}{90}$,

where $|f^{(6)}(x)| \le M$. The optimal value for $h$ is given by the formula

(15)   $h = \left(\frac{240\epsilon}{M}\right)^{1/6}$.

Example 6.5. Let $f(x) = \cos(x)$.
(a) Use formula (12) with $h = 1.0$, $0.1$, and $0.01$ and find approximations to $f''(0.8)$. Carry nine decimal places in all the calculations.
(b) Compare with the true value $f''(0.8) = -\cos(0.8)$.
(c) Determine the optimal step size.

(a) The calculation for $h = 0.1$ is

$f''(0.8) \approx \frac{-f(1.0) + 16f(0.9) - 30f(0.8) + 16f(0.7) - f(0.6)}{0.12} \approx \frac{-0.540302306 + 9.945759488 - 20.90120127 + 12.23747499 - 0.825335615}{0.12} \approx -0.696705958$.

(b) The error in this approximation is 0.000000751. The other calculations are summarized in Table 6.6.

(c) When formula (15) is applied, we can use the bound $|f^{(6)}(x)| = |\cos(x)| \le 1 = M$ and the value $\epsilon = 0.5 \times 10^{-9}$. These values give the optimal step size $h = (120 \times 10^{-9}/1)^{1/6} = 0.070231219$.
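
A final Python sketch (ours) runs formula (12) at $h = 0.1$ for the function of Example 6.5 and evaluates the optimal step size from formula (15). Because it rounds at slightly different points than a hand computation, its last digits can differ a little from the text's $-0.696705958$, but the error stays below the truncation bound $h^4 M/90 \approx 1.1 \times 10^{-6}$.

```python
# Python sketch (assumed): the O(h^4) second-derivative formula (12) at h = 0.1
# for f(x) = cos(x), plus the optimal step size from formula (15).
import math

r9 = lambda v: round(v, 9)                   # carry nine decimal places
f = lambda t: r9(math.cos(t))
x, h = 0.8, 0.1

approx = r9((-f(x + 2*h) + 16*f(x + h) - 30*f(x) + 16*f(x - h) - f(x - 2*h)) / (12*h**2))
true = -math.cos(x)                          # f''(0.8) = -cos(0.8)
print(f"approx = {approx:.9f}  |error| = {abs(true - approx):.9f}")
# The |error| is below the truncation bound h^4 * M / 90 ~ 1.1e-6 (with M = 1).

eps, M = 0.5e-9, 1.0
print(f"optimal step size h = {(240 * eps / M) ** (1/6):.9f}")   # ~ 0.070231219, as in the text
```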

Numerical Methods Using Matlab, 4th Edition, 2004
John H. Mathews and Kurtis K. Fink
ISBN: 0-13-065248-2
Prentice-Hall Inc.
Upper Saddle River, New Jersey, USA
http://vig.prenhall.com/