Derivative Approximation by Finite Differences



David Eberly
Geometric Tools, LLC
http://www.geometrictools.com/
Copyright © 1998-2016. All Rights Reserved.

Created: May 31, 2001
Last Modified: April 25, 2015

Contents

1 Introduction
2 Derivatives of Univariate Functions
3 Derivatives of Bivariate Functions
4 Derivatives of Multivariate Functions

1 Introduction

This document shows how to approximate derivatives of univariate functions $F(x)$ by finite differences. Given a small value $h > 0$, the $d$-th order derivative satisfies the following equation, where the integer order of error $p > 0$ may be selected as desired:

$$\frac{h^d}{d!} F^{(d)}(x) = \sum_{i=i_{\min}}^{i_{\max}} C_i\,F(x+ih) + O(h^{d+p}) \tag{1}$$

for some choice of extreme indices $i_{\min}$ and $i_{\max}$ and for some choice of coefficients $C_i$. The equation becomes an approximation by discarding the $O(h^{d+p})$ term. The vector $C = (C_{i_{\min}}, \ldots, C_{i_{\max}})$ is called the template, or convolution mask, for the approximation. Approximations for the derivatives of multivariate functions are constructed as tensor products of templates for univariate functions.

2 Derivatives of Univariate Functions

Recall from calculus that the following approximations are valid for the derivative of $F(x)$. A forward difference approximation is

$$F'(x) = \frac{F(x+h) - F(x)}{h} + O(h), \tag{2}$$

a backward difference approximation is

$$F'(x) = \frac{F(x) - F(x-h)}{h} + O(h), \tag{3}$$

and a centered difference approximation is

$$F'(x) = \frac{F(x+h) - F(x-h)}{2h} + O(h^2). \tag{4}$$

The approximations are obtained by discarding the error terms indicated by the $O$ notation. The order of the error for each of these approximations is easily seen from formal expansions as Taylor series about the value $x$:

$$F(x+h) = F(x) + hF'(x) + \frac{h^2}{2!}F''(x) + \cdots = \sum_{n \ge 0} \frac{h^n}{n!}F^{(n)}(x) \tag{5}$$

and

$$F(x-h) = F(x) - hF'(x) + \frac{h^2}{2!}F''(x) - \cdots = \sum_{n \ge 0} (-1)^n \frac{h^n}{n!}F^{(n)}(x), \tag{6}$$

where $F^{(n)}(x)$ denotes the $n$-th order derivative of $F$. Subtracting $F(x)$ from both sides of equation (5) and then dividing by $h$ leads to the forward difference $F'(x) = (F(x+h) - F(x))/h + O(h)$. Subtracting $F(x)$ from both sides of equation (6) and then dividing by $-h$ leads to the backward difference $F'(x) = (F(x) - F(x-h))/h + O(h)$. Both approximations have error $O(h)$. The centered difference is obtained by subtracting equation (6) from equation (5) and then dividing by $2h$ to obtain $F'(x) = (F(x+h) - F(x-h))/(2h) + O(h^2)$.
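
These error orders are easy to confirm numerically. The following script is an illustrative check of equations (2) through (4), not part of the original document; the test function $F(x) = \sin x$ and the evaluation point are arbitrary choices.

```python
import math

x = 1.0
exact = math.cos(x)  # F(x) = sin(x), so F'(x) = cos(x)

for h in (0.1, 0.05, 0.025):
    fwd = (math.sin(x + h) - math.sin(x)) / h              # equation (2), O(h)
    bwd = (math.sin(x) - math.sin(x - h)) / h              # equation (3), O(h)
    ctr = (math.sin(x + h) - math.sin(x - h)) / (2.0 * h)  # equation (4), O(h^2)
    print(f"h={h:.3f}  fwd={abs(fwd - exact):.2e}  "
          f"bwd={abs(bwd - exact):.2e}  ctr={abs(ctr - exact):.2e}")
```

Halving $h$ roughly halves the forward and backward errors but quarters the centered error, consistent with the stated orders.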

Higher-order approximations to the first derivative can be obtained by using more Taylor series, more terms in the Taylor series, and appropriately weighting the various expansions in a sum. For example,

$$F(x+2h) = \sum_{n \ge 0} \frac{(2h)^n}{n!}F^{(n)}(x), \quad F(x-2h) = \sum_{n \ge 0} (-1)^n \frac{(2h)^n}{n!}F^{(n)}(x) \tag{7}$$

lead to a forward difference approximation with second-order error,

$$F'(x) = \frac{-F(x+2h) + 4F(x+h) - 3F(x)}{2h} + O(h^2), \tag{8}$$

to a backward difference approximation with second-order error,

$$F'(x) = \frac{3F(x) - 4F(x-h) + F(x-2h)}{2h} + O(h^2), \tag{9}$$

and to a centered difference approximation with fourth-order error,

$$F'(x) = \frac{-F(x+2h) + 8F(x+h) - 8F(x-h) + F(x-2h)}{12h} + O(h^4). \tag{10}$$

Higher-order derivatives can be approximated in the same way. For example, a forward difference approximation to $F''(x)$ is

$$F''(x) = \frac{F(x+2h) - 2F(x+h) + F(x)}{h^2} + O(h) \tag{11}$$

and centered difference approximations are

$$F''(x) = \frac{F(x+h) - 2F(x) + F(x-h)}{h^2} + O(h^2) \tag{12}$$

and

$$F''(x) = \frac{-F(x+2h) + 16F(x+h) - 30F(x) + 16F(x-h) - F(x-2h)}{12h^2} + O(h^4). \tag{13}$$

Each of these formulas is easily verified by expanding the $F(x+ih)$ terms in a formal Taylor series and computing the weighted sums on the right-hand sides. However, of greater interest is to select the order of derivative $d$ and the order of error $p$, and then determine the weights $C_i$ for the sum in equation (1). A formal Taylor series for $F(x+ih)$ is

$$F(x+ih) = \sum_{n \ge 0} i^n \frac{h^n}{n!}F^{(n)}(x). \tag{14}$$

Replacing this in equation (1) yields

$$\frac{h^d}{d!}F^{(d)}(x) = \sum_{i=i_{\min}}^{i_{\max}} C_i \sum_{n \ge 0} i^n \frac{h^n}{n!}F^{(n)}(x) + O(h^{d+p}) = \sum_{n=0}^{d+p-1} \left( \sum_{i=i_{\min}}^{i_{\max}} i^n C_i \right) \frac{h^n}{n!}F^{(n)}(x) + O(h^{d+p}). \tag{15}$$

Multiplying by $d!/h^d$, the desired approximation is

$$F^{(d)}(x) = \frac{d!}{h^d} \sum_{n=0}^{d+p-1} \left( \sum_{i=i_{\min}}^{i_{\max}} i^n C_i \right) \frac{h^n}{n!}F^{(n)}(x) + O(h^p). \tag{16}$$
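
As a spot check of one of these formulas, the short script below (an illustration, not part of the original document; $F(x) = \sin x$ is an arbitrary test function) confirms the fourth-order error of equation (13).

```python
import math

def d2_centered4(F, x, h):
    """Equation (13): centered O(h^4) approximation to F''(x)."""
    return (-F(x + 2*h) + 16*F(x + h) - 30*F(x)
            + 16*F(x - h) - F(x - 2*h)) / (12.0 * h * h)

exact = -math.sin(1.0)  # F(x) = sin(x), so F''(1) = -sin(1)
for h in (0.2, 0.1, 0.05):
    print(f"h={h:.2f}  err={abs(d2_centered4(math.sin, 1.0, h) - exact):.3e}")
# The error shrinks by roughly 2^4 = 16 each time h is halved.
```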

In order for equation (1) to be satisfied, it is necessary that

$$\sum_{i=i_{\min}}^{i_{\max}} i^n C_i = \begin{cases} 0, & 0 \le n \le d+p-1 \text{ and } n \ne d \\ 1, & n = d. \end{cases} \tag{17}$$

This is a set of $d+p$ linear equations in $i_{\max} - i_{\min} + 1$ unknowns. If we constrain the number of unknowns to be $d+p$, the linear system has a unique solution. A forward difference approximation occurs if we set $i_{\min} = 0$ and $i_{\max} = d+p-1$. A backward difference approximation occurs if we set $i_{\max} = 0$ and $i_{\min} = -(d+p-1)$. A centered difference approximation occurs if we set $i_{\max} = -i_{\min} = (d+p-1)/2$, where it appears that $d+p$ is necessarily an odd number. As it turns out, $p$ can be chosen to be even regardless of the parity of $d$, with $i_{\max} = -i_{\min} = \lfloor (d+p-1)/2 \rfloor$; the symmetry of the centered template makes the remaining equations hold automatically. Table 1 indicates the choices for $d$ and $p$, the type of approximation (forward, backward, or centered), and the corresponding equation number.

Table 1. Choices of the parameters for the example approximations presented previously.

  equation   d   p   type       i_min   i_max
  (2)        1   1   forward     0       1
  (3)        1   1   backward   -1       0
  (4)        1   2   centered   -1       1
  (8)        1   2   forward     0       2
  (9)        1   2   backward   -2       0
  (10)       1   4   centered   -2       2
  (11)       2   1   forward     0       2
  (12)       2   2   centered   -1       1
  (13)       2   4   centered   -2       2

Example 1. Approximate $F^{(3)}(x)$ with a forward difference with error $O(h)$, so $d = 3$ and $p = 1$. We need $i_{\min} = 0$ and $i_{\max} = 3$. The linear system from equation (17) is

$$\begin{bmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 \\ 0 & 1 & 4 & 9 \\ 0 & 1 & 8 & 27 \end{bmatrix} \begin{bmatrix} C_0 \\ C_1 \\ C_2 \\ C_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$$

and has solution $(C_0, C_1, C_2, C_3) = (-1, 3, -3, 1)/6$. Equation (1) becomes

$$F^{(3)}(x) = \frac{-F(x) + 3F(x+h) - 3F(x+2h) + F(x+3h)}{h^3} + O(h).$$
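
The construction in equation (17) is straightforward to automate. The sketch below handles the square case, where the number of unknowns equals $d+p$; the helper name fd_template and the use of numpy are choices of this write-up, not part of the original document.

```python
import numpy as np

def fd_template(d, p, imin, imax):
    """Solve equation (17): sum_i i^n C_i = [n == d] for n = 0, ..., d+p-1."""
    assert imax - imin + 1 == d + p, "square case only: need exactly d+p unknowns"
    i = np.arange(imin, imax + 1, dtype=float)
    A = np.array([i**n for n in range(d + p)])  # row n holds the powers i^n
    b = np.zeros(d + p)
    b[d] = 1.0
    return np.linalg.solve(A, b)

# Row of Table 1 for equation (4): d=1, p=2, centered, i_min=-1, i_max=1.
print(fd_template(1, 2, -1, 1))  # [-0.5, 0, 0.5] up to roundoff, i.e. eq. (4)
```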

Approximate $F^{(3)}(x)$ with a centered difference with error $O(h^2)$, so $d = 3$ and $p = 2$. We need $i_{\max} = -i_{\min} = 2$. The linear system from equation (17) is

$$\begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ -2 & -1 & 0 & 1 & 2 \\ 4 & 1 & 0 & 1 & 4 \\ -8 & -1 & 0 & 1 & 8 \\ 16 & 1 & 0 & 1 & 16 \end{bmatrix} \begin{bmatrix} C_{-2} \\ C_{-1} \\ C_0 \\ C_1 \\ C_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}$$

and has solution $(C_{-2}, C_{-1}, C_0, C_1, C_2) = (-1, 2, 0, -2, 1)/12$. Equation (1) becomes

$$F^{(3)}(x) = \frac{-F(x-2h) + 2F(x-h) - 2F(x+h) + F(x+2h)}{2h^3} + O(h^2).$$

Finally, approximate with a centered difference with error $O(h^4)$, so $d = 3$ and $p = 4$. We need $i_{\max} = -i_{\min} = 3$. The linear system from equation (17) is

$$\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ -3 & -2 & -1 & 0 & 1 & 2 & 3 \\ 9 & 4 & 1 & 0 & 1 & 4 & 9 \\ -27 & -8 & -1 & 0 & 1 & 8 & 27 \\ 81 & 16 & 1 & 0 & 1 & 16 & 81 \\ -243 & -32 & -1 & 0 & 1 & 32 & 243 \\ 729 & 64 & 1 & 0 & 1 & 64 & 729 \end{bmatrix} \begin{bmatrix} C_{-3} \\ C_{-2} \\ C_{-1} \\ C_0 \\ C_1 \\ C_2 \\ C_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}$$

and has solution $(C_{-3}, C_{-2}, C_{-1}, C_0, C_1, C_2, C_3) = (1, -8, 13, 0, -13, 8, -1)/48$. Equation (1) becomes

$$F^{(3)}(x) = \frac{F(x-3h) - 8F(x-2h) + 13F(x-h) - 13F(x+h) + 8F(x+2h) - F(x+3h)}{8h^3} + O(h^4).$$
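
A quick numerical test (again an illustration of this write-up, with $F(x) = \sin x$ chosen arbitrarily) confirms the fourth-order behavior of the last formula.

```python
import math

def d3_centered4(F, x, h):
    """Centered O(h^4) approximation to F'''(x) from Example 1."""
    return (F(x - 3*h) - 8*F(x - 2*h) + 13*F(x - h)
            - 13*F(x + h) + 8*F(x + 2*h) - F(x + 3*h)) / (8.0 * h**3)

exact = -math.cos(1.0)  # F(x) = sin(x), so F'''(1) = -cos(1)
for h in (0.2, 0.1, 0.05):
    print(f"h={h:.2f}  err={abs(d3_centered4(math.sin, 1.0, h) - exact):.3e}")
# The error drops by roughly a factor of 16 per halving of h, as O(h^4) predicts.
```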

Example 2. Approximate $F^{(4)}(x)$ with a forward difference with error $O(h)$, so $d = 4$ and $p = 1$. We need $i_{\min} = 0$ and $i_{\max} = 4$. The linear system from equation (17) is

$$\begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 0 & 1 & 2 & 3 & 4 \\ 0 & 1 & 4 & 9 & 16 \\ 0 & 1 & 8 & 27 & 64 \\ 0 & 1 & 16 & 81 & 256 \end{bmatrix} \begin{bmatrix} C_0 \\ C_1 \\ C_2 \\ C_3 \\ C_4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$$

and has solution $(C_0, C_1, C_2, C_3, C_4) = (1, -4, 6, -4, 1)/24$. Equation (1) becomes

$$F^{(4)}(x) = \frac{F(x) - 4F(x+h) + 6F(x+2h) - 4F(x+3h) + F(x+4h)}{h^4} + O(h).$$

Approximate $F^{(4)}(x)$ with a centered difference with error $O(h^2)$, so $d = 4$ and $p = 2$. We need $i_{\max} = -i_{\min} = 2$. The linear system from equation (17) is

$$\begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ -2 & -1 & 0 & 1 & 2 \\ 4 & 1 & 0 & 1 & 4 \\ -8 & -1 & 0 & 1 & 8 \\ 16 & 1 & 0 & 1 & 16 \end{bmatrix} \begin{bmatrix} C_{-2} \\ C_{-1} \\ C_0 \\ C_1 \\ C_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$$

and has solution $(C_{-2}, C_{-1}, C_0, C_1, C_2) = (1, -4, 6, -4, 1)/24$. Equation (1) becomes

$$F^{(4)}(x) = \frac{F(x-2h) - 4F(x-h) + 6F(x) - 4F(x+h) + F(x+2h)}{h^4} + O(h^2).$$

Finally, approximate with a centered difference with error $O(h^4)$, so $d = 4$ and $p = 4$. We need $i_{\max} = -i_{\min} = 3$. The linear system from equation (17) is

$$\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ -3 & -2 & -1 & 0 & 1 & 2 & 3 \\ 9 & 4 & 1 & 0 & 1 & 4 & 9 \\ -27 & -8 & -1 & 0 & 1 & 8 & 27 \\ 81 & 16 & 1 & 0 & 1 & 16 & 81 \\ -243 & -32 & -1 & 0 & 1 & 32 & 243 \\ 729 & 64 & 1 & 0 & 1 & 64 & 729 \end{bmatrix} \begin{bmatrix} C_{-3} \\ C_{-2} \\ C_{-1} \\ C_0 \\ C_1 \\ C_2 \\ C_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}$$

and has solution $(C_{-3}, C_{-2}, C_{-1}, C_0, C_1, C_2, C_3) = (-1, 12, -39, 56, -39, 12, -1)/144$. Equation (1) becomes

$$F^{(4)}(x) = \frac{-F(x-3h) + 12F(x-2h) - 39F(x-h) + 56F(x) - 39F(x+h) + 12F(x+2h) - F(x+3h)}{6h^4} + O(h^4).$$
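
As a cross-check of these hand computations, equation (17) can also be solved in exact rational arithmetic. The sketch below (the helper solve_exact is an illustration of this write-up, not code from the original document) recovers the template of the last approximation above.

```python
from fractions import Fraction

def solve_exact(A, b):
    """Gauss-Jordan elimination over Fractions for a nonsingular system."""
    n = len(b)
    M = [row[:] + [b[r]] for r, row in enumerate(A)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)  # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * e for a, e in zip(M[r], M[c])]
    return [M[r][n] / M[r][r] for r in range(n)]

idx = range(-3, 4)                                     # i = -3, ..., 3
A = [[Fraction(i)**n for i in idx] for n in range(7)]  # rows n = 0, ..., 6
b = [Fraction(1 if n == 4 else 0) for n in range(7)]   # d = 4
print(solve_exact(A, b))
# Prints Fractions equal to (-1, 12, -39, 56, -39, 12, -1)/144.
```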

3 Derivatives of Bivariate Functions

For functions of more variables, the partial derivatives can be approximated by grouping together all of the same variables and applying the univariate approximation for that group. For example, if $F(x,y)$ is our function, then some partial derivative approximations are

$$F_x(x,y) \approx \frac{F(x+h,y) - F(x-h,y)}{2h}$$
$$F_y(x,y) \approx \frac{F(x,y+k) - F(x,y-k)}{2k}$$
$$F_{xx}(x,y) \approx \frac{F(x+h,y) - 2F(x,y) + F(x-h,y)}{h^2}$$
$$F_{yy}(x,y) \approx \frac{F(x,y+k) - 2F(x,y) + F(x,y-k)}{k^2}$$
$$F_{xy}(x,y) \approx \frac{F(x+h,y+k) - F(x+h,y-k) - F(x-h,y+k) + F(x-h,y-k)}{4hk} \tag{18}$$

Each of these can be verified in the limit: the $x$-derivatives by taking the limit as $h$ approaches zero, the $y$-derivatives by taking the limit as $k$ approaches zero, and the mixed second-order derivative by taking the limit as both $h$ and $k$ approach zero. The derivatives $F_x$, $F_y$, $F_{xx}$, and $F_{yy}$ just use the univariate approximation formulas. The mixed derivative requires slightly more work. The important observation is that the approximation for $F_{xy}$ is obtained by applying the $x$-derivative approximation to $F$, then applying the $y$-derivative approximation to the previous approximation. That is,

$$F_{xy}(x,y) \approx \frac{\partial}{\partial y}\left[\frac{F(x+h,y) - F(x-h,y)}{2h}\right] \approx \frac{1}{2k}\left[\frac{F(x+h,y+k) - F(x-h,y+k)}{2h} - \frac{F(x+h,y-k) - F(x-h,y-k)}{2h}\right]$$
$$= \frac{F(x+h,y+k) - F(x+h,y-k) - F(x-h,y+k) + F(x-h,y-k)}{4hk}. \tag{19}$$

The approximation implied by equation (1) may be written as

$$\frac{h^m}{m!} \frac{d^m}{dx^m} F(x) \approx \sum_{i=i_{\min}}^{i_{\max}} C^{(m)}_i F(x+ih). \tag{20}$$

The inclusion of the superscript on the $C$ coefficients is to emphasize that those coefficients are constructed for each order $m$. For bivariate functions, we can use the natural extension of equation (20) by applying the approximation in $x$ first, then applying the approximation in $y$ to that approximation, just as in our example of $F_{xy}$:

$$\frac{k^n}{n!} \frac{\partial^n}{\partial y^n} \left[ \frac{h^m}{m!} \frac{\partial^m}{\partial x^m} F(x,y) \right] \approx \frac{k^n}{n!} \frac{\partial^n}{\partial y^n} \sum_{i=i_{\min}}^{i_{\max}} C^{(m)}_i F(x+ih,\,y) \approx \sum_{i=i_{\min}}^{i_{\max}} \sum_{j=j_{\min}}^{j_{\max}} C^{(m)}_i C^{(n)}_j F(x+ih,\,y+jk) = \sum_{i=i_{\min}}^{i_{\max}} \sum_{j=j_{\min}}^{j_{\max}} C^{(m,n)}_{i,j} F(x+ih,\,y+jk), \tag{21}$$

where the last equality defines

$$C^{(m,n)}_{i,j} = C^{(m)}_i C^{(n)}_j. \tag{22}$$

The coefficients for the bivariate approximation are just the tensor product of the coefficients for each of the univariate approximations.
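
Equation (22) is easy to exercise directly. The sketch below (an illustration of this write-up, not code from the original document) forms the two-dimensional mask for $F_{xy}$ as the outer product of two copies of the centered first-derivative template and recovers the stencil of equation (18).

```python
import numpy as np

C1 = np.array([-1.0, 0.0, 1.0]) / 2.0  # template for h F'(x), equation (4)
mask = np.outer(C1, C1)                # C^{(1,1)}_{i,j} = C^{(1)}_i C^{(1)}_j
print(mask * 4)                        # corners are +-1, all other entries 0
# With row index i = -1, 0, 1 and column index j = -1, 0, 1, summing
# mask[i, j] * F(x + i*h, y + j*k) and dividing by h*k reproduces
#   (F(x+h,y+k) - F(x+h,y-k) - F(x-h,y+k) + F(x-h,y-k)) / (4hk).
```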

4 Derivatives of Multivariate Functions

The approximation concept extends to any number of variables. Let $(x_1, \ldots, x_n)$ be those variables and let $F(x_1, \ldots, x_n)$ be the function to approximate. The approximation is

$$\left( \frac{h_1^{m_1}}{m_1!} \frac{\partial^{m_1}}{\partial x_1^{m_1}} \right) \cdots \left( \frac{h_n^{m_n}}{m_n!} \frac{\partial^{m_n}}{\partial x_n^{m_n}} \right) F(x_1, \ldots, x_n) \approx \sum_{i_1=i^{(1)}_{\min}}^{i^{(1)}_{\max}} \cdots \sum_{i_n=i^{(n)}_{\min}}^{i^{(n)}_{\max}} C^{(m_1,\ldots,m_n)}_{(i_1,\ldots,i_n)} F(x_1 + i_1 h_1, \ldots, x_n + i_n h_n), \tag{23}$$

where

$$C^{(m_1,\ldots,m_n)}_{(i_1,\ldots,i_n)} = C^{(m_1)}_{i_1} \cdots C^{(m_n)}_{i_n} \tag{24}$$

is a tensor product of the coefficients of the $n$ univariate approximations.
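
The tensor product in equation (24) is a repeated outer product, which the following minimal sketch illustrates for three variables (the variable names and the choice of the mixed derivative $F_{x_1 x_2 x_3}$ are illustrative, not from the original document).

```python
import numpy as np
from functools import reduce

C1 = np.array([-1.0, 0.0, 1.0]) / 2.0           # centered template for h F'(x)
mask = reduce(np.multiply.outer, [C1, C1, C1])  # equation (24) with n = 3
print(mask.shape, mask[2, 2, 2] * 8)            # (3, 3, 3) 1.0
# mask[i1, i2, i3] weights F(x1 + i1*h1, x2 + i2*h2, x3 + i3*h3); dividing
# the weighted sum by h1*h2*h3 approximates the mixed derivative F_{x1 x2 x3}.
```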