
> 4. Interpolation and Approximation Most functions cannot be evaluated exactly: √x, e^x, ln x, trigonometric functions, since by using a computer we are limited to the elementary arithmetic operations +, −, ×, ÷. With these operations we can only evaluate polynomials and rational functions (polynomials divided by polynomials).

> 4. Interpolation and Approximation Interpolation Given points x_0, x_1, ..., x_n and corresponding values y_0, y_1, ..., y_n, find a function f(x) such that

f(x_i) = y_i, i = 0, ..., n.

The interpolation function f is usually taken from a restricted class of functions: polynomials.

> 4. Interpolation and Approximation > 4.1 Polynomial Interpolation Theory Interpolation of functions: given f(x) with values f(x_0), f(x_1), ..., f(x_n) at nodes x_0, x_1, ..., x_n, find a polynomial (or other special function) p(x) such that

p(x_i) = f(x_i), i = 0, ..., n.

What is the error f(x) − p(x)?

> 4. Interpolation and Approximation > 4.1.1 Linear interpolation Linear interpolation Given two points (x_0, y_0) and (x_1, y_1) with x_0 ≠ x_1, draw a line through them, i.e., the graph of the linear polynomial

l(x) = ((x − x_1)/(x_0 − x_1)) y_0 + ((x − x_0)/(x_1 − x_0)) y_1 = ((x_1 − x) y_0 + (x − x_0) y_1) / (x_1 − x_0)   (5.1)

We say that l(x) interpolates the value y_i at the point x_i, i = 0, 1, i.e., l(x_i) = y_i, i = 0, 1. Figure: Linear interpolation
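Formula (5.1) is a one-liner in code. The lecture's own programs are MATLAB; this Python port is a sketch of ours, not from the notes:

```python
def linear_interp(x0, y0, x1, y1, x):
    # Formula (5.1): l(x) = ((x1 - x) y0 + (x - x0) y1) / (x1 - x0)
    return ((x1 - x) * y0 + (x - x0) * y1) / (x1 - x0)

# l interpolates the data: l(x0) = y0 and l(x1) = y1
print(linear_interp(1.0, 1.0, 4.0, 2.0, 2.5))  # midpoint of the next example's data: 1.5
```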

> 4. Interpolation and Approximation > 4.1.1 Linear interpolation Example Let the data points be (1, 1) and (4, 2). The polynomial P_1(x) is given by

P_1(x) = ((4 − x) · 1 + (x − 1) · 2) / 3   (5.2)

The graph shows y = P_1(x) and y = √x, from which the data points were taken. Figure: y = √x and its linear interpolating polynomial (5.2)

> 4. Interpolation and Approximation > 4.1.1 Linear interpolation Example Obtain an estimate of e^0.826 using the function values

e^0.82 = 2.270500, e^0.83 = 2.293319

Denote x_0 = 0.82, x_1 = 0.83. The polynomial P_1(x) interpolating e^x at x_0 and x_1 is

P_1(x) = ((0.83 − x) · 2.270500 + (x − 0.82) · 2.293319) / 0.01

and

P_1(0.826) = 2.2841914   (5.3)

while the true value is e^0.826 = 2.2841638 to eight significant digits.
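This computation is easy to reproduce; a short Python check (our sketch, not the lecture's code):

```python
import math

x0, x1 = 0.82, 0.83
y0, y1 = 2.270500, 2.293319   # e^0.82, e^0.83 rounded to seven digits

def P1(x):
    # Linear interpolant (5.1) through (x0, y0), (x1, y1)
    return ((x1 - x) * y0 + (x - x0) * y1) / (x1 - x0)

est = P1(0.826)                      # ~ 2.2841914, as in (5.3)
err = abs(math.exp(0.826) - est)     # ~ 2.8e-5: about five correct digits
```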

> 4. Interpolation and Approximation > 4.1.2 Quadratic Interpolation Assume three data points (x_0, y_0), (x_1, y_1), (x_2, y_2), with x_0, x_1, x_2 distinct. We construct the quadratic polynomial passing through these points using Lagrange's formula

P_2(x) = y_0 L_0(x) + y_1 L_1(x) + y_2 L_2(x)   (5.4)

with the Lagrange interpolation basis functions for the quadratic interpolating polynomial

L_0(x) = (x − x_1)(x − x_2) / ((x_0 − x_1)(x_0 − x_2))
L_1(x) = (x − x_0)(x − x_2) / ((x_1 − x_0)(x_1 − x_2))   (5.5)
L_2(x) = (x − x_0)(x − x_1) / ((x_2 − x_0)(x_2 − x_1))

Each L_i(x) has degree 2, so P_2(x) has degree ≤ 2. Moreover

L_i(x_j) = δ_ij = { 1, i = j; 0, i ≠ j }, 0 ≤ i, j ≤ 2

with δ_ij the Kronecker delta, so P_2(x) interpolates the data: P_2(x_i) = y_i, i = 0, 1, 2.

> 4. Interpolation and Approximation > 4.1.2 Quadratic Interpolation Example Construct P_2(x) for the data points (0, −1), (1, −1), (2, 7). Then

P_2(x) = ((x − 1)(x − 2)/2) · (−1) + (x(x − 2)/(−1)) · (−1) + (x(x − 1)/2) · 7   (5.6)

Figure: The quadratic interpolating polynomial (5.6)
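A small Python sketch of (5.4)-(5.5), assuming the data values are (0, −1), (1, −1), (2, 7) (the minus signs are consistent with the factors in (5.6)); it confirms that P_2 reproduces the data and collapses to a single quadratic:

```python
xs = [0.0, 1.0, 2.0]
ys = [-1.0, -1.0, 7.0]

def L(i, x):
    # Lagrange basis function L_i(x) from (5.5): 1 at xs[i], 0 at the other nodes
    p = 1.0
    for j in range(len(xs)):
        if j != i:
            p *= (x - xs[j]) / (xs[i] - xs[j])
    return p

def P2(x):
    # Formula (5.4): passes through all three data points
    return sum(ys[i] * L(i, x) for i in range(len(xs)))

# P2 simplifies to 4x^2 - 4x - 1
```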

> 4. Interpolation and Approximation > 4.1.2 Quadratic Interpolation With linear interpolation it is obvious that there is only one straight line passing through two given data points. With three data points there is likewise only one quadratic interpolating polynomial whose graph passes through the points. Indeed: assume Q_2(x) with deg(Q_2) ≤ 2 passes through (x_i, y_i), i = 0, 1, 2; then it is equal to P_2(x). The polynomial R(x) = P_2(x) − Q_2(x) has deg(R) ≤ 2 and

R(x_i) = P_2(x_i) − Q_2(x_i) = y_i − y_i = 0, for i = 0, 1, 2

So R(x) is a polynomial of degree ≤ 2 with three roots ⇒ R(x) ≡ 0.

> 4. Interpolation and Approximation > 4.1.2 Quadratic Interpolation Example Calculate a quadratic interpolate to e^0.826 from the function values

e^0.82 = 2.270500, e^0.83 = 2.293319, e^0.84 = 2.316367

With x_0 = 0.82, x_1 = 0.83, x_2 = 0.84, we have

P_2(0.826) = 2.2841639

to eight digits, while the true answer is e^0.826 = 2.2841638, and P_1(0.826) = 2.2841914.

> 4. Interpolation and Approximation > 4.1.3 Higher-degree interpolation Lagrange's Formula Given n + 1 data points (x_0, y_0), (x_1, y_1), ..., (x_n, y_n) with all x_i's distinct, there is a unique P_n with deg(P_n) ≤ n such that P_n(x_i) = y_i, i = 0, ..., n, given by Lagrange's formula

P_n(x) = Σ_{i=0}^{n} y_i L_i(x) = y_0 L_0(x) + y_1 L_1(x) + ⋯ + y_n L_n(x)   (5.7)

where

L_i(x) = Π_{j=0, j≠i}^{n} (x − x_j)/(x_i − x_j)
       = ((x − x_0) ⋯ (x − x_{i−1})(x − x_{i+1}) ⋯ (x − x_n)) / ((x_i − x_0) ⋯ (x_i − x_{i−1})(x_i − x_{i+1}) ⋯ (x_i − x_n)),

with L_i(x_j) = δ_ij.
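Formula (5.7) translates directly into code; here is a Python sketch (the helper name `lagrange` is ours):

```python
def lagrange(xs, ys, x):
    # Evaluate P_n(x) = sum_i y_i * prod_{j != i} (x - x_j)/(x_i - x_j)  -- (5.7)
    total = 0.0
    for i in range(len(xs)):
        term = ys[i]
        for j in range(len(xs)):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])
        total += term
    return total

# Data taken from f(x) = x^3 at four nodes is reproduced exactly (up to rounding),
# since deg(f) <= n = 3 and the interpolant of degree <= n is unique
val = lagrange([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 8.0, 27.0], 1.5)
```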

> 4. Interpolation and Approximation > 4.1.4 Divided differences Remark: The Lagrange formula (5.7) is suited for theoretical uses, but is impractical for computing the value of an interpolating polynomial: knowing P_2(x) does not lead to a less expensive way to compute P_3(x). For that we need some preliminaries, and we start with a discrete version of the derivative of a function f(x). Definition (First-order divided difference) Let x_0 ≠ x_1; we define the first-order divided difference of f(x) by

f[x_0, x_1] = (f(x_1) − f(x_0)) / (x_1 − x_0)   (5.8)

If f(x) is differentiable on an interval containing x_0 and x_1, then the mean value theorem gives f[x_0, x_1] = f′(c) for some c between x_0 and x_1. Also, if x_0 and x_1 are close together, then

f[x_0, x_1] ≈ f′((x_0 + x_1)/2)

is usually a very good approximation.

> 4. Interpolation and Approximation > 4.1.4 Divided differences Example Let f(x) = cos(x), x_0 = 0.2, x_1 = 0.3. Then

f[x_0, x_1] = (cos(0.3) − cos(0.2)) / (0.3 − 0.2) = −0.2473009   (5.9)

while

f′((x_0 + x_1)/2) = −sin(0.25) = −0.2474040

so f[x_0, x_1] is a very good approximation of this derivative.

> 4. Interpolation and Approximation > 4.1.4 Divided differences Higher-order divided differences are defined recursively. Let x_0, x_1, x_2 ∈ R be distinct. The second-order divided difference is

f[x_0, x_1, x_2] = (f[x_1, x_2] − f[x_0, x_1]) / (x_2 − x_0)   (5.10)

Let x_0, x_1, x_2, x_3 ∈ R be distinct. The third-order divided difference is

f[x_0, x_1, x_2, x_3] = (f[x_1, x_2, x_3] − f[x_0, x_1, x_2]) / (x_3 − x_0)   (5.11)

In general, let x_0, x_1, ..., x_n ∈ R be n + 1 distinct numbers. The divided difference of order n,

f[x_0, ..., x_n] = (f[x_1, ..., x_n] − f[x_0, ..., x_{n−1}]) / (x_n − x_0)   (5.12)

is also called the Newton divided difference.

> 4. Interpolation and Approximation > 4.1.4 Divided differences Theorem Let n ≥ 1, f ∈ C^n[α, β], and let x_0, x_1, ..., x_n be n + 1 distinct numbers in [α, β]. Then

f[x_0, x_1, ..., x_n] = (1/n!) f^(n)(c)   (5.13)

for some unknown point c between the minimum and the maximum of x_0, ..., x_n. Example Let f(x) = cos(x), x_0 = 0.2, x_1 = 0.3, x_2 = 0.4. Then f[x_0, x_1] is given by (5.9), and

f[x_1, x_2] = (cos(0.4) − cos(0.3)) / (0.4 − 0.3) = −0.3427550

hence from (5.10)

f[x_0, x_1, x_2] = (−0.3427550 − (−0.2473009)) / (0.4 − 0.2) = −0.4772705   (5.14)

For n = 2, (5.13) becomes

f[x_0, x_1, x_2] = (1/2) f″(c) = −(1/2) cos(c) ≈ −(1/2) cos(0.3) = −0.4776682

which is nearly equal to the result in (5.14).
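The example's numbers are easy to reproduce; the following Python snippet (ours, mirroring definitions (5.8) and (5.10), with the signs restored) checks them against f″(c)/2:

```python
import math

x0, x1, x2 = 0.2, 0.3, 0.4
f = math.cos

d01 = (f(x1) - f(x0)) / (x1 - x0)   # f[x0, x1]     ~ -0.2473009, cf. (5.9)
d12 = (f(x2) - f(x1)) / (x2 - x1)   # f[x1, x2]     ~ -0.3427550
d012 = (d12 - d01) / (x2 - x0)      # f[x0, x1, x2] ~ -0.4772705, cf. (5.14)

half_fpp = -0.5 * math.cos(0.3)     # (1/2) f''(0.3) ~ -0.4776682, cf. (5.13)
```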

> 4. Interpolation and Approximation > 4.1.5 Properties of divided differences The divided differences (5.12) have special properties that help simplify work with them. (1) Let (i_0, i_1, ..., i_n) be a permutation (rearrangement) of the integers (0, 1, ..., n). It can be shown that

f[x_{i_0}, x_{i_1}, ..., x_{i_n}] = f[x_0, x_1, ..., x_n]   (5.15)

The original definition (5.12) seems to imply that the order of x_0, x_1, ..., x_n is important, but (5.15) asserts that this is not true. For n = 1,

f[x_0, x_1] = (f(x_0) − f(x_1)) / (x_0 − x_1) = (f(x_1) − f(x_0)) / (x_1 − x_0) = f[x_1, x_0]

For n = 2 we can expand (5.10) to get

f[x_0, x_1, x_2] = f(x_0)/((x_0 − x_1)(x_0 − x_2)) + f(x_1)/((x_1 − x_0)(x_1 − x_2)) + f(x_2)/((x_2 − x_0)(x_2 − x_1))

which is symmetric in x_0, x_1, x_2. (2) The definitions (5.8)-(5.12) extend to the case where some or all of the x_i coincide, provided that f(x) is sufficiently differentiable.

> 4. Interpolation and Approximation > 4.1.5 Properties of divided differences For example, define

f[x_0, x_0] = lim_{x_1 → x_0} f[x_0, x_1] = lim_{x_1 → x_0} (f(x_1) − f(x_0)) / (x_1 − x_0)

so that

f[x_0, x_0] = f′(x_0)

For an arbitrary n ≥ 1, letting all x_i → x_0 leads to the definition

f[x_0, ..., x_0] = (1/n!) f^(n)(x_0)   (5.16)

(with x_0 repeated n + 1 times). For the cases where only some of the nodes coincide, using (5.15) and (5.16) we can extend the definition of the divided difference. For example,

f[x_0, x_1, x_0] = f[x_0, x_0, x_1] = (f[x_0, x_1] − f[x_0, x_0]) / (x_1 − x_0) = (f[x_0, x_1] − f′(x_0)) / (x_1 − x_0)

> 4. Interpolation and Approximation > 4.1.5 Properties of divided differences MATLAB program: evaluating divided differences Given a set of values f(x_0), ..., f(x_n), we need to calculate the set of divided differences

f[x_0, x_1], f[x_0, x_1, x_2], ..., f[x_0, x_1, ..., x_n]

We can use the MATLAB function divdif with the function call

divdif_y = divdif(x_nodes, y_values)

Note that MATLAB does not allow zero subscripts, hence x_nodes and y_values have to be defined as vectors containing n + 1 components:

x_nodes = [x_0, x_1, ..., x_n], x_nodes(i) = x_{i−1}, i = 1, ..., n + 1
y_values = [f(x_0), f(x_1), ..., f(x_n)], y_values(i) = f(x_{i−1}), i = 1, ..., n + 1

> 4. Interpolation and Approximation > 4.1.5 Properties of divided differences MATLAB program: evaluating divided differences

function divdif_y = divdif(x_nodes,y_values)
%
% This is a function
%   divdif_y = divdif(x_nodes,y_values)
% It calculates the divided differences of the function
% values given in the vector y_values, which are the values of
% some function f(x) at the nodes given in x_nodes. On exit,
%   divdif_y(i) = f[x_1,...,x_i], i=1,...,m
% with m the length of x_nodes. The input values x_nodes and
% y_values are not changed by this program.
%
divdif_y = y_values;
m = length(x_nodes);
for i=2:m
  for j=m:-1:i
    divdif_y(j) = (divdif_y(j)-divdif_y(j-1))...
                  /(x_nodes(j)-x_nodes(j-i+1));
  end
end

> 4. Interpolation and Approximation > 4.1.5 Properties of divided differences This program is illustrated in the following table.

i    x_i   cos(x_i)   D_i
0    0.0   1.000000    0.1000000E+1
1    0.2   0.980067   -0.9966711E-1
2    0.4   0.921061   -0.4884020E+0
3    0.6   0.825336    0.4900763E-1
4    0.8   0.696707    0.3812246E-1
5    1.0   0.540302   -0.3962047E-2
6    1.2   0.362358   -0.1134890E-2

Table: Values and divided differences for cos(x)
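A Python port of divdif (our translation of the MATLAB routine, assuming evenly spaced nodes x_i = 0.2 i, which match the tabulated cosine values) reproduces the D_i column:

```python
import math

def divdif(x_nodes, y_values):
    # In-place Newton divided-difference table: on exit d[i] = f[x_0, ..., x_i].
    # The inner loop runs bottom-up so lower-order entries are still available.
    d = list(y_values)
    m = len(x_nodes)
    for i in range(1, m):
        for j in range(m - 1, i - 1, -1):
            d[j] = (d[j] - d[j - 1]) / (x_nodes[j] - x_nodes[j - i])
    return d

xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2]
D = divdif(xs, [math.cos(x) for x in xs])
# D[1] ~ -0.09966711 and D[2] ~ -0.4884020, matching the table
```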

> 4. Interpolation and Approximation > 4.1.6 Newton's Divided Differences Newton interpolation formula Interpolation of f at x_0, x_1, ..., x_n. Idea: use P_{n−1}(x) in the definition of P_n(x):

P_n(x) = P_{n−1}(x) + C(x)

where C(x) ∈ P_n is a correction term. Since P_n and P_{n−1} both interpolate f at x_0, ..., x_{n−1},

C(x_i) = 0, i = 0, ..., n − 1 ⇒ C(x) = a(x − x_0)(x − x_1) ⋯ (x − x_{n−1})

where a is the coefficient of x^n in P_n(x) = ax^n + ⋯

> 4. Interpolation and Approximation > 4.1.6 Newton's Divided Differences Definition: The divided difference of f(x) at the points x_0, ..., x_n, denoted f[x_0, x_1, ..., x_n], is defined to be the coefficient of x^n in P_n(x; f). Then

C(x) = f[x_0, x_1, ..., x_n](x − x_0) ⋯ (x − x_{n−1})

p_n(x; f) = p_{n−1}(x; f) + f[x_0, x_1, ..., x_n](x − x_0) ⋯ (x − x_{n−1})   (5.17)

p_1(x; f) = f(x_0) + f[x_0, x_1](x − x_0)   (5.18)

p_2(x; f) = f(x_0) + f[x_0, x_1](x − x_0) + f[x_0, x_1, x_2](x − x_0)(x − x_1)   (5.19)

⋮

p_n(x; f) = f(x_0) + f[x_0, x_1](x − x_0) + ⋯ + f[x_0, ..., x_n](x − x_0) ⋯ (x − x_{n−1})   (5.20)

which is the Newton interpolation formula.

> 4. Interpolation and Approximation > 4.1.6 Newton's Divided Differences For (5.18), consider p_1(x_0) and p_1(x_1). Easily, p_1(x_0) = f(x_0), and

p_1(x_1) = f(x_0) + (x_1 − x_0)[(f(x_1) − f(x_0))/(x_1 − x_0)] = f(x_0) + [f(x_1) − f(x_0)] = f(x_1)

So deg(p_1) ≤ 1 and p_1 satisfies the interpolation conditions; by the uniqueness of polynomial interpolation, (5.18) is the linear interpolating polynomial to f(x) at x_0, x_1. For (5.19), note that

p_2(x) = p_1(x) + (x − x_0)(x − x_1) f[x_0, x_1, x_2]

It satisfies deg(p_2) ≤ 2 and p_2(x_i) = p_1(x_i) + 0 = f(x_i), i = 0, 1. Also

p_2(x_2) = f(x_0) + (x_2 − x_0) f[x_0, x_1] + (x_2 − x_0)(x_2 − x_1) f[x_0, x_1, x_2]
         = f(x_0) + (x_2 − x_0) f[x_0, x_1] + (x_2 − x_1){f[x_1, x_2] − f[x_0, x_1]}
         = f(x_0) + (x_1 − x_0) f[x_0, x_1] + (x_2 − x_1) f[x_1, x_2]
         = f(x_0) + {f(x_1) − f(x_0)} + {f(x_2) − f(x_1)} = f(x_2)

By the uniqueness of polynomial interpolation, this is the quadratic interpolating polynomial to f(x) at {x_0, x_1, x_2}.

> 4. Interpolation and Approximation > 4.1.6 Newton's Divided Differences Example Find p(x) ∈ P_2 such that p(−1) = 0, p(0) = 1, p(1) = 4. Use

p(x) = p(x_0) + p[x_0, x_1](x − x_0) + p[x_0, x_1, x_2](x − x_0)(x − x_1)

with the divided difference table

x_i    p(x_i)   p[x_i, x_{i+1}]   p[x_i, x_{i+1}, x_{i+2}]
−1      0
                      1
 0      1                               1
                      3
 1      4

so

p(x) = 0 + 1 · (x + 1) + 1 · (x + 1)(x − 0) = x² + 2x + 1
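The table can be checked in a few lines of Python (a sketch of (5.19); the variable names are ours):

```python
xs = [-1.0, 0.0, 1.0]
ys = [0.0, 1.0, 4.0]

d01 = (ys[1] - ys[0]) / (xs[1] - xs[0])   # p[x0, x1] = 1
d12 = (ys[2] - ys[1]) / (xs[2] - xs[1])   # p[x1, x2] = 3
d012 = (d12 - d01) / (xs[2] - xs[0])      # p[x0, x1, x2] = 1

def p(x):
    # Newton form (5.19)
    return ys[0] + d01 * (x - xs[0]) + d012 * (x - xs[0]) * (x - xs[1])

# p(x) = x^2 + 2x + 1 = (x + 1)^2
```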

> 4. Interpolation and Approximation > 4.1.6 Newton's Divided Differences Example Let f(x) = cos(x). The previous table contains a set of nodes x_i, the values f(x_i), and the divided differences D_i = f[x_0, ..., x_i] computed with divdif. The table below contains the values of p_n(x) for various values of n, computed with interp, and the true values of f(x).

n     p_n(0.1)    p_n(0.3)    p_n(0.5)
1     0.9900333   0.9700999   0.9501664
2     0.9949173   0.9554478   0.8769061
3     0.9950643   0.9553008   0.8776413
4     0.9950071   0.9553351   0.8775841
5     0.9950030   0.9553369   0.8775823
6     0.9950041   0.9553365   0.8775825
True  0.9950042   0.9553365   0.8775826

Table: Interpolation to cos(x) using (5.20)

> 4. Interpolation and Approximation > 4.1.6 Newton's Divided Differences In general, the interpolation node points x_i need not be evenly spaced, nor arranged in any particular order, to use the divided difference interpolation formula (5.20). To evaluate (5.20) efficiently we can use a nested multiplication algorithm. Writing

P_n(x) = D_0 + (x − x_0)D_1 + (x − x_0)(x − x_1)D_2 + ⋯ + (x − x_0) ⋯ (x − x_{n−1})D_n

with D_0 = f(x_0), D_i = f[x_0, ..., x_i] for i ≥ 1, we have

P_n(x) = D_0 + (x − x_0)[D_1 + (x − x_1)[D_2 + ⋯ + (x − x_{n−2})[D_{n−1} + (x − x_{n−1})D_n] ⋯ ]]   (5.21)

For example,

P_3(x) = D_0 + (x − x_0)[D_1 + (x − x_1)[D_2 + (x − x_2)D_3]]

(5.21) uses only n multiplications to evaluate P_n(x) and is more convenient for a fixed degree n. To compute a sequence of interpolation polynomials of increasing degree, it is more efficient to use the original form (5.20).
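The nested form (5.21) is a Horner-style loop; a minimal Python sketch (the function name is ours):

```python
def newton_eval(x_nodes, D, x):
    # Evaluate P_n(x) by the nested multiplication (5.21):
    # work from the innermost bracket outward, n multiplications for degree n.
    p = D[-1]
    for i in range(len(D) - 2, -1, -1):
        p = D[i] + (x - x_nodes[i]) * p
    return p

# With nodes -1, 0, 1 and D = [0, 1, 1] (the previous example),
# this evaluates p(x) = x^2 + 2x + 1
```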

> 4. Interpolation and Approximation > 4.1.6 Newton's Divided Differences MATLAB: evaluating the Newton divided difference form of the interpolation polynomial

function p_eval = interp(x_nodes,divdif_y,x_eval)
%
% This is a function
%   p_eval = interp(x_nodes,divdif_y,x_eval)
% It calculates the Newton divided difference form of
% the interpolation polynomial of degree m-1, where the
% nodes are given in x_nodes, m is the length of x_nodes,
% and the divided differences are given in divdif_y. The
% points at which the interpolation is to be carried out
% are given in x_eval; and on exit, p_eval contains the
% corresponding values of the interpolation polynomial.
%
m = length(x_nodes);
p_eval = divdif_y(m)*ones(size(x_eval));
for i=m-1:-1:1
  p_eval = divdif_y(i) + (x_eval - x_nodes(i)).*p_eval;
end

> 4. Interpolation and Approximation > 4.2 Error in polynomial interpolation Formula for the error E(x) = f(x) − P_n(x), where

P_n(x) = Σ_{j=0}^{n} f(x_j) L_j(x)

Theorem Let n ≥ 0, f ∈ C^{n+1}[a, b], and let x_0, x_1, ..., x_n be distinct points in [a, b]. Then

f(x) − P_n(x) = (x − x_0) ⋯ (x − x_n) f[x_0, x_1, ..., x_n, x]   (5.22)
             = ((x − x_0)(x − x_1) ⋯ (x − x_n) / (n + 1)!) f^{(n+1)}(c_x)   (5.23)

for x ∈ [a, b], with c_x unknown between the minimum and maximum of x_0, x_1, ..., x_n and x.

> 4. Interpolation and Approximation > 4.2 Error in polynomial interpolation Proof of Theorem Fix t ∈ R. Consider p_{n+1}(x; f), which interpolates f at x_0, ..., x_n, t, and p_n(x; f), which interpolates f at x_0, ..., x_n. From the Newton formula (5.17),

p_{n+1}(x; f) = p_n(x; f) + f[x_0, ..., x_n, t](x − x_0) ⋯ (x − x_n)

Setting x = t,

f(t) = p_{n+1}(t; f) = p_n(t; f) + f[x_0, ..., x_n, t](t − x_0) ⋯ (t − x_n)

hence

E(t) = f(t) − p_n(t; f) = f[x_0, x_1, ..., x_n, t](t − x_0) ⋯ (t − x_n)

> 4. Interpolation and Approximation > 4.2 Error in polynomial interpolation Examples

f(x) = x^n: f[x_0, ..., x_n] = n!/n! = 1; equivalently, p_n(x; f) = x^n, whose leading coefficient is f[x_0, ..., x_n] = 1.

f ∈ P_{n−1}: f[x_0, x_1, ..., x_n] = 0.

f(x) = x^{n+1}: f[x_0, ..., x_n] = x_0 + x_1 + ⋯ + x_n. Indeed, R(x) = x^{n+1} − p_n(x; f) satisfies R ∈ P_{n+1} and R(x_i) = 0, i = 0, ..., n, so

R(x) = (x − x_0) ⋯ (x − x_n) = x^{n+1} − (x_0 + ⋯ + x_n)x^n + ⋯

and comparing the coefficient of x^n in p_n(x; f) = x^{n+1} − R(x) gives f[x_0, ..., x_n] = x_0 + x_1 + ⋯ + x_n.

If f ∈ P_m, then as a function of x,

f[x_0, ..., x_n, x] = { a polynomial of degree m − n − 1, if n ≤ m − 1; 0, if n > m − 1 }

> 4. Interpolation and Approximation > 4.2 Error in polynomial interpolation Example Take f(x) = e^x, x ∈ [0, 1], and consider the error in linear interpolation to f(x) using x_0, x_1 satisfying 0 ≤ x_0 < x_1 ≤ 1. From (5.23),

e^x − P_1(x) = ((x − x_0)(x − x_1)/2) e^{c_x}

for some c_x between the max and min of x_0, x_1 and x. Assuming x_0 ≤ x ≤ x_1, the error is negative and approximately a quadratic polynomial:

e^x − P_1(x) = −((x_1 − x)(x − x_0)/2) e^{c_x}

Since x_0 ≤ c_x ≤ x_1,

((x_1 − x)(x − x_0)/2) e^{x_0} ≤ |e^x − P_1(x)| ≤ ((x_1 − x)(x − x_0)/2) e^{x_1}

> 4. Interpolation and Approximation > 4.2 Error in polynomial interpolation For a bound independent of x, use e^{x_1} ≤ e on [0, 1] and

max_{x_0 ≤ x ≤ x_1} (x_1 − x)(x − x_0)/2 = h²/8, h = x_1 − x_0

so that

|e^x − P_1(x)| ≤ (h²/8) e, 0 ≤ x_0 ≤ x ≤ x_1 ≤ 1

independent of x, x_0, x_1. Recall that we estimated e^0.826 ≈ 2.2841914 using e^0.82 and e^0.83, i.e., h = 0.01. Then

|e^x − P_1(x)| ≤ (0.01² · 2.72)/8 = 0.0000340

The actual error being 0.0000276, it satisfies this bound.
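Both the bound and the actual error can be checked numerically; a Python sketch (ours, using full-precision node values rather than the rounded table entries):

```python
import math

x0, x1 = 0.82, 0.83
h = x1 - x0

def P1(x):
    # Linear interpolant to e^x at x0, x1, cf. (5.1)
    return ((x1 - x) * math.exp(x0) + (x - x0) * math.exp(x1)) / h

actual = abs(math.exp(0.826) - P1(0.826))   # ~ 2.7e-5
bound = h**2 / 8 * math.e                   # h^2 e / 8 ~ 3.40e-5
```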


> 4. Interpolation and Approximation > 4.2 Error in polynomial interpolation Example Again let f(x) = e^x on [0, 1], but consider quadratic interpolation. Then

e^x − P_2(x) = ((x − x_0)(x − x_1)(x − x_2)/6) e^{c_x}

for some c_x between the min and max of x_0, x_1, x_2 and x. Assuming evenly spaced points, h = x_1 − x_0 = x_2 − x_1, and 0 ≤ x_0 ≤ x ≤ x_2 ≤ 1, we have as before

|e^x − P_2(x)| ≤ (|(x − x_0)(x − x_1)(x − x_2)|/6) e^1

while

max_{x_0 ≤ x ≤ x_2} |(x − x_0)(x − x_1)(x − x_2)|/6 = h³/(9√3)   (5.24)

hence

|e^x − P_2(x)| ≤ h³e/(9√3) ≈ 0.174 h³

> 4. Interpolation and Approximation > 4.2 Error in polynomial interpolation For h = 0.01 and 0 ≤ x ≤ 1,

|e^x − P_2(x)| ≤ 1.74 × 10⁻⁷

> 4. Interpolation and Approximation > 4.2 Error in polynomial interpolation Let

w_2(x) = (x + h) x (x − h)/6 = (x³ − h²x)/6

a translation along the x-axis of the polynomial in (5.24). Then x = ±h/√3 satisfies

0 = w_2′(x) = (3x² − h²)/6

and gives

|w_2(±h/√3)| = h³/(9√3)

Figure: y = w_2(x)

> 4. Interpolation and Approximation > 4.2.2 Behaviour of the error When we consider the error formula (5.23) or (5.22), the polynomial

ψ_n(x) = (x − x_0) ⋯ (x − x_n)

is crucial in determining the behaviour of the error. Let us assume that x_0, ..., x_n are evenly spaced and x_0 ≤ x ≤ x_n. Figure: y = ψ_6(x)

> 4. Interpolation and Approximation > 4.2.2 Behaviour of the error Note that the interpolation error is relatively larger in [x_0, x_1] and [x_5, x_6], and is likely to be smaller near the middle of the node points. In practical interpolation problems, high-degree polynomial interpolation with evenly spaced nodes is seldom used. But when the set of nodes is suitably chosen, high-degree polynomials can be very useful in obtaining polynomial approximations to functions. Example Let f(x) = cos(x), h = 0.2, n = 8, and interpolate at x = 0.9. Case (i): x_0 = 0.8, x_8 = 2.4, so x = 0.9 ∈ [x_0, x_1]. By direct calculation of P_8(0.9),

cos(0.9) − P_8(0.9) = 5.51 × 10⁻⁹

Case (ii): x_0 = 0.2, x_8 = 1.8, so x = 0.9 ∈ [x_3, x_4], where x_4 is the midpoint. Then

cos(0.9) − P_8(0.9) = 2.26 × 10⁻¹⁰

a factor of 24 smaller than in the first case.
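The two cases can be reproduced with divided differences; the snippet below is a self-contained Python sketch (the helper names are ours, the routines mirror divdif and the nested evaluation (5.21)):

```python
import math

def divdif(xs, ys):
    # Newton divided-difference coefficients d[i] = f[x_0, ..., x_i]
    d = list(ys)
    for k in range(1, len(xs)):
        for j in range(len(xs) - 1, k - 1, -1):
            d[j] = (d[j] - d[j - 1]) / (xs[j] - xs[j - k])
    return d

def newton_eval(xs, D, x):
    # Nested multiplication (5.21)
    p = D[-1]
    for i in range(len(D) - 2, -1, -1):
        p = D[i] + (x - xs[i]) * p
    return p

def err(x0, x=0.9, n=8, h=0.2):
    # Interpolation error |cos(x) - P_n(x)| with nodes x0, x0 + h, ..., x0 + n h
    xs = [x0 + h * j for j in range(n + 1)]
    D = divdif(xs, [math.cos(t) for t in xs])
    return abs(math.cos(x) - newton_eval(xs, D, x))

e1 = err(0.8)   # x = 0.9 near the left end of [0.8, 2.4]: ~ 5.5e-9
e2 = err(0.2)   # x = 0.9 near the middle of [0.2, 1.8]:   ~ 2.3e-10
```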

> 4. Interpolation and Approximation > 4.2.2 Behaviour of the error Example Let f(x) = 1/(1 + x²), x ∈ [−5, 5], n > 0 an even integer, h = 10/n, and

x_j = −5 + jh, j = 0, 1, 2, ..., n

It can be shown that for many points x in [−5, 5], the sequence {P_n(x)} does not converge to f(x) as n → ∞. Figure: The interpolation to 1/(1 + x²)

> 4. Interpolation and Approximation > 4.3 Interpolation using spline functions

x   0     1     2     2.5   3     3.5    4
y   2.5   0.5   0.5   1.5   1.5   1.125  0

The simplest method of interpolating data in a table: connecting the node points by straight lines. The resulting curve y = l(x) is not very smooth. Figure: y = l(x): piecewise linear interpolation. We want to construct a smooth curve that interpolates the given data points and follows the shape of y = l(x).

> 4. Interpolation and Approximation > 4.3 Interpolation using spline functions Next choice: use polynomial interpolation. With seven data points, we consider the interpolating polynomial P_6(x) of degree 6. Figure: y = P_6(x): polynomial interpolation. Smooth, but quite different from y = l(x)!

> 4. Interpolation and Approximation > 4.3 Interpolation using spline functions A third choice: connect the data in the table using a succession of quadratic interpolating polynomials, one on each of [0, 2], [2, 3], [3, 4]. Figure: y = q(x): piecewise quadratic interpolation. This is smoother than y = l(x) and follows it more closely than y = P_6(x), but at x = 2 and x = 3 the graph has corners, i.e., q′(x) is discontinuous there.

> 4. Interpolation and Approximation > 4.3.1 Spline interpolation Cubic spline interpolation Suppose n data points (x_i, y_i), i = 1, ..., n are given, with

a = x_1 < ⋯ < x_n = b

We seek a function S(x) defined on [a, b] that interpolates the data: S(x_i) = y_i, i = 1, ..., n. Specifically, find S(x) ∈ C²[a, b], a natural cubic spline, such that

1 S(x) is a polynomial of degree ≤ 3 on each subinterval [x_{j−1}, x_j], j = 2, 3, ..., n;
2 S(x_i) = y_i, i = 1, ..., n;
3 S″(a) = S″(b) = 0.

On each [x_{i−1}, x_i] a cubic has 4 degrees of freedom, giving 4(n − 1) DOF in total.

> 4. Interpolation and Approximation > 4.3.1 Spline interpolation Counting constraints:

S(x_i) = y_i, i = 1, ..., n: n constraints
S(x_i − 0) = S(x_i + 0), i = 2, ..., n − 1: continuity
S′(x_i − 0) = S′(x_i + 0), i = 2, ..., n − 1
S″(x_i − 0) = S″(x_i + 0), i = 2, ..., n − 1

in total n + 3(n − 2) = 4n − 6 constraints for the 4n − 4 degrees of freedom. Two more constraints need to be added, the boundary conditions: S″ = 0 at a = x_1 and x_n = b for a natural cubic spline. (This leads to a linear system with a symmetric, positive definite, diagonally dominant matrix.)

> 4. Interpolation and Approximation > 4.3.2 Construction of the cubic spline Introduce the variables M_1, ..., M_n with

M_i ≡ S″(x_i), i = 1, 2, ..., n

Since S(x) is cubic on each [x_{j−1}, x_j], S″(x) is linear there, hence determined by its values at two points: S″(x_{j−1}) = M_{j−1}, S″(x_j) = M_j, so

S″(x) = ((x_j − x)M_{j−1} + (x − x_{j−1})M_j) / (x_j − x_{j−1}), x_{j−1} ≤ x ≤ x_j   (5.25)

Forming the second antiderivative of S″(x) on [x_{j−1}, x_j] and applying the interpolating conditions S(x_{j−1}) = y_{j−1}, S(x_j) = y_j, we get

S(x) = ((x_j − x)³M_{j−1} + (x − x_{j−1})³M_j) / (6(x_j − x_{j−1}))
     + ((x_j − x)y_{j−1} + (x − x_{j−1})y_j) / (x_j − x_{j−1})
     − ((x_j − x_{j−1})/6) [(x_j − x)M_{j−1} + (x − x_{j−1})M_j]   (5.26)

for x ∈ [x_{j−1}, x_j], j = 2, ..., n.

> 4. Interpolation and Approximation > 4.3.2 Construction of the cubic spline Formula (5.26) implies that S(x) is continuous over all of [a, b] and satisfies the interpolating conditions S(x_i) = y_i. Similarly, formula (5.25) for S″(x) implies that S″ is continuous on [a, b]. To ensure the continuity of S′(x) over [a, b], we require that S′(x) computed on [x_{j−1}, x_j] and on [x_j, x_{j+1}] give the same value at their common point x_j, j = 2, 3, ..., n − 1, leading to the tridiagonal linear system

((x_j − x_{j−1})/6) M_{j−1} + ((x_{j+1} − x_{j−1})/3) M_j + ((x_{j+1} − x_j)/6) M_{j+1}
  = (y_{j+1} − y_j)/(x_{j+1} − x_j) − (y_j − y_{j−1})/(x_j − x_{j−1}), j = 2, 3, ..., n − 1   (5.27)

These n − 2 equations together with the assumption S″(a) = S″(b) = 0, i.e.

M_1 = M_n = 0   (5.28)

determine the values M_1, ..., M_n, and hence the function S(x).

> 4. Interpolation and Approximation > 4.3.2 Construction of the cubic spline Example Calculate the natural cubic spline interpolating the data

{(1, 1), (2, 1/2), (3, 1/3), (4, 1/4)}

Here n = 4, and all x_j − x_{j−1} = 1. The system (5.27) becomes

(1/6)M_1 + (2/3)M_2 + (1/6)M_3 = 1/3
(1/6)M_2 + (2/3)M_3 + (1/6)M_4 = 1/12

and with (5.28) (M_1 = M_4 = 0) this yields

M_2 = 1/2, M_3 = 0

which by (5.26) gives

S(x) = x³/12 − x²/4 − x/3 + 3/2,       1 ≤ x ≤ 2
S(x) = −x³/12 + 3x²/4 − 7x/3 + 17/6,   2 ≤ x ≤ 3
S(x) = −x/12 + 7/12,                   3 ≤ x ≤ 4

Are S′(x) and S″(x) continuous?
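A quick Python check of this example (our sketch; the three pieces are hard-coded from the result above, and the 2×2 system is verified in the comments):

```python
def S(x):
    # Natural cubic spline for (1,1), (2,1/2), (3,1/3), (4,1/4), built from (5.26)
    # with M1 = M4 = 0 (natural end conditions), M2 = 1/2, M3 = 0
    if x <= 2:
        return x**3 / 12 - x**2 / 4 - x / 3 + 3 / 2
    if x <= 3:
        return -x**3 / 12 + 3 * x**2 / 4 - 7 * x / 3 + 17 / 6
    return -x / 12 + 7 / 12

# System (5.27): (2/3)(1/2) + (1/6)(0) = 1/3 and (1/6)(1/2) + (2/3)(0) = 1/12, as required
```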

> 4. Interpolation and Approximation > 4.3.2 Construction of the cubic spline Example Calculate the natural cubic spline interpolating the data

x   0     1     2     2.5   3     3.5    4
y   2.5   0.5   0.5   1.5   1.5   1.125  0

Here n = 7, and the system (5.27) has 5 equations. Figure: Natural cubic spline interpolation y = S(x). Compared to the graphs of the piecewise linear and piecewise quadratic interpolants y = l(x) and y = q(x), the cubic spline S(x) no longer contains corners.

> 4. Interpolation and Approximation > 4.3.3 Other interpolation spline functions So far we have only interpolated data points, wanting a smooth curve. When we seek a spline to interpolate a known function, we are also interested in the accuracy. Let f(x) be given on [a, b], to be interpolated at evenly spaced values of x. For n > 1, let

h = (b − a)/(n − 1), x_j = a + (j − 1)h, j = 1, 2, ..., n

and let S_n(x) be the natural cubic spline interpolating f(x) at x_1, ..., x_n. It can be shown that

max_{a ≤ x ≤ b} |f(x) − S_n(x)| ≤ ch²   (5.29)

where c depends on f″(a), f″(b) and max_{a ≤ x ≤ b} |f⁽⁴⁾(x)|. The reason S_n(x) does not converge more rapidly (i.e., have an error bound with a higher power of h) is that in general f″(a) ≠ 0 ≠ f″(b), while by definition S_n″(a) = S_n″(b) = 0. For functions f(x) with f″(a) = f″(b) = 0, the right-hand side of (5.29) can be replaced with ch⁴.

> 4. Interpolation and Approximation > 4.3.3 Other interpolation spline functions To improve on S_n(x), we look for other interpolating functions S(x) that interpolate f(x) on

a = x_1 < x_2 < ⋯ < x_n = b

Recall the definition of a natural cubic spline:

1 S(x) is cubic on each subinterval [x_{j−1}, x_j];
2 S(x), S′(x) and S″(x) are continuous on [a, b];
3 S″(x_1) = S″(x_n) = 0.

We say that S(x) is a cubic spline on [a, b] if

1 S(x) is cubic on each subinterval [x_{j−1}, x_j];
2 S(x), S′(x) and S″(x) are continuous on [a, b].

With the interpolating conditions S(x_i) = y_i, i = 1, ..., n, the representation formula (5.26) and the tridiagonal system (5.27) are still valid.

> 4. Interpolation and Approximation > 4.3.3 Other interpolation spline functions This system has n − 2 equations and n unknowns: M_1, ..., M_n. By replacing the end conditions (5.28), S″(x_1) = S″(x_n) = 0, we can obtain other interpolating cubic splines. If the data (x_i, y_i) are obtained by evaluating a function f(x),

y_i = f(x_i), i = 1, ..., n

then we choose endpoint (boundary) conditions for S(x) that yield a better approximation to f(x). We require

S′(x_1) = f′(x_1), S′(x_n) = f′(x_n)

or

S″(x_1) = f″(x_1), S″(x_n) = f″(x_n)

When combined with (5.26)-(5.27), either of these conditions leads to a unique interpolating spline S(x), depending on which condition is used. In both cases, the right-hand side of (5.29) can be replaced by ch⁴, where c depends on max_{x ∈ [a,b]} |f⁽⁴⁾(x)|.

> 4. Interpolation and Approximation > 4.3.3 Other interpolation spline functions If the derivatives of f(x) are not known, then extra interpolating conditions can be used to ensure that the error bound of (5.29) is proportional to h⁴. In particular, suppose that

x_1 < z_1 < x_2, x_{n−1} < z_2 < x_n

and that f(z_1), f(z_2) are known. Then use the formula for S(x) in (5.26) and require

S(z_1) = f(z_1), S(z_2) = f(z_2)   (5.30)

This adds two new equations to the system (5.27): one relating M_1 and M_2, and another relating M_{n−1} and M_n. This form is preferable to the interpolating natural cubic spline, and is almost equally easy to produce. It is the default form of spline interpolation implemented in MATLAB. A spline formed in this way is said to satisfy the not-a-knot interpolation boundary conditions.

> 4. Interpolation and Approximation > 4.3.3 Other interpolation spline functions Interpolating cubic spline functions are a popular way to represent data analytically because they

1 are relatively smooth: C²;
2 do not have the rapid oscillation that sometimes occurs with high-degree polynomial interpolation;
3 are relatively easy to work with on a computer.

They do not replace polynomials, but are a very useful extension of them.

> 4. Interpolation and Approximation > 4.3.4 The MATLAB program spline The standard package MATLAB contains the function spline. The standard calling sequence is

y = spline(x_nodes, y_nodes, x)

which produces the cubic spline function S(x) whose graph passes through the points {(ξ_i, η_i) : i = 1, ..., n}, with

(ξ_i, η_i) = (x_nodes(i), y_nodes(i))

and n the length of x_nodes (and y_nodes). The not-a-knot interpolation conditions of (5.30) are used: the point (ξ_2, η_2) is the point (z_1, f(z_1)) of (5.30) and (ξ_{n−1}, η_{n−1}) is the point (z_2, f(z_2)).

> 4. Interpolation and Approximation > 4.3.4 The MATLAB program spline Example Approximate the function f(x) = e^x on the interval [a, b] = [0, 1]. For n > 0, define h = 1/n and the interpolating nodes

x_1 = 0, x_2 = h, x_3 = 2h, ..., x_{n+1} = nh = 1

Using spline, we produce the cubic interpolating spline S_{n,1} to f(x). With the not-a-knot interpolation conditions, the nodes x_2 and x_n are the points z_1 and z_2 in (5.30). For a general smooth function f(x), it turns out that the magnitude of the error f(x) − S_{n,1}(x) is largest around the endpoints of the interval of approximation.

> 4. Interpolation and Approximation > 4.3.4 The MATLAB program spline Example Two additional interpolating nodes are inserted, the midpoints of the subintervals [0, h] and [1 − h, 1]:

x_1 = 0, x_2 = h/2, x_3 = h, x_4 = 2h, ..., x_{n+1} = (n − 1)h, x_{n+2} = 1 − h/2, x_{n+3} = 1

Using spline results in a cubic spline function S_{n,2}(x); with the not-a-knot interpolation conditions, the nodes x_2 and x_{n+2} are the points z_1 and z_2 of (5.30). Generally, S_{n,2}(x) is a more accurate approximation than S_{n,1}(x). The cubic polynomials produced for S_{n,2}(x) by spline on the intervals [x_1, x_2] and [x_2, x_3] are the same, thus we can use the polynomial for [0, h/2] on the entire interval [0, h]; similarly for [1 − h, 1].

n    E_n^(1)    Ratio   E_n^(2)    Ratio
5    1.01E-4            1.11E-5
10   6.92E-6    14.6    7.88E-7    14.1
20   4.56E-7    15.2    5.26E-8    15.0
40   2.92E-8    15.6    3.39E-9    15.5

Table: Cubic spline approximation to f(x) = e^x