cs412: introduction to numerical analysis                                    10/26/10

Lecture 13: Cubic Hermite Spline Interpolation II

Instructor: Professor Amos Ron        Scribes: Yunpeng Li, Mark Cowlishaw, Nathanael Fillmore

1 Cubic Hermite Spline Interpolation

Recall that in the last lecture we presented a special polynomial interpolation problem.

Problem 1.1. Given an interval [L, R] and a function f : [L, R] -> R, find the cubic polynomial p in Π_3 with

    p(L) = f(L)          p(R) = f(R)
    p'(L) = f'(L)        p'(R) = f'(R)

Recall that we found a solution to Problem 1.1 of the form:

    p(t) = ã_1 (t - L)^2 (t - R) + ã_2 (t - L)^2 + ã_3 (t - L) + ã_4        (1)

Last time we saw that we can bound the error of p for t in [L, R] using:

    |E(t)| <= (||f^(4)||_{∞,[L,R]} / 384) (R - L)^4

To approximate a function f over an interval [a, b], we can split [a, b] into N subintervals using partition points x = (x_0, x_1, ..., x_N), and solve Problem 1.1 on every subinterval [x_i, x_{i+1}]. More formally, we can define the following cubic Hermite spline interpolation problem.

Problem 1.2 (Cubic Hermite Spline Interpolation). Given an interval [a, b], a function f : [a, b] -> R with derivative f' : [a, b] -> R, and a set of partition points x = (x_0, x_1, ..., x_N) with a = x_0 < x_1 < ... < x_N = b, find a set of polynomials p_0, p_1, ..., p_{N-1} (a cubic Hermite spline) with

    p_i(x_i) = f(x_i)            p_i(x_{i+1}) = f(x_{i+1})
    p_i'(x_i) = f'(x_i)          p_i'(x_{i+1}) = f'(x_{i+1})

for i = 0, 1, ..., N - 1.
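The four conditions in Problem 1.1 determine the coefficients of form (1) directly: plugging t = L into p and p' gives ã_4 and ã_3, and plugging t = R then gives ã_2 and ã_1. Below is a minimal Python sketch of this computation (the lecture's code is in Matlab; the helper names and the test cubic f(t) = t^3 are our own illustration):

```python
def hermite_coeffs(L, R, fL, fR, dfL, dfR):
    """Coefficients (a1, a2, a3, a4) of form (1) from the four Hermite conditions."""
    w = R - L
    a4 = fL                                   # p(L)  = f(L)
    a3 = dfL                                  # p'(L) = f'(L)
    a2 = (fR - fL - dfL * w) / w ** 2         # from p(R)  = f(R)
    a1 = (dfR - dfL - 2 * a2 * w) / w ** 2    # from p'(R) = f'(R)
    return a1, a2, a3, a4

def hermite_eval(t, L, R, a1, a2, a3, a4):
    """Evaluate p(t) = a1 (t-L)^2 (t-R) + a2 (t-L)^2 + a3 (t-L) + a4."""
    return a1 * (t - L) ** 2 * (t - R) + a2 * (t - L) ** 2 + a3 * (t - L) + a4

# Sanity check on f(t) = t^3 over [0, 2]: a cubic must be reproduced exactly,
# so p(1) should equal 1^3 = 1.
c = hermite_coeffs(0.0, 2.0, 0.0, 8.0, 0.0, 12.0)
print(abs(hermite_eval(1.0, 0.0, 2.0, *c) - 1.0) < 1e-12)   # True
```

Since interpolation by p in Π_3 reproduces cubics exactly, f(t) = t^3 makes a convenient self-check.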

If we assume that each subinterval [x_i, x_{i+1}] has equal width h = (b - a)/N, then we can bound the error for t in [x_i, x_{i+1}] as follows:

    |E(t)| <= (||f^(4)||_{∞,[x_i, x_{i+1}]} / 384) h^4        (2)

Note that we bound the error of the cubic Hermite spline over each subinterval [x_i, x_{i+1}] individually, yielding a very fine-grained error bound. Since the error depends on the fourth derivative of f only over a small interval (the norm term in (2)), large fluctuations of the function in one subinterval will not affect the error in a different subinterval. We can also use a coarser error bound, taking the maximum of the fourth derivative of f over the entire interval, but this may somewhat overestimate the error.

To illustrate the effect of the fourth power of h in equation (2), consider the following example.

Example 1.1. Given a function f : [0, 4] -> R, suppose that partitioning [0, 4] into N equal subintervals and performing cubic Hermite spline interpolation yields an (experimental) error of 10^-3. Approximately how many subintervals M (as a function of N) must we use to obtain an error of 10^-5?

Examining the error bound for t in [a, b],

    |E(t)| <= (||f^(4)||_{∞,[a,b]} / 384) h^4,

we see that the norm factor is simply a constant, so that, in the first case, we have:

    E_1(t) ≈ 10^-3 = c_f h^4 = c_f (4/N)^4

where c_f is a constant that depends on f. Similarly, in the second case:

    E_2(t) ≈ 10^-5 = c_f (4/M)^4

To determine M, we take the quotient of the two errors and solve for M:

    E_1(t) / E_2(t) = 10^-3 / 10^-5 = 100
    100 = [c_f (4/N)^4] / [c_f (4/M)^4] = (M/N)^4
    M/N = 100^(1/4) ≈ 3.1

Thus, it will take approximately M ≈ 3.1 N subintervals to reduce the error by a factor of 100.
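The h^4 behavior behind Example 1.1 is easy to check empirically. The sketch below is our own illustration in Python (rather than the lecture's Matlab), with f = sin as an arbitrary smooth test function: it interpolates on [0, 4] with N and 2N equal subintervals and measures how much the maximum error shrinks. By equation (2) the ratio should be near 2^4 = 16.

```python
import math

def max_hermite_error(N):
    """Max |p - sin| over [0, 4] for a cubic Hermite spline on N equal pieces."""
    a, b = 0.0, 4.0
    h = (b - a) / N
    worst = 0.0
    for i in range(N):
        L, R = a + i * h, a + (i + 1) * h
        fL, fR = math.sin(L), math.sin(R)
        dL, dR = math.cos(L), math.cos(R)
        w = R - L
        c4, c3 = fL, dL                          # p(L) = f(L), p'(L) = f'(L)
        c2 = (fR - fL - dL * w) / w ** 2         # from p(R) = f(R)
        c1 = (dR - dL - 2 * c2 * w) / w ** 2     # from p'(R) = f'(R)
        for k in range(50):                      # dense sample of this piece
            t = L + w * k / 49
            p = c1 * (t - L) ** 2 * (t - R) + c2 * (t - L) ** 2 + c3 * (t - L) + c4
            worst = max(worst, abs(p - math.sin(t)))
    return worst

ratio = max_hermite_error(8) / max_hermite_error(16)
print(ratio)   # close to 2^4 = 16, as equation (2) predicts
```

Halving h multiplies the bound in (2) by (1/2)^4, and the measured errors track that prediction closely.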

2 Comparison of Interpolation Methods

Now that we have seen two methods for interpolation using two different kinds of splines (cubic splines and cubic Hermite splines), how can we choose which method to use for a particular problem? Table 1 compares some properties of each method.

                     Cubic Hermite Splines                     Cubic Splines
    Error bound:     (1/384) ||f^(4)||_{∞,[a,b]} h^4           (5/384) ||f^(4)||_{∞,[a,b]} h^4
    Smoothness:      C^1                                       C^2
    Locality:        very local                                local

Table 1: Comparison of Interpolation by Cubic Hermite Splines and Cubic Splines

As we can see from the table, both methods have an error bound that is O(h^4). But cubic Hermite spline interpolation would seem to use a smaller constant:

    (1/384) ||f^(4)||_{∞,[a,b]} < (5/384) ||f^(4)||_{∞,[a,b]}

so that it would provide more accuracy for the same interval size h. However, to truly gauge the speed of each algorithm (the cost required to achieve a desired accuracy), we need to look more closely at the cost of each algorithm. As before, we use the number of function evaluations to gauge the cost. It is easy to see that cubic spline interpolation over N subintervals requires N + 1 function evaluations (one for each partition point). On the other hand, cubic Hermite spline interpolation over N subintervals requires N + 1 function evaluations and an additional N + 1 evaluations of the derivative. Assuming that evaluating the derivative is at least as costly as evaluating the function, this means that finding cubic Hermite splines over N subintervals is twice as expensive as finding cubic splines. So, for the same cost, we can use twice as many subintervals (subintervals of half the size) for cubic splines as for cubic Hermite splines. The error for cubic splines normalized for cost is therefore:

    |E(t)| <= (5/384) ||f^(4)||_{∞,[a,b]} (h/2)^4 = (5/16) (1/384) ||f^(4)||_{∞,[a,b]} h^4

and since 5/16 < 1, cubic splines can produce slightly more accuracy for the same cost as cubic Hermite splines.
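The cost-normalized comparison reduces to arithmetic on the two constants; a quick check of that arithmetic (our own illustration):

```python
# Effective error constants at equal cost: cubic Hermite splines use width h,
# while cubic splines can afford width h/2 for the same number of evaluations.
hermite_const = 1 / 384                        # Hermite constant, width h
spline_const = (5 / 384) * (1 / 2) ** 4        # spline constant, width h/2
print(spline_const < hermite_const)            # True: (5/16)/384 < 1/384
```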
Note that this analysis applies to situations in which the cost of evaluating the function dominates the cost of running the algorithm given the function values. This covers most interesting cases; however, if function and derivative values are provided in advance, the algorithm for finding cubic Hermite splines over N subintervals is actually faster than the algorithm for finding cubic splines over the same subintervals.

From the table, we can also see that both kinds of splines are continuously differentiable, but cubic splines are twice continuously differentiable, so they may provide a more pleasing visual representation of the function.

The last row in the table compares the locality of the methods. Locality measures the degree to which the function approximation is affected by small, local changes in the interpolation data.

A method is local if small, local changes in the interpolation data have limited effects outside the area near the change. A method is global if small, local changes in the interpolation data may affect the entire approximation. An example of locality is shown in Figure 1. We can tell from the algorithm for cubic Hermite spline interpolation that the method is extremely local. If we change a single interpolation point x_i, the approximation can only be affected in the two intervals [x_{i-1}, x_i] and [x_i, x_{i+1}] that share that partition point. It turns out that cubic spline interpolation is also local, but not quite to the degree of cubic Hermite spline interpolation. In cubic spline interpolation, local changes to the interpolation data may have small effects outside the area of change, but these effects diminish rapidly as the distance from the area of change increases. In contrast, polynomial interpolation is a global method: local changes in interpolation data can affect the entire approximation.

[Figure 1: An Example of a Local Interpolation Method. Two plots of a function f and its interpolant, before and after moving one interpolation point.]

3 Cubic Hermite Spline Interpolation in MATLAB

There are two methods of doing interpolation using cubic Hermite splines in Matlab. The first is the function pchip:

    pp = pchip(x, f(x))

pchip takes a vector of nodes x and the corresponding function values f(x), and produces a cubic Hermite spline in Matlab's internal format. One can then use ppval to evaluate the cubic Hermite spline over a dense mesh of points. The careful reader will notice that pchip takes function values as input, but no derivative values. This is because pchip uses the function values f(x) to estimate the derivative values.
Recall that we can approximate the derivative at a point x_1 using the slope of a secant line through the adjacent points x_0 and x_2:

    f'(x_1) ≈ (f(x_2) - f(x_0)) / (x_2 - x_0)

However, this would produce an error of O(h^2), which is much worse than the interpolation error. To do a good derivative approximation, the function has to use an approximation based on 4 or more points; the exact algorithm is beyond the scope of this class. It is important to realize, however, that approximating the derivative introduces an additional error into the interpolation. Luckily, using Matlab we can write our own functions to do interpolation using real cubic Hermite splines.

3.1 Creating a Function for Cubic Hermite Spline Interpolation

Recall that, to do spline interpolation, we use the command spline, for example:

    s = spline(x, f(x))

We can evaluate the spline s using the function ppval:

    fv = ppval(s, z)

We can access the individual polynomial pieces of a spline using the unmkpp command:

    [X, C, ...] = unmkpp(s)

unmkpp returns an array of 5 elements, but we only need the first two:

    (1) X    The vector of input nodes that were used in creating the spline.
    (2) C    A matrix containing a representation of the polynomials for each
             subinterval. Row i of the matrix contains the coefficients of the
             polynomial for the ith interval.

Remembering that Matlab indexes vectors and matrices starting with 1, the first row C(1,:) represents the interval [X(1), X(2)] (the first two elements of the vector X). In our previous notation, this is the interval [x_0, x_1]. Thus, C has the form:

    C = [ a_11  a_12  a_13  a_14
          a_21  a_22  a_23  a_24
           ...
          a_i1  a_i2  a_i3  a_i4
           ...
          a_N1  a_N2  a_N3  a_N4 ]

The polynomial in row i, p_i, for the interval [x_{i-1}, x_i] is represented as:

    p_i(t) = a_i1 (t - x_{i-1})^3 + a_i2 (t - x_{i-1})^2 + a_i3 (t - x_{i-1}) + a_i4

If we replace x_{i-1} with L, this becomes:

    p(t) = a_1 (t - L)^3 + a_2 (t - L)^2 + a_3 (t - L) + a_4        (3)
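To make the roles of X and C concrete, here is a minimal Python sketch of what ppval does with them: locate the subinterval containing t, then evaluate equation (3) with that row's coefficients. The breakpoints and coefficients below are small hypothetical values for illustration, not the output of spline:

```python
def ppval_sketch(X, C, t):
    """Evaluate a piecewise cubic at t; X has N+1 breakpoints, C has N rows."""
    # Find the subinterval [X[i], X[i+1]] containing t.
    i = 0
    while i + 2 < len(X) and t >= X[i + 1]:
        i += 1
    u = t - X[i]                                 # local variable (t - x_{i-1})
    c1, c2, c3, c4 = C[i]                        # row i of the matrix C
    return ((c1 * u + c2) * u + c3) * u + c4     # Horner evaluation of (3)

X = [0.0, 1.0, 2.0]
C = [[0.0, 0.0, 1.0, 0.0],    # p(t) = t           on [0, 1]
     [0.0, 0.0, 1.0, 1.0]]    # p(t) = 1 + (t - 1) on [1, 2]
print(ppval_sketch(X, C, 1.5))   # 1.5
```

The two rows of C here happen to join into the straight line p(t) = t, so the evaluation is easy to check by eye.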

If we can reconstruct this matrix C, then we can create a cubic Hermite spline s using the Matlab function mkpp:

    s = mkpp(x, C)

Recall that the result of cubic Hermite spline interpolation was a polynomial for each interval [L, R] of the form:

    p(t) = ã_1 (t - L)^2 (t - R) + ã_2 (t - L)^2 + ã_3 (t - L) + ã_4        (4)

Substituting (t - L) - (R - L) for (t - R) in equation (4) yields:

    p(t) = ã_1 (t - L)^2 [(t - L) - (R - L)] + ã_2 (t - L)^2 + ã_3 (t - L) + ã_4
         = ã_1 (t - L)^3 - ã_1 (t - L)^2 (R - L) + ã_2 (t - L)^2 + ã_3 (t - L) + ã_4
         = ã_1 (t - L)^3 + [ã_2 - ã_1 (R - L)] (t - L)^2 + ã_3 (t - L) + ã_4

Thus we can represent each polynomial p we obtain from cubic Hermite spline interpolation as a row of the matrix C using:

    a_1 = ã_1
    a_2 = ã_2 - ã_1 (R - L)
    a_3 = ã_3
    a_4 = ã_4

Using this transformation, we can create the matrix C, and use it to create true cubic Hermite splines in Matlab.
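The change of basis above is a two-line computation. A Python sketch (the sample interval and ã values are our own, hypothetical choices) that confirms forms (4) and (3) agree after the transformation:

```python
def hermite_to_pp_row(L, R, at1, at2, at3, at4):
    """Convert coefficients of form (4) into the power-form row (a1..a4) of (3)."""
    return (at1, at2 - at1 * (R - L), at3, at4)

L, R = 0.0, 2.0
at = (1.0, 2.0, 0.0, 0.0)                    # example coefficients of form (4)
a1, a2, a3, a4 = hermite_to_pp_row(L, R, *at)

t = 1.3                                      # arbitrary evaluation point
p_form4 = at[0] * (t - L) ** 2 * (t - R) + at[1] * (t - L) ** 2 + at[2] * (t - L) + at[3]
p_power = a1 * (t - L) ** 3 + a2 * (t - L) ** 2 + a3 * (t - L) + a4
print(abs(p_form4 - p_power) < 1e-12)        # True: same polynomial, new basis
```

Applying hermite_to_pp_row to every subinterval produces exactly the rows of C that mkpp expects.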