Natural cubic splines



Natural cubic splines
Arne Morten Kvarving
Department of Mathematical Sciences, Norwegian University of Science and Technology
October 21, 2008

Motivation

We are given a large dataset, i.e. a function sampled at many points, and we want to find an approximation in between these points. Until now we have seen one way to do this, namely high-order interpolation: we express the solution over the whole domain as a single polynomial of degree N for N + 1 data points.

[Diagram: the interval a = t_0 < t_1 < t_2 < t_3 < ... < t_{n-1} < t_n = b on the x-axis.]

Motivation

Let us consider the function f(x) = 1/(1 + x^2), known as Runge's example. While what we illustrate with this function is valid in general, this particular function is constructed to really amplify the problem.

Motivation

Figure: Runge's example plotted on [-5, 5] with 100 equidistantly spaced grid points.

Motivation

Figure: Runge's example interpolated using a 15th-order polynomial based on equidistant sample points.
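The oscillations in the figure are straightforward to reproduce numerically. A minimal sketch (numpy only; the 15 equidistant nodes and the dense evaluation grid are choices made here, not prescribed by the slides):

```python
import numpy as np

# Runge's example: f(x) = 1 / (1 + x^2) on [-5, 5].
def f(x):
    return 1.0 / (1.0 + x ** 2)

# Global polynomial interpolation through 15 equidistant nodes (degree 14).
nodes = np.linspace(-5.0, 5.0, 15)
coeffs = np.polyfit(nodes, f(nodes), deg=14)

# Evaluate on a dense grid; the interpolant oscillates wildly near the endpoints.
dense = np.linspace(-5.0, 5.0, 1001)
max_err = np.max(np.abs(np.polyval(coeffs, dense) - f(dense)))
```

Although f never leaves [0, 1], max_err is of order 1-10: refining an equidistant grid makes the endpoint oscillations worse, not better.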

Motivation

It turns out that high-order interpolation using a global polynomial often exhibits these oscillations; hence it is dangerous to use (in particular on equidistant grids). Another strategy is to use piecewise interpolation, for instance piecewise linear interpolation.

[Diagram: piecewise linear pieces s_0(x), s_1(x), ..., s_{n-1}(x) joining the points (x_0, y_0), (x_1, y_1), ..., (x_n, y_n).]

Motivation

Figure: Runge's example interpolated using piecewise linear interpolation. We have used 7 points to interpolate the function in order to ensure that we can actually see the discontinuities (in the slope) on the plot.

A better strategy - spline interpolation

We would like to avoid the Runge phenomenon: for large datasets we cannot do high-order interpolation. The solution is to use piecewise polynomial interpolation. However, piecewise linear interpolation is not a good choice, since the regularity of the solution is only C^0. These desires lead to splines and spline interpolation.

[Diagram: spline pieces S_0(x), S_1(x), S_2(x), ..., S_{n-1}(x) on the knots a = t_0 < t_1 < t_2 < t_3 < ... < t_{n-1} < t_n = b.]

Splines - definition

A function S(x) is a spline of degree k on [a, b] if S ∈ C^{k-1}[a, b], a = t_0 < t_1 < ... < t_n = b, and

  S(x) =
    S_0(x),      t_0 ≤ x ≤ t_1
    S_1(x),      t_1 ≤ x ≤ t_2
    ...
    S_{n-1}(x),  t_{n-1} ≤ x ≤ t_n

where S_i(x) ∈ P_k.

Cubic spline

  S(x) =
    S_0(x) = a_0 x^3 + b_0 x^2 + c_0 x + d_0,                          t_0 ≤ x ≤ t_1
    ...
    S_{n-1}(x) = a_{n-1} x^3 + b_{n-1} x^2 + c_{n-1} x + d_{n-1},      t_{n-1} ≤ x ≤ t_n

which satisfies S(x) ∈ C^2[t_0, t_n]:

  S_{i-1}(x_i) = S_i(x_i),
  S'_{i-1}(x_i) = S'_i(x_i),
  S''_{i-1}(x_i) = S''_i(x_i),   i = 1, 2, ..., n-1.

Cubic spline - interpolation

Given (x_i, y_i), i = 0, ..., n. Task: find S(x) such that it is a cubic spline interpolant. The requirement that it is a cubic spline gives us 3(n-1) equations. In addition we require that S(x_i) = y_i, i = 0, ..., n, which gives n+1 equations. This means we have 4n-2 equations in total, while we have 4n degrees of freedom (a_i, b_i, c_i, d_i), i = 0, ..., n-1. Thus we have 2 degrees of freedom left.

Cubic spline - interpolation

We can use these to define different subtypes of cubic splines:

  S''(t_0) = S''(t_n) = 0 - natural cubic spline.
  S'(t_0), S'(t_n) given - clamped cubic spline.
  S'''_0(t_1) = S'''_1(t_1) and S'''_{n-2}(t_{n-1}) = S'''_{n-1}(t_{n-1}) - not-a-knot condition (the MATLAB default).
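These three variants are exposed directly by scipy's CubicSpline (an aside of this note, not part of the slides), which makes it easy to experiment with the boundary conditions:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.array([0.9, 1.3, 1.9, 2.1])
y = np.array([1.3, 1.5, 1.85, 2.1])

natural = CubicSpline(x, y, bc_type='natural')              # S''(t_0) = S''(t_n) = 0
clamped = CubicSpline(x, y, bc_type=((1, 0.0), (1, 0.0)))   # S'(t_0), S'(t_n) given (0 here)
not_a_knot = CubicSpline(x, y, bc_type='not-a-knot')        # MATLAB's default choice

# The natural spline really has vanishing second derivative at the ends.
end_curvature = natural(x[[0, -1]], 2)
```

All three interpolate the same data; they differ only in how the two leftover degrees of freedom are spent.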

Natural cubic splines

Task: find S(x) such that it is a natural cubic spline. Let t_i = x_i, i = 0, ..., n, and let z_i = S''(x_i), i = 0, ..., n. The condition that it is a natural cubic spline is then simply expressed as z_0 = z_n = 0. Now, since each piece of S(x) is a third-order polynomial, we know that S''(x) is a linear spline which interpolates (t_i, z_i). Hence one strategy is to first construct the linear spline interpolant S''(x), and then integrate it twice to obtain S(x).

Natural cubic splines

The linear spline is simply expressed as

  S''_i(x) = z_i (t_{i+1} - x)/(t_{i+1} - t_i) + z_{i+1} (x - t_i)/(t_{i+1} - t_i).

We introduce h_i = t_{i+1} - t_i, i = 0, ..., n-1, which leads to

  S''_i(x) = z_{i+1} (x - t_i)/h_i + z_i (t_{i+1} - x)/h_i.

We now integrate twice:

  S_i(x) = z_{i+1} (x - t_i)^3/(6 h_i) + z_i (t_{i+1} - x)^3/(6 h_i) + C_i (x - t_i) + D_i (t_{i+1} - x).

Natural cubic splines

Interpolation gives

  S_i(t_i) = y_i  ⇒  z_i h_i^2/6 + D_i h_i = y_i  ⇒  D_i = y_i/h_i - z_i h_i/6,   i = 0, ..., n-1.

Continuity yields

  S_i(t_{i+1}) = y_{i+1}  ⇒  z_{i+1} h_i^2/6 + C_i h_i = y_{i+1}  ⇒  C_i = y_{i+1}/h_i - z_{i+1} h_i/6.

Natural cubic splines

We insert these expressions to find the following form:

  S_i(x) = z_{i+1} (x - t_i)^3/(6 h_i) + z_i (t_{i+1} - x)^3/(6 h_i)
           + (y_{i+1}/h_i - z_{i+1} h_i/6)(x - t_i)
           + (y_i/h_i - z_i h_i/6)(t_{i+1} - x).

We then take the derivative.

Natural cubic splines

The derivative reads

  S'_i(x) = z_{i+1} (x - t_i)^2/(2 h_i) - z_i (t_{i+1} - x)^2/(2 h_i) + b_i - (h_i/6)(z_{i+1} - z_i),

where b_i = (y_{i+1} - y_i)/h_i. At our abscissas this gives

  S'_i(t_i)     = -(h_i/3) z_i - (h_i/6) z_{i+1} + b_i
  S'_i(t_{i+1}) =  (h_i/6) z_i + (h_i/3) z_{i+1} + b_i

and hence

  S'_{i-1}(t_i) = (h_{i-1}/6) z_{i-1} + (h_{i-1}/3) z_i + b_{i-1}.

The continuity condition S'_i(t_i) = S'_{i-1}(t_i) then gives

  h_{i-1} z_{i-1} + 2 (h_{i-1} + h_i) z_i + h_i z_{i+1} = 6 (b_i - b_{i-1}).

Natural cubic splines - algorithm

This means that we can find our solution using the following procedure. First do some precalculations:

  h_i = t_{i+1} - t_i,          i = 0, ..., n-1
  b_i = (y_{i+1} - y_i)/h_i,    i = 0, ..., n-1
  v_i = 2 (h_{i-1} + h_i),      i = 1, ..., n-1
  u_i = 6 (b_i - b_{i-1}),      i = 1, ..., n-1
  z_0 = z_n = 0

Natural cubic splines - algorithm

Then solve the tridiagonal system

  | v_1  h_1                          | | z_1     |   | u_1     |
  | h_1  v_2  h_2                     | | z_2     |   | u_2     |
  |      h_2  v_3  h_3                | | z_3     | = | u_3     |
  |           ...  ...  ...           | | ...     |   | ...     |
  |              h_{n-2}  v_{n-1}     | | z_{n-1} |   | u_{n-1} |
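The precalculations and the solve translate directly into code. A minimal numpy sketch (the function name is ours; for brevity the tridiagonal matrix is assembled densely, where a real implementation would use the tridiagonal elimination described below):

```python
import numpy as np

def natural_spline_moments(t, y):
    """Return z_i = S''(t_i) for the natural cubic spline through (t_i, y_i)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(t) - 1
    h = np.diff(t)                     # h_i = t_{i+1} - t_i,        i = 0..n-1
    b = np.diff(y) / h                 # b_i = (y_{i+1} - y_i)/h_i,  i = 0..n-1
    v = 2.0 * (h[:-1] + h[1:])         # v_i = 2 (h_{i-1} + h_i),    i = 1..n-1
    u = 6.0 * np.diff(b)               # u_i = 6 (b_i - b_{i-1}),    i = 1..n-1
    A = np.diag(v) + np.diag(h[1:-1], 1) + np.diag(h[1:-1], -1)
    z = np.zeros(n + 1)                # natural boundary conditions: z_0 = z_n = 0
    z[1:-1] = np.linalg.solve(A, u)
    return z
```

With a four-point dataset such as the one in the example below, this reduces to a 2x2 system for z_1 and z_2.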

Natural cubic splines - example

Given the dataset

  i     0     1     2     3
  x_i   0.9   1.3   1.9   2.1
  y_i   1.3   1.5   1.85  2.1

the precalculations give

  h_i = x_{i+1} - x_i:          0.4   0.6     0.2
  b_i = (y_{i+1} - y_i)/h_i:    0.5   0.5833  1.25
  v_i = 2 (h_{i-1} + h_i):            2.0     1.6
  u_i = 6 (b_i - b_{i-1}):            0.5     4

The linear system reads

  | 2.0  0.6 | | z_1 |   | 0.5 |
  | 0.6  1.6 | | z_2 | = |  4  |

Natural cubic splines - example

We find z_1 = -0.5634, z_2 = 2.7113 (and z_0 = z_3 = 0 from the natural boundary conditions). This gives us our spline functions

  S_0(x) = -0.2348 (x - 0.9)^3 + 3.788 (x - 0.9) + 3.25 (1.3 - x)
  S_1(x) = 0.7531 (x - 1.3)^3 - 0.1565 (1.9 - x)^3 + 2.812 (x - 1.3) + 2.556 (1.9 - x)
  S_2(x) = 2.259 (2.1 - x)^3 + 10.5 (x - 1.9) + 9.160 (2.1 - x)
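Evaluating the spline is then a matter of locating the right piece and applying the formula for S_i(x) derived earlier. A small sketch (the helper name and the use of searchsorted to find the piece are choices made here):

```python
import numpy as np

def eval_spline(t, y, z, x):
    """Evaluate at x the cubic spline with knot second derivatives z_i."""
    t, y, z = (np.asarray(v, float) for v in (t, y, z))
    i = np.clip(np.searchsorted(t, x) - 1, 0, len(t) - 2)  # piece with t_i <= x <= t_{i+1}
    h = t[i + 1] - t[i]
    C = y[i + 1] / h - z[i + 1] * h / 6.0    # from S_i(t_{i+1}) = y_{i+1}
    D = y[i] / h - z[i] * h / 6.0            # from S_i(t_i) = y_i
    return (z[i + 1] * (x - t[i]) ** 3 / (6.0 * h)
            + z[i] * (t[i + 1] - x) ** 3 / (6.0 * h)
            + C * (x - t[i]) + D * (t[i + 1] - x))
```

By construction the result interpolates the data and is C^2 across the interior knots.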

Gaussian elimination of tridiagonal systems

Assume we are given a general tridiagonal system

  | d_1  c_1                          | | x_1 |   | b_1 |
  | a_1  d_2  c_2                     | | x_2 |   | b_2 |
  |      ...  ...  ...                | | ... | = | ... |
  |        a_{n-2}  d_{n-1}  c_{n-1}  | | ... |   | ... |
  |                 a_{n-1}  d_n      | | x_n |   | b_n |

The first elimination step (second row) yields

  | d_1  c_1                     |
  |      d̃_2  c_2                |        d̃_2 = d_2 - (a_1/d_1) c_1,
  |      ...  ...  ...           |        b̃_2 = b_2 - (a_1/d_1) b_1
  |           a_{n-1}  d_n       |

Gaussian elimination of tridiagonal systems

This means that the elimination stage is

  for i = 2, ..., n
      m = a_{i-1} / d̃_{i-1}
      d̃_i = d_i - m c_{i-1}
      b̃_i = b_i - m b̃_{i-1}
  end

and the backward substitution reads

  x_n = b̃_n / d̃_n
  for i = n-1, ..., 1
      x_i = (b̃_i - c_i x_{i+1}) / d̃_i
  end

where d̃_1 = d_1 and b̃_1 = b_1.
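The elimination and back-substitution loops above map one-to-one onto code. A sketch in Python (the function name is ours; a, d, c hold the sub-, main and superdiagonals):

```python
import numpy as np

def solve_tridiagonal(a, d, c, b):
    """Solve the tridiagonal system with subdiagonal a, diagonal d, superdiagonal c."""
    n = len(d)
    dt = np.array(d, float)          # d-tilde: modified diagonal
    bt = np.array(b, float)          # b-tilde: modified right-hand side
    for i in range(1, n):            # elimination stage
        m = a[i - 1] / dt[i - 1]
        dt[i] -= m * c[i - 1]
        bt[i] -= m * bt[i - 1]
    x = np.zeros(n)                  # backward substitution
    x[-1] = bt[-1] / dt[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (bt[i] - c[i] * x[i + 1]) / dt[i]
    return x
```

This costs O(n) work and storage, compared with O(n^3) work and O(n^2) storage for a dense solve.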

Gaussian elimination of tridiagonal systems

This will work out fine as long as d̃_i ≠ 0. Assume that |d_i| > |a_{i-1}| + |c_i|, i.e. strict diagonal dominance. For the eliminated system, diagonal dominance means that |d̃_i| > |c_i|. We now want to show that diagonal dominance of the original system implies that the eliminated system is also diagonally dominant.

Gaussian elimination of tridiagonal systems

We now assume that |d̃_{i-1}| > |c_{i-1}|. This is obviously satisfied for d̃_1 = d_1. Then

  |d̃_i| = |d_i - (a_{i-1}/d̃_{i-1}) c_{i-1}|
        ≥ |d_i| - |a_{i-1}| |c_{i-1}| / |d̃_{i-1}|
        > |d_i| - |a_{i-1}|
        > |c_i|,

using |c_{i-1}|/|d̃_{i-1}| < 1 in the middle step and diagonal dominance in the last. Hence the diagonal dominance is preserved, which means that d̃_i ≠ 0. The algorithm produces a unique solution.

Why cubic splines?

Now to motivate why we use cubic splines. First, let us introduce a measure for the smoothness of a function:

  µ(f) = ∫_a^b (f''(x))^2 dx.   (1)

We then have the following theorem.

Theorem. Given interpolation data (t_i, y_i), i = 0, ..., n. Among all functions f ∈ C^2[a, b] which interpolate (t_i, y_i), the natural cubic spline is the smoothest, where smoothness is measured through (1).
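The theorem can be sanity-checked numerically by comparing µ for the natural spline against µ for the global interpolating polynomial on the same data. A numpy sketch (the dataset is the four-point example from earlier; the per-interval closed form ∫ (S'')^2 dx = h_i (z_i^2 + z_i z_{i+1} + z_{i+1}^2)/3 is our addition, and follows because S'' is linear on each interval):

```python
import numpy as np

t = np.array([0.9, 1.3, 1.9, 2.1])
y = np.array([1.3, 1.5, 1.85, 2.1])

# Second derivatives z_i of the natural cubic spline (small dense solve).
h = np.diff(t)
b = np.diff(y) / h
A = np.diag(2.0 * (h[:-1] + h[1:])) + np.diag(h[1:-1], 1) + np.diag(h[1:-1], -1)
z = np.zeros(len(t))
z[1:-1] = np.linalg.solve(A, 6.0 * np.diff(b))

# mu(S): S'' is piecewise linear, so each interval integrates in closed form.
mu_spline = np.sum(h * (z[:-1] ** 2 + z[:-1] * z[1:] + z[1:] ** 2) / 3.0)

# mu(p) for the interpolating polynomial, with (p'')^2 integrated exactly.
p = np.polyfit(t, y, len(t) - 1)
P = np.polyint(np.polymul(np.polyder(p, 2), np.polyder(p, 2)))
mu_poly = np.polyval(P, t[-1]) - np.polyval(P, t[0])
```

The interpolating polynomial is itself a C^2 interpolant of the data, so the theorem guarantees mu_spline ≤ mu_poly; here the inequality is strict.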

Why cubic splines?

We need to prove that µ(f) ≥ µ(S) for all f ∈ C^2[a, b] which interpolate the data. Introduce

  g(x) = S(x) - f(x),   g(x) ∈ C^2[a, b],   g(t_i) = 0, i = 0, ..., n.

Inserting f = S - g yields

  µ(f) = ∫_a^b (S''(x) - g''(x))^2 dx = µ(S) + µ(g) - 2 ∫_a^b S''(x) g''(x) dx.

Now since µ(g) ≥ 0, we have proved our result if we can show that ∫_a^b S''(x) g''(x) dx = 0.

Why cubic splines?

We have that

  ∫_a^b S''(x) g''(x) dx = [S''(x) g'(x)]_a^b - ∫_a^b S'''(x) g'(x) dx.

The first part on the right-hand side is zero since z_0 = z_n = 0. The second part we split into an integral over each subdomain,

  ∫_a^b S'''(x) g'(x) dx = Σ_{i=0}^{n-1} ∫_{t_i}^{t_{i+1}} S'''(x) g'(x) dx
                         = Σ_{i=0}^{n-1} 6 a_i ∫_{t_i}^{t_{i+1}} g'(x) dx
                         = Σ_{i=0}^{n-1} 6 a_i [g(x)]_{t_i}^{t_{i+1}} = 0,

since S'''(x) = 6 a_i is constant on each subinterval and g(t_i) = 0.

Cubic spline result

Figure: Runge's example interpolated using cubic spline interpolation based on 15 equidistant samples.