11 Polynomial and Piecewise Polynomial Interpolation

Version January 29, 2014

Let f be a function that is only known at the nodes x_1, x_2, ..., x_n, i.e., all we know about the function f are its values y_j = f(x_j), j = 1, 2, ..., n. For instance, we may have obtained these values through measurements and now would like to determine f(x) for other values of x.

Example 11.1
Assume that we need to evaluate cos(π/6), but the trigonometric function key on our calculator is broken and we do not have access to a computer. We recall that cos(0) = 1, cos(π/4) = 1/√2, and cos(π/2) = 0. How can we use this information about the cosine function to determine an approximation of cos(π/6)?

Example 11.2
Let x represent time (in hours) and f(x) the amount of rain falling at time x. Assume that f(x) is measured once an hour at a weather station. We would like to determine the total amount of rain fallen during a 24-hour period, i.e., we would like to compute

    ∫_0^24 f(x) dx.

How can we determine an estimate of this integral?

Example 11.3
Let f(x) represent the position of a car at time x, and assume that we know f(x) at the times x_1, x_2, ..., x_n. How can we determine the velocity at time x_1? Can we also find out the acceleration?

Interpolation by polynomials or piecewise polynomials provides approaches to solving the problems in the above examples. We first discuss polynomial interpolation and then turn to interpolation by piecewise polynomials. Polynomial least-squares approximation is another technique for computing a polynomial that approximates given data. Least-squares approximation was discussed and illustrated in Lecture 6.

11.1 Polynomial interpolation

Given n distinct nodes x_1, x_2, ..., x_n and associated function values y_1, y_2, ..., y_n, determine the polynomial p(x) of degree at most n-1 such that

    p(x_j) = y_j,  j = 1, 2, ..., n.    (1)
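As a preview of the machinery developed below, the cos(π/6) problem of the first example above can be solved numerically by fitting a quadratic through the three known cosine values and evaluating it at π/6. This sketch uses Python for illustration (the exercises in this lecture use MATLAB/Octave):

```python
import numpy as np

# Known values of cos(x): cos(0) = 1, cos(pi/4) = 1/sqrt(2), cos(pi/2) = 0
x_nodes = np.array([0.0, np.pi / 4, np.pi / 2])
y = np.array([1.0, 1.0 / np.sqrt(2.0), 0.0])

# Quadratic polynomial interpolating the three points
coeffs = np.polyfit(x_nodes, y, 2)
approx = np.polyval(coeffs, np.pi / 6)

print(approx, np.cos(np.pi / 6))  # the interpolant is accurate to about two digits
```

The computed value is about 0.851, while cos(π/6) ≈ 0.866, so even three data points give a usable approximation.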

The polynomial is said to interpolate the values y_j at the nodes x_j, and is referred to as the interpolating polynomial. The nodes x_j are referred to as interpolation points.

Example 11.4
Let n = 1. Then the interpolation polynomial reduces to the constant y_1. When n = 2, the interpolating polynomial is linear and can be expressed as

    p(x) = y_1 + (y_2 - y_1)/(x_2 - x_1) · (x - x_1).

Example 11.1 cont'd
We may seek to approximate cos(π/6) by first determining the polynomial p of degree at most 2 which interpolates cos(x) at x_1 = 0, x_2 = π/4, and x_3 = π/2, and then evaluating p(π/6).

Before dwelling more on applications of interpolating polynomials, we have to establish that they exist and are unique. We also will consider several representations of the interpolating polynomial, starting with the power form

    p(x) = a_1 + a_2 x + a_3 x^2 + ... + a_n x^{n-1}.    (2)

This is a polynomial of degree at most n-1. We would like to determine the coefficients a_j, which multiply powers of x, so that the conditions (1) are satisfied. This gives the equations

    a_1 + a_2 x_j + a_3 x_j^2 + ... + a_n x_j^{n-1} = y_j,  j = 1, 2, ..., n.

They can be expressed as a linear system of equations with a Vandermonde matrix,

    [ 1  x_1  x_1^2  ...  x_1^{n-2}  x_1^{n-1} ] [ a_1 ]   [ y_1 ]
    [ 1  x_2  x_2^2  ...  x_2^{n-2}  x_2^{n-1} ] [ a_2 ]   [ y_2 ]
    [ .   .    .           .          .        ] [  .  ] = [  .  ]    (3)
    [ 1  x_n  x_n^2  ...  x_n^{n-2}  x_n^{n-1} ] [ a_n ]   [ y_n ]

As noted in Section 7.5, Vandermonde matrices are nonsingular when the nodes x_j are distinct. This secures the existence of a unique interpolation polynomial.

The representation of a polynomial p(x) in terms of the powers of x, as in (2), is convenient for many applications, because this representation easily can be integrated or differentiated. Moreover, the polynomial (2) easily can be evaluated by nested multiplication without explicitly computing

the powers x^j. For instance, pulling out common powers of x from the terms of a polynomial of degree three gives

    p(x) = a_1 + a_2 x + a_3 x^2 + a_4 x^3 = a_1 + (a_2 + (a_3 + a_4 x) x) x.    (4)

Exercise 11.2 is concerned with the evaluation of polynomials of arbitrary degree by nested multiplication.

However, Vandermonde matrices generally are severely ill-conditioned; this is illustrated in Exercise 11.3. When the function values y_j are obtained by measurements, and therefore are contaminated by measurement errors, the ill-conditioning implies that the computed coefficients a_j may differ significantly from the coefficients that would have been obtained with error-free data. Moreover, round-off errors introduced during the solution of the linear system of equations (3) also can give rise to a large propagated error in the computed coefficients. We are therefore interested in investigating polynomial bases other than the power basis.

The Lagrange basis for polynomials of degree at most n-1 is given by

    l_k(x) = ∏_{j=1, j≠k}^{n} (x - x_j)/(x_k - x_j),  k = 1, 2, ..., n.

The l_k(x) are known as Lagrange polynomials. They are of degree n-1. It is easy to verify that the Lagrange polynomials satisfy

    l_k(x_j) = { 1,  k = j,
               { 0,  k ≠ j.    (5)

This property makes it possible to determine the interpolation polynomial without solving a linear system of equations. It follows from (5) that the interpolation polynomial is given by

    p(x) = ∑_{k=1}^{n} y_k l_k(x).    (6)

We refer to this expression as the interpolation polynomial in Lagrange form. This representation establishes the existence of an interpolation polynomial without using properties of Vandermonde matrices. Uniqueness also can be shown without using Vandermonde matrices: assume that there are two polynomials p(x) and q(x) of degree at most n-1 such that p(x_j) = q(x_j) = y_j, 1 ≤ j ≤ n. Then the polynomial r(x) = p(x) - q(x) is of degree at most n-1 and vanishes at the n distinct nodes x_j.
According to the fundamental theorem of algebra, a polynomial of degree at most n-1 either has at most n-1 zeros or vanishes identically. Hence, r(x) vanishes identically, and it follows that p and q are the same polynomial. Thus, the interpolation polynomial is unique.
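The Lagrange form can be evaluated directly from its definition. The following sketch (Python for illustration; the function name is ours) interpolates the cosine data from the first example and checks the interpolation conditions:

```python
import numpy as np

def lagrange_eval(x_nodes, y, x):
    """Evaluate the interpolating polynomial in Lagrange form at the point x."""
    n = len(x_nodes)
    p = 0.0
    for k in range(n):
        # Lagrange polynomial l_k(x) = prod over j != k of (x - x_j)/(x_k - x_j)
        lk = 1.0
        for j in range(n):
            if j != k:
                lk *= (x - x_nodes[j]) / (x_nodes[k] - x_nodes[j])
        p += y[k] * lk
    return p

x_nodes = np.array([0.0, np.pi / 4, np.pi / 2])
y = np.cos(x_nodes)
# The interpolation conditions: p(x_j) = y_j at every node
print([lagrange_eval(x_nodes, y, xj) for xj in x_nodes])
print(lagrange_eval(x_nodes, y, np.pi / 6))
```

Each Lagrange polynomial costs O(n) operations to evaluate, so one evaluation of p(x) this way costs O(n^2), which motivates the barycentric form derived next.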

The only drawback of the representation (6) of the interpolation polynomial is that its evaluation is cumbersome: straightforward evaluation of each Lagrange polynomial l_k(x) at a point x requires O(n) arithmetic floating point operations, which suggests that the evaluation of the sum (6) requires O(n^2) arithmetic floating point operations. (Here O(n) stands for an expression bounded by c·n as n → ∞, where c > 0 is a constant independent of n.) The latter operation count can be reduced by expressing the Lagrange polynomials in a different way. Introduce the nodal polynomial

    l(x) = ∏_{j=1}^{n} (x - x_j)

and define the weights

    w_k = 1 / ∏_{j=1, j≠k}^{n} (x_k - x_j),  k = 1, 2, ..., n.    (7)

Then the Lagrange polynomials can be written as

    l_k(x) = l(x) · w_k/(x - x_k),  k = 1, 2, ..., n.

Since the value of the interpolating polynomial p(x) is known to be y_k at the interpolation point x_k, we only are interested in evaluating p(x) for x-values different from the interpolation points. Therefore, we may assume that x ≠ x_k for k = 1, 2, ..., n. All terms in the sum (6) contain the factor l(x), which is independent of k. We therefore can move this factor outside the sum and obtain

    p(x) = l(x) ∑_{k=1}^{n} y_k w_k/(x - x_k).    (8)

We noted above that the interpolation polynomial is unique. Therefore, interpolation of the constant function f(x) = 1, which is a polynomial, gives the interpolation polynomial p(x) = 1. Since f(x) = 1, we have y_k = 1 for all k, and the expression (8) simplifies to

    1 = l(x) ∑_{k=1}^{n} w_k/(x - x_k).

It follows that

    1/l(x) = ∑_{k=1}^{n} w_k/(x - x_k).

Substituting the above expression into (8) yields

    p(x) = [ ∑_{k=1}^{n} y_k w_k/(x - x_k) ] / [ ∑_{k=1}^{n} w_k/(x - x_k) ].    (9)

This formula is known as the barycentric representation of the Lagrange interpolating polynomial, or simply as the interpolating polynomial in barycentric form. It requires that the weights be computed, e.g., by using the definition (7); this requires O(n^2) arithmetic floating point operations. Given the weights, p(x) can be evaluated at any point x in only O(n) arithmetic floating point operations. Exercises 11.4 and 11.5 are concerned with these computations. The representation (9) can be shown to be quite insensitive to round-off errors and therefore can be used to represent polynomials of high degree, provided that overflow and underflow are avoided during the computation of the weights w_k. This easily can be achieved by rescaling all the weights when necessary; note that the formula (9) allows all weights to be multiplied by an arbitrary nonzero constant.

11.2 The approximation error

When carrying out polynomial interpolation, one generally assumes that the values y_1, y_2, ..., y_n in (1) are values of a smooth but unknown function f at the points x_1, x_2, ..., x_n. We would like to know the function f and hope that the interpolating polynomial p determined by (1) will provide an accurate approximation of f at points x of interest. Let us assume that we are interested in knowing how f looks in the interval [-1, 1] and that the interpolation points x_j all live in this interval.

Example 11.5
Let the n interpolation points be equidistant in the interval [-1, 1], with the interval end points among the interpolation points. Then the interpolation points are

    x_j = -1 + 2(j - 1)/(n - 1),  j = 1, 2, ..., n.    (10)

We would like to approximate the exponential function f(x) = exp(x) on [-1, 1] by the polynomial of degree at most n-1 that is determined by interpolating f at the n interpolation points x_j. First let n = 3. Figure 1 displays the function f, the interpolating polynomial p, and the three interpolation points.
We can see that f and p agree at the interpolation points and are quite close on the whole interval. If we know f at more points, then we can use this information to determine an interpolation polynomial that provides a more accurate approximation of f. Assume that we know f at five

Figure 1: The exponential function f(x) = exp(x) (dashed red graph), three equidistant interpolation points {x_j, exp(x_j)}_{j=1}^{3} (blue markers), and the interpolating polynomial p (solid blue graph).

Figure 2: The exponential function f(x) = exp(x) (dashed red graph), five equidistant interpolation points {x_j, exp(x_j)}_{j=1}^{5} (blue markers), and the interpolating polynomial p (solid blue graph).

equidistant points on [-1, 1], that is, we let n = 5 in (10). Our interpolation points are the ones from Figure 1 and two additional ones. Figure 2 is analogous to Figure 1 and shows f, the five interpolation points, and the interpolating polynomial p. The graph for p is on top of the graph for f; therefore these graphs cannot be distinguished. Clearly, p is a much more accurate

approximation of f than in Figure 1. If f were known at more interpolation points, then we could use them to determine an interpolation polynomial of higher degree that would approximate f even more accurately.

Figure 3: The Runge function (11) (dashed red graph), five equidistant interpolation points {x_j, f(x_j)}_{j=1}^{5} (blue markers), and the interpolating polynomial p (solid blue graph).

Figure 4: The Runge function (11) (dashed red graph), ten equidistant interpolation points {x_j, f(x_j)}_{j=1}^{10} (blue markers), and the interpolating polynomial p (solid blue graph).

Figure 5: The Runge function (11) (dashed red graph), twenty equidistant interpolation points {x_j, f(x_j)}_{j=1}^{20} (blue markers), and the interpolating polynomial p (solid blue graph).

Example 11.6
We consider interpolation of the function

    f(x) = 1/(1 + 25 x^2)    (11)

at n = 5 equidistant points (10) in the interval [-1, 1]. This function is known as the Runge function.^2 Figure 3 is similar to Figure 2; the only difference is that the exponential function in the latter figure is replaced by the Runge function. The interpolating polynomial (blue continuous graph) is seen not to approximate the Runge function (red dashed graph) well. In Example 11.5 we obtained a more accurate polynomial approximation of the exponential function by interpolating the function at more points. We will try to do the same for the Runge function. Figure 4 differs from Figure 3 in that we have doubled the number of interpolation points. The interpolation points now are given by (10) with n = 10. The interpolating polynomial p of Figure 4 might approximate the Runge function f on the interval [-1, 1] slightly better than the interpolating polynomial of Figure 3, but clearly the difference f(x) - p(x) is large for many points x in the interval. In our quest for an accurate polynomial approximation of f, we therefore double the number of interpolation points one more time. Figure 5 is similar to Figure 4; the only difference is that we have increased the number of equidistant interpolation points (10) to n = 20. Unfortunately, the interpolation polynomial p does not behave

^2 The function is named after Carl Runge, a German mathematician, who studied polynomial interpolation of this function around 1901.

as we would like. The difference f(x) - p(x) is very large for x close to the end points of the interval. Moreover, increasing the number of equidistant interpolation points further does not reduce the error close to the end points. We conclude that there are functions that cannot be approximated accurately by polynomial interpolation at equidistant interpolation points.

How well an interpolating polynomial p approximates the interpolated function f on the interval where the interpolation points x_j live depends both on the function and on the distribution of the interpolation points. The following error formula sheds some light on this issue. Let the nodes x_j be distinct in the real interval [a, b], and let f(x) be an n times continuously differentiable function in [a, b]. Denote the nth derivative by f^(n)(x). Assume that y_j = f(x_j), j = 1, 2, ..., n, and let the polynomial p(x) of degree at most n-1 satisfy the interpolation conditions (1). Then the error f(x) - p(x) can be expressed as

    f(x) - p(x) = (1/n!) ∏_{j=1}^{n} (x - x_j) · f^(n)(ξ),  a ≤ x ≤ b,    (12)

where ξ is a function of the nodes x_1, x_2, ..., x_n and x. The exact value of ξ is difficult to pin down; however, it is known that ξ is in the interval [a, b] when x and x_1, x_2, ..., x_n are there. One can derive the expression (12) by using a variant of the mean-value theorem from calculus. We will not prove the error formula (12) in this course. Instead, we will use the formula to learn about some properties of the polynomial interpolation problem. Usually, the nth derivative of f is not available and only the product over the nodes x_j can be studied easily. It is remarkable how much useful information can be gained by investigating this product!

First we note that the approximation error

    max_{a ≤ x ≤ b} |f(x) - p(x)|    (13)

is likely to be larger when the interval [a, b] is long than when it is short. We can see this by doubling the size of the interval, i.e., we multiply a, b, x, and the x_j by 2.
Then the product in the right-hand side of (12) is replaced by

    ∏_{j=1}^{n} (2x - 2x_j) = 2^n ∏_{j=1}^{n} (x - x_j),

which shows that the interpolation error might be multiplied by 2^n when doubling the size of the interval. Actual computations show that, indeed, the error typically increases with the length of the interval [a, b] when other relevant quantities remain unchanged.

The error formula (12) also can be used to investigate how the nodes x_j should be distributed in the interval [a, b] in order to give a small approximation error (13). The function f is only assumed to be known at the interpolation points. We therefore know neither the derivative f^(n) nor the value of ξ in the right-hand side of (12). However, we do know the product in the right-hand side and can evaluate it for different choices of interpolation points x_j. To achieve a small difference

f(x) - p(x) for all x in [a, b], we could try to distribute the interpolation points x_j so that the product ∏_{j=1}^{n} |x - x_j| is small for all x in [a, b]. In fact, we would like this product to be as small as possible. This leads to the following minimization problem:

    min_{x_j} max_{a ≤ x ≤ b} ∏_{j=1}^{n} |x - x_j|.    (14)

It would be quite difficult to solve this problem by a numerical optimization method, because the product is a nonlinear function of the x_j. Moreover, the function

    x → ∏_{j=1}^{n} |x - x_j|

is not differentiable, because the function x → |x| is not differentiable at the origin. However, it turns out that this complicated optimization problem has a simple solution! Let for the moment a = -1 and b = 1. Then the solution is given by

    x_j = cos( (2j - 1)/(2n) · π ),  j = 1, 2, ..., n.    (15)

These points are the projections of n equidistant points on the upper half of the unit circle onto the x-axis; see Figure 6. The x_j are known as Chebyshev points.

Figure 6: Upper half of the unit circle with 8 equidistant points marked in red, and their projections onto the x-axis marked in magenta. The latter are the points x_1, x_2, ..., x_8 defined by (15) for n = 8.
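The Chebyshev points are trivial to generate. A minimal sketch (Python for illustration):

```python
import numpy as np

def chebyshev_points(n):
    """Chebyshev points on [-1, 1]: x_j = cos((2j - 1)/(2n) * pi), j = 1, ..., n."""
    j = np.arange(1, n + 1)
    return np.cos((2 * j - 1) / (2.0 * n) * np.pi)

x = chebyshev_points(8)
print(x)  # 8 points, clustered toward the interval end points -1 and 1
```

Note that the points come out ordered from right to left (j = 1 gives the point closest to 1), they are symmetric about the origin, and none of them coincides with an end point of the interval.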

Figure 7: The product in the error formula (12) for -1 ≤ x ≤ 1 when n = 10 and the interpolation points are Chebyshev points.

The function x → ∏_{j=1}^{n} (x - x_j), with the x_j Chebyshev points, is shown in Figure 7 for n = 10 and -1 ≤ x ≤ 1. The product has a local maximum or a local minimum between each pair of adjacent interpolation points. What makes the Chebyshev points special is that the magnitude of each local maximum and minimum is the same. This property holds for any set of n Chebyshev points and can be used to show that the Chebyshev points solve the minimization problem (14).

For comparison, we depict in Figure 8 the function x → ∏_{j=1}^{10} (x - x_j) with the x_j equidistant points defined by (10) with n = 10. The function achieves a local maximum or minimum between each pair of adjacent interpolation points, but the magnitudes of the extrema close to the interval end points are much larger than the magnitudes of the other maxima and minima. The magnitudes of the local maxima and minima differ even more when the number of equidistant interpolation points is increased. Figure 9 is analogous to Figure 8 and shows the product ∏_{j=1}^{20} (x - x_j) for 20 equidistant interpolation points x_j. The large magnitude of the product close to the end points of the interval results in a large error in the interpolation polynomial p(x) for x near the end points. This was illustrated by Figure 5.

Example 11.7
This example discusses polynomial interpolation at Chebyshev points. Figure 10 is analogous to Figure 4 and displays the polynomial obtained by interpolating the Runge function at 10 Chebyshev points. The interpolating polynomial approximates the Runge function better at the end points of the interval than when equidistant interpolation points are used; however, the polynomial is a poor approximation of the Runge function around x = 0.
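The equal-ripple property can be checked numerically: sample the nodal product |∏ (x - x_j)| on a fine grid for both node choices and compare the maxima. A sketch in Python; for n Chebyshev points the maximum is known to equal 2^(1-n):

```python
import numpy as np

n = 10
grid = np.linspace(-1.0, 1.0, 10001)

def max_nodal_product(nodes):
    # Maximum over the grid of |prod_j (x - x_j)|, the product in the error formula
    p = np.ones_like(grid)
    for xj in nodes:
        p *= grid - xj
    return np.max(np.abs(p))

equi = np.linspace(-1.0, 1.0, n)                                   # equidistant nodes
cheb = np.cos((2 * np.arange(1, n + 1) - 1) / (2.0 * n) * np.pi)   # Chebyshev nodes

print(max_nodal_product(equi), max_nodal_product(cheb))
```

For n = 10 the Chebyshev maximum is 2^(-9) ≈ 0.00195, while the equidistant maximum, attained near the interval end points, is several times larger.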

Figure 8: The product in the error formula (12) for -1 ≤ x ≤ 1 when n = 10 and the interpolation points are equidistant.

Figure 9: The product in the error formula (12) for -1 ≤ x ≤ 1 when n = 20 and the interpolation points are equidistant.

When we interpolate the Runge function at 20 Chebyshev points, we obtain the interpolation polynomial of Figure 11. It can be seen to be a fairly good approximation of the Runge function in the whole interval [-1, 1] and, in particular, furnishes a much better approximation of the Runge function than the interpolation polynomial obtained by interpolating at 20 equidistant points. The

Figure 10: The Runge function (11) (dashed red graph), 10 Chebyshev interpolation points {x_j, f(x_j)}_{j=1}^{10} (blue markers) defined by (15), and the interpolating polynomial p (solid blue graph).

Figure 11: The Runge function (11) (dashed red graph), 20 Chebyshev interpolation points {x_j, f(x_j)}_{j=1}^{20} (blue markers) defined by (15), and the interpolating polynomial p (solid blue graph).

latter is shown in Figure 5. An even more accurate approximation of the Runge function can be achieved by increasing the number of interpolation points. Figure 12 shows the interpolation polynomial obtained by interpolating the Runge function at 30 Chebyshev points. The graphs for the Runge function and the interpolation polynomial are so close that they cannot be distinguished

Figure 12: The Runge function (11) (dashed red graph), 30 Chebyshev interpolation points {x_j, f(x_j)}_{j=1}^{30} (blue markers) defined by (15), and the interpolating polynomial p (solid blue graph).

in the figure.

The above example shows that interpolating a function f at n Chebyshev points can be a good idea. Indeed, one can show that the interpolating polynomial of degree at most n-1 so obtained is close to the best polynomial approximant of f of degree at most n-1. This also holds if the function does not have n continuous derivatives and, therefore, the remainder formula (12) does not hold. For intervals with end points a < b, the solution of (14) is given by shifting and scaling the Chebyshev points:

    x_j = (b + a)/2 + (b - a)/2 · cos( (2j - 1)/(2n) · π ),  j = 1, 2, ..., n.    (16)

The presence of a high-order derivative in the error formula (12) indicates that interpolation polynomials are likely to give small approximation errors when the function has many continuous derivatives that are not very large in magnitude. Conversely, equation (12) suggests that interpolating a function with few or no continuous derivatives in [a, b] by a polynomial of small to moderate degree might not yield an accurate approximation of f(x) on [a, b]. This, indeed, often is the case. In the next section we therefore discuss an extension of polynomial interpolation which typically gives more accurate approximations than standard polynomial interpolation when the function to be approximated is not smooth.
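The behavior seen in the figures above can be reproduced numerically. The sketch below (Python for illustration; the helper functions are ours) interpolates the Runge function f(x) = 1/(1 + 25x^2) at 20 equidistant and at 20 Chebyshev points using the barycentric form, and measures the maximum error on a fine grid:

```python
import numpy as np

def bary_weights(x):
    # Barycentric weights: w_k = 1 / prod over j != k of (x_k - x_j)
    n = len(x)
    w = np.ones(n)
    for k in range(n):
        for j in range(n):
            if j != k:
                w[k] /= (x[k] - x[j])
    return w

def bary_eval(x_nodes, y, w, t):
    # Barycentric formula; fall back to y_k if t coincides with a node
    d = t - x_nodes
    hit = np.argmin(np.abs(d))
    if abs(d[hit]) < 1e-14:
        return y[hit]
    q = w / d
    return np.dot(q, y) / np.sum(q)

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)     # the Runge function
n = 20
equi = np.linspace(-1.0, 1.0, n)
cheb = np.cos((2 * np.arange(1, n + 1) - 1) / (2.0 * n) * np.pi)
grid = np.linspace(-1.0, 1.0, 1001)
errs = {}
for name, nodes in [("equidistant", equi), ("Chebyshev", cheb)]:
    w = bary_weights(nodes)
    p = np.array([bary_eval(nodes, f(nodes), w, t) for t in grid])
    errs[name] = np.max(np.abs(p - f(grid)))
print(errs)
```

The equidistant error is large (the interpolant oscillates wildly near the end points), while the Chebyshev error is a small fraction of the function's size, consistent with the figures.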

Exercise. Solve the interpolation problem of Example.. Exercise.2 Write a MATLAB/Octave function for evaluating a polynomial of degree at most n in nested form (4). The input are the coefficients a, a 2,...,a n and x; the output is the value p(x). Exercise.3 Let V n be an n n Vandermonde matrix determined by n equidistant nodes in the interval [, ]. How quickly does the condition number of V n grow with n? Linearly, quadratically, cubically,..., exponentially? Use the MATLAB/Octave functions vander and cond. Determine the growth experimentally. Describe how you designed the experiments. Show your MATLAB/Octave codes and relevant input and output. Exercise.4 Write a MATLAB/Octave function for computing the weights of the barycentric representation (9) of the interpolation polynomial, using the definition (7). The code should avoid overflow and underflow. Exercise.5 Given the weights (7), write a MATLAB/Octave function for evaluating the polynomial (9) at a point x. Exercise.6 (Bonus exercise.) Assume that the weights (7) are available for the barycentric representation of the interpolation polynomial (9) for the interpolation problem (). Let another data point {x n+, y n+ } be available. Write a MATLAB/Octave function for computing the barycentric weights for the interpolation problem () with n replaced by n +. The computations can be carried out in only O(n) arithmetic floating point operations. Exercise.7 The Γ-function is defined by Γ(x) = t x e t dt. 5

    n      1  2  3  4   5    6
    Γ(n)   1  1  2  6  24  120

    Table 1: n and Γ(n).

Direct evaluation of the integral yields Γ(1) = 1, and integration by parts shows that Γ(x + 1) = xΓ(x). In particular, for integer values n > 0, we obtain Γ(n + 1) = nΓ(n) and therefore Γ(n + 1) = n(n - 1)(n - 2) · · · 1 = n!. We would like to determine an estimate of Γ(4.5) by using the tabulated values of Table 1.

(a) Determine an approximation of Γ(4.5) by interpolation in 3 and 5 nodes. Which 3 nodes should be used? Determine the actual value of Γ(4.5). Are the computed approximations close? Which one is more accurate?

(b) Also investigate the following approach. Instead of interpolating Γ(x), interpolate ln(Γ(x)) by polynomials at 3 and 5 nodes. Evaluate the computed polynomial at 4.5 and exponentiate. How do the computed approximations in (a) and (b) compare? Explain!

Exercise 11.8
(a) Interpolate the function f(x) = e^x at 20 equidistant nodes in [-1, 1]. This gives an interpolation polynomial p of degree at most 19. Measure the approximation error f(x) - p(x) by computing the difference at 500 equidistant nodes t_j in [-1, 1]. We refer to the quantity

    max_{j=1,2,...,500} |f(t_j) - p(t_j)|

as the error. Compute the error.

(b) Repeat the above computations with the function f(x) = e^x/(1 + 25x^2). Plot p(t_j) - f(t_j), j = 1, 2, ..., 500. Where in the interval [-1, 1] is the error the largest?

(c) Repeat the computations in (a) using 20 Chebyshev points (15) as interpolation points. How do the errors compare for equidistant and Chebyshev points? Plot the error.

(d) Repeat the computations in (b) using 20 Chebyshev points (15) as interpolation points. How do the errors compare for equidistant and Chebyshev points?

    t      0  1  2
    f(t)   0  2  6

    Table 2: t and f(t).

Exercise 11.9
Compute an approximation of the integral

    ∫_0^1 x exp(x^2) dx

by first interpolating the integrand by a polynomial of degree at most 3 and then integrating the polynomial. Which representation of the polynomial is most convenient to use? Use four interpolation points. Which points would be suitable? Justify your choice.

Exercise 11.10
The function f(t) gives the position of a ball at time t. Table 2 displays a few values of f and t. Interpolate f by a quadratic polynomial and estimate the velocity and acceleration of the ball at time t = 1.

11.3 Interpolation by piecewise polynomials

In the above section, we sought to determine one polynomial that approximates a function on a specified interval. This works well if either one of the following conditions holds:

- The polynomial required to achieve desired accuracy is of fairly low degree.
- The function has several continuous derivatives and interpolation can be carried out at the Chebyshev points (15) or (16).

A quite natural and different approach to approximating a function on an interval is to first split the interval into subintervals and then approximate the function by a polynomial of fairly low degree on each subinterval. We now discuss this approach.

Example 11.8
We would like to approximate a function on the interval [-1, 1]. Let the function values y_j = f(x_j) be available, where x_1 = -1, x_2 = 0, x_3 = 1, and y_1 = y_3 = 0, y_2 = 1. It is easy to approximate

f(x) by a linear function on each subinterval [x_1, x_2] and [x_2, x_3]. We obtain, by using the Lagrange form (6), that

    p(x) = y_1 (x - x_2)/(x_1 - x_2) + y_2 (x - x_1)/(x_2 - x_1) = x + 1,  -1 ≤ x ≤ 0,
    p(x) = y_2 (x - x_3)/(x_2 - x_3) + y_3 (x - x_2)/(x_3 - x_2) = 1 - x,   0 ≤ x ≤ 1.

The MATLAB command plot([-1,0,1],[0,1,0]) gives the continuous graph of Figure 13. This is a piecewise linear approximation of the unknown function f(x). If f(x) indeed is a piecewise linear function with a kink at x = 0, then the computed approximation is appropriate. On the other hand, if f(x) displays the trajectory of a baseball, then the smoother function p(x) = 1 - x^2, which is depicted by the dashed curve, may be a more suitable approximation of f(x), since baseball trajectories do not exhibit kinks, even if some players occasionally may wish they did.

Piecewise linear functions give better approximations of a smooth function if more interpolation points {x_j, y_j} are used. We can increase the accuracy of the piecewise linear approximant by reducing the lengths of the subintervals and thereby increasing the number of subintervals. We conclude that piecewise linear approximations of functions are easy to compute. However, piecewise linear approximants display kinks. Therefore, many subintervals may be required to determine a piecewise linear approximant of high accuracy.

Figure 13: Example 11.8: Quadratic polynomial p(x) = 1 - x^2 (red dashed graph) and piecewise linear approximation (continuous blue graph).

There are several ways to modify piecewise linear functions to give them a more pleasing look. Here we will discuss how to use derivative information to obtain smoother approximants. A different approach, which uses Bézier curves, is described in the next lecture.
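Piecewise linear interpolation of the data in the example above (nodes -1, 0, 1 with values 0, 1, 0) is a one-liner in most environments; in MATLAB/Octave one can use interp1, and a Python sketch looks as follows:

```python
import numpy as np

# Data of the example: nodes -1, 0, 1 with values 0, 1, 0
x_nodes = np.array([-1.0, 0.0, 1.0])
y = np.array([0.0, 1.0, 0.0])

# np.interp evaluates the piecewise linear interpolant through the data
t = np.array([-0.5, 0.0, 0.5])
print(np.interp(t, x_nodes, y))   # [0.5 1.  0.5]

# The smoother alternative 1 - x^2 from the example, for comparison
print(1.0 - t**2)                 # [0.75 1.   0.75]
```

At the midpoints of the subintervals the two approximants differ by 0.25, which is exactly the kind of discrepancy that motivates the smoother piecewise cubic approach below.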

Assume that not only the function values y_j = f(x_j), but also the derivative values y_j' = f'(x_j), are available at the nodes a \le x_1 < x_2 < ... < x_n \le b. We can then, on each subinterval, say [x_j, x_{j+1}], approximate f(x) by a polynomial that interpolates both f(x) and f'(x) at the endpoints of the interval. Thus, we would like to determine a polynomial p_j(x) such that

\[
p_j(x_j) = y_j, \quad p_j(x_{j+1}) = y_{j+1}, \quad p_j'(x_j) = y_j', \quad p_j'(x_{j+1}) = y_{j+1}'. \tag{7}
\]

These are 4 conditions, and we seek to determine a polynomial of degree 3,

\[
p_j(x) = a_1 + a_2 x + a_3 x^2 + a_4 x^3, \tag{8}
\]

which satisfies them. Our reason for choosing a polynomial of degree 3 is that it has 4 coefficients, the same as the number of conditions. Substituting the polynomial (8) into the conditions (7) gives the linear system of equations

\[
\begin{bmatrix}
1 & x_j & x_j^2 & x_j^3 \\
1 & x_{j+1} & x_{j+1}^2 & x_{j+1}^3 \\
0 & 1 & 2x_j & 3x_j^2 \\
0 & 1 & 2x_{j+1} & 3x_{j+1}^2
\end{bmatrix}
\begin{bmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \end{bmatrix}
=
\begin{bmatrix} y_j \\ y_{j+1} \\ y_j' \\ y_{j+1}' \end{bmatrix}. \tag{9}
\]

The last two rows impose interpolation of the derivative values. The matrix can be shown to be nonsingular whenever x_j \ne x_{j+1}. Matrices of the form (9) are referred to as confluent Vandermonde matrices.

The polynomials p_1(x), p_2(x), ..., p_{n-1}(x) provide a piecewise cubic polynomial approximation of f(x) on the whole interval [a, b]. They can be computed independently and yield an approximation with a continuous derivative on [a, b]. The latter can be seen as follows: the polynomial p_j is defined and differentiable on the interval [x_j, x_{j+1}] for j = 1, 2, ..., n-1. What remains to be established is that our approximant also has a continuous derivative at the interior interpolation points. This follows from the interpolation conditions (7). We have

\[
\lim_{x \nearrow x_{j+1}} p_j'(x) = p_j'(x_{j+1}) = y_{j+1}', \qquad
\lim_{x \searrow x_{j+1}} p_{j+1}'(x) = p_{j+1}'(x_{j+1}) = y_{j+1}'.
\]

The existence of the limits follows from the continuity of each polynomial's derivative on the interval where it is defined, and the other equalities are the interpolation conditions.
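Setting up and solving the confluent Vandermonde system (9) on one subinterval can be sketched as follows, here in Python rather than MATLAB (an assumption of this illustration). The check data, sampled from f(x) = x^2 with f'(x) = 2x on [0, 1], are chosen here only to verify the solve; since the cubic space contains this quadratic, the computed coefficients should reproduce it exactly:

```python
import numpy as np

def hermite_cubic_coeffs(xj, xj1, yj, yj1, dyj, dyj1):
    """Solve the 4x4 confluent Vandermonde system (9) for the coefficients
    a_1, ..., a_4 of p_j(x) = a_1 + a_2 x + a_3 x^2 + a_4 x^3."""
    A = np.array([
        [1.0, xj,  xj**2,  xj**3],    # p_j(x_j)      = y_j
        [1.0, xj1, xj1**2, xj1**3],   # p_j(x_{j+1})  = y_{j+1}
        [0.0, 1.0, 2*xj,   3*xj**2],  # p_j'(x_j)     = y_j'
        [0.0, 1.0, 2*xj1,  3*xj1**2], # p_j'(x_{j+1}) = y_{j+1}'
    ])
    b = np.array([yj, yj1, dyj, dyj1])
    return np.linalg.solve(A, b)

# Data sampled from f(x) = x^2, f'(x) = 2x on the subinterval [0, 1]:
# the interpolating cubic reproduces the quadratic, so a_4 = 0.
a = hermite_cubic_coeffs(0.0, 1.0, 0.0, 1.0, 0.0, 2.0)
print(a)  # approximately [0, 0, 1, 0], i.e. p(x) = x^2
```

For well-separated nodes a general-purpose solve like this is fine; for many subintervals one would normally use a dedicated Hermite evaluation instead of forming the Vandermonde matrix explicitly.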
Thus, p_j'(x_{j+1}) = p_{j+1}'(x_{j+1}), which shows the continuity of the derivative of our piecewise cubic polynomial approximant at x_{j+1}.

The use of piecewise cubic polynomials as described gives attractive approximations. However, the approach requires that derivative information be available. When no derivative information is explicitly known, modifications of the scheme outlined can be used. A simple modification is to use estimates of the derivative values of the function f(x) at the nodes; see Exercise 11.12. Another possibility is to impose the conditions

\[
p_j(x_j) = y_j, \quad p_j(x_{j+1}) = y_{j+1}, \quad p_j'(x_j) = p_{j-1}'(x_j), \quad p_j''(x_j) = p_{j-1}''(x_j),
\]

for j = 2, 3, ..., n-1. Thus, at the interior subinterval boundaries x_2, x_3, ..., x_{n-1}, we require the piecewise cubic polynomial to have continuous first and second derivatives. However, these derivatives are not required to take on prescribed values. The piecewise cubic polynomials obtained in this manner are known as splines. They are popular design tools in industry. Their determination requires the solution of a linear system of equations, which is somewhat complicated to derive; we will therefore omit its derivation. Extra conditions at the interval endpoints have to be imposed in order to make the resulting linear system of equations uniquely solvable.

Exercise 11.11 Consider the function in Example 11.5. Assume that we also know the derivative values y_1' = -2, y_2' = 0, and y_3' = 2. Determine a piecewise cubic polynomial approximation on [-1, 1] by using the interpolation conditions (7). Plot the resulting function.

Exercise 11.12 Assume the derivative values in the above exercise are not available. How can one determine estimates of these values? Use these estimates in the interpolation conditions (7) and compute a piecewise cubic approximation. How does it compare with the one from Exercise 11.11 and with the piecewise linear approximation of Example 11.5? Plot the computed approximant.

Exercise 11.13 Compute a spline approximant of the function of Example 11.5, e.g., by using the function spline in MATLAB or Octave. Plot the computed spline.
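Spline routines of the kind mentioned above are available in most numerical environments. A sketch in Python using SciPy's CubicSpline, a counterpart of MATLAB's spline (SciPy and the sample data below are assumptions of this illustration, not part of the text):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Arbitrary sample data for illustration: five points on sin(x)
x = np.linspace(0.0, 2.0, 5)
y = np.sin(x)

# CubicSpline builds a piecewise cubic with continuous first and second
# derivatives at the interior nodes; its default "not-a-knot" end conditions
# supply the extra equations that make the linear system uniquely solvable.
cs = CubicSpline(x, y)

print(cs(x) - y)          # interpolation: exactly zero at the nodes
# Between nodes the spline closely tracks the smooth underlying function
print(cs(0.75), np.sin(0.75))
# Derivatives of the spline are available as well, e.g. first and second
# derivative at an interior node, where C^2 continuity is imposed
xi = x[2]
print(cs(xi, 1), cs(xi, 2))
```

The derivative evaluations use CubicSpline's nu argument (cs(xi, 1) for the first derivative), which is convenient when checking the continuity conditions numerically.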