3 Polynomial Interpolation


Read sections 7.1, 7.2 (up to p. 39), review questions 7.7, 7.8.

All methods for solving ordinary differential equations which we considered before were so-called one-step methods. This means that the new approximation y_{k+1} of the solution y(t_{k+1}) depends only on the approximation y_k at t_k (and the differential equation, the step size, and t_k, of course). In order to obtain higher order methods, it was necessary to evaluate the right-hand side f many times in each step. If it is expensive to evaluate f, this procedure can be rather time-consuming. Hence, one might wonder if it would be possible to use the previous approximations y_k, ..., y_{k-m} and their function values f(t_j, y_j). Such methods are called multi-step methods. In order to describe such methods we need to introduce polynomial interpolation.

3.1 The Interpolation Problem

The general interpolation problem can be described as follows: Let (t_i, y_i), i = 1, ..., n, with t_1 < t_2 < ... < t_n be given (discrete) points. We assume that the values y_i are a function of the t_i. We are looking for a function y = f(t) such that

    f(t_i) = y_i,   i = 1, ..., n.

Such a function f is called an interpolating function or simply an interpolant.

A table with discrete points can arise in different applications:

- The points can be measured values, where one wants to find a smooth curve in order to plot the function in a nice way.
- The points are given as rows in a table. One needs function values between the rows of that table.
- One wants to replace a complicated mathematical function by a simpler one which is close to the original but easier to handle. The table is obtained by sampling the complicated function.

We have already seen an example of interpolation: When we solved a differential equation numerically, we got precisely such a set of discrete points. The easiest thing to do is to assume that the function behaves linearly between two successive approximations (t_k, y_k) and (t_{k+1}, y_{k+1}). We may know for sure that the exact solution of the differential equation is not a linear function, but we hope that the linear function lies close to the exact solution locally. This assumption can be more or less justified depending on the discrete values and the solution at hand.
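The following minimal sketch (my own illustration with made-up data points, not taken from the notes) shows what piecewise linear interpolation looks like in MATLAB; the built-in interp1 connects successive data points by straight line segments by default.

    ti = [0; 0.5; 1.0; 1.5; 2.0];        % sample points t_i (made-up data)
    yi = [1; 0.8; 0.3; 0.1; 0.4];        % values y_i at the sample points
    t  = linspace(ti(1), ti(end), 200);  % fine grid for plotting
    y  = interp1(ti, yi, t);             % piecewise linear interpolant
    plot(ti, yi, 'o', t, y, '-')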

In Figure 1 we compare the piecewise linear interpolation of our numerical solution of the van der Pol equation with the exact solution.

Figure 1: Interpolation of a numerical solution (different interpolations of the raw data).

The piecewise linear interpolation is built into MATLAB's plot command. I obtained the exact solution by using a sophisticated interpolation algorithm which is included with the ODE solver.

3.2 Polynomial Interpolation by Monomials

As we have seen before, piecewise linear interpolation is a relatively crude way of interpolating a function. How can one construct better interpolating functions? Taking the computational expense into account, one should choose the functions to be as simple as possible. If we do not have other information about the function available, the best we can do is probably to use polynomials. They are very easy to handle and hopefully flexible enough.

The polynomial interpolation problem reads: Find a polynomial p_{n-1}(t) of degree at most n-1 such that

    p_{n-1}(t_i) = y_i,   i = 1, ..., n.

Such a polynomial can be written down as

    p_{n-1}(t) = x_1 + x_2 t + x_3 t^2 + ... + x_n t^{n-1},

where x_1, ..., x_n are the coefficients. Such a representation of p_{n-1} is called a monomial basis representation. In the above ansatz the coefficients are the unknowns. More precisely, x_1, ..., x_n must be determined such that the interpolation conditions p_{n-1}(t_i) = y_i are fulfilled, that is,

    x_1 + x_2 t_i + x_3 t_i^2 + ... + x_n t_i^{n-1} = y_i,   i = 1, ..., n.

These equations can be written down as a linear system of equations in the unknown coefficients:

    [ 1  t_1  t_1^2  ...  t_1^{n-1} ] [ x_1 ]   [ y_1 ]
    [ 1  t_2  t_2^2  ...  t_2^{n-1} ] [ x_2 ]   [ y_2 ]
    [ :   :     :           :       ] [  :  ] = [  :  ]
    [ 1  t_n  t_n^2  ...  t_n^{n-1} ] [ x_n ]   [ y_n ]

or, in short, Ax = y. This linear system can be solved uniquely for the coefficients x_1, ..., x_n if the matrix A is nonsingular. One can show that A is nonsingular if all t_i are pairwise different. The matrix A is called a Vandermonde matrix.

Example 3.1. Let our given points be (-2,-7), (0,-1), and (1,0). We want to find a second degree polynomial which interpolates these points. The linear system to be solved is

    [ 1  t_1  t_1^2 ] [ x_1 ]   [ y_1 ]
    [ 1  t_2  t_2^2 ] [ x_2 ] = [ y_2 ]
    [ 1  t_3  t_3^2 ] [ x_3 ]   [ y_3 ]

where t_1 = -2, t_2 = 0, t_3 = 1 and y_1 = -7, y_2 = -1, y_3 = 0. We obtain the system

    [ 1  -2  4 ] [ x_1 ]   [ -7 ]
    [ 1   0  0 ] [ x_2 ] = [ -1 ]
    [ 1   1  1 ] [ x_3 ]   [  0 ]

The solution is x_1 = -1, x_2 = 5/3, x_3 = -2/3. The polynomial is p(t) = -1 + (5/3)t - (2/3)t^2.

For completeness, I include a short MATLAB program which realizes monomial interpolation. For reasons that will become clear later, avoid using it in practice! As usual, all vectors are assumed to be column vectors.

    function x = PolyM(ti,yi)
    % Coefficients of the interpolating polynomial in the monomial basis
    n = length(ti);
    A = ones(n);
    for i = 2:n
        A(:,i) = A(:,i-1).*ti;    % column i contains ti.^(i-1)
    end
    x = A\yi;
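As a quick check (my own usage sketch, reusing the data of Example 3.1), the Vandermonde matrix can also be obtained from MATLAB's built-in vander, which orders the columns by decreasing powers and therefore has to be flipped to match the convention used here:

    ti = [-2; 0; 1];  yi = [-7; -1; 0];   % data of Example 3.1
    A  = fliplr(vander(ti))               % same matrix as the one built in PolyM
    x  = PolyM(ti, yi)                    % returns [-1; 5/3; -2/3]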

The interpolation polynomial by itself is not very interesting. What one wants to do is to evaluate the polynomial for a set of values of the independent variable t. If one does this in the naive way, the amount of work may be rather large. For a given t and a set of coefficients x_i, one needs i-1 multiplications in order to form the term x_i t^{i-1}. The overall computational cost is the sum of all these multiplications plus, additionally, n-1 additions:

    sum_{i=1}^{n} (i-1) + (n-1) = n(n-1)/2 + n - 1 = O(n^2).

This is too expensive! Fortunately, there is a better algorithm, called Horner's algorithm. The idea is to introduce parentheses in the following way (here demonstrated for n = 6):

    p_5(t) = x_6 t^5 + x_5 t^4 + x_4 t^3 + x_3 t^2 + x_2 t + x_1
           = (x_6 t^4 + x_5 t^3 + x_4 t^2 + x_3 t + x_2) t + x_1
           = ((x_6 t^3 + x_5 t^2 + x_4 t + x_3) t + x_2) t + x_1
           = ((((x_6 t + x_5) t + x_4) t + x_3) t + x_2) t + x_1

The number of operations is now proportional to n, so that the computation is much faster. The following MATLAB function realizes Horner's algorithm.

    function pt = HornerM(t,x)
    % Evaluate the polynomial with monomial coefficients x at the point t
    n = length(x);
    pt = x(n);
    for i = n-1:-1:1
        pt = pt*t + x(i);
    end

What are the most important properties of monomial interpolation?

Advantages of monomial interpolation:
- The polynomial is easy to write down.
- There is an easy algorithm for the computation of the system matrix A.
- The polynomial can be evaluated easily and efficiently.

Disadvantages of monomial interpolation:
- It is very expensive to solve the linear system Ax = y; this amounts to O(n^3) operations.
- The linear system is very often ill-conditioned, which means that the algorithm is numerically unstable. An example will be given later.
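Putting the two routines together, here is a small usage sketch (my own, again using the Example 3.1 data): compute the coefficients once with PolyM, then evaluate the interpolant on a plotting grid with HornerM.

    ti = [-2; 0; 1];  yi = [-7; -1; 0];   % data of Example 3.1
    x  = PolyM(ti, yi);                   % monomial coefficients
    t  = linspace(-2, 1, 50);
    pt = zeros(size(t));
    for k = 1:length(t)
        pt(k) = HornerM(t(k), x);         % one Horner evaluation per grid point
    end
    plot(ti, yi, 'o', t, pt, '-')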

3.3 Lagrange Polynomial

In order to avoid the drawbacks of the monomial representation, one can try to write down the interpolating polynomial in such a way that the linear system of equations is much better conditioned and much easier to solve. The simplest ansatz in this respect would be if the matrix A in the linear system were the identity matrix. In that case we would simply obtain x_i = y_i, so that no computations are involved at all. Such a representation exists. It is called Lagrange interpolation. The interpolation polynomial is written down as

    p_{n-1}(t) = y_1 l_1(t) + ... + y_n l_n(t).

The polynomials l_j(t) are chosen in such a way that

    l_j(t_i) = 1 if i = j,   l_j(t_i) = 0 if i ≠ j.

If this property holds, obviously p_{n-1}(t_i) = y_i. The polynomials l_j(t) can be written down explicitly:

    l_j(t) = prod_{k=1, k≠j}^{n} (t - t_k) / (t_j - t_k).

Example 3.2. We take the same data as before: (-2,-7), (0,-1), (1,0). The Lagrange polynomial is

    p(t) = y_1 (t - t_2)(t - t_3) / ((t_1 - t_2)(t_1 - t_3))
         + y_2 (t - t_1)(t - t_3) / ((t_2 - t_1)(t_2 - t_3))
         + y_3 (t - t_1)(t - t_2) / ((t_3 - t_1)(t_3 - t_2)).

For our particular data this becomes

    p(t) = -7 · t(t - 1)/6 - 1 · (t + 2)(t - 1)/(-2) + 0 · (t + 2)t/3
         = -(7/6) t(t - 1) + (1/2)(t + 2)(t - 1).

Since the interpolating polynomial is unique, this is just the same polynomial as in the previous example, written down in a different way.

Advantages of Lagrange interpolation:
- The interpolation polynomial can be written down without solving a linear system of equations.
- The basis polynomials l_j(t) depend only on t_1, ..., t_n and not on the values y_i.

Disadvantages of Lagrange interpolation:
- It is extremely expensive to evaluate the polynomial.
- The Lagrange polynomial is only used for theoretical considerations.
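The notes give no code for the Lagrange form; the following minimal sketch (my own) shows how a direct evaluation would look and why it is expensive: every single evaluation runs over a double loop, i.e. it costs O(n^2) operations per point.

    function pt = LagrangeEval(t, ti, yi)
    % Evaluate the Lagrange form of the interpolating polynomial at the point t
    % (illustrative sketch, not part of the original notes)
    n  = length(ti);
    pt = 0;
    for j = 1:n
        lj = 1;
        for k = [1:j-1, j+1:n]
            lj = lj*(t - ti(k))/(ti(j) - ti(k));   % build the basis polynomial l_j(t)
        end
        pt = pt + yi(j)*lj;
    end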

3.4 Newton Polynomial

There is a third possibility of writing down the interpolating polynomial which combines numerical stability with efficient algorithms. This is Newton interpolation. The ansatz for the polynomial reads

    p_{n-1}(t) = x_1 + x_2 (t - t_1) + x_3 (t - t_1)(t - t_2) + ... + x_n (t - t_1)(t - t_2) ··· (t - t_{n-1}).

Let us now write down the linear system of equations Ax = y which one obtains by inserting the interpolation conditions p_{n-1}(t_i) = y_i. The resulting matrix A is triangular!

    A = [ 1        0                    0                ...        0
          1   t_2 - t_1                 0                ...        0
          1   t_3 - t_1   (t_3 - t_1)(t_3 - t_2)         ...        0
          :        :                    :                           :
          1   t_n - t_1   (t_n - t_1)(t_n - t_2)         ...   (t_n - t_1) ··· (t_n - t_{n-1}) ]

Hence, the system is easy to solve, and the number of operations is only O(n^2). Moreover, the condition number is moderate.

Example 3.3. We take the same data as before, (-2,-7), (0,-1), (1,0). The linear system becomes

    [ 1  0  0 ] [ x_1 ]   [ -7 ]
    [ 1  2  0 ] [ x_2 ] = [ -1 ]
    [ 1  3  3 ] [ x_3 ]   [  0 ]

whose solution is easy to obtain: x_1 = -7, x_2 = 3, x_3 = -2/3. This leads to the interpolant

    p(t) = -7 + 3 (t + 2) - (2/3)(t + 2) t.

If one combines the terms with equal powers of t, the result is p(t) = -1 + (5/3)t - (2/3)t^2. This is identical to the monomial representation, as we expect because of the uniqueness of the interpolating polynomial.

An algorithm for computing the Newton polynomial could look like this:

    function x = PolyN(ti,yi)
    % Coefficients of the interpolating polynomial in the Newton basis
    n = length(ti);
    A = zeros(n);
    A(:,1) = ones(n,1);
    for i = 2:n
        A(i:end,i) = A(i:end,i-1).*(ti(i:end)-ti(i-1));
    end
    x = A\yi;
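A quick usage check (my own sketch, reusing the Example 3.3 data); since the system matrix is triangular, the backslash operator reduces to a simple forward substitution here.

    ti = [-2; 0; 1];  yi = [-7; -1; 0];   % data of Example 3.3
    x  = PolyN(ti, yi)                    % Newton coefficients: [-7; 3; -2/3]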

This algorithm uses the fact that MATLAB's backslash operator detects that A is a triangular matrix and exploits this structure. Implementations in other programming languages which are not vector-oriented use a different algorithm, based on divided differences, which makes the computation even more efficient (a small sketch is given at the end of this subsection).

The computation of a polynomial value for a given argument can be done by a modified Horner's algorithm. The expression used can be derived as follows (again for n = 6):

    p_5(t) = x_6 (t - t_5)(t - t_4)(t - t_3)(t - t_2)(t - t_1) + x_5 (t - t_4)(t - t_3)(t - t_2)(t - t_1)
             + x_4 (t - t_3)(t - t_2)(t - t_1) + x_3 (t - t_2)(t - t_1) + x_2 (t - t_1) + x_1
           = ( x_6 (t - t_5)(t - t_4)(t - t_3)(t - t_2) + x_5 (t - t_4)(t - t_3)(t - t_2)
             + x_4 (t - t_3)(t - t_2) + x_3 (t - t_2) + x_2 ) (t - t_1) + x_1
           = ( ( x_6 (t - t_5)(t - t_4)(t - t_3) + x_5 (t - t_4)(t - t_3) + x_4 (t - t_3) + x_3 ) (t - t_2) + x_2 ) (t - t_1) + x_1
           = ( ( ( ( x_6 (t - t_5) + x_5 ) (t - t_4) + x_4 ) (t - t_3) + x_3 ) (t - t_2) + x_2 ) (t - t_1) + x_1

A MATLAB implementation could look like this:

    function pt = HornerN(t,x,ti)
    % Evaluate the Newton form with coefficients x and nodes ti at the point t
    n = length(ti);
    pt = x(n);
    for i = n-1:-1:1
        pt = pt*(t-ti(i)) + x(i);
    end
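For completeness, here is a minimal sketch of the divided-difference variant mentioned above (my own implementation, not taken from the notes); it produces the same Newton coefficients as PolyN without building the matrix A.

    function x = DivDiff(ti, yi)
    % Newton coefficients via divided differences (illustrative sketch)
    n = length(ti);
    x = yi;
    for j = 2:n
        for i = n:-1:j
            % overwrite in place: x(i) becomes the divided difference f[t_{i-j+1},...,t_i]
            x(i) = (x(i) - x(i-1)) / (ti(i) - ti(i-j+1));
        end
    end

For the data of Example 3.3 this again returns [-7; 3; -2/3].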

3.5 Interpolating a Function

Here we shift our point of view a little bit. Instead of taking the data points (t_i, y_i) as given somehow, we assume that we want to interpolate a (maybe complicated) function by polynomials, such that the interpolant is a good approximation to that function. Let f(t) be the given function. The data points are now (t_i, f(t_i)), sampled at a number of selected arguments. One could ask whether a growing number of sample points leads to a better approximation of the function. The following example shows that this is not true in general.

Example 3.4 (Runge's example). The function we are interested in approximating by polynomials is

    f(t) = 1 / (1 + 25 t^2),   t in [-1, 1].

We choose n + 1 equidistant points in [-1, 1], that is,

    h = 2/n,   t_i = -1 + i h,   i = 0, ..., n.

Figure 2 contains the exact function as well as the interpolation results for n = 5 and n = 10.

Figure 2: Runge's example (the function together with the 5th order and 10th order interpolating polynomials).

Such large oscillations around the interval boundaries are called Runge's phenomenon. These oscillations are typical for interpolating polynomials of higher degree and equidistant interpolation nodes.
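The phenomenon is easy to reproduce with the routines from the previous sections; the following script is my own sketch (the degree and the plotting grid are chosen arbitrarily).

    f  = @(t) 1./(1 + 25*t.^2);      % Runge's function
    n  = 10;                         % interpolate at n+1 equidistant nodes
    ti = (-1 + 2*(0:n)/n)';
    yi = f(ti);
    x  = PolyN(ti, yi);
    t  = linspace(-1, 1, 400);
    pt = zeros(size(t));
    for k = 1:length(t)
        pt(k) = HornerN(t(k), x, ti);
    end
    plot(t, f(t), t, pt, '--', ti, yi, 'o')   % oscillations appear near t = -1 and t = 1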

They can only be avoided if special interpolation nodes (known as Chebyshev points) are chosen. In that case even convergence can be assured. But there is another possibility for avoiding a too high degree of the interpolating polynomial: one can apply piecewise polynomials of lower degree. This will be the topic of later lectures.

3.6 Symptoms of Ill-Conditioning

We will describe briefly how ill-conditioning can affect the results of polynomial interpolation. The following subsection is motivated by Gerd Eriksson, Numeriska algoritmer med MATLAB.

We will study how the interpolation algorithms behave for the input data given in the script below: six points (t_i, y_i) whose arguments t_i are of magnitude around 100 and whose values y_i are of magnitude around 1. We compare two different algorithms for computing the 5th degree polynomial which interpolates the given data. The first algorithm is Newton interpolation; it is easy to see that we obtain a good approximation. The second algorithm is the monomial representation. In Figure 3 we plot the two curves which result from the two interpolation algorithms. Note that, in theory, they should be identical.

Figure 3: A bad interpolation algorithm (plot titled "Two Interpolation Algorithms", curves labeled Monomial and Newton).

For completeness, I include the script which I used to obtain the plot:

    clear
    npl = 100;
    ti = [100.5; 101.5; 102.5; 103; 104; 105];
    yi = [3; 2.5; 1.5; 1; 2; 0];
    xn = PolyN(ti,yi);
    xm = PolyM(ti,yi);
    L = ti(end)-ti(1);
    h = L/npl;
    tpl = (0:npl)'*h + ti(1);
    fn = zeros(size(tpl));
    fm = zeros(size(tpl));
    for i = 1:length(tpl)
        fn(i) = HornerN(tpl(i),xn,ti);
        fm(i) = HornerM(tpl(i),xm);
    end
    plot(tpl,fm,tpl,fn,'--')
    title('Two Interpolation Algorithms')
    legend('Monomial','Newton',0)

We will investigate the reason for this unexpected result. There are three main algorithmic steps involved: the computation of the system matrix A, the solution of the linear system Ax = y, and the computation of the polynomial values.

The columns of the Vandermonde matrix consist of successively increasing powers of the arguments. The magnitude of the elements increases from one in the first column to around 10^10 in the last column. This matrix is very ill-conditioned; an explicit computation gives a condition number beyond the reciprocal of the machine precision.

The next step consists of the solution of the linear system. The accuracy of the computer is not sufficient to provide a reliable answer. MATLAB gives a warning during program execution; this should be taken seriously! Taking the condition number literally, we would not expect any valid digit in the final result. The figure above is a little bit more optimistic: in fact, one can see experimentally that, in practice, one correct digit can be expected. The computed coefficients themselves are not reliable, but even printing them with three digits shows the huge difference in magnitude between them.
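The conditioning claim is easy to check numerically; the following is my own sketch (it rebuilds the two system matrices explicitly, since PolyM and PolyN only return the coefficients).

    ti = [100.5; 101.5; 102.5; 103; 104; 105];
    Am = fliplr(vander(ti));                  % Vandermonde (monomial) matrix
    An = zeros(6); An(:,1) = 1;
    for i = 2:6
        An(i:end,i) = An(i:end,i-1).*(ti(i:end)-ti(i-1));   % Newton matrix
    end
    cond(Am)    % enormous: the monomial system is hopelessly ill-conditioned
    cond(An)    % moderate by comparison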

The uncertainty in the coefficients is not the reason for the jagged behavior of the plot. Even if the coefficients are perturbed, a fifth degree polynomial is a smooth curve; it cannot have more than two maxima and two minima. Therefore, the computation by Horner's algorithm must be prone to errors. In order to show what is happening, let us compute the polynomial value at one of the plot points. The computed value is of order one, but it is in error by 44%. The intermediate results in algorithm HornerM are of huge magnitude and of differing sign; it is only in the last step that a result of order one appears. Compared to the intermediate results, which are many orders of magnitude larger, this indicates a catastrophic cancellation of terms. The cancellation appears even in the previous steps, although there it is not seen as explicitly.
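The effect can be isolated in a two-line experiment (my own illustration with made-up numbers): subtracting two nearly equal numbers of magnitude 10^10 wipes out most of the significant digits, just as in the last Horner step above.

    a = 1.23456789012e10;          % two numbers of magnitude 1e10 that agree
    b = 1.23456789011e10;          % in their first eleven significant digits
    a - b                          % result of order 0.1; the rounding error of a and b
                                   % (about 1e-6 in absolute value) becomes a relative
                                   % error of about 1e-5, so roughly 11 digits are lost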
