Polynomial Interpolation

KEY WORDS: interpolation, polynomial interpolation, Lagrange form of interpolation, Newton form of interpolation

GOAL: To understand the significance of interpolation. To understand various forms of polynomial interpolation.

1 Interpolation: What is it?

In the problem of data approximation, we are given some discrete points

(x_0, y_0), (x_1, y_1), ..., (x_N, y_N)    (1)

and are asked to find a function φ(x) that characterizes the trend of the data points. For example, for the following data set:

x:  x_0 = -π/2,  x_1 = 0,  x_2 = π/2
y:  y_0 = 0,     y_1 = 1,  y_2 = 0

the function φ(x) = 1 - (2x/π)^2 satisfies the conditions φ(x_i) = y_i for i = 0, 1, 2, and therefore may tell us something about the trend of the data. The trigonometric function φ(x) = cos(x) also satisfies the conditions φ(x_i) = y_i for i = 0, 1, 2, and may also represent a certain trend of the data. Obviously, these two functions behave quite differently for large values of x. You may ask which one of the two curves is acceptable. It is not clear what the correct answer to this question is, because we know nothing else about the data. It is possible that both are acceptable, or it might be the case that neither is. Notice that one of the functions is a polynomial and the other is a trigonometric function. Our objective in this unit is to illustrate how polynomials can be used to predict the trend of a data set. This does not mean that polynomials will always do a better job than other functions such as trigonometric functions. The procedure that we develop can be generalized to other functions without too much difficulty.

2 The Interpolation Problem

Suppose we wish to approximate a complicated function f(x) by a simpler function p(x) such that p(x) goes through the data {(x_i, f_i)}, i = 0, 1, ..., N. (Throughout this discussion, we assume that the data are obtained by evaluating the function f(x); that is, we assume that f_i = f(x_i), i = 0, 1, ..., N.) This means that

p(x_i) = f_i,  i = 0, 1, ..., N,

and we say that p(x) interpolates the data. This is a system of N + 1 equations comprising the interpolating conditions.

In addition to replacing a complicated function by a simpler one, other purposes for interpolation include:

- plotting a smooth curve through discrete data points
- reading between the lines of a table
- approximating a mathematical function by a simpler one
- differentiating or integrating tabular data

A common choice of the interpolating function p(x) is a polynomial, because polynomials

- are simple in form
- can be calculated by elementary operations
- are free from singular points
- are unrestricted as to the range of values
- may be differentiated and integrated without difficulty
- have coefficients that enter linearly

For special cases, however, other types of interpolants may be advantageous. For example, if the underlying function is periodic, it is natural to employ trigonometric interpolation. The discussion of polynomial interpolation in the following sections revolves around how an interpolating polynomial can be represented, computed, and evaluated.

3 Polynomial Interpolation

3.1 The Power Series Form of the Interpolating Polynomial

Consider the power series form of the polynomial p_M(x) of degree M:

p_M(x) = a_0 + a_1 x + ... + a_M x^M

If the polynomial interpolates the given data {(x_i, f_i)}, i = 0, 1, ..., N, then the interpolating conditions form a system of N + 1 linear equations

p_M(x_0) ≡ a_0 + a_1 x_0 + ... + a_M x_0^M = f_0
p_M(x_1) ≡ a_0 + a_1 x_1 + ... + a_M x_1^M = f_1
...
p_M(x_N) ≡ a_0 + a_1 x_N + ... + a_M x_N^M = f_N

for the M + 1 coefficients a_0, a_1, ..., a_M.

How should the degree M be chosen? Since the polynomial must satisfy N + 1 interpolation equations, we might expect to need N + 1 parameters to satisfy the conditions. If the interpolating polynomial is allowed to be of degree higher than N, there are many polynomials satisfying the interpolation conditions. It is easy to show by example that if the degree is less than N, it may not be possible to satisfy all of the conditions. Degree N is just right; then there is always a unique solution of the interpolation problem, as we now demonstrate.

When M = N, the coefficient matrix of the linear system of interpolating conditions is the Vandermonde matrix

        [ 1   x_0   x_0^2   ...   x_0^N ]
        [ 1   x_1   x_1^2   ...   x_1^N ]
V_N  =  [ .    .     .             .    ]
        [ 1   x_N   x_N^2   ...   x_N^N ]

It can be shown that the determinant of V_N is

det(V_N) = ∏_{i>j} (x_i - x_j)

Note that det(V_N) ≠ 0, that is, V_N is nonsingular, if and only if the nodes {x_i} are distinct; that is, if and only if x_i ≠ x_j whenever i ≠ j. Thus, when the nodes are distinct, there exist unique coefficients a_0, a_1, ..., a_N.

(An alternative approach to showing that V_N is nonsingular is the following. Suppose that there is a vector c = [c_0, c_1, ..., c_N]^T such that V_N c = 0. It follows that the polynomial q(x) = c_0 + c_1 x + c_2 x^2 + ... + c_N x^N is zero at x = x_0, ..., x_N. This says that we have a polynomial of degree at most N with N + 1 roots. The only way this can happen is if q is the zero polynomial, that is, when c = 0. Thus V_N is nonsingular.)

EXAMPLE 1: Determine the power series form of the quadratic interpolating polynomial p_2(x) = a_0 + a_1 x + a_2 x^2 interpolating the data (-1, 0), (0, 1) and (1, 4).

Solution: The interpolating conditions, in the order of the data, are

p_2(-1) ≡ a_0 + (-1)a_1 + (1)a_2 = 0
p_2(0)  ≡ a_0 = 1
p_2(1)  ≡ a_0 + (1)a_1 + (1)a_2 = 4

On solving this linear system of equations, we obtain a_0 = 1, a_1 = 2 and a_2 = 1. So, in power series form, the interpolating polynomial is

p_2(x) = 1 + 2x + x^2

Of course, polynomials can be written in many different ways, some more appropriate to a given task than others. For example, when interpolating the data {(x_i, f_i)}, i = 0, 1, ..., N, a Vandermonde system determines the values of the coefficients a_0, a_1, ..., a_N in the power series form. The number of floating point computations needed by Gauss elimination to solve this system grows like N^3 with the degree N of the polynomial. However, this cost can be significantly reduced by exploiting properties of the Vandermonde coefficient matrix. Another way to reduce the cost of determining the interpolating polynomial is to change the way the interpolating polynomial is represented, as we shall see subsequently.

3.2 The Lagrange Form of the Interpolating Polynomial

An interpolating polynomial can be written in various forms, the most common being the Lagrange form and the Newton form. The Newton form is probably the most convenient and efficient; however, conceptually, the Lagrange form has several advantages. We begin with the Lagrange form since it is easier to understand.

To illustrate the idea behind the Lagrange form, we consider the two points (x_0, f_0), (x_1, f_1) with x_0 ≠ x_1. The equation of the straight line through these points is

p_1(x) = ((x_1 - x)f_0 + (x - x_0)f_1) / (x_1 - x_0)

or

p_1(x) = f_0 (x - x_1)/(x_0 - x_1) + f_1 (x - x_0)/(x_1 - x_0) ≡ l_0^(1)(x) f_0 + l_1^(1)(x) f_1

where

l_0^(1)(x) = (x - x_1)/(x_0 - x_1),  l_1^(1)(x) = (x - x_0)/(x_1 - x_0)

Note that each of the polynomials l_0^(1)(x), l_1^(1)(x) is a linear function, and

l_i^(1)(x_j) = 1 if i = j, and 0 if i ≠ j.
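The power series (Vandermonde) approach above is easy to sketch in code. The following is a minimal illustration, not a library routine: it solves the Vandermonde system by Gauss elimination with partial pivoting, using exact rational arithmetic so the coefficients come out exactly. The three sample points are assumed here for illustration.

```python
from fractions import Fraction

def solve(A, b):
    """Solve A x = b by Gauss elimination with partial pivoting (exact Fractions)."""
    n = len(b)
    A = [[Fraction(v) for v in row] for row in A]
    b = [Fraction(v) for v in b]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):          # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# Vandermonde system V a = f for the sample points (-1, 0), (0, 1), (1, 4)
xs, fs = [-1, 0, 1], [0, 1, 4]
V = [[Fraction(xi) ** j for j in range(len(xs))] for xi in xs]
a = solve(V, fs)
print(a)  # power series coefficients a_0, a_1, a_2
```

For these points the solver returns a_0 = 1, a_1 = 2, a_2 = 1, i.e. p(x) = 1 + 2x + x^2; exact arithmetic sidesteps the notorious ill-conditioning of Vandermonde matrices, which a floating point solve would not.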

Consider now the data {(x_i, f_i)}, i = 0, 1, 2. The Lagrange form of the quadratic polynomial interpolating this data may be written

p_2(x) = l_0^(2)(x) f_0 + l_1^(2)(x) f_1 + l_2^(2)(x) f_2

where

l_0^(2)(x) = (x - x_1)(x - x_2) / ((x_0 - x_1)(x_0 - x_2))
l_1^(2)(x) = (x - x_0)(x - x_2) / ((x_1 - x_0)(x_1 - x_2))
l_2^(2)(x) = (x - x_0)(x - x_1) / ((x_2 - x_0)(x_2 - x_1))

Clearly, l_0^(2), l_1^(2), l_2^(2) are quadratic and

l_i^(2)(x_j) = 1 if i = j, and 0 if i ≠ j.

More generally, consider the polynomial p_N(x) of degree N interpolating the data {(x_i, f_i)}, i = 0, 1, ..., N:

p_N(x) = Σ_{i=0}^{N} f_i l_i^(N)(x)

where the l_k^(N)(x) are polynomials of degree N with the property that

l_k^(N)(x_j) = 1 if k = j, and 0 if k ≠ j,

so that clearly p_N(x_i) = f_i, i = 0, 1, ..., N. In this case,

l_k^(N)(x) = ((x - x_0)(x - x_1) ... (x - x_{k-1})(x - x_{k+1}) ... (x - x_N)) / ((x_k - x_0)(x_k - x_1) ... (x_k - x_{k-1})(x_k - x_{k+1}) ... (x_k - x_N)) = ∏_{j=0, j≠k}^{N} (x - x_j)/(x_k - x_j)

The polynomial l_k^(N) is called a Lagrange basis polynomial. The Lagrange basis polynomials are of exact degree N, but the interpolant can be of lower degree. Indeed, it is important to appreciate that when f(x) is a polynomial of degree at most N, the interpolant must be f(x) itself. After all, the polynomial f(x) interpolates itself and its degree is at most N, so, by uniqueness, p_N(x) = f(x).

EXAMPLE 2: Find the cubic polynomial which interpolates the data (1, 1), (2, 3), (-1, 3), (4, 4).

Solution: First we calculate the Lagrange basis polynomials:

l_0^(3)(x) = ((x - 2)(x + 1)(x - 4)) / ((1 - 2)(1 + 1)(1 - 4)) = (1/6)(x - 2)(x + 1)(x - 4)
l_1^(3)(x) = ((x - 1)(x + 1)(x - 4)) / ((2 - 1)(2 + 1)(2 - 4)) = -(1/6)(x - 1)(x + 1)(x - 4)
l_2^(3)(x) = ((x - 1)(x - 2)(x - 4)) / ((-1 - 1)(-1 - 2)(-1 - 4)) = -(1/30)(x - 1)(x - 2)(x - 4)
l_3^(3)(x) = ((x - 1)(x - 2)(x + 1)) / ((4 - 1)(4 - 2)(4 + 1)) = (1/30)(x - 1)(x - 2)(x + 1)

Then we form the interpolating polynomial,

p_3(x) = (1/6)(x - 2)(x + 1)(x - 4) - (1/2)(x - 1)(x + 1)(x - 4) - (1/10)(x - 1)(x - 2)(x - 4) + (2/15)(x - 1)(x - 2)(x + 1)
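The Lagrange form lends itself to direct evaluation without solving any linear system. The sketch below (hypothetical helper names, sample data assumed for illustration, exact rational arithmetic) builds each basis polynomial l_k on the fly and confirms the defining property p_N(x_i) = f_i:

```python
from fractions import Fraction

def lagrange_basis(xs, k, x):
    """Evaluate the k-th Lagrange basis polynomial l_k at x for the nodes xs."""
    out = Fraction(1)
    for j, xj in enumerate(xs):
        if j != k:
            out *= Fraction(x - xj, xs[k] - xj)
    return out

def lagrange_interp(xs, fs, x):
    """Evaluate the Lagrange form of the interpolating polynomial at x."""
    return sum(fs[k] * lagrange_basis(xs, k, x) for k in range(len(xs)))

# sample cubic data (assumed for illustration)
xs, fs = [1, 2, -1, 4], [1, 3, 3, 4]

# the interpolant reproduces every data value exactly
assert all(lagrange_interp(xs, fs, xi) == fi for xi, fi in zip(xs, fs))
print(lagrange_interp(xs, fs, 0))  # a value between the nodes
```

Note the O(N^2) work per evaluation point; when many evaluations are needed, the Newton form discussed next (or a barycentric variant) is the more economical choice.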

Unlike the power series form of the interpolating polynomial, the Lagrange form has no coefficients whose values must be determined. In a sense, Lagrange interpolation provides an explicit solution of the interpolating conditions. In summary, the Lagrange form of the interpolating polynomial is useful theoretically because it

- does not require solving a linear system
- explicitly shows how each data value f_i affects the overall interpolating polynomial

3.3 The Newton Form of the Interpolating Polynomial

Consider the problem of determining the quadratic polynomial interpolating the data (x_0, f_0), (x_1, f_1), (x_2, f_2). Using the data written in this order, the Newton form of the quadratic interpolating polynomial is

p_2(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1)

In this case, the interpolating conditions are

p_2(x_0) ≡ b_0 = f_0
p_2(x_1) ≡ b_0 + b_1(x_1 - x_0) = f_1
p_2(x_2) ≡ b_0 + b_1(x_2 - x_0) + b_2(x_2 - x_0)(x_2 - x_1) = f_2

or

[ 1        0                 0              ] [ b_0 ]   [ f_0 ]
[ 1   (x_1 - x_0)            0              ] [ b_1 ] = [ f_1 ]
[ 1   (x_2 - x_0)   (x_2 - x_0)(x_2 - x_1)  ] [ b_2 ]   [ f_2 ]

Note that the coefficient matrix of this system is lower triangular. When the nodes {x_i} are distinct, the diagonal elements of this lower triangular matrix are nonzero. Consequently, the linear system has a unique solution, which may be determined by forward substitution. In fact,

b_0 = f_0,  b_1 = (f_1 - b_0)/(x_1 - x_0),  b_2 = (f_2 - b_0 - b_1(x_2 - x_0))/((x_2 - x_0)(x_2 - x_1))    (2)

EXAMPLE 3: Determine the Newton form of the quadratic interpolating polynomial

p_2(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1)

of the data (-1, 0), (0, 1) and (1, 4).

Solution: Taking the data points in their order in the data, we have x_0 = -1, x_1 = 0 and x_2 = 1. The interpolating conditions are

p_2(-1) ≡ b_0 = 0
p_2(0)  ≡ b_0 + (0 - (-1))b_1 = 1
p_2(1)  ≡ b_0 + (1 - (-1))b_1 + (1 - (-1))(1 - 0)b_2 = 4

and this lower triangular system may be solved to give b_0 = 0, b_1 = 1 and b_2 = 1. So, the Newton form of the quadratic interpolating polynomial is

p_2(x) = 0 + 1(x + 1) + 1(x + 1)(x - 0)

After rearrangement, we observe that this is the same polynomial as the power series form in Example 1. (You may wish to verify this.)

EXAMPLE: For the data (1, 2), (3, 3) and (5, 4), using the data points in the order given, the Newton form of the interpolating polynomial is

p_2(x) = b_0 + b_1(x - 1) + b_2(x - 1)(x - 3)

and the interpolating conditions are

p_2(1) ≡ b_0 = 2
p_2(3) ≡ b_0 + b_1(3 - 1) = 3
p_2(5) ≡ b_0 + b_1(5 - 1) + b_2(5 - 1)(5 - 3) = 4

This lower triangular linear system may be solved by forward substitution, giving

b_0 = 2
b_1 = (3 - b_0)/(3 - 1) = 1/2
b_2 = (4 - b_0 - (5 - 1)b_1)/((5 - 1)(5 - 3)) = 0/8 = 0

Consequently, the Newton form of the interpolating polynomial is

p_2(x) = 2 + (1/2)(x - 1) + 0(x - 1)(x - 3)

Generally, this interpolating polynomial is of degree 2, but its exact degree may be less. Here, rearranging into power series form, we have

p_2(x) = 3/2 + x/2

and the exact degree is 1; this happens because the data (1, 2), (3, 3) and (5, 4) are collinear.
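The forward substitution formulas (2) extend to any number of points: each b_i is found by evaluating the partial Newton form at x_i and solving for the one unknown coefficient. A minimal sketch (hypothetical helper name; the collinear sample data (1, 2), (3, 3), (5, 4) are used only as a check):

```python
from fractions import Fraction

def newton_coeffs(xs, fs):
    """Coefficients b_0, ..., b_N of the Newton form, by forward substitution
    on the lower triangular system of interpolating conditions."""
    n = len(xs)
    b = []
    for i in range(n):
        # evaluate the already-known part of the Newton form at x_i ...
        val, prod = Fraction(0), Fraction(1)
        for k in range(i):
            val += b[k] * prod
            prod *= xs[i] - xs[k]
        # ... and solve the i-th interpolating condition for b_i
        b.append(Fraction(fs[i] - val, prod))
    return b

print(newton_coeffs([1, 3, 5], [2, 3, 4]))  # b_2 = 0: the data are collinear
```

The zero leading coefficient signals that the quadratic degenerates to a straight line, exactly as in the collinear example.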

4 The Newton Divided Difference Interpolating Polynomial

In this section, we derive the Newton form of the interpolating polynomial of degree N given the data {(x_i, f_i)}, i = 0, 1, ..., N, and develop an efficient procedure for the determination of its coefficients.

Let p_k(x) denote the polynomial of degree at most k interpolating the data {(x_i, f_i)}, i = 0, 1, ..., k. We now try to determine a polynomial P_k of degree k such that

p_k(x) = p_{k-1}(x) + P_k(x)    (3)

Since

p_k(x_i) = f_i, i = 0, 1, ..., k,  and  p_{k-1}(x_i) = f_i, i = 0, 1, ..., k-1,

then (3) implies that

P_k(x_i) = p_k(x_i) - p_{k-1}(x_i) = 0,  i = 0, 1, ..., k-1.

Thus, for some constant b_k,

P_k(x) = b_k (x - x_0)(x - x_1)(x - x_2) ... (x - x_{k-1})    (4)

Again using (3), and adding these equations, gives

p_k(x) - p_0(x) = P_1(x) + P_2(x) + ... + P_k(x),

where p_0(x) = b_0. Thus, from (4),

p_k(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1) + ... + b_k(x - x_0)(x - x_1) ... (x - x_{k-1})

To find the coefficients b_i, we see that the coefficient of x^k in p_k(x) is b_k, and since the interpolating polynomial is unique, b_k must also be the coefficient of x^k in the Lagrange form of the interpolating polynomial p_k(x), which is the coefficient of x^k in

Σ_{i=0}^{k} f_i ((x - x_0) ... (x - x_{i-1})(x - x_{i+1}) ... (x - x_k)) / ((x_i - x_0) ... (x_i - x_{i-1})(x_i - x_{i+1}) ... (x_i - x_k));

that is,

b_k = Σ_{i=0}^{k} f_i / ((x_i - x_0) ... (x_i - x_{i-1})(x_i - x_{i+1}) ... (x_i - x_k))    (5)

We denote the right hand side of (5) by f[x_0, x_1, ..., x_k], the k-th divided difference of f at x_0, x_1, ..., x_k. Thus:

b_k = f[x_0, x_1, ..., x_k],  k = 0, 1, ..., N    (6)

Accordingly, we have

p_N(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1) + ... + b_N(x - x_0)(x - x_1) ... (x - x_{N-1})    (7)

This is Newton's divided difference interpolating polynomial. Note that by (5), f[x_0, x_1, ..., x_k] is symmetric in its arguments, that is,

f[x_0, ..., x_i, ..., x_j, ..., x_k] = f[x_0, ..., x_j, ..., x_i, ..., x_k]

We now derive a recurrence relation between the divided differences of f which is useful in computing high order differences. Note that we could equally well have written p_N(x) in the form

p_N(x) = c_0 + c_1(x - x_N) + c_2(x - x_N)(x - x_{N-1}) + ... + c_N(x - x_N)(x - x_{N-1}) ... (x - x_1)    (8)

where

c_i = f[x_N, x_{N-1}, ..., x_{N-i}],  i = 0, 1, ..., N    (9)

In particular, the coefficients of x^N in (7) and (8) are identical, that is, b_N = c_N. Equating (7) and (8), and taking all the terms to one side, yields

b_N (x - x_1)(x - x_2) ... (x - x_{N-1}) {(x - x_0) - (x - x_N)} + x^{N-1}(b_{N-1} - c_{N-1}) + a polynomial of degree at most N-2 ≡ 0    (10)

That is,

b_N (x - x_1)(x - x_2) ... (x - x_{N-1})(x_N - x_0) + (b_{N-1} - c_{N-1})x^{N-1} + a polynomial of degree at most N-2 ≡ 0    (11)

or

{b_N (x_N - x_0) + (b_{N-1} - c_{N-1})} x^{N-1} + a polynomial of degree at most N-2 ≡ 0

Since the coefficient of x^{N-1} is zero, we have

b_N (x_N - x_0) + b_{N-1} - c_{N-1} = 0,

from which it follows that

b_N = (c_{N-1} - b_{N-1}) / (x_N - x_0)    (12)

Using (6) and (9) in (12), we have

f[x_0, x_1, ..., x_N] = (f[x_N, x_{N-1}, ..., x_1] - f[x_0, x_1, ..., x_{N-1}]) / (x_N - x_0),

and using the symmetry property of divided differences,

f[x_N, x_{N-1}, ..., x_1] = f[x_1, x_2, ..., x_N]
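The symmetry of f[x_0, ..., x_k] in its arguments follows at once from the sum (5), and is easy to confirm numerically. The following sketch (hypothetical helper name, sample points assumed for illustration) evaluates the sum (5) for every ordering of three points and checks that the value never changes:

```python
from fractions import Fraction
from itertools import permutations

def divided_difference(xs, fs):
    """k-th divided difference f[x_0, ..., x_k] from the explicit sum (5)."""
    n = len(xs)
    total = Fraction(0)
    for i in range(n):
        denom = 1
        for j in range(n):
            if j != i:
                denom *= xs[i] - xs[j]
        total += Fraction(fs[i], denom)
    return total

# symmetry: the divided difference is unchanged by any reordering of the points
pts = [(-1, 0), (0, 1), (1, 4)]  # sample data (assumed)
vals = {divided_difference([p[0] for p in q], [p[1] for p in q])
        for q in permutations(pts)}
assert len(vals) == 1
print(vals.pop())
```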

and hence

f[x_0, x_1, ..., x_N] = (f[x_1, x_2, ..., x_N] - f[x_0, x_1, ..., x_{N-1}]) / (x_N - x_0)    (13)

In practice, (7) is used in conjunction with a divided difference table based on (13):

x      f[ ]       f[, ]         f[,, ]            f[,,, ]
x_0    f[x_0]
                  f[x_0,x_1]
x_1    f[x_1]                   f[x_0,x_1,x_2]
                  f[x_1,x_2]                      f[x_0,x_1,x_2,x_3]
x_2    f[x_2]                   f[x_1,x_2,x_3]
                  f[x_2,x_3]
x_3    f[x_3]

The divided differences on the upper diagonal are the coefficients in (7). The first three divided differences are

f[x_i] = f(x_i)    (14)
f[x_i, x_{i+1}] = (f[x_{i+1}] - f[x_i]) / (x_{i+1} - x_i)    (15)
f[x_i, x_{i+1}, x_{i+2}] = (f[x_{i+1}, x_{i+2}] - f[x_i, x_{i+1}]) / (x_{i+2} - x_i)    (16)

EXAMPLE 4: Construct a divided difference table for a function f given at four points x_0, x_1, x_2, x_3, and determine the Newton divided difference interpolating polynomial.

Solution: The divided difference table has the form shown above. Reading the coefficients b_k = f[x_0, ..., x_k] from its upper diagonal, we obtain

p_3(x) = f[x_0] + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1) + f[x_0, x_1, x_2, x_3](x - x_0)(x - x_1)(x - x_2)
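The whole table can be generated column by column with the recurrence (13), each new column coming from differences of the previous one. A minimal sketch (hypothetical helper name, sample data assumed for illustration):

```python
from fractions import Fraction

def divided_difference_table(xs, fs):
    """Columns of the divided difference table, built with the recurrence
    f[x_i, ..., x_{i+k}] = (f[x_{i+1}, ..., x_{i+k}] - f[x_i, ..., x_{i+k-1}])
                           / (x_{i+k} - x_i)."""
    n = len(xs)
    table = [[Fraction(v) for v in fs]]  # zeroth column: f[x_i] = f(x_i)
    for k in range(1, n):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / Fraction(xs[i + k] - xs[i])
                      for i in range(n - k)])
    return table

# the Newton coefficients b_k are the top (upper diagonal) entries of each column
xs, fs = [1, 2, -1, 4], [1, 3, 3, 4]  # sample data (assumed)
table = divided_difference_table(xs, fs)
b = [col[0] for col in table]
print(b)
```

Building the table costs O(N^2) divisions, and appending one more data point only adds one entry per column, which is why the Newton form is so convenient to extend.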

In summary, for the Newton form of the interpolating polynomial, it is easy to

- determine the coefficients
- evaluate the polynomial at specified values via nested multiplication
- extend the polynomial to incorporate additional interpolation points and data

If the interpolating polynomial is to be evaluated at many points, generally it is best first to determine its Newton form and then to use the nested multiplication scheme to evaluate the interpolating polynomial at each desired point.

5 Reading Assignments/References

About Lagrange interpolation: follow the link en.wikipedia.org/wiki/Lagrange_polynomial
About Newton's interpolation: follow the link en.wikipedia.org/wiki/Newton_polynomial
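The nested multiplication scheme mentioned above evaluates the Newton form with one multiplication and one addition per coefficient, analogous to Horner's rule for the power series form. A minimal sketch (hypothetical helper name; the sample coefficients and nodes are assumed for illustration):

```python
def newton_eval(xs, b, x):
    """Evaluate the Newton form
    p(x) = b_0 + (x - x_0)(b_1 + (x - x_1)(b_2 + ...))
    by nested multiplication, working from the innermost coefficient outward."""
    result = b[-1]
    for k in range(len(b) - 2, -1, -1):
        result = b[k] + (x - xs[k]) * result
    return result

# sample Newton form (assumed): p(x) = 2 + (1/2)(x - 1), nodes 1 and 3
print(newton_eval([1, 3], [2, 0.5], 5))  # -> 4.0
```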


More information

LS.6 Solution Matrices

LS.6 Solution Matrices LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

More information

An Overview of the Finite Element Analysis

An Overview of the Finite Element Analysis CHAPTER 1 An Overview of the Finite Element Analysis 1.1 Introduction Finite element analysis (FEA) involves solution of engineering problems using computers. Engineering structures that have complex geometry

More information

1 Solving LPs: The Simplex Algorithm of George Dantzig

1 Solving LPs: The Simplex Algorithm of George Dantzig Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.

More information

Methods for Finding Bases

Methods for Finding Bases Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,

More information

Basics of Polynomial Theory

Basics of Polynomial Theory 3 Basics of Polynomial Theory 3.1 Polynomial Equations In geodesy and geoinformatics, most observations are related to unknowns parameters through equations of algebraic (polynomial) type. In cases where

More information

Nonlinear Programming Methods.S2 Quadratic Programming

Nonlinear Programming Methods.S2 Quadratic Programming Nonlinear Programming Methods.S2 Quadratic Programming Operations Research Models and Methods Paul A. Jensen and Jonathan F. Bard A linearly constrained optimization problem with a quadratic objective

More information

8 Square matrices continued: Determinants

8 Square matrices continued: Determinants 8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You

More information

Integer roots of quadratic and cubic polynomials with integer coefficients

Integer roots of quadratic and cubic polynomials with integer coefficients Integer roots of quadratic and cubic polynomials with integer coefficients Konstantine Zelator Mathematics, Computer Science and Statistics 212 Ben Franklin Hall Bloomsburg University 400 East Second Street

More information

HOMEWORK 5 SOLUTIONS. n!f n (1) lim. ln x n! + xn x. 1 = G n 1 (x). (2) k + 1 n. (n 1)!

HOMEWORK 5 SOLUTIONS. n!f n (1) lim. ln x n! + xn x. 1 = G n 1 (x). (2) k + 1 n. (n 1)! Math 7 Fall 205 HOMEWORK 5 SOLUTIONS Problem. 2008 B2 Let F 0 x = ln x. For n 0 and x > 0, let F n+ x = 0 F ntdt. Evaluate n!f n lim n ln n. By directly computing F n x for small n s, we obtain the following

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

1 Error in Euler s Method

1 Error in Euler s Method 1 Error in Euler s Method Experience with Euler s 1 method raises some interesting questions about numerical approximations for the solutions of differential equations. 1. What determines the amount of

More information

calculating the result modulo 3, as follows: p(0) = 0 3 + 0 + 1 = 1 0,

calculating the result modulo 3, as follows: p(0) = 0 3 + 0 + 1 = 1 0, Homework #02, due 1/27/10 = 9.4.1, 9.4.2, 9.4.5, 9.4.6, 9.4.7. Additional problems recommended for study: (9.4.3), 9.4.4, 9.4.9, 9.4.11, 9.4.13, (9.4.14), 9.4.17 9.4.1 Determine whether the following polynomials

More information

(Refer Slide Time: 1:42)

(Refer Slide Time: 1:42) Introduction to Computer Graphics Dr. Prem Kalra Department of Computer Science and Engineering Indian Institute of Technology, Delhi Lecture - 10 Curves So today we are going to have a new topic. So far

More information

Arithmetic and Algebra of Matrices

Arithmetic and Algebra of Matrices Arithmetic and Algebra of Matrices Math 572: Algebra for Middle School Teachers The University of Montana 1 The Real Numbers 2 Classroom Connection: Systems of Linear Equations 3 Rational Numbers 4 Irrational

More information

Zeros of a Polynomial Function

Zeros of a Polynomial Function Zeros of a Polynomial Function An important consequence of the Factor Theorem is that finding the zeros of a polynomial is really the same thing as factoring it into linear factors. In this section we

More information

ASEN 3112 - Structures. MDOF Dynamic Systems. ASEN 3112 Lecture 1 Slide 1

ASEN 3112 - Structures. MDOF Dynamic Systems. ASEN 3112 Lecture 1 Slide 1 19 MDOF Dynamic Systems ASEN 3112 Lecture 1 Slide 1 A Two-DOF Mass-Spring-Dashpot Dynamic System Consider the lumped-parameter, mass-spring-dashpot dynamic system shown in the Figure. It has two point

More information

the points are called control points approximating curve

the points are called control points approximating curve Chapter 4 Spline Curves A spline curve is a mathematical representation for which it is easy to build an interface that will allow a user to design and control the shape of complex curves and surfaces.

More information

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year. This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra

More information

SOLVING POLYNOMIAL EQUATIONS BY RADICALS

SOLVING POLYNOMIAL EQUATIONS BY RADICALS SOLVING POLYNOMIAL EQUATIONS BY RADICALS Lee Si Ying 1 and Zhang De-Qi 2 1 Raffles Girls School (Secondary), 20 Anderson Road, Singapore 259978 2 Department of Mathematics, National University of Singapore,

More information

Revised Version of Chapter 23. We learned long ago how to solve linear congruences. ax c (mod m)

Revised Version of Chapter 23. We learned long ago how to solve linear congruences. ax c (mod m) Chapter 23 Squares Modulo p Revised Version of Chapter 23 We learned long ago how to solve linear congruences ax c (mod m) (see Chapter 8). It s now time to take the plunge and move on to quadratic equations.

More information

RESULTANT AND DISCRIMINANT OF POLYNOMIALS

RESULTANT AND DISCRIMINANT OF POLYNOMIALS RESULTANT AND DISCRIMINANT OF POLYNOMIALS SVANTE JANSON Abstract. This is a collection of classical results about resultants and discriminants for polynomials, compiled mainly for my own use. All results

More information

Similarity and Diagonalization. Similar Matrices

Similarity and Diagonalization. Similar Matrices MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

More information

March 29, 2011. 171S4.4 Theorems about Zeros of Polynomial Functions

March 29, 2011. 171S4.4 Theorems about Zeros of Polynomial Functions MAT 171 Precalculus Algebra Dr. Claude Moore Cape Fear Community College CHAPTER 4: Polynomial and Rational Functions 4.1 Polynomial Functions and Models 4.2 Graphing Polynomial Functions 4.3 Polynomial

More information

Solution to Homework 2

Solution to Homework 2 Solution to Homework 2 Olena Bormashenko September 23, 2011 Section 1.4: 1(a)(b)(i)(k), 4, 5, 14; Section 1.5: 1(a)(b)(c)(d)(e)(n), 2(a)(c), 13, 16, 17, 18, 27 Section 1.4 1. Compute the following, if

More information

1 Mathematical Models of Cost, Revenue and Profit

1 Mathematical Models of Cost, Revenue and Profit Section 1.: Mathematical Modeling Math 14 Business Mathematics II Minh Kha Goals: to understand what a mathematical model is, and some of its examples in business. Definition 0.1. Mathematical Modeling

More information

Solving Systems of Linear Equations

Solving Systems of Linear Equations LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

More information

3.6. Partial Fractions. Introduction. Prerequisites. Learning Outcomes

3.6. Partial Fractions. Introduction. Prerequisites. Learning Outcomes Partial Fractions 3.6 Introduction It is often helpful to break down a complicated algebraic fraction into a sum of simpler fractions. For 4x + 7 example it can be shown that x 2 + 3x + 2 has the same

More information

Introduction to Algebraic Geometry. Bézout s Theorem and Inflection Points

Introduction to Algebraic Geometry. Bézout s Theorem and Inflection Points Introduction to Algebraic Geometry Bézout s Theorem and Inflection Points 1. The resultant. Let K be a field. Then the polynomial ring K[x] is a unique factorisation domain (UFD). Another example of a

More information

Vieta s Formulas and the Identity Theorem

Vieta s Formulas and the Identity Theorem Vieta s Formulas and the Identity Theorem This worksheet will work through the material from our class on 3/21/2013 with some examples that should help you with the homework The topic of our discussion

More information

Stress Recovery 28 1

Stress Recovery 28 1 . 8 Stress Recovery 8 Chapter 8: STRESS RECOVERY 8 TABLE OF CONTENTS Page 8.. Introduction 8 8.. Calculation of Element Strains and Stresses 8 8.. Direct Stress Evaluation at Nodes 8 8.. Extrapolation

More information

Chapter 4, Arithmetic in F [x] Polynomial arithmetic and the division algorithm.

Chapter 4, Arithmetic in F [x] Polynomial arithmetic and the division algorithm. Chapter 4, Arithmetic in F [x] Polynomial arithmetic and the division algorithm. We begin by defining the ring of polynomials with coefficients in a ring R. After some preliminary results, we specialize

More information

15. Symmetric polynomials

15. Symmetric polynomials 15. Symmetric polynomials 15.1 The theorem 15.2 First examples 15.3 A variant: discriminants 1. The theorem Let S n be the group of permutations of {1,, n}, also called the symmetric group on n things.

More information

Nonlinear Algebraic Equations. Lectures INF2320 p. 1/88

Nonlinear Algebraic Equations. Lectures INF2320 p. 1/88 Nonlinear Algebraic Equations Lectures INF2320 p. 1/88 Lectures INF2320 p. 2/88 Nonlinear algebraic equations When solving the system u (t) = g(u), u(0) = u 0, (1) with an implicit Euler scheme we have

More information

5.5. Solving linear systems by the elimination method

5.5. Solving linear systems by the elimination method 55 Solving linear systems by the elimination method Equivalent systems The major technique of solving systems of equations is changing the original problem into another one which is of an easier to solve

More information

3.2. Solving quadratic equations. Introduction. Prerequisites. Learning Outcomes. Learning Style

3.2. Solving quadratic equations. Introduction. Prerequisites. Learning Outcomes. Learning Style Solving quadratic equations 3.2 Introduction A quadratic equation is one which can be written in the form ax 2 + bx + c = 0 where a, b and c are numbers and x is the unknown whose value(s) we wish to find.

More information

88 CHAPTER 2. VECTOR FUNCTIONS. . First, we need to compute T (s). a By definition, r (s) T (s) = 1 a sin s a. sin s a, cos s a

88 CHAPTER 2. VECTOR FUNCTIONS. . First, we need to compute T (s). a By definition, r (s) T (s) = 1 a sin s a. sin s a, cos s a 88 CHAPTER. VECTOR FUNCTIONS.4 Curvature.4.1 Definitions and Examples The notion of curvature measures how sharply a curve bends. We would expect the curvature to be 0 for a straight line, to be very small

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Duality in General Programs. Ryan Tibshirani Convex Optimization 10-725/36-725

Duality in General Programs. Ryan Tibshirani Convex Optimization 10-725/36-725 Duality in General Programs Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: duality in linear programs Given c R n, A R m n, b R m, G R r n, h R r : min x R n c T x max u R m, v R r b T

More information

Lagrange Interpolation is a method of fitting an equation to a set of points that functions well when there are few points given.

Lagrange Interpolation is a method of fitting an equation to a set of points that functions well when there are few points given. Polynomials (Ch.1) Study Guide by BS, JL, AZ, CC, SH, HL Lagrange Interpolation is a method of fitting an equation to a set of points that functions well when there are few points given. Sasha s method

More information

x 2 + y 2 = 1 y 1 = x 2 + 2x y = x 2 + 2x + 1

x 2 + y 2 = 1 y 1 = x 2 + 2x y = x 2 + 2x + 1 Implicit Functions Defining Implicit Functions Up until now in this course, we have only talked about functions, which assign to every real number x in their domain exactly one real number f(x). The graphs

More information

Bindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8

Bindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8 Spaces and bases Week 3: Wednesday, Feb 8 I have two favorite vector spaces 1 : R n and the space P d of polynomials of degree at most d. For R n, we have a canonical basis: R n = span{e 1, e 2,..., e

More information

This unit will lay the groundwork for later units where the students will extend this knowledge to quadratic and exponential functions.

This unit will lay the groundwork for later units where the students will extend this knowledge to quadratic and exponential functions. Algebra I Overview View unit yearlong overview here Many of the concepts presented in Algebra I are progressions of concepts that were introduced in grades 6 through 8. The content presented in this course

More information

Row Echelon Form and Reduced Row Echelon Form

Row Echelon Form and Reduced Row Echelon Form These notes closely follow the presentation of the material given in David C Lay s textbook Linear Algebra and its Applications (3rd edition) These notes are intended primarily for in-class presentation

More information

7 Gaussian Elimination and LU Factorization

7 Gaussian Elimination and LU Factorization 7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method

More information

Objectives. Materials

Objectives. Materials Activity 4 Objectives Understand what a slope field represents in terms of Create a slope field for a given differential equation Materials TI-84 Plus / TI-83 Plus Graph paper Introduction One of the ways

More information