Lecture 5: Finite differences 1


1 Lecture 5: Finite differences 1
Sourendu Gupta, TIFR Graduate School, Computational Physics 1, February 17, 2010

2 Outline
1. Finite differences
2. Interpolating tabulated values
3. Difference equations
4. Differential equations
5. Function approximation: finite differences; splines; a first look at Fourier series
6. References

3 Outline (section: Finite differences)

4 Finite differences: The generic function is pathological

The Bolzano-Weierstrass function is defined, for any odd positive integer b, as
$$f(x) = \sum_{n=0}^{\infty} a^n \cos(\pi b^n x), \qquad 0 < a < 1, \quad ab > 1 + \frac{3\pi}{2}.$$
This was the first known example of a real function which is continuous at every point but has no derivative anywhere. Functions of this kind are generic: the trajectory of Brownian motion is of this form, and path integrals in any non-trivial field theory are dominated by such paths; they are the essence of quantum theory.

Problem 1: What can you say about the accuracy of numerical computations of the BW function and its derivative from the series definition above? Assume that the machine precision is p. What is the coarsest grid on which you can tabulate the BW function and still get close to machine-precision evaluation of the function through a polynomial interpolation?
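
The series can only ever be evaluated in truncated form. A minimal sketch in C of such a truncated evaluation, as a starting point for Problem 1; the values a = 0.5, b = 13 and the truncation after 30 terms are illustrative choices, not prescribed above:

```c
#include <stdio.h>
#include <math.h>

/* Truncated Bolzano-Weierstrass series: f(x) ~ sum_{n=0}^{nterms-1} a^n cos(pi b^n x).
   a = 0.5, b = 13 satisfy 0 < a < 1 and ab > 1 + 3*pi/2; nterms = 30 is arbitrary.
   Note that b^n grows so fast that, once pi*b^n*x exceeds ~2^53, the cosine
   argument can no longer be reduced accurately, which limits the usable nterms. */
static double bw(double x, double a, int b, int nterms)
{
    const double pi = 3.14159265358979323846;
    double sum = 0.0, an = 1.0, bn = 1.0;
    for (int n = 0; n < nterms; ++n) {
        sum += an * cos(pi * bn * x);
        an *= a;       /* a^n */
        bn *= b;       /* b^n */
    }
    return sum;
}

int main(void)
{
    for (double x = 0.0; x <= 1.0; x += 0.125)
        printf("x = %.3f   f(x) = %.15f\n", x, bw(x, 0.5, 13, 30));
    return 0;
}
```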

5 Finite differences: Non-Newtonian machines

The forward, backward and central differences,
$$\Delta_h f = \frac{f(x+h) - f(x)}{h}, \qquad \nabla_h f = \frac{f(x) - f(x-h)}{h}, \qquad D_h f = \frac{f(x+h) - f(x-h)}{2h},$$
all have the same limit (over the reals) as $h \to 0$, and it is the derivative, $f'(x)$. The limit has no meaning for floating point numbers: as h becomes smaller and smaller, the differences in the numerator become smaller and catastrophic loss of significance sets in.

[Figure: $\Delta_h$, $\nabla_h$ and $D_h$ applied to sin(1.0) in single precision arithmetic, plotted against h.]
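
A minimal sketch in C of the experiment behind this plot (and of Problem 2 on the next slide): the three differences of sin x at x = 1 are computed in single precision for a decreasing sequence of step sizes; the range of h is an illustrative choice.

```c
#include <stdio.h>
#include <math.h>

/* Forward, backward and central differences of sin(x) at x = 1 in single
   precision.  The exact derivative is cos(1); the error first shrinks with h
   (like h, or h^2 for the central difference) and then grows again like 1/h
   once round-off dominates the numerator. */
int main(void)
{
    const float x = 1.0f;
    const float exact = cosf(x);
    printf("%10s %14s %14s %14s\n", "h", "forward", "backward", "central");
    for (float h = 0.5f; h > 1e-7f; h /= 10.0f) {
        float fwd = (sinf(x + h) - sinf(x)) / h;
        float bwd = (sinf(x) - sinf(x - h)) / h;
        float ctr = (sinf(x + h) - sinf(x - h)) / (2.0f * h);
        printf("%10.1e %14.7f %14.7f %14.7f   (exact %14.7f)\n",
               h, fwd, bwd, ctr, exact);
    }
    return 0;
}
```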

6 Finite differences: Instability and inaccuracy

A Taylor series expansion shows that
$$\Delta_h f(x) = f'(x) + \frac{h}{2} f''(c), \qquad x \le c \le x + h,$$
so for large h there is an inaccuracy. For small h one also has to take care of numerical inaccuracy. If $\epsilon_m$ is the machine precision, then
$$\underline{\Delta_h f}(x) = f'(x) + \frac{\epsilon_m}{h} + \frac{h}{2} f''(c).$$
Here (and later) a bar under a quantity denotes the result of a numerical computation. The 1/h term is a numerical instability.

Problem 2: Perform the analysis of errors and instabilities for $\nabla_h$ and $D_h$. Write FORTRAN, f90 and C codes for computing all three difference coefficients for sin x, and check whether these formulæ are accurate, i.e., whether $\epsilon_m$ can be extracted from the behaviour of the differences. Can one extract the exact value of $f'(x)$ using computations at different h?

7 Outline (section: Interpolating tabulated values)

8 Interpolating tabulated values: An impossible thing before breakfast

Given a set of knot points $\{x_0, x_1, \dots, x_N\}$ (assume the points are ordered, $x_0 < x_1 < \cdots < x_N$) and a set of function values at these points, $\{y_0, y_1, \dots, y_N\}$, how do you find the function value at any point in the range $[x_0, x_N]$?

If you find an answer, f(x), then I can always construct a new function,
$$f(x) + g(x) \prod_{i=0}^{N} (x - x_i),$$
which coincides with f(x) at every tabulated value and can be made to differ from f(x) everywhere else by as much as we wish. Therefore, this is an ill-posed question.

We try to make the answer unique by asking how to construct polynomials which pass through the tabulated points. We can, of course, also ask that these polynomials satisfy some other properties, such as smoothness. We could also ask for rational functions, Fourier series, or any other well-posed question.

10 Interpolating tabulated values: Polynomial fits and van der Monde matrices

Typically, you could find a polynomial of order N which passes through the N + 1 points,
$$P(x) = \sum_{i=0}^{N} c_i x^i, \qquad \text{where } P(x_i) = y_i \quad (0 \le i \le N).$$
This gives a set of linear equations to solve:
$$\begin{pmatrix} 1 & x_0 & \cdots & x_0^N \\ 1 & x_1 & \cdots & x_1^N \\ \vdots & & & \vdots \\ 1 & x_N & \cdots & x_N^N \end{pmatrix} \begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_N \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_N \end{pmatrix}.$$
This $(N+1) \times (N+1)$ matrix is called a van der Monde matrix. It has very small determinant: typically $\det M \sim \exp[-O(N^p)]$ with $p > 1$. As a result, inverting it by the usual methods leads to catastrophic loss of precision. So look for more clever methods.
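
The smallness of the determinant is easy to check, since the van der Monde determinant has the closed form $\det M = \prod_{i<j} (x_j - x_i)$. A minimal sketch in C, taking equispaced knot points on [0, 1] as an illustrative choice:

```c
#include <stdio.h>
#include <math.h>

/* The determinant of the van der Monde matrix has the closed form
   det M = prod_{0 <= i < j <= N} (x_j - x_i).
   For equispaced knots x_i = i/N on [0,1] it collapses towards zero very
   fast as N grows; accumulate log|det| to avoid underflow. */
int main(void)
{
    for (int N = 2; N <= 20; N += 2) {
        double logdet = 0.0;
        for (int i = 0; i <= N; ++i)
            for (int j = i + 1; j <= N; ++j)
                logdet += log((double)(j - i) / N);   /* x_j - x_i = (j-i)/N */
        printf("N = %2d   det M ~ 1e%+.0f\n", N, logdet / log(10.0));
    }
    return 0;
}
```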

11 Interpolating tabulated values: Newton polynomials

For a given set of knot points, the Newton polynomials are $P_0(x) = 1$ and
$$P_n(x) = \prod_{i=0}^{n-1} (x - x_i). \qquad \text{Note that } P_n(x_\alpha) = 0 \text{ when } \alpha < n.$$
The Newton polynomial approximation is the linear combination
$$P(x) = \sum_{n=0}^{N} a_n P_n(x), \qquad \text{where } P(x_\alpha) = y_\alpha \text{ for } 0 \le \alpha \le N.$$
Solve this by inverting the lower triangular matrix:
$$\begin{pmatrix} 1 & 0 & \cdots & 0 \\ 1 & x_1 - x_0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_N - x_0 & \cdots & \prod_{n=0}^{N-1}(x_N - x_n) \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_N \end{pmatrix}.$$

Problem 3: Plot the Newton polynomials for a randomly chosen set of grid points. Bring the van der Monde matrix to the above form.

12 Interpolating tabulated values: Divided differences

The solutions are the divided differences. Solve recursively, starting from the equation for $a_0$:
$$a_0 = y_0, \qquad a_1 = \frac{y_1 - y_0}{x_1 - x_0}, \qquad a_2 = \sum_{j=0}^{2} \frac{y_j}{\prod_{n=0,\, n \ne j}^{2} (x_j - x_n)}, \quad \dots$$
One usually displays this as the tableau
$$\begin{array}{lllll} x_0 & y_0 & y_0^{(1)} & y_0^{(2)} & y_0^{(3)} \\ x_1 & y_1 & y_1^{(1)} & y_1^{(2)} & \\ x_2 & y_2 & y_2^{(1)} & & \\ x_3 & y_3 & & & \end{array}$$
The elements of the tableau can be generated by the recursive formula
$$y_i^{(l+1)} = \frac{y_{i+1}^{(l)} - y_i^{(l)}}{x_{i+l+1} - x_i}, \qquad \text{starting with } y_i^{(0)} = y_i;$$
the coefficients of the Newton approximation are the top row, $a_n = y_0^{(n)}$.
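
A minimal sketch in C of the divided-difference tableau, computed in place, together with the evaluation of the resulting Newton approximation by a nested (Horner-like) scheme; the data points (a sampled quadratic) are an illustrative choice:

```c
#include <stdio.h>

#define NPTS 4   /* number of knot points; illustrative choice */

/* Build the divided-difference tableau in place: after the loops,
   coef[n] holds y_0^{(n)}, the coefficient a_n of the Newton polynomial P_n.
   Working bottom-up in i keeps the lower-order entries intact until used. */
static void divided_differences(const double x[], double coef[], int n)
{
    for (int l = 1; l < n; ++l)
        for (int i = n - 1; i >= l; --i)
            coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - l]);
}

/* Evaluate P(t) = sum_n a_n P_n(t) by nesting: a_0 + (t-x_0)(a_1 + ...). */
static double newton_eval(const double x[], const double a[], int n, double t)
{
    double p = a[n - 1];
    for (int i = n - 2; i >= 0; --i)
        p = a[i] + (t - x[i]) * p;
    return p;
}

int main(void)
{
    double x[NPTS] = {0.0, 1.0, 2.0, 4.0};
    double a[NPTS] = {1.0, 2.0, 5.0, 17.0};   /* values of x^2 + 1; overwritten by the a_n */
    divided_differences(x, a, NPTS);
    printf("P(3.0) = %g (exact 10 for x^2 + 1)\n", newton_eval(x, a, NPTS, 3.0));
    return 0;
}
```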

13 Interpolating tabulated values: A few problems

Problem 4: Show that when the knot points, $x_i$, approach each other, the Newton polynomial approximation approaches a truncated Taylor series. If these computations are performed in floating point arithmetic, is the limit well-defined? Given a certain machine precision, is there a limit to how closely the $x_i$ can be spaced before catastrophic loss of precision renders the divided differences meaningless?

Problem 5: What relation does the Newton polynomial approximation bear to the Lagrange interpolation formula,
$$P(t) = \sum_{j=0}^{N} y_j \, \frac{\prod_{n=0,\, n \ne j}^{N} (t - x_n)}{\prod_{n=0,\, n \ne j}^{N} (x_j - x_n)}\,?$$
Is there any way to make this approximation behave better than the divided differences at any fixed value of machine precision?

Problem 6: Is the derivative of the Lagrange interpolation formula continuous at the knot points?

14 Outline (section: Difference equations)

15 Difference equations: Solving a finite difference equation

The equation $\Delta_h f = \alpha f$ gives
$$f(t+h) = (1 + \alpha h) f(t) = (1 + \alpha h)^{1 + t/h} f(0).$$
In the limit $h \to 0$ this is indeed the exponential function. With this approximation for the derivative, we neglect terms of order $h^2$. The solution of the differential equation $f'(t) = \alpha f(t)$ would give
$$f(t+h) = e^{\alpha h} f(t) = \left[ 1 + \alpha h + O(h^2) \right] f(t).$$
So the solution of the difference equation is correct to the same order. This will be generalized to Euler's method.

Improvements can be obtained by solving the equation
$$\Delta_h f = \frac{1}{h} \left( e^{\alpha h} - 1 \right) f.$$
It is common to improve the equation by placing correction terms into the right hand side. It is also possible to use expressions for the derivative which are correct to higher order in h.
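
A minimal sketch in C of this difference-equation solution of $f'(t) = \alpha f(t)$, showing how the error of $(1 + \alpha h)^{t/h}$ relative to $e^{\alpha t}$ shrinks with h; the value of $\alpha$, the integration time and the step sizes are illustrative choices:

```c
#include <stdio.h>
#include <math.h>

/* Forward-difference (Euler) solution of f'(t) = alpha*f(t), f(0) = 1,
   compared with the exact exponential at t = 2. */
int main(void)
{
    const double alpha = 1.0, tmax = 2.0;
    for (int nsteps = 10; nsteps <= 10000; nsteps *= 10) {
        double h = tmax / nsteps, f = 1.0;
        for (int n = 0; n < nsteps; ++n)
            f *= (1.0 + alpha * h);          /* f(t+h) = (1 + alpha h) f(t) */
        printf("h = %8.5f   f(2) = %.8f   exact = %.8f   error = %+.2e\n",
               h, f, exp(alpha * tmax), f - exp(alpha * tmax));
    }
    return 0;
}
```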

16 Difference equations: Instability and inaccuracy

The solution of the differential equation $f'(t) = -\alpha f(t)$ goes to zero in the limit $t \to \infty$ for all positive values of $\alpha$ and generic initial conditions. The equation $\Delta_h f = -\alpha f$ gives
$$f(t+h) = (1 - \alpha h) f(t) = (1 - \alpha h)^{1 + t/h} f(0).$$
If $\alpha h > 2$ then $f(t) \to \infty$ in the limit $t \to \infty$: in other words, the solution is unstable. In a region around $h = 2/\alpha$, one may have catastrophic loss of significance. Even if the loss per step is not catastrophic, errors can build up over many steps. In general, if the error at the n-th step is $\epsilon_n$, one will have $\epsilon_n = K^n \epsilon_1$, where K is the error in evaluating the multiplicative factor; this is large also when $\alpha h$ is small. If $1 - \alpha h$ were evaluated exactly, then an error made for some reason at any step of the computation would propagate unchanged, i.e., K = 1. Any error in computing this multiplicative factor increases K. Hence the error grows exponentially, irrespective of the sign of $\alpha$.

17 Difference equations: Stabilizing the solution

One way to stabilize the solution is to replace the difference equation by $\Delta_h f(t) = -\alpha f(t+h)$. Then
$$f(t+h) = \frac{1}{1 + \alpha h} f(t) = \frac{1}{(1 + \alpha h)^{1 + t/h}} f(0).$$
This goes to zero in the limit of large t. Since the solution requires the evaluation of the derivative at the next step of the solution, a method such as this is called an implicit method.

Another stable solution is obtained by using the difference equation $\Delta_h f(t) = -\alpha [f(t) + f(t+h)]/2$. This has the solution
$$f(t+h) = \frac{1 - \alpha h/2}{1 + \alpha h/2} f(t) = \left( \frac{1 - \alpha h/2}{1 + \alpha h/2} \right)^{1 + t/h} f(0).$$
Not only is this stable, one can also check that it is correct to $O(h^2)$. Evaluating the derivative at the mid-point of the interval is a technique that will be used in the 2nd order Runge-Kutta method.
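
A minimal sketch in C comparing the explicit, implicit and averaged updates for the decay equation, with $\alpha h$ deliberately chosen larger than 2 so that the explicit scheme is unstable; all parameter values are illustrative:

```c
#include <stdio.h>
#include <math.h>

/* Explicit, implicit and averaged ("trapezoidal") updates for
   f'(t) = -alpha f(t), f(0) = 1, with alpha*h = 2.5 > 2. */
int main(void)
{
    const double alpha = 1.0, h = 2.5, tmax = 50.0;
    double fe = 1.0, fi = 1.0, ft = 1.0;
    for (double t = 0.0; t < tmax; t += h) {
        fe *= (1.0 - alpha * h);                              /* explicit: diverges for alpha*h > 2 */
        fi /= (1.0 + alpha * h);                              /* implicit: always decays */
        ft *= (1.0 - alpha * h / 2) / (1.0 + alpha * h / 2);  /* averaged: decays, O(h^2) accurate */
    }
    printf("explicit %.3e   implicit %.3e   averaged %.3e   exact %.3e\n",
           fe, fi, ft, exp(-alpha * tmax));
    return 0;
}
```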

18 Difference equations: An oscillator: instabilities

The system of equations
$$\frac{d\mathbf{f}}{dt} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \mathbf{f}, \qquad \mathbf{f}(0) = \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$
has the solution $f_1(t) = \sin t$ and $f_2(t) = \cos t$. The corresponding forward difference equations, $\Delta_h \mathbf{f}(t) = M \mathbf{f}(t)$, where M is the $2 \times 2$ matrix above, have the solutions
$$\mathbf{f}(t+h) = \begin{pmatrix} 1 & h \\ -h & 1 \end{pmatrix} \mathbf{f}(t).$$
This matrix has eigenvalues $\lambda_\pm = 1 \pm ih$. Since $|\lambda_\pm| > 1$, errors in the computed solution will always grow.

Replace this by the implicit equation $\Delta_h \mathbf{f}(t) = M \mathbf{f}(t+h)$. Then the solution is
$$\mathbf{f}(t+h) = \frac{1}{1 + h^2} \begin{pmatrix} 1 & h \\ -h & 1 \end{pmatrix} \mathbf{f}(t).$$
This matrix has eigenvalues with $|\lambda_\pm| < 1$, so the solutions will always decay.
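
A minimal sketch in C of these two updates for the oscillator, monitoring $|\mathbf{f}|^2$, which should be conserved; the step size and the number of steps are illustrative choices:

```c
#include <stdio.h>

/* Explicit and implicit updates for the oscillator f1' = f2, f2' = -f1,
   starting from (0, 1).  The quantity f1^2 + f2^2 should stay equal to 1;
   it grows for the explicit update and decays for the implicit one. */
int main(void)
{
    const double pi = 3.14159265358979323846;
    const double h = 2.0 * pi / 100.0;
    double e1 = 0.0, e2 = 1.0;     /* explicit */
    double i1 = 0.0, i2 = 1.0;     /* implicit */
    for (int n = 0; n < 200; ++n) {                        /* two periods */
        double t1 = e1 + h * e2, t2 = e2 - h * e1;         /* (1  h; -h  1) f          */
        e1 = t1; e2 = t2;
        double s1 = (i1 + h * i2) / (1.0 + h * h);         /* (1  h; -h  1) f / (1+h^2) */
        double s2 = (i2 - h * i1) / (1.0 + h * h);
        i1 = s1; i2 = s2;
    }
    printf("explicit |f|^2 = %.6f   implicit |f|^2 = %.6f   exact = 1\n",
           e1 * e1 + e2 * e2, i1 * i1 + i2 * i2);
    return 0;
}
```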

19 Difference equations: The leap-frog method

[Figure: a staggered lattice, with $f_1$ stored at the integer times $t = 0, 1, 2, \dots$ (in units of h) and $f_2$ at the half-integer times $t = 1/2, 3/2, 5/2, \dots$]

The oscillator should have a conserved quantity, $|\mathbf{f}|^2$. In the methods above,
$$|\mathbf{f}(t+h)|^2 = |\lambda_\pm|^2 |\mathbf{f}(t)|^2 = |\mathbf{f}(t)|^2 + O(h^2).$$
If one improves the conservation law, one may have a better algorithm. Introduce a staggered lattice as in the figure: place $f_1$ at the integer points $nh$ and $f_2$ at the half-integer points $(n + 1/2)h$. Then the solutions of the difference equations are
$$f_1(t+h) = f_1(t) + h f_2(t + \tfrac{h}{2}), \qquad f_2(t + \tfrac{3h}{2}) = f_2(t + \tfrac{h}{2}) - h f_1(t+h).$$
The initial and final half steps are $f_2(h/2) = f_2(0) - h f_1(0)/2$ and $f_2(nh) = f_2(nh - h/2) - h f_1(nh)/2$. Clearly this is equivalent to
$$f_1(t+h) = f_1(t) + h f_2(t) - \frac{h^2}{2} f_1(t), \qquad f_2(t+h) = f_2(t) - \frac{h}{2} \left[ f_1(t) + f_1(t+h) \right].$$
Therefore $|\mathbf{f}(t+h)|^2 = |\mathbf{f}(t)|^2 + O(h^4)$.
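
A minimal sketch in C of the leap-frog update with the initial and final half steps described above, again monitoring $f_1^2 + f_2^2$; the step size and the number of periods are illustrative choices:

```c
#include <stdio.h>

/* Leap-frog (staggered) integration of the oscillator f1' = f2, f2' = -f1
   from (f1, f2) = (0, 1), checking how well the quadratic invariant
   f1^2 + f2^2 is conserved. */
int main(void)
{
    const double pi = 3.14159265358979323846;
    const double h = 2.0 * pi / 50.0;
    const int nsteps = 500;                   /* ten periods */
    double f1 = 0.0, f2 = 1.0;

    f2 -= 0.5 * h * f1;                       /* initial half step: f2(h/2) = f2(0) - (h/2) f1(0) */
    for (int n = 0; n < nsteps; ++n) {
        f1 += h * f2;                         /* f1(t+h)   = f1(t)     + h f2(t+h/2) */
        if (n < nsteps - 1)
            f2 -= h * f1;                     /* f2(t+3h/2) = f2(t+h/2) - h f1(t+h)  */
    }
    f2 -= 0.5 * h * f1;                       /* final half step: f2(T) = f2(T-h/2) - (h/2) f1(T) */

    printf("f1 = %.6f  f2 = %.6f  |f|^2 = %.8f (exact 0, 1, 1)\n",
           f1, f2, f1 * f1 + f2 * f2);
    return 0;
}
```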

20 Difference equations: Using the Euler and leapfrog methods

Problem 7: Write Mathematica functions which take as input the vector $\mathbf{f}(t)$ and the quantity h and give back (1) the Euler updated vector, (2) the implicit Euler updated vector, and (3) the leap-frog updated vector, $\mathbf{f}(t+h)$. In each case use the function to evolve the vector $(0,1)^T$ through time $4\pi$ using $h = 2\pi/10$, $2\pi/20$, $2\pi/50$ and $2\pi/100$. Plot the phase space trajectories, and write out the initial and final values of $|\mathbf{f}|^2$ in each case.

Problem 8: Generalize the leap-frog method to solve the problem of the anharmonic oscillator, i.e., $H = \frac{1}{2} p^2 + \frac{1}{4!} q^4$. Implement your solution in Mathematica and compare it with the results obtained using the inbuilt methods for solving differential equations. Change the precision of the Mathematica routines to see how this affects the solution.

21 Outline (section: Differential equations)

22 Differential equations: Solving differential equations

Euler's method for solving a system of first order differential equations, $\mathbf{f}'(x) = \mathbf{d}(\mathbf{f}(x), x)$, is
$$\mathbf{f}(x+h) = \mathbf{f}(x) + h\, \mathbf{d}(\mathbf{f}(x), x),$$
where $f_i(x)$ is the i-th function. This is equivalent to solving the difference equation $\Delta_h \mathbf{f}(x) = \mathbf{d}(\mathbf{f}(x), x)$. The solution is correct to order h.

The second-order Runge-Kutta algorithm (RK2) is defined by
$$\mathbf{f}(x+h) = \mathbf{f}(x) + h\, \mathbf{d}(\mathbf{f}(x+h/2), x+h/2), \qquad \mathbf{f}(x+h/2) = \mathbf{f}(x) + \frac{h}{2}\, \mathbf{d}(\mathbf{f}(x), x).$$

[Figure: sketch comparing the Euler and RK2 updates between x, x+h/2 and x+h.]

This is correct to $O(h^2)$. One can also think of it as Euler's method on a staggered lattice. This corrects the right hand side of the equation by interpolating the derivative linearly between x and x+h. A quadratic interpolation gives rise to the fourth order Runge-Kutta algorithm.
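
A minimal sketch in C of a generic RK2 (mid-point) step, applied to the oscillator of the previous section; the right hand side `deriv`, the dimension and the step size are illustrative choices:

```c
#include <stdio.h>

#define DIM 2   /* dimension of the system; oscillator example */

/* Right hand side d(f, x) of the oscillator f1' = f2, f2' = -f1. */
static void deriv(const double f[], double x, double d[])
{
    (void)x;            /* autonomous system: x unused */
    d[0] = f[1];
    d[1] = -f[0];
}

/* One RK2 step: f(x+h) = f(x) + h d(f(x+h/2), x+h/2),
   with f(x+h/2) = f(x) + (h/2) d(f(x), x). */
static void rk2_step(double f[], double x, double h)
{
    double d[DIM], fmid[DIM];
    deriv(f, x, d);
    for (int i = 0; i < DIM; ++i) fmid[i] = f[i] + 0.5 * h * d[i];
    deriv(fmid, x + 0.5 * h, d);
    for (int i = 0; i < DIM; ++i) f[i] += h * d[i];
}

int main(void)
{
    const double pi = 3.14159265358979323846;
    double f[DIM] = {0.0, 1.0};
    double h = 2.0 * pi / 100.0;
    for (int n = 0; n < 100; ++n)            /* one full period */
        rk2_step(f, n * h, h);
    printf("after one period: f1 = %.6f, f2 = %.6f (exact 0, 1)\n", f[0], f[1]);
    return 0;
}
```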

23 Differential equations: Complexity of algorithms

Suppose that the dimension of the vectors $\mathbf{f}$ and $\mathbf{d}$ is M. Suppose also that the CPU time taken to perform an arithmetic operation is $T_1$, and that required for the evaluation of each component of $\mathbf{d}$ is $T_2$. It may happen that $T_2 \gg T_1$. Then the time taken per step of the Euler method is $M(2T_1 + T_2)$, and the time per step in RK2 is exactly twice that, i.e., $2M(2T_1 + T_2)$.

If the ODEs are integrated from $x_i$ to $x_f$ in N steps, i.e., $h = (x_f - x_i)/N$, then the time taken for the Euler method is $MN(2T_1 + T_2)$, giving an error of $O(N^{-2})$, while RK2 takes exactly twice the time to get an error of $O(N^{-3})$. A fair comparison would be of the time taken to make comparable errors. To get an error of $O(N^{-3})$ using Euler's method would require a time of order $MN^{3/2}(2T_1 + T_2)$. We say that RK2 is faster because, to get the same error, the time requirements are
$$T = \begin{cases} O(N^{3/2}) & \text{for Euler's method,} \\ O(N) & \text{for RK2.} \end{cases}$$

24 Differential equations: Example

Problem 9: The RK2 solution of the equations
$$\frac{dx}{dt} = v, \qquad \frac{dv}{dt} = -x$$
is
$$x(nh + \tfrac{h}{2}) = x(nh) + \tfrac{h}{2} v(nh), \qquad v(nh + \tfrac{h}{2}) = v(nh) - \tfrac{h}{2} x(nh),$$
$$x(nh + h) = x(nh) + h\, v(nh + \tfrac{h}{2}) = \left(1 - \tfrac{h^2}{2}\right) x(nh) + h\, v(nh),$$
$$v(nh + h) = v(nh) - h\, x(nh + \tfrac{h}{2}) = \left(1 - \tfrac{h^2}{2}\right) v(nh) - h\, x(nh).$$
Verify using Mathematica that this is indeed the correct solution of the equations up to and including the $O(h^2)$ term.

25 Differential equations: A problem

Problem 10: Implement a leap-frog integration for the anharmonic oscillator $H = \frac{1}{2} p^2 + \frac{1}{4!} q^4$ as a subroutine in f90 and C which is suitable for updating an ensemble of initial conditions; i.e., it should take a set of (staggered) initial phase space positions $(p_i, q_i)$, where $1 \le i \le N$, and a time step, h, and return the final phase space positions. Also implement a method for a half-step evolution of $p_i$ at the beginning and end as a separate subroutine. Take N = 1000 different initial conditions distributed uniformly within the phase space volume with $H \le 1$. Find the density of points within the phase space volumes with $H \le 1/4$, $1/2$ and $1$. Evolve the systems using your program for trajectories of time duration 1, 2, 5 and 10 units. At the end of these times compute the same three phase space densities as initially. Are these conserved?

26 Outline (section: Function approximation)

27 Function approximation (Finite differences): Recursive evaluation of the Lagrange interpolation formula

Let $P_{i:i+m}$ be the m-th order polynomial interpolating the function at $x_i, x_{i+1}, \dots, x_{i+m}$. Then one has Neville's recurrence formula,
$$P_{i:i+m}(t) = \frac{1}{x_{i+m} - x_i} \begin{vmatrix} t - x_i & P_{i:i+m-1}(t) \\ t - x_{i+m} & P_{i+1:i+m}(t) \end{vmatrix},$$
which starts from the constant polynomials $P_{i:i}(t) = y_i$. If the error in $P_{i,i+1,\dots,i+m}$ is $\delta_{i,i+1,\dots,i+m}$, whose magnitude is bounded by $\delta$, then the recurrence gives
$$\delta_{i,i+1,\dots,i+m} \le \delta \left( \frac{x_{i+m} - t}{x_{i+m} - x_i} + \frac{t - x_i}{x_{i+m} - x_i} \right) = \delta.$$
Therefore errors in the function values do not grow, unless the knot points are so close that terms like $x_{i+m} - x_i$ are dominated by catastrophic loss of significance. For a further improvement on this algorithm, see Numerical Recipes.
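
A minimal sketch in C of Neville's recurrence, overwriting a single work array column by column; the data points (a sampled quadratic) and the hard-coded maximum of 16 points are illustrative choices:

```c
#include <stdio.h>

/* Neville's recurrence: evaluate the interpolating polynomial through the
   points (x_i, y_i), 0 <= i < npts, at the point t, by repeatedly combining
   P_{i:i+m-1} and P_{i+1:i+m}. */
static double neville(const double x[], const double y[], int npts, double t)
{
    double p[16];                           /* work array; npts <= 16 assumed */
    for (int i = 0; i < npts; ++i)
        p[i] = y[i];                        /* P_{i:i}(t) = y_i */
    for (int m = 1; m < npts; ++m)          /* order of the polynomials */
        for (int i = 0; i + m < npts; ++i)  /* p[i] becomes P_{i:i+m}(t) */
            p[i] = ((t - x[i]) * p[i + 1] - (t - x[i + m]) * p[i])
                   / (x[i + m] - x[i]);
    return p[0];
}

int main(void)
{
    double x[4] = {0.0, 1.0, 2.0, 4.0};
    double y[4] = {1.0, 2.0, 5.0, 17.0};    /* values of x^2 + 1 */
    printf("P(3.0) = %g (exact 10 for x^2 + 1)\n", neville(x, y, 4, 3.0));
    return 0;
}
```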

28 Function approximation (Finite differences): Neville's process as elimination

This can be seen as a process of elimination. Make the Taylor expansions of the function at $x_0$ and $x_1$,
$$f(t) = f(x_0) + h_0 f'(x_0) + \tfrac{1}{2} h_0^2 f''(x_0) + \cdots, \qquad f(t) = f(x_1) + h_1 f'(x_1) + \tfrac{1}{2} h_1^2 f''(x_1) + \cdots,$$
where $h_i = t - x_i$. Neville's process applied to these expansions gives
$$(h_0 - h_1) f(t) = h_0 f(x_1) - h_1 f(x_0) + \left[ h_0 h_1 \left\{ f'(x_1) + \tfrac{h_1}{2} f''(x_1) + \cdots \right\} - h_0 h_1 \left\{ f'(x_0) + \tfrac{h_0}{2} f''(x_0) + \cdots \right\} \right].$$
The first-derivative terms combine into $h_0 h_1 [f'(x_1) - f'(x_0)]$, which is itself of higher order: this removes the first derivative. What happens at the next step?

29 Function approximation (Finite differences): Is Lagrange interpolation a well-posed problem?

Problem 11: Write a Mathematica code which starts from a Taylor expansion of a function (and its derivatives) around some number of knot points. Apply successive steps of Neville's process to this and show that at each step of the process one more derivative is removed.

Problem 12: We have shown above that initial errors, $\delta$, in the function values, $y_i$, do not grow when using Lagrange interpolation via Neville's recurrence. Check what happens to the Newton polynomials, the divided differences, the Lagrange interpolation formula and Neville's recurrence when the knot points, $x_i$, are changed slightly. What happens when one of the knot points is removed entirely? Is the behaviour equally stable (or unstable) for interpolation (i.e., $x_0 \le t \le x_N$) and extrapolation ($t < x_0$ or $t > x_N$)?

30 Function approximation (Finite differences): Numerical integration

Problem 13: Given a polynomial approximation to tabulated data on functions, give expressions for integrating them. Note that the integration formulae should only involve the tabulated function values and constants. Read about the Newton-Cotes integration formulae and check what kind of function interpolations they come from.

Problem 14: Consider integrals of functions with poles in the region of integration, such as
$$I = \int dx\, \frac{P(x)}{x - 1}, \qquad P(1) \ne 0.$$
These are often regularized using the Cauchy principal value prescription,
$$I = \lim_{\epsilon \to 0} \left[ \int^{1-\epsilon} dx\, \frac{P(x)}{x-1} + \int_{1+\epsilon} dx\, \frac{P(x)}{x-1} \right].$$
Find an efficient method of evaluating the principal value. How does the value change if the integral is defined as the sum over the two intervals ending at $1 - 5\epsilon$ and beginning at $1 + \epsilon$?

31 Function approximation (Splines): Smooth polynomials

Given knot points $\{x_0, x_1\}$ and function values $\{y_0, y_1\}$, one has the linear interpolation
$$g(t) = A(t) y_0 + B(t) y_1, \qquad A(t) = \frac{t - x_1}{x_0 - x_1}, \quad B(t) = \frac{x_0 - t}{x_0 - x_1}.$$
The first derivative is constant in the interval $(x_0, x_1)$. In the next interval, $(x_1, x_2)$, the first derivative is again constant, but with a different value. As a result, the first derivative is discontinuous at the knot points. If one had information on the second derivatives $y_i''$ at the knot points, one could build an interpolating function $h(t) = A(t) y_0'' + B(t) y_1''$. Using this, one could try to make a cubic interpolating function,
$$f(t) = g(t) + (t - x_0)(t - x_1) h(t),$$
and arrange for it to have continuous first derivatives at the knot points. This is the idea of a spline.

32 Function approximation (Splines): Cubic splines

In fact the first derivative is
$$f'(t) = g'(t) + (2t - x_0 - x_1)\, h(t) + (t - x_0)(t - x_1)\, h'(t).$$
At the knot point $x_1$ one has the two expressions
$$f'(x_1) = \frac{y_0 - y_1}{x_0 - x_1} - (x_0 - x_1)\, y_1'', \qquad f'(x_1) = \frac{y_1 - y_2}{x_1 - x_2} + (x_1 - x_2)\, y_1'',$$
coming from the intervals on either side. Continuity of the derivative is obtained by equating these two expressions. This gives a solution for the unknown quantity,
$$y_1'' = \frac{1}{x_0 - x_2} \left[ \frac{y_0 - y_1}{x_0 - x_1} - \frac{y_1 - y_2}{x_1 - x_2} \right].$$
One can eliminate $y_i''$ at every knot point except $x_0$ and $x_N$. At these two points one can give a boundary condition of one's choice.
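
A minimal sketch in C of the construction on this slide: the interior $y_i''$ are fixed by the continuity condition above and the interpolant $f(t) = g(t) + (t - x_i)(t - x_{i+1}) h(t)$ is evaluated on one interval. (The standard cubic spline instead couples the $y_i''$ through a tridiagonal system; this local version only sketches the idea.) The data, a sampled sine, and the evaluation point are illustrative choices:

```c
#include <stdio.h>
#include <math.h>

#define NPTS 5

/* Piecewise-cubic interpolation following the construction on this slide. */
int main(void)
{
    double x[NPTS] = {0.0, 0.5, 1.0, 1.5, 2.0};
    double y[NPTS], ypp[NPTS] = {0.0};
    for (int i = 0; i < NPTS; ++i) y[i] = sin(x[i]);

    /* interior second-derivative values from the continuity condition;
       ypp[0] and ypp[NPTS-1] are left at zero (one choice of boundary condition) */
    for (int i = 1; i < NPTS - 1; ++i)
        ypp[i] = ((y[i-1] - y[i]) / (x[i-1] - x[i])
                - (y[i] - y[i+1]) / (x[i] - x[i+1])) / (x[i-1] - x[i+1]);

    /* evaluate at a point t inside the interval [x[1], x[2]] */
    double t = 0.8;
    int i = 1;
    double A = (t - x[i+1]) / (x[i] - x[i+1]);
    double B = (x[i] - t) / (x[i] - x[i+1]);
    double g = A * y[i] + B * y[i+1];             /* linear interpolation of values */
    double h = A * ypp[i] + B * ypp[i+1];         /* linear interpolation of y''    */
    double f = g + (t - x[i]) * (t - x[i+1]) * h;
    printf("f(%.2f) = %.6f   sin(%.2f) = %.6f\n", t, f, t, sin(t));
    return 0;
}
```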

33 Function approximation (A first look at Fourier series): A Fourier series

A periodic function, $y(t) = y(t + L)$, has a compact representation as the Fourier series
$$y(t) = \sum_{k=0}^{N} f_k \exp\left( \frac{2\pi i t k}{NL} \right).$$
The Fourier coefficients are given by the solution of a complex van der Monde matrix equation,
$$\begin{pmatrix} 1 & z_0 & \cdots & z_0^N \\ 1 & z_1 & \cdots & z_1^N \\ \vdots & & & \vdots \\ 1 & z_N & \cdots & z_N^N \end{pmatrix} \begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_N \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_N \end{pmatrix},$$
where $z_j = \exp(2\pi i x_j / NL)$ and $|z_j| = 1$. If the knot points are $x_j = jL$, then the $z_j$ are roots of unity, and the elements of all but one row sum to zero. Since the rows are orthogonal to each other, the matrix is invertible.
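
A minimal sketch in C of the discrete Fourier analysis and synthesis implied by this matrix, written as direct $O(M^2)$ sums over M equally spaced points with the roots-of-unity convention $e^{2\pi i jk/M}$ (no FFT); the sample signal and M = 8 are illustrative choices:

```c
#include <stdio.h>
#include <complex.h>
#include <math.h>

#define M 8   /* number of sample points; illustrative */

/* Direct discrete Fourier analysis (y -> f) and synthesis (f -> y),
   i.e. the matrix of the slide and its inverse written out as sums. */
int main(void)
{
    const double pi = 3.14159265358979323846;
    double complex y[M], f[M], yback[M];

    for (int j = 0; j < M; ++j)                       /* sample a real signal */
        y[j] = sin(2.0 * pi * j / M) + 0.5 * cos(4.0 * pi * j / M);

    for (int k = 0; k < M; ++k) {                     /* analysis: f_k */
        f[k] = 0.0;
        for (int j = 0; j < M; ++j)
            f[k] += y[j] * cexp(-2.0 * pi * I * j * k / M);
        f[k] /= M;
    }

    for (int j = 0; j < M; ++j) {                     /* synthesis: reconstruct y_j */
        yback[j] = 0.0;
        for (int k = 0; k < M; ++k)
            yback[j] += f[k] * cexp(2.0 * pi * I * j * k / M);
    }

    for (int j = 0; j < M; ++j)
        printf("y[%d] = %+.6f   reconstructed = %+.6f\n",
               j, creal(y[j]), creal(yback[j]));
    return 0;
}
```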

34 Function approximation (A first look at Fourier series): Fourier inversion

In fact, the definition of the Fourier transformation,
$$f_k = \frac{1}{N+1} \sum_{j=0}^{N} y_j \exp\left( -\frac{2\pi i x_j k}{NL} \right),$$
gives the inverse ($z_j^*$ is the complex conjugate of $z_j$):
$$\begin{pmatrix} f_0 \\ f_1 \\ \vdots \\ f_N \end{pmatrix} = \frac{1}{N+1} \begin{pmatrix} 1 & 1 & \cdots & 1 \\ z_0^* & z_1^* & \cdots & z_N^* \\ \vdots & & & \vdots \\ (z_0^*)^N & (z_1^*)^N & \cdots & (z_N^*)^N \end{pmatrix} \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_N \end{pmatrix}.$$

Problem 15: Hadamard's bound states that for a real $(N+1) \times (N+1)$ matrix whose elements are bounded by unity, the determinant is at most $(N+1)^{(N+1)/2}$. What relation do the Fourier matrices bear to Hadamard matrices, i.e., those real matrices which saturate the bound?

35 Outline (section: References)

36 References: References and further reading

1. Numerical Recipes, by W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery, Cambridge University Press. Treat this as a well written manual and a ready source of building blocks. However, be ready to go into the blocks to improve their performance.
2. E. H. Neville, "Iterative interpolation", Journal of the Indian Mathematical Society, Vol. 20, p. 87 (1934).
3. Several points in this lecture are illustrated in the associated Mathematica notebooks. These notebooks also have additional examples. They are given in tar.gz format; unpack them using the tar -xvzf command.
