Nonlinear Algebraic Equations. Lectures INF2320 p. 1/88


1 Nonlinear Algebraic Equations Lectures INF2320 p. 1/88

2 Lectures INF2320 p. 2/88 Nonlinear algebraic equations
When solving the system
u'(t) = g(u), u(0) = u_0, (1)
with an implicit Euler scheme we have to solve the nonlinear algebraic equation
u_{n+1} - Δt g(u_{n+1}) = u_n, (2)
at each time step. Here u_n is known and u_{n+1} is unknown. If we let c denote u_n and v denote u_{n+1}, we want to find v such that
v - Δt g(v) = c, (3)
where c is given.

3 Nonlinear algebraic equations
First consider the case of g(u) = u, which corresponds to the differential equation
u' = u, u(0) = u_0. (4)
The equation (3) for each time step is now
v - Δt v = c, (5)
which has the solution
v = c/(1 - Δt). (6)
The time stepping in the Euler scheme for (4) is written
u_{n+1} = u_n/(1 - Δt). (7)
Lectures INF2320 p. 3/88

4 Lectures INF2320 p. 4/88 Nonlinear algebraic equations
Similarly, for any linear function g, i.e., functions of the form
g(v) = α + βv (8)
with constants α and β, we can solve equation (3) directly and get
v = (c + αΔt)/(1 - βΔt). (9)

5 Lectures INF2320 p. 5/88 Nonlinear algebraic equations
Next we study the nonlinear differential equation
u' = u^2, (10)
which means that
g(v) = v^2. (11)
Now (3) reads
v - Δt v^2 = c. (12)

6 Lectures INF2320 p. 6/88 Nonlinear algebraic equations
Equation (12) is a quadratic equation in v, Δt v^2 - v + c = 0, and the quadratic formula gives the two possible solutions
v_+ = (1 + √(1 - 4Δt c))/(2Δt) (13)
and
v_- = (1 - √(1 - 4Δt c))/(2Δt). (14)
Note that
lim_{Δt→0} (1 + √(1 - 4Δt c))/(2Δt) = ∞.
Since Δt is supposed to be small and the solution is not expected to blow up, we conclude that v_+ is not the correct solution.

7 Lectures INF2320 p. 7/88 Nonlinear algebraic equations
Therefore the correct solution of (12) to use in the Euler scheme is
v = (1 - √(1 - 4Δt c))/(2Δt). (15)
We can now conclude that the implicit scheme
u_{n+1} - Δt u_{n+1}^2 = u_n (16)
can be written in the computational form
u_{n+1} = (1 - √(1 - 4Δt u_n))/(2Δt). (17)
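
The closed-form update (17) can be used directly in a time loop. Below is a minimal C sketch of this; the choices u_0 = 1 and Δt = 0.01 are illustrative and not taken from the slides.

#include <stdio.h>
#include <math.h>

int main (void)
{
  double dt = 0.01;   /* time step (illustrative choice) */
  double u  = 1.0;    /* initial value u_0 (illustrative choice) */
  int n;
  for (n = 0; n < 50; n++) {
    /* implicit Euler for u' = u^2, written in computational form (17) */
    u = (1.0 - sqrt(1.0 - 4.0*dt*u))/(2.0*dt);
  }
  printf("u after 50 steps: %g\n", u);
  return 0;
}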

8 Lectures INF2320 p. 8/88 Nonlinear algebraic equations
We have seen that the equation
v - Δt g(v) = c (18)
can be solved analytically when
g(v) = v (19)
or
g(v) = v^2. (20)
More generally, we can solve (18) whenever g is of the form
g(v) = α + βv + γv^2. (21)

9 Lectures INF2320 p. 9/88 Nonlinear algebraic equations
For most nonlinear functions g, (18) cannot be solved analytically.
A couple of examples of this are g(v) = e^v and g(v) = sin(v).

10 Lectures INF2320 p. 10/88 Nonlinear algebraic equations
Since we work with nonlinear equations of the form
u_{n+1} - u_n = Δt g(u_{n+1}), (22)
where Δt is a small number, we know that u_{n+1} is close to u_n. This will be a useful property later.
In the rest of this lecture we will write nonlinear equations of the form
f(x) = 0, (23)
where f is nonlinear. We assume that we have available a value x_0 close to the true solution x* (i.e. f(x*) = 0). We also assume that f has no other zeros in a small region around x*.

11 Lectures INF2320 p. 11/88 The bisection method
Consider the function
f(x) = 2 + x - e^x (24)
for x ranging from 0 to 3, see the graph in Figure 1. We want to find x = x* such that f(x*) = 0.

12 Lectures INF2320 p. 12/88 Figure 1: The graph of f(x) = 2 + x - e^x.

13 Lectures INF2320 p. 13/88 The bisection method
An iterative method creates a series {x_i} of approximations of x*, which hopefully converges towards x*.
For the Bisection method we choose the two first guesses x_0 and x_1 as the endpoints of the definition domain, i.e. x_0 = 0 and x_1 = 3.
Note that f(x_0) = f(0) > 0 and f(x_1) = f(3) < 0, and therefore x_0 < x* < x_1, provided that f is continuous.
We now define x_2 as the mean value of x_0 and x_1:
x_2 = (x_0 + x_1)/2 = 3/2.

14 Lectures INF2320 p. 14/88 Figure 2: The graph of f(x) = 2 + x - e^x and three values of f: f(x_0), f(x_1) and f(x_2), with x_0 = 0, x_2 = 1.5 and x_1 = 3.

15 Lectures INF2320 p. 15/88 The bisection method
We see that f(x_2) = f(3/2) = 2 + 3/2 - e^{3/2} < 0.
Since f(x_0) > 0 and f(x_2) < 0, we know that x_0 < x* < x_2.
Therefore we define
x_3 = (x_0 + x_2)/2 = 3/4.
Since f(x_3) > 0, we know that x_3 < x* < x_2 (see Figure 3).
This can be continued until |f(x_n)| is sufficiently small.

16 Lectures INF2320 p. 16/88 Figure 3: The graph of f(x) = 2 + x - e^x and two values of f: f(x_2) and f(x_3), with x_3 = 0.75 and x_2 = 1.5.

17 Lectures INF2320 p. 17/88 The bisection method
Written in algorithmic form the Bisection method reads:
Algorithm 1. Given a, b such that f(a)·f(b) < 0 and given a tolerance ε.
Define c = (a + b)/2.
while |f(c)| > ε do
  if f(a)·f(c) < 0 then
    b = c
  else
    a = c
  c := (a + b)/2
end

18 Lectures INF2320 p. 18/88 Example 11
Find the zeros of f(x) = 2 + x - e^x using Algorithm 1, and choose a = 0, b = 3 and ε = 10^{-6}.
In Table 1 we show the number of iterations i, c and f(c).
The number of iterations, i, refers to the number of times we pass through the while-loop of the algorithm.

19 Lectures INF2320 p. 19/88 Table 1: Solving the nonlinear equation f(x) = 2 + x - e^x = 0 by using the bisection method; the columns show the number of iterations i, c and f(c).

20 Lectures INF2320 p. 20/88 Example 11
We see that we get sufficient accuracy after 21 iterations.
The next slide shows the C program that is used to solve this problem.
The entire computation takes only a fraction of a second on a Pentium III 1 GHz processor.
Even though this is quite fast, we need even faster algorithms in actual computations.
In practical applications you might need to solve billions of nonlinear equations, and then every microsecond counts.
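
A rough way to see why about 21 iterations are needed (a back-of-the-envelope estimate, not taken from the lecture): each pass through the while-loop halves the interval known to contain x*, so after i passes its length is 3/2^i, and 3/2^21 ≈ 1.4·10^{-6}. Since |f'| is of moderate size near x*, the residual |f(c)| falls below the tolerance 10^{-6} after roughly this many halvings.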

21 Lectures INF2320 p. 21/88

#include <stdio.h>
#include <math.h>

/* the function whose zero we seek */
double f (double x) { return 2.+x-exp(x); }

int main (int nargs, char** args)
{
  double epsilon = 1.0e-6;
  double a, b, c, fa, fc;
  a = 0.; b = 3.;
  fa = f(a);
  c = 0.5*(a+b);
  /* bisection: keep the half of the interval where f changes sign */
  while (fabs(fc = f(c)) > epsilon) {   /* fabs is provided by <math.h> */
    if ((fa*fc) < 0) {
      b = c;
    } else {
      a = c;
      fa = fc;
    }
    c = 0.5*(a+b);
  }
  printf("final c=%g, f(c)=%g\n", c, fc);
  return 0;
}

22 Lectures INF2320 p. 22/88 Newton's method
Recall that we have assumed that we have a good initial guess x_0 close to x* (where f(x*) = 0).
We will also assume that we have a small region around x* where f has only one zero, and that f'(x) ≠ 0 there.
Taylor series expansion around x = x_0 yields
f(x_0 + h) = f(x_0) + h f'(x_0) + O(h^2). (25)
Thus, for small h we have
f(x_0 + h) ≈ f(x_0) + h f'(x_0). (26)

23 Lectures INF2320 p. 23/88 Newton's method
We want to choose the step h such that f(x_0 + h) ≈ 0.
By (26) this can be done by choosing h such that
f(x_0) + h f'(x_0) = 0.
Solving this gives
h = -f(x_0)/f'(x_0).
We therefore define
x_1 := x_0 + h = x_0 - f(x_0)/f'(x_0). (27)

24 Lectures INF2320 p. 24/88 Newton's method
We test this on the example studied above with f(x) = 2 + x - e^x and x_0 = 3.
We have that
f'(x) = 1 - e^x.
Therefore
x_1 = x_0 - f(x_0)/f'(x_0) = 3 - (5 - e^3)/(1 - e^3) ≈ 2.2096.
We see that f(x_0) = f(3) ≈ -15.09 and f(x_1) = f(2.2096) ≈ -4.902, i.e., the value of f is significantly reduced.

25 Lectures INF2320 p. 25/88 Newton's method
We can now repeat the above procedure and define
x_2 := x_1 - f(x_1)/f'(x_1), (28)
and in algorithmic form Newton's method reads:
Algorithm 2. Given an initial approximation x_0 and a tolerance ε.
k = 0
while |f(x_k)| > ε do
  x_{k+1} = x_k - f(x_k)/f'(x_k)
  k = k + 1
end
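
As an illustration, a minimal C sketch of Algorithm 2 applied to f(x) = 2 + x - e^x is given below, with the starting guess x_0 = 3 and the tolerance 10^{-6} used in the example; the derivative f'(x) = 1 - e^x is coded by hand, and the iteration cap of 50 is an extra safeguard not present in the algorithm.

#include <stdio.h>
#include <math.h>

double f (double x)  { return 2.0 + x - exp(x); }
double df (double x) { return 1.0 - exp(x); }   /* f'(x) */

int main (void)
{
  double epsilon = 1.0e-6;
  double x = 3.0;                /* initial approximation x_0 */
  int k = 0;
  while (fabs(f(x)) > epsilon && k < 50) {
    x = x - f(x)/df(x);          /* Newton update, Algorithm 2 */
    k = k + 1;
  }
  printf("Newton: x=%g, f(x)=%g after %d iterations\n", x, f(x), k);
  return 0;
}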

26 Lectures INF2320 p. 26/88 Newton's method
In Table 2 we show the results generated by Newton's method on the above example.
Table 2: Solving the nonlinear equation f(x) = 2 + x - e^x = 0 by using Algorithm 2 and ε = 10^{-6}; the columns show the number of iterations k, x_k and f(x_k).

27 Lectures INF2320 p. 27/88 Newton's method
We observe that the convergence is much faster for Newton's method than for the Bisection method.
Generally, Newton's method converges faster than the Bisection method.
This will be studied in more detail in Project 1.

28 Lectures INF2320 p. 28/88 Example 12
Let f(x) = x^2 - 2, and find x* such that f(x*) = 0. Note that one of the exact solutions is x* = √2.
Newton's method for this problem reads
x_{k+1} = x_k - (x_k^2 - 2)/(2x_k)
or
x_{k+1} = (x_k^2 + 2)/(2x_k).

29 Lectures INF2320 p. 29/88 Example 12
If we choose x_0 = 1, we get x_1 = 1.5, x_2 ≈ 1.41667, x_3 ≈ 1.414216.
Comparing this with the exact value x* = 1.414214..., we see that a very accurate approximation is obtained in only 3 iterations.

30 Lectures INF2320 p. 30/88 An alternative derivation
The Taylor series expansion of f around x_0 is given by
f(x) = f(x_0) + (x - x_0) f'(x_0) + O((x - x_0)^2).
Let F_0(x) be a linear approximation of f around x_0:
F_0(x) = f(x_0) + (x - x_0) f'(x_0).
F_0(x) approximates f around x_0 since F_0(x_0) = f(x_0) and F_0'(x_0) = f'(x_0).
We now define x_1 to be such that F_0(x_1) = 0, i.e.
f(x_0) + (x_1 - x_0) f'(x_0) = 0.

31 Lectures INF2320 p. 31/88 An alternative derivation
Then we get
x_1 = x_0 - f(x_0)/f'(x_0),
which is identical to the iteration obtained above.
We repeat this process, and define a linear approximation of f around x_1:
F_1(x) = f(x_1) + (x - x_1) f'(x_1).
x_2 is defined such that F_1(x_2) = 0, i.e.
x_2 = x_1 - f(x_1)/f'(x_1).

32 Lectures INF2320 p. 32/88 An alternative derivation
Generally we get
x_{k+1} = x_k - f(x_k)/f'(x_k).
This process is illustrated in Figure 4.

33 Lectures INF2320 p. 33/88 Figure 4: Graphical illustration of Newton's method, showing the graph of f(x) and the successive iterates x_0, x_1, x_2, x_3.

34 Lectures INF2320 p. 34/88 The Secant method
The Secant method is similar to Newton's method, but the linear approximation of f is defined differently.
Now we assume that we have two values x_0 and x_1 close to x*, and define the linear function F_0(x) such that F_0(x_0) = f(x_0) and F_0(x_1) = f(x_1).
The function F_0(x) is therefore given by
F_0(x) = f(x_1) + ((f(x_1) - f(x_0))/(x_1 - x_0)) (x - x_1).
F_0(x) is called the linear interpolant of f.

35 Lectures INF2320 p. 35/88 The Secant method
Since F_0(x) ≈ f(x), we can compute a new approximation x_2 of x* by solving the linear equation F_0(x_2) = 0.
This means that we must solve
f(x_1) + ((f(x_1) - f(x_0))/(x_1 - x_0)) (x_2 - x_1) = 0
with respect to x_2 (see Figure 5). This gives
x_2 = x_1 - f(x_1)(x_1 - x_0)/(f(x_1) - f(x_0)).

36 Figure 5: The figure shows a function f = f(x) and its linear interpolant F between x_0 and x_1. Lectures INF2320 p. 36/88

37 Lectures INF2320 p. 37/88 The Secant method
Following the same procedure as above we get the iteration
x_{k+1} = x_k - f(x_k)(x_k - x_{k-1})/(f(x_k) - f(x_{k-1})),
and the associated algorithm reads
Algorithm 3. Given two initial approximations x_0 and x_1 and a tolerance ε.
k = 1
while |f(x_k)| > ε do
  x_{k+1} = x_k - f(x_k)(x_k - x_{k-1})/(f(x_k) - f(x_{k-1}))
  k = k + 1
end
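
A minimal C sketch of Algorithm 3 for the same test function f(x) = 2 + x - e^x is shown below; the starting values x_0 = 0 and x_1 = 3 and the tolerance 10^{-6} are the ones used in Example 13, and the iteration cap of 50 is an extra safeguard.

#include <stdio.h>
#include <math.h>

double f (double x) { return 2.0 + x - exp(x); }

int main (void)
{
  double epsilon = 1.0e-6;
  double x_prev = 0.0, x = 3.0;   /* x_0 and x_1 */
  int k = 1;
  while (fabs(f(x)) > epsilon && k < 50) {
    /* Secant update, Algorithm 3 */
    double x_next = x - f(x)*(x - x_prev)/(f(x) - f(x_prev));
    x_prev = x;
    x = x_next;
    k = k + 1;
  }
  printf("Secant: x=%g, f(x)=%g after %d iterations\n", x, f(x), k);
  return 0;
}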

38 Lectures INF2320 p. 38/88 Example 13
Let us apply the Secant method to the equation f(x) = 2 + x - e^x = 0 studied above. The two initial values are x_0 = 0 and x_1 = 3, and the stopping criterion is specified by ε = 10^{-6}.
Table 3 shows the number of iterations k, x_k and f(x_k) as computed by Algorithm 3.
Note that the convergence for the Secant method is slower than for Newton's method, but faster than for the Bisection method.

39 Lectures INF2320 p. 39/88 Table 3: The Secant method applied with f(x) = 2 + x - e^x = 0; the columns show the number of iterations k, x_k and f(x_k).

40 Example 14
Find a zero of
f(x) = x^2 - 2,
which has a solution x* = √2.
The general step of the Secant method is in this case
x_{k+1} = x_k - f(x_k)(x_k - x_{k-1})/(f(x_k) - f(x_{k-1}))
        = x_k - (x_k^2 - 2)(x_k - x_{k-1})/(x_k^2 - x_{k-1}^2)
        = x_k - (x_k^2 - 2)/(x_k + x_{k-1})
        = (x_k x_{k-1} + 2)/(x_k + x_{k-1}).
Lectures INF2320 p. 40/88

41 Lectures INF2320 p. 41/88 Example 14
By choosing x_0 = 1 and x_1 = 2 we get
x_2 ≈ 1.3333, x_3 = 1.4000, x_4 ≈ 1.4146.
This is quite good compared to the exact value x* = 1.414214...
Recall that Newton's method produced the approximation 1.414216 in three iterations, which is slightly more accurate.

42 Fixed-Point iterations
Above we studied implicit schemes for the differential equation u' = g(u), which lead to the nonlinear equation
u_{n+1} - Δt g(u_{n+1}) = u_n,
where u_n is known, u_{n+1} is unknown and Δt > 0 is small.
We defined v = u_{n+1} and c = u_n, and wrote the equation
v - Δt g(v) = c.
We can rewrite this equation in the form
v = h(v), (29)
where
h(v) = c + Δt g(v).
Lectures INF2320 p. 42/88

43 Lectures INF2320 p. 43/88 Fixed-Point iterations
The exact solution, v*, must fulfill
v* = h(v*).
This fact motivates the Fixed-Point iteration
v_{k+1} = h(v_k),
with an initial guess v_0.
Since h leaves v* unchanged, h(v*) = v*, the value v* is referred to as a fixed point of h.

44 Lectures INF2320 p. 44/88 Fixed-Point iterations
We try this method on the equation
x = sin(x/10),
which has only one solution, x* = 0 (see Figure 6).
The iteration is
x_{k+1} = sin(x_k/10). (30)
Choosing x_0 = 1.0, we get the following results:
x_1 ≈ 0.099833, x_2 ≈ 0.0099832, x_3 ≈ 0.00099832,
which seems to converge fast towards x* = 0.
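
A minimal C sketch of the Fixed-Point iteration (30) is given below, starting from the same guess x_0 = 1.0; the stopping test on |x_{k+1} - x_k| and the tolerance 10^{-10} are illustrative choices, not taken from the lecture.

#include <stdio.h>
#include <math.h>

double h (double x) { return sin(x/10.0); }   /* right-hand side of x = h(x) */

int main (void)
{
  double x = 1.0;                  /* initial guess x_0 */
  int k;
  for (k = 0; k < 100; k++) {
    double x_new = h(x);           /* fixed-point update x_{k+1} = h(x_k) */
    if (fabs(x_new - x) < 1.0e-10) {
      x = x_new;
      break;
    }
    x = x_new;
  }
  printf("Fixed-Point iteration: x=%g after %d iterations\n", x, k + 1);
  return 0;
}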

45 Lectures INF2320 p. 45/88 Figure 6: The graphs of y = x and y = sin(x/10) for x between -10π and 10π.

46 Lectures INF2320 p. 46/88 Fixed-Point iterations
We now try to understand the behavior of the iteration. From calculus we recall that for small x we have
sin(x/10) ≈ x/10.
Using this fact in (30), we get
x_{k+1} ≈ x_k/10,
and therefore
x_k ≈ (1/10)^k.
We see that this iteration converges towards zero.

47 Lectures INF2320 p. 47/88 Convergence of Fixed-Point iterations
We have seen that h(v) = v can be solved with the Fixed-Point iteration v_{k+1} = h(v_k).
We now analyze under what conditions the values {v_k} generated by the Fixed-Point iteration converge towards a solution v* of the equation.
Definition: h = h(v) is called a contractive mapping on a closed interval I if
(i) |h(v) - h(w)| ≤ δ|v - w| for any v, w ∈ I, where 0 < δ < 1, and
(ii) v ∈ I ⇒ h(v) ∈ I.

48 Lectures INF2320 p. 48/88 Convergence of Fixed-Point iterations
The Mean Value Theorem of Calculus states that if f is a differentiable function defined on an interval [a, b], then there is a c ∈ [a, b] such that
f(b) - f(a) = f'(c)(b - a).
It follows from this theorem that h is a contractive mapping on an interval I if
h(v) ∈ I for all v ∈ I
and
|h'(ξ)| ≤ δ < 1 for all ξ ∈ I. (31)

49 Lectures INF2320 p. 49/88 Convergence of Fixed-Point iterations
Let us check the above example x = sin(x/10).
We see that h(x) = sin(x/10) is contractive on I = [-1, 1] since
|h'(x)| = |(1/10) cos(x/10)| ≤ 1/10
and
x ∈ [-1, 1] ⇒ sin(x/10) ∈ [-1, 1].

50 Lectures INF2320 p. 50/88 Convergence of Fixed-Point iterations
For a contractive mapping h, we assume that for any v, w in a closed interval I we have
|h(v) - h(w)| ≤ δ|v - w|, where 0 < δ < 1,
and
v ∈ I ⇒ h(v) ∈ I.
The error, e_k = |v_k - v*|, fulfills
e_{k+1} = |v_{k+1} - v*| = |h(v_k) - h(v*)| ≤ δ|v_k - v*| = δ e_k.

51 Lectures INF2320 p. 51/88 Convergence of Fixed-Point iterations
It now follows by induction on k that
e_k ≤ δ^k e_0.
Since 0 < δ < 1, we know that e_k → 0 as k → ∞. This means that we have convergence:
lim_{k→∞} v_k = v*.
We can now conclude that the Fixed-Point iteration will converge when h is a contractive mapping.

52 Speed of convergence
We have seen that the Fixed-Point iterations fulfill
e_k ≤ e_0 δ^k.
Assume we want to solve the equation to the accuracy
e_k ≤ e_0 ε.
We then need δ^k ≤ ε, which gives
k ln(δ) ≤ ln(ε).
Therefore the number of iterations needs to satisfy
k ≥ ln(ε)/ln(δ).
Lectures INF2320 p. 52/88
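
As a concrete illustration (the numbers are chosen here and are not from the lecture): for the iteration x_{k+1} = sin(x_k/10) studied above we can take δ = 1/10, so reducing the initial error by a factor ε = 10^{-6} requires
k ≥ ln(10^{-6})/ln(1/10) = 6
iterations, which matches the rapid decay observed in that example.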

53 Lectures INF2320 p. 53/88 Existence and Uniqueness of a Solution
For equations of the form v = h(v), we want to answer the following questions:
a) Does there exist a value v* such that v* = h(v*)?
b) If so, is v* unique?
c) How can we compute v*?
We assume that h is a contractive mapping on a closed interval I, i.e.
|h(v) - h(w)| ≤ δ|v - w| for all v, w ∈ I, where 0 < δ < 1, (32)
and
v ∈ I ⇒ h(v) ∈ I. (33)

54 Lectures INF2320 p. 54/88 Uniqueness
Assume that we have two solutions v* and w* of the problem, i.e.
v* = h(v*) and w* = h(w*). (34)
From the assumption (32) we have
|h(v*) - h(w*)| ≤ δ|v* - w*|,
where δ < 1. But (34) gives
|v* - w*| ≤ δ|v* - w*|,
which can only hold when v* = w*, and consequently the solution is unique.

55 Lectures INF2320 p. 55/88 Existence
We have seen that if h is a contractive mapping, the equation
h(v) = v (35)
can only have one solution.
If we now can show that there exists a solution of (35), we have answered (a), (b) and (c) above.
Below we show that assumptions (32) and (33) imply existence.

56 Lectures INF2320 p. 56/88 Cauchy sequences
First we recall the definition of Cauchy sequences. A sequence of real numbers, {v_k}, is called a Cauchy sequence if, for any ε > 0, there is an integer M such that for any m, n ≥ M we have
|v_m - v_n| < ε. (36)
Theorem: A sequence {v_k} converges if and only if it is a Cauchy sequence.
Below we shall show that the sequence {v_k} produced by the Fixed-Point iteration is a Cauchy sequence when assumptions (32) and (33) hold.

57 Lectures INF2320 p. 57/88 Existence
Since v_{n+1} = h(v_n), we have
|v_{n+1} - v_n| = |h(v_n) - h(v_{n-1})| ≤ δ|v_n - v_{n-1}|.
By induction, we have
|v_{n+1} - v_n| ≤ δ^n |v_1 - v_0|.
In order to show that {v_n} is a Cauchy sequence, we need to bound |v_m - v_n|.
We may assume that m > n, and we see that
v_m - v_n = (v_m - v_{m-1}) + (v_{m-1} - v_{m-2}) + ... + (v_{n+1} - v_n).

58 Existence
By the triangle inequality, we have
|v_m - v_n| ≤ |v_m - v_{m-1}| + |v_{m-1} - v_{m-2}| + ... + |v_{n+1} - v_n|. (37)
The bound above gives
|v_m - v_{m-1}| ≤ δ^{m-1} |v_1 - v_0|,
|v_{m-1} - v_{m-2}| ≤ δ^{m-2} |v_1 - v_0|,
...
|v_{n+1} - v_n| ≤ δ^n |v_1 - v_0|,
and consequently
|v_m - v_n| ≤ |v_m - v_{m-1}| + |v_{m-1} - v_{m-2}| + ... + |v_{n+1} - v_n| ≤ (δ^{m-1} + δ^{m-2} + ... + δ^n)|v_1 - v_0|.
Lectures INF2320 p. 58/88

59 Lectures INF2320 p. 59/88 Existence
We can now estimate the power series
δ^{m-1} + δ^{m-2} + ... + δ^n = δ^{n-1}(δ^{m-n} + ... + δ^2 + δ) ≤ δ^{n-1} Σ_{k=1}^∞ δ^k = δ^{n-1} δ/(1 - δ) = δ^n/(1 - δ).
So
|v_m - v_n| ≤ (δ^n/(1 - δ)) |v_1 - v_0|.
Here δ^n/(1 - δ) can be made as small as you like, if you choose n big enough.

60 Lectures INF2320 p. 60/88 Existence
This means that for any ε > 0, we can find an integer M such that |v_m - v_n| < ε provided that m, n ≥ M, and consequently {v_k} is a Cauchy sequence.
The sequence is therefore convergent, and we call the limit v*. Since
v* = lim_{k→∞} v_{k+1} = lim_{k→∞} h(v_k) = h(v*)
by continuity of h, we have that the limit satisfies the equation v = h(v).

61 Lectures INF2320 p. 61/88 Systems of nonlinear equations
We start our study of systems of nonlinear equations by considering a linear system that arises from the discretization of a linear 2×2 system of ordinary differential equations,
u'(t) = -v(t), u(0) = u_0,
v'(t) = u(t), v(0) = v_0. (37)
An implicit Euler scheme for this system reads
(u_{n+1} - u_n)/Δt = -v_{n+1},
(v_{n+1} - v_n)/Δt = u_{n+1}, (38)
and can be rewritten in the form
u_{n+1} + Δt v_{n+1} = u_n,
-Δt u_{n+1} + v_{n+1} = v_n. (39)

62 Lectures INF2320 p. 62/88 Systems of linear equations
We can write this system in the form
A w_{n+1} = w_n, (40)
where
A = |  1    Δt |
    | -Δt    1 |
and
w_n = (u_n, v_n)^T. (41)
In order to compute w_{n+1} = (u_{n+1}, v_{n+1})^T from w_n = (u_n, v_n)^T, we have to solve the linear system (40). The system has a unique solution since
det(A) = 1 + Δt^2 > 0. (42)

63 Lectures INF2320 p. 63/88 Systems of linear equations
And the solution is given by w_{n+1} = A^{-1} w_n, where
A^{-1} = (1/(1 + Δt^2)) |  1   -Δt |
                        |  Δt    1 |. (43)
Therefore we get
( u_{n+1} )                    |  1   -Δt | ( u_n )
( v_{n+1} ) = (1/(1 + Δt^2))   |  Δt    1 | ( v_n ) (44)
            = (1/(1 + Δt^2)) ( u_n - Δt v_n, Δt u_n + v_n )^T. (45)

64 Lectures INF2320 p. 64/88 Systems of linear equations
We write this as
u_{n+1} = (u_n - Δt v_n)/(1 + Δt^2),
v_{n+1} = (v_n + Δt u_n)/(1 + Δt^2). (46)
By choosing u_0 = 1 and v_0 = 0, we have the analytical solutions
u(t) = cos(t), v(t) = sin(t). (47)
In Figure 7 we have plotted (u, v) and (u_n, v_n) for 0 ≤ t ≤ 2π, Δt = π/500. We see that the scheme provides good approximations.
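
A minimal C sketch of the update (46) is given below, using the parameters mentioned in the text (u_0 = 1, v_0 = 0, Δt = π/500); the choice of 1000 steps simply covers the interval 0 ≤ t ≤ 2π.

#include <stdio.h>
#include <math.h>

int main (void)
{
  double pi = 4.0*atan(1.0);
  double dt = pi/500.0;                /* time step */
  double u = 1.0, v = 0.0;             /* initial values u_0, v_0 */
  int n;
  for (n = 0; n < 1000; n++) {         /* 1000 steps of size pi/500 cover [0, 2*pi] */
    double u_new = (u - dt*v)/(1.0 + dt*dt);   /* update (46) */
    double v_new = (v + dt*u)/(1.0 + dt*dt);
    u = u_new;
    v = v_new;
  }
  /* compare with the analytical solution u = cos(t), v = sin(t) at t = 2*pi */
  printf("u=%g (exact %g), v=%g (exact %g)\n", u, cos(2.0*pi), v, sin(2.0*pi));
  return 0;
}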

65 Figure 7: The analytical solution (u = cos(t), v = sin(t)) and the numerical solution (u_n, v_n), in dashed lines, produced by the implicit Euler scheme. Lectures INF2320 p. 65/88

66 Lectures INF2320 p. 66/88 A nonlinear system
Now we study a nonlinear system of ordinary differential equations,
u' = -v^3, u(0) = u_0,
v' = u^3, v(0) = v_0. (48)
An implicit Euler scheme for this system reads
(u_{n+1} - u_n)/Δt = -v_{n+1}^3,
(v_{n+1} - v_n)/Δt = u_{n+1}^3, (49)
which can be rewritten in the form
u_{n+1} + Δt v_{n+1}^3 - u_n = 0,
v_{n+1} - Δt u_{n+1}^3 - v_n = 0. (50)

67 Lectures INF2320 p. 67/88 A nonlinear system
Observe that in order to compute (u_{n+1}, v_{n+1}) based on (u_n, v_n), we need to solve a nonlinear system of equations.
We would like to write the system in the generic form
f(x, y) = 0,
g(x, y) = 0. (51)
This is done by setting
f(x, y) = x + Δt y^3 - α,
g(x, y) = y - Δt x^3 - β, (52)
with α = u_n and β = v_n.

68 Lectures INF2320 p. 68/88 Newton's method
When deriving Newton's method for solving a scalar equation
p(x) = 0, (53)
we exploited the Taylor series expansion
p(x_0 + h) = p(x_0) + h p'(x_0) + O(h^2) (54)
to make a linear approximation of the function p, and solved the linear approximation of (53). This led to the iteration
x_{k+1} = x_k - p(x_k)/p'(x_k). (55)

69 Lectures INF2320 p. 69/88 Newton's method
We shall try to extend Newton's method to systems of equations of the form
f(x, y) = 0,
g(x, y) = 0. (56)
The Taylor series expansion of a smooth function of two variables, F(x, y), reads
F(x + Δx, y + Δy) = F(x, y) + Δx F_x(x, y) + Δy F_y(x, y) + O(Δx^2, Δx Δy, Δy^2). (57)

70 Lectures INF2320 p. 70/88 Newton's method
Using Taylor expansion on (56) we get
f(x_0 + Δx, y_0 + Δy) = f(x_0, y_0) + Δx f_x(x_0, y_0) + Δy f_y(x_0, y_0) + O(Δx^2, Δx Δy, Δy^2), (58)
and
g(x_0 + Δx, y_0 + Δy) = g(x_0, y_0) + Δx g_x(x_0, y_0) + Δy g_y(x_0, y_0) + O(Δx^2, Δx Δy, Δy^2). (59)

71 Lectures INF2320 p. 71/88 Newton's method
Since we want Δx and Δy to be such that
f(x_0 + Δx, y_0 + Δy) ≈ 0,
g(x_0 + Δx, y_0 + Δy) ≈ 0, (60)
we define Δx and Δy to be the solution of the linear system
f(x_0, y_0) + Δx f_x(x_0, y_0) + Δy f_y(x_0, y_0) = 0,
g(x_0, y_0) + Δx g_x(x_0, y_0) + Δy g_y(x_0, y_0) = 0. (61)
Remember here that x_0 and y_0 are known numbers, and therefore f(x_0, y_0), f_x(x_0, y_0) and f_y(x_0, y_0) are known numbers as well. Δx and Δy are the unknowns.

72 Newton's method
(61) can be written in the form
| f_x^0   f_y^0 | ( Δx )     ( f^0 )
| g_x^0   g_y^0 | ( Δy ) = - ( g^0 ), (62)
where f^0 = f(x_0, y_0), g^0 = g(x_0, y_0), f_x^0 = f_x(x_0, y_0), etc. If the matrix
A = | f_x^0   f_y^0 |
    | g_x^0   g_y^0 | (63)
is nonsingular, then
( Δx )       | f_x^0   f_y^0 |^{-1} ( f^0 )
( Δy ) =  -  | g_x^0   g_y^0 |      ( g^0 ). (64)
Lectures INF2320 p. 72/88
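
For reference, the inverse needed in (64), and later in (72), is given by the standard 2×2 formula (elementary linear algebra, not restated in the slides): if
A = | a   b |
    | c   d |
and det(A) = ad - bc ≠ 0, then
A^{-1} = (1/(ad - bc)) |  d   -b |
                       | -c    a |.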

73 Lectures INF2320 p. 73/88 Newton's method
We can now define
( x_1 )   ( x_0 )   ( Δx )   ( x_0 )     | f_x^0   f_y^0 |^{-1} ( f^0 )
( y_1 ) = ( y_0 ) + ( Δy ) = ( y_0 )  -  | g_x^0   g_y^0 |      ( g^0 ).
And by repeating this argument we get
( x_{k+1} )   ( x_k )     | f_x^k   f_y^k |^{-1} ( f^k )
( y_{k+1} ) = ( y_k )  -  | g_x^k   g_y^k |      ( g^k ), (65)
where f^k = f(x_k, y_k), g^k = g(x_k, y_k), f_x^k = f_x(x_k, y_k), etc.
The scheme (65) is Newton's method for the system (56).

74 Lectures INF2320 p. 74/88 A Nonlinear example
We test Newton's method on the system
e^x - e^y = 0,
ln(1 + x + y) = 0. (66)
The system has the analytical solution x = y = 0. Define
f(x, y) = e^x - e^y,
g(x, y) = ln(1 + x + y).
The iteration in Newton's method (65) reads
( x_{k+1} )   ( x_k )     |  e^{x_k}           -e^{y_k}        |^{-1} ( e^{x_k} - e^{y_k}  )
( y_{k+1} ) = ( y_k )  -  |  1/(1+x_k+y_k)      1/(1+x_k+y_k)  |      ( ln(1 + x_k + y_k) ). (67)

75 Lectures INF2320 p. 75/88 A Nonlinear example
The table below shows the computed results, k, x_k and y_k, when x_0 = y_0 = 1/2.
We observe that, as in the scalar case, Newton's method gives very rapid convergence towards the analytical solution x = y = 0.
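
A minimal C sketch of the iteration (67) is given below, inverting the 2×2 Jacobian with the explicit formula above; the starting point (1/2, 1/2) is the one used in the example, while the stopping tolerance 10^{-6} and the iteration cap are illustrative choices.

#include <stdio.h>
#include <math.h>

double f (double x, double y) { return exp(x) - exp(y); }
double g (double x, double y) { return log(1.0 + x + y); }

int main (void)
{
  double x = 0.5, y = 0.5;        /* starting point (x_0, y_0) = (1/2, 1/2) */
  double epsilon = 1.0e-6;        /* illustrative tolerance */
  int k = 0;
  while (fabs(f(x, y)) + fabs(g(x, y)) > epsilon && k < 20) {
    /* Jacobian entries of (66) */
    double fx = exp(x), fy = -exp(y);
    double gx = 1.0/(1.0 + x + y), gy = 1.0/(1.0 + x + y);
    double det = fx*gy - fy*gx;
    /* Newton update (65): (x,y) <- (x,y) - A^{-1}(f,g) */
    double dx = -( gy*f(x, y) - fy*g(x, y))/det;
    double dy = -(-gx*f(x, y) + fx*g(x, y))/det;
    x = x + dx;
    y = y + dy;
    k = k + 1;
  }
  printf("x=%g, y=%g after %d iterations\n", x, y, k);
  return 0;
}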

76 Lectures INF2320 p. 76/88 The Nonlinear System Revisited
We now go back to the nonlinear system of ordinary differential equations (48) presented above. For each time step we had to solve
f(x, y) = 0,
g(x, y) = 0, (68)
where
f(x, y) = x + Δt y^3 - α,
g(x, y) = y - Δt x^3 - β. (69)
We shall now solve this system using Newton's method.

77 Lectures INF2320 p. 77/88 The Nonlinear System Revisited
We put x_0 = α, y_0 = β and iterate as follows
( x_{k+1} )   ( x_k )     | f_x^k   f_y^k |^{-1} ( f^k )
( y_{k+1} ) = ( y_k )  -  | g_x^k   g_y^k |      ( g^k ), (70)
where
f^k = f(x_k, y_k), g^k = g(x_k, y_k),
f_x^k = f_x(x_k, y_k) = 1,
g_x^k = g_x(x_k, y_k) = -3Δt x_k^2,
f_y^k = f_y(x_k, y_k) = 3Δt y_k^2,
g_y^k = g_y(x_k, y_k) = 1.

78 Lectures INF2320 p. 78/88 The Nonlinear System Revisited
The matrix
A = | f_x^k   f_y^k |   =   |  1            3Δt y_k^2 |
    | g_x^k   g_y^k |       | -3Δt x_k^2    1         | (71)
has its determinant given by
det(A) = 1 + 9Δt^2 x_k^2 y_k^2 > 0.
So A^{-1} is well defined and is given by
A^{-1} = (1/(1 + 9Δt^2 x_k^2 y_k^2)) |  1            -3Δt y_k^2 |
                                     |  3Δt x_k^2     1         |. (72)
For each time-level we can e.g. iterate until
|f(x_k, y_k)| + |g(x_k, y_k)| < ε (73)
for a small tolerance ε.
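
A minimal C sketch of the full procedure is shown below: implicit Euler time stepping of (48), where each step solves (68)-(69) with the Newton iteration (70) and the explicit inverse (72). The values u_0 = 1, v_0 = 0 and Δt = 1/100 follow the text; the tolerance 10^{-6} in the stopping test (73) is an assumed value.

#include <stdio.h>
#include <math.h>

int main (void)
{
  double dt = 1.0/100.0;            /* time step */
  double eps = 1.0e-6;              /* tolerance in (73), assumed value */
  double u = 1.0, v = 0.0;          /* initial values u_0, v_0 */
  int n;
  for (n = 0; n < 100; n++) {       /* 100 steps of size 1/100 cover t in [0,1] */
    double alpha = u, beta = v;     /* alpha = u_n, beta = v_n */
    double x = alpha, y = beta;     /* Newton starting point x_0 = alpha, y_0 = beta */
    double f = x + dt*y*y*y - alpha;
    double g = y - dt*x*x*x - beta;
    while (fabs(f) + fabs(g) > eps) {
      double det = 1.0 + 9.0*dt*dt*x*x*y*y;
      /* Newton update (70) with the inverse (72) */
      double dx = -(f - 3.0*dt*y*y*g)/det;
      double dy = -(3.0*dt*x*x*f + g)/det;
      x = x + dx;
      y = y + dy;
      f = x + dt*y*y*y - alpha;
      g = y - dt*x*x*x - beta;
    }
    u = x;                          /* u_{n+1} */
    v = y;                          /* v_{n+1} */
  }
  printf("u(1)=%g, v(1)=%g\n", u, v);
  return 0;
}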

79 Lectures INF2320 p. 79/88 The Nonlinear System Revisited
We have tested this method with Δt = 1/100 and t ∈ [0, 1].
In Figure 8 the numerical solutions of u and v are plotted as functions of time, and in Figure 9 the numerical solution is plotted in the (u, v) coordinate system.
In Figure 10 we have plotted the number of Newton iterations needed to reach the stopping criterion (73) at each time-level.
Observe that we need no more than two iterations at all time-levels.

80 Lectures INF2320 p. 80/88 Figure 8: The numerical solutions u(t) and v(t) (in dashed line) of (48) produced by the implicit Euler scheme (49) using u_0 = 1, v_0 = 0 and Δt = 1/100.

81 Lectures INF2320 p. 81/88 Figure 9: The numerical solutions of (48) in the (u, v)-coordinate system, arising from the implicit Euler scheme (49) using u_0 = 1, v_0 = 0 and Δt = 1/100.

82 Lectures INF2320 p. 82/88 Figure 10: The graph shows the number of iterations used by Newton's method to solve the system (50) at each time-level.

