Math 2400 - Numerical Analysis Homework #2 Solutions



Math 2400 - Numerical Analysis Homework #2 Solutions

1. Implement a bisection root finding method. Your program should accept two points, a tolerance limit and a function for input. It should then output the final approximation and the number of iterations. Make sure that the program checks that the initial interval is acceptable for this method.

Here is my code with comments:

    function [y,iter]=bisec(f,a,b,tol)
    if feval(f,a)*feval(f,b)>0     % Check if we meet condition for bisection method
      error('Need f(a)f(b)<0');
    end
    if(abs(b-a)<=tol)              % Are we starting with a solution?
      error('Interval too small for given tolerance')
    end
    % Note: for the bisection method, we cannot get
    % stuck in an infinite loop, so we don't really
    % need a maximum number of iterations. It is OK if we
    % have one though.
    iter=1;                        % we count the iterations for comparison
    while(abs(b-a)>tol)
      c=(a+b)/2;                   % The midpoint
      % Which interval has the root?
      if (feval(f,a)*feval(f,c)<=0)
        b=c;
      else
        a=c;
      end
      iter=iter+1;
    end
    y=c;                           % the best guess

2. Implement a Newton's method root finding method. Your program should accept an initial guess, a tolerance (for the relative difference between successive approximations), a function and its derivative for input. It should then output the final approximation and the number of iterations. Make sure that your program can't be stuck in an infinite loop or divide by 0.

Here is my code:

    function [y,iter]=newton(f,fp,x,tol)
    max=100;                       % The maximum permitted number of iterations
    iter=1;                        % We start on the first iteration
    if (feval(fp,x)==0)
      % Check to see before we get a div by 0 error
      error('Derivative of function is 0')
    end
    x1=x-feval(f,x)/feval(fp,x);   % The first Newton step
    while(abs((x1-x)/x)>tol)
      x=x1;                        % update the guess
      if (feval(fp,x)==0)
        % Check to see before we get a div by 0 error
        error('Derivative of function is 0')
      end
      x1=x-feval(f,x)/feval(fp,x); % next iteration
      iter=iter+1;
      if (iter>=max)
        error('Maximum iterations exceeded')
      end
    end
    y=x1;                          % solution

3. Use the programs you have written to find roots for the following:

    Function                              Newton's Method guess   Bisection Method interval
    tan(x) - 2x                           1.4                     a = 1, b = 1.4
    65x^4 - 72x^3 - 1.5x^2 + 16.5x - 1    1                       a = 0, b = 1
    x^3 - 6x^2 + 12x - 8                  3                       a = 1, b = 3

Use 10^-4 as your tolerance for both methods. Discuss the appropriateness of the methods and initial guesses for the above problems.

I had to write the m-files to provide the functions. Here is a listing of the 6 m-files. The files f1.m, f2.m and f3.m are for the function values. The files f1p.m, f2p.m and f3p.m are for the derivatives.

    function y=f1(x)
    y=tan(x)-2*x;

    function y=f1p(x)
    y=sec(x)^2-2;

    function y=f2(x)
    y=65*x^4-72*x^3-1.5*x^2+16.5*x-1;

    function y=f2p(x)
    y=260*x^3-216*x^2-3*x+16.5;

    function y=f3(x)
    y=x^3-6*x^2+12*x-8;

    function y=f3p(x)
    y=3*x^2-12*x+12;

Here is the Matlab output for this question:

    octave:2> [y n]=newton('f1','f1p',1.4,.0001)
    y = 1.1656
    n = 6
    octave:3> [y n]=bisec('f1',1,1.4,.0001)
    y = 1.1655
    n = 13
    octave:4> [y n]=newton('f2','f2p',1,.0001)
    y = 0.061933
    n = 15
    octave:5> [y n]=bisec('f2',0,1,.0001)
    y = 0.061951
    n = 24
    octave:6> [y n]=newton('f3','f3p',3,.0001)
    y = 2.0003
    n = 20
    octave:7> [y n]=bisec('f3',1,3,.0001)
    y = 1.9999
    n = 16
    octave:8> quit

The results for the first function are more or less what we would expect: Newton's method converges much faster than the bisection method. For the second function the performance of the two methods seems comparable. If we take a look at the second function, we can see what is happening.

[Graph: f2(x) = 65x^4 - 72x^3 - 1.5x^2 + 16.5x - 1 on 0 <= x <= 1]

By guessing 1, we are getting stuck around the minimum located near 0.7. If we were to use any initial guess less than 0.3, we would get very rapid convergence. As well, if we were to decrease the tolerance, Newton's method would start to appear much better. For the third equation, Newton's method also appears to be worse than the bisection method. The actual root is at exactly 2, so not only did Newton's method take longer to converge, but the answer is less accurate. Again we look at a graph of the function to see what is happening.

[Graph: f3(x) = x^3 - 6x^2 + 12x - 8 for 1.6 <= x <= 2.4]

As you can see from the graph, the function is very flat near its root at x = 2. This means that the derivative will become very small as we approach the root. (In fact x = 2 is a root of multiplicity three, so Newton's method converges only linearly here, with the error shrinking by a factor of about 2/3 per step.) For this example we also note that the bisection method should do much better than my implementation. The very first guess with these values is the actual root. If I had considered the possibility of one of the guesses being the exact root, my program would have converged after one iteration. Since I didn't consider this possibility, I continue on for an additional 15 iterations and at some point some roundoff error enters the picture. Thus the 16th guess is not as good as the first.

4. In this question, we will find interpolating polynomials of degree at most 6 for the function f(x) = e^(-2x) using equally spaced points and using the Chebyshev roots. To solve this problem, I wrote code to find the divided difference table and plot the resulting polynomial. Since this is part of the current assignment I will not include the code, but the derivation of the table can follow the examples in the notes for lab2 and lab3. Here is a listing of the function I used to plot an interpolating polynomial given its divided difference table and the x values.

    function [xi,y]=divdifplot(x,F)
    len=length(x);
    xi=linspace(x(1),x(len),50)';
    y=zeros(50,1);
    for i=1:len
      p=ones(50,1);
      for j=1:(i-1)
        p=p.*(xi-x(j));   % build the product (x-x(1))...(x-x(i-1))
      end
      y=y+F(i,i).*p;      % add the i-th term of the Newton form
    end
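For reference, divdifplot evaluates the interpolating polynomial in Newton form, whose coefficients are the diagonal entries of the divided difference table (this is the standard formula, restated here rather than anything new):

```latex
P_n(x) \;=\; \sum_{i=1}^{n+1} F(i,i)\,\prod_{j=1}^{i-1}\bigl(x - x_j\bigr),
\qquad\text{where } F(i,i) = f[x_1,\dots,x_i].
```

The inner loop over j accumulates the product and the outer loop over i adds one term at a time.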

(a) Construct (use the computer or do it by hand) a polynomial of degree at most five, P5(x), interpolating f(x) at the points x_k = 0.4(k - 1), k = 1, ..., 7 (you should have equally spaced points between 0 and 2). Plot P5(x) with a solid line and plot the points f(x_k) with a + for 0 < x < 2.

I made a mistake here: the 0.4 should be 1/3, or else we should take only k = 1, ..., 6 for a polynomial of degree at most 5. The way the statement now reads, the last point will be 2.4. People can do it either way and I won't take off marks. Sorry for the confusion.

Here is the MATLAB session to generate the polynomial and the graph.

    octave:2> x=linspace(0,2,6)
    x =

       0.00000   0.40000   0.80000   1.20000   1.60000   2.00000

    octave:3> y=exp(-2.*x)
    y =

       1.000000   0.449329   0.201897   0.090718   0.040762   0.018316

    octave:4> F=divdif(x,y)
    F =

       1.00000         0         0         0         0         0
       0.44933  -1.37668         0         0         0         0
       0.20190  -0.61858   0.94762         0         0         0
       0.09072  -0.27795   0.42579  -0.43486         0         0
       0.04076  -0.12489   0.19132  -0.19539   0.14966         0
       0.01832  -0.05612   0.08597  -0.08780   0.06725  -0.04121

    octave:5> [x1,y1]=divdifplot(x,F);
    octave:6> plot(x1,y1)
    octave:7> hold on
    octave:8> plot(x,y,'+')
    octave:9> print("hw3q4a.eps")

    octave:10> diary off

Here is the graph from that session.

[Graph: P5(x) (solid line) with the interpolation points f(x_k) marked by +, for 0 <= x <= 2]

(b) Use a linear transformation to move the zeros of the degree 6 Chebyshev polynomial to the interval [0, 2].

We can plug into the formula for the translated Chebyshev roots

    x_i = (b + a)/2 + ((b - a)/2) cos((2i - 1)pi/(2n))

to find the interpolation points. As this goes with the next question, I will find these points and the interpolating polynomial in the MATLAB session posted after the next question.

(c) Construct a polynomial of degree at most six, P6(x), interpolating f(x) at the points found in 4b. Plot P6(x) with a solid line and plot the points f(x_k) with a + on the interval 0 < x < 2.

Here is the MATLAB session I used to answer the previous two questions:

    octave:1> x=zeros(6,1)
    x =

       0
       0
       0
       0
       0
       0

    octave:2> for i=1:6
    > x(i)=1+cos((2*i-1)*pi/12);

    > end
    octave:3> x
    x =

       1.965926
       1.707107
       1.258819
       0.741181
       0.292893
       0.034074

    octave:4> y=exp(-2.*x)
    y =

       0.019607
       0.032902
       0.080650
       0.227101
       0.556668
       0.934122

    octave:5> F=divdif(x,y)
    F =

       0.01961         0         0         0         0         0
       0.03290  -0.05137         0         0         0         0
       0.08065  -0.10651   0.07798         0         0         0
       0.22710  -0.28292   0.18263  -0.08545         0         0
       0.55667  -0.73517   0.46820  -0.20193   0.06962         0
       0.93412  -1.45837   1.02276  -0.45280   0.14995  -0.04158

    octave:6> [xxx,yyy]=divdifplot(x,F);
    octave:7> plot(xxx,yyy)
    octave:8> hold on
    octave:9> plot(x,y,'x')
    octave:10> print("hw3q4c.eps")

Here is the graph resulting from the above MATLAB session:

[Graph: P6(x) (solid line) with the interpolation points marked by x, for 0 <= x <= 2]

(d) Plot f(x) - P5(x) and f(x) - P6(x) on the same graph for 0 < x < 2. Explain why one of the polynomials has a lower maximum error.

Here is the tail of the MATLAB session I used to generate the final graphs.

    octave:12> plot(x1,exp(-2.*x1)-y1,'b')
    octave:13> plot(xxx,exp(-2.*xxx)-yyy,'r')
    octave:14> hold on
    octave:15> plot(x1,exp(-2.*x1)-y1,'b')
    octave:16> print("hw3q4d.eps")

Here is the graph generated.

[Graph: the errors f(x) - P5(x) and f(x) - P6(x) for 0 <= x <= 2; the vertical scale runs from about 0.04 down to -0.12]

The error for P5 is graphed with the dotted curve. As you can see from the graph, both

polynomials provide a good approximation in the middle, but near the endpoints P6 gives a much better approximation. Actually, in the interior P5 is slightly better than P6, but using the Chebyshev roots as interpolation points spreads the error more evenly over the interval, so the global error bound we have for P6 is much lower.
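This last point can be made precise with the standard interpolation error formula. For n + 1 nodes x_1, ..., x_{n+1} in [a, b] and sufficiently smooth f,

```latex
f(x) - P_n(x) \;=\; \frac{f^{(n+1)}(\xi_x)}{(n+1)!}\,\prod_{i=1}^{n+1}(x - x_i),
\qquad
\min_{\{x_i\}}\;\max_{x\in[a,b]}\;\prod_{i=1}^{n+1}\lvert x - x_i\rvert
\;=\; 2\left(\frac{b-a}{4}\right)^{n+1},
```

where the minimum over node choices is attained exactly at the translated Chebyshev roots. For [a, b] = [0, 2] and n + 1 = 6 nodes this maximum is 2(1/2)^6 = 1/32, whereas the node polynomial for equally spaced points grows much larger near the ends of the interval, which is exactly the behavior seen in the error plot.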