Roots of Equations (Chapters 5 and 6)




Problem: given f(x) = 0, find x. In general, f(x) can be any function. For some forms of f(x), analytical solutions are available. For other functions, we have to design methods, or algorithms, to find either an exact or an approximate solution of f(x) = 0. We will focus on f(x) that has a single unknown variable, is nonlinear, and is continuous.

Outline:
Incremental search methods: bisection method, false position method
Open methods: Newton-Raphson method, secant method
Finding repeated roots

Analytical solutions:

Example 1: ax^2 + bx + c = 0, with x = (-b ± sqrt(b^2 - 4ac)) / (2a).

Example 2: a e^x - bx = 0. No analytical solution.

Straightforward approach: graphical techniques. The most straightforward method is to draw a picture of the function and find where it crosses the x-axis. Graphically find the root: plot f(x) for different values of x and find the intersection with the x-axis. Graphical methods can be used to provide estimates of roots, which can serve as starting guesses for other methods. Drawback: not precise; different people may read off different guesses.

A better approach is the incremental search method, a general method on which most computer-based methods are based.

1 Incremental search methods

Key idea: the function f(x) changes sign around a root.

Figure 1: Illustration of incremental search method

Note: f(x) may not change sign around a root (e.g., a repeated root). This case is dealt with later.

Assume that f(x) is real and continuous on the interval (x_l, x_u) and that f(x) has opposite signs at x_l and x_u, i.e.,

    f(x_l) f(x_u) < 0

Then there is at least one root between x_l and x_u.


Next, we can locate the root in a smaller and smaller interval until reasonable accuracy is achieved or the root is found. Based on this, the basic steps of incremental search methods are as follows:

(1) Locate an interval where the function changes sign.
(2) Divide the interval into a number of sub-intervals.
(3) Search the sub-intervals to locate the sign change more accurately.
(4) Repeat until the root (the location of the sign change) is found.

Errors in an iterative approach

Since obtaining the exact value of the root is almost impossible during the iterations, a termination criterion is needed. (When the error falls below a certain stopping threshold, terminate the iterations.)

Define the approximate percentage relative error as

    ɛ_a = |x_r^new - x_r^old| / |x_r^new| * 100%

where x_r^new is the estimated root in the present iteration and x_r^old is the estimated root from the previous iteration.

Define the true percentage relative error as

    ɛ_t = |x_t - x_r| / |x_t| * 100%

where x_t is the true solution of f(x) = 0, i.e., f(x_t) = 0. In general, ɛ_t < ɛ_a. That is, if ɛ_a is below the stopping threshold, then ɛ_t is definitely below it as well.

2 Bisection (or interval halving) method

The bisection method is an incremental search method in which the sub-interval for the next iteration is selected by dividing the current interval in half.

2.1 Bisection steps

(1) Select x_l and x_u such that the function changes sign, i.e., f(x_l) f(x_u) < 0.
(2) Estimate the root as the midpoint:

    x_r = (x_l + x_u) / 2

(3) Determine the next interval:
If f(x_l) f(x_r) < 0, the root lies in (x_l, x_r): set x_u = x_r and return to Step (2).
If f(x_u) f(x_r) < 0, the root lies in (x_r, x_u): set x_l = x_r and return to Step (2).

If f(x_u) f(x_r) = 0, the root has been found: set the solution x = x_r and terminate the computation.
(4) If ɛ_a < ɛ_threshold, accept x = x_r; otherwise, repeat the above process.

Figure 2: Bisection method

If ɛ_a falls below a certain threshold (e.g., 1%), stop the iteration. That is, the computation is stopped when the solution does not change much from one iteration to the next.

Example: find the root of f(x) = e^x - 3x = 0.

Solution:
1. x_l = 0, x_u = 1, f(0) = 1 > 0, f(1) = -0.2817 < 0, x_r = (x_l + x_u)/2 = 0.5, ɛ_a = 100%.
2. f(0.5) = 0.1487 > 0, x_l = x_r = 0.5, x_u = 1, x_r = (x_l + x_u)/2 = 0.75, ɛ_a = 33.33%.
3. f(0.75) = -0.1330 < 0, x_l = 0.5, x_u = x_r = 0.75, x_r = 0.625, ɛ_a = 20%.
4. f(0.625) = -0.0068 < 0, x_l = 0.5, x_u = x_r = 0.625, x_r = 0.5625, ɛ_a = 11.11%.
5. f(0.5625) = 0.0676 > 0, x_l = x_r = 0.5625, x_u = 0.625, x_r = 0.59375, ɛ_a = 5.3%.
6. f(0.59375) = 0.0295 > 0, x_l = x_r = 0.59375, x_u = 0.625, x_r = 0.609375, ɛ_a = 2.6%.
7. ...

x_r: 0.5, 0.75, 0.625, 0.5625, 0.5938, 0.6094, 0.6172, 0.6211, 0.6191, 0.6182, 0.6187, 0.6189, 0.6190, 0.6191
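The bisection steps above can be collected into a short routine. This is a minimal sketch: the function name `bisect`, the default threshold, and the iteration cap are my choices, not from the notes.

```python
import math

def bisect(f, xl, xu, eps_threshold=1e-4, max_iter=100):
    """Bisection: halve the bracketing interval until the approximate
    relative error eps_a falls below the threshold."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f must change sign on [xl, xu]")
    xr_old = xl
    for _ in range(max_iter):
        xr = (xl + xu) / 2                       # Step (2): midpoint estimate
        eps_a = abs(xr - xr_old) / abs(xr) if xr != 0 else float("inf")
        if f(xl) * f(xr) < 0:                    # Step (3): pick the half
            xu = xr                              # root lies in (xl, xr)
        else:
            xl = xr                              # root lies in (xr, xu)
        if eps_a < eps_threshold:                # Step (4): stopping test
            return xr
        xr_old = xr
    return xr

root = bisect(lambda x: math.exp(x) - 3 * x, 0.0, 1.0)
print(round(root, 3))  # 0.619
```

Run on f(x) = e^x - 3x with the bracket (0, 1), this reproduces the sequence of midpoints tabulated above.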

Figure 3: Example of using the bisection method, f(x) = e^x - 3x = 0, bracketed between 0 and 1

2.2 Evaluation of ɛ_a

Figure 4: Evaluation of ɛ_a

    ɛ_a = |x_r^new - x_r^old| / |x_r^new|,   x_r^new = (x_l^new + x_u^new) / 2

In a bisection step, the old midpoint x_r^old becomes one of the endpoints of the new interval (say x_l^new; the case x_r^old = x_u^new is symmetric). Therefore

    x_r^new - x_r^old = (x_l^new + x_u^new)/2 - x_l^new = (x_u^new - x_l^new) / 2

and

    ɛ_a = [ (x_u^new - x_l^new) / 2 ] / [ (x_u^new + x_l^new) / 2 ] * 100%

or

    ɛ_a = (x_u - x_l) / (x_u + x_l) * 100%

That is, the relative error of an iteration can be found from the bracket before the iteration is carried out.

2.3 Finding the number of iterations

Initially (iteration 0), the absolute error is

    E_a^0 = x_u^0 - x_l^0 = Δx^0

where the superscript indicates the iteration number. After the first iteration, the absolute error is halved:

    E_a^1 = E_a^0 / 2 = Δx^0 / 2

After n iterations,

    E_a^n = E_a^{n-1} / 2 = Δx^0 / 2^n

If the desired absolute error is E_{a,d}, then we require

    E_a^n ≤ E_{a,d}  ⇔  Δx^0 / 2^n ≤ E_{a,d}  ⇔  n ≥ log2( Δx^0 / E_{a,d} )

The minimum such n is

    n = ceil( log2( Δx^0 / E_{a,d} ) ) = ceil( log10( Δx^0 / E_{a,d} ) / log10 2 )
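A quick numerical check of this bound (the helper name `bisection_iterations` is mine):

```python
import math

def bisection_iterations(dx0, ead):
    """Minimum number of bisection iterations so that the initial
    interval width dx0, halved each step, drops below ead."""
    return math.ceil(math.log2(dx0 / ead))

# e.g. starting width 1 and desired absolute error 1e-4:
print(bisection_iterations(1.0, 1e-4))  # 14, since 2**14 = 16384 > 10**4
```

This is why bisection is predictable: the iteration count depends only on the initial bracket and the desired accuracy, not on f.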

Example: find the roots of f(x) = x^3 - 2x^2 + 0.25x + 0.75.

Solution: to find the exact roots of f(x), first factorize it as

    f(x) = x^3 - 2x^2 + 0.25x + 0.75 = (x - 1)(x^2 - x - 0.75) = (x - 1)(x - 1.5)(x + 0.5)

Thus x = 1, x = 1.5 and x = -0.5 are the exact roots of f(x).

With x_l = -1, x_u = 2, the result converges to x_r = -0.5.
With x_l = -2, x_u = 2, the result converges to x_r = -0.5.
With x_l = 1.2, x_u = 2.8, the result converges to x_r = 1.5.

[Figure: plot of f(x) = x^3 - 2x^2 + 0.25x + 0.75 showing the three roots]

3 Newton-Raphson method

3.1 Iterations

The Newton-Raphson method uses the slope (tangent) of the function f(x) at the current iterative solution x_i to find the solution x_{i+1} of the next iteration. The slope at (x_i, f(x_i)) satisfies

    f'(x_i) = ( f(x_i) - 0 ) / ( x_i - x_{i+1} )

Then x_{i+1} can be solved for as

    x_{i+1} = x_i - f(x_i) / f'(x_i)

which is known as the Newton-Raphson formula.

Relative error: ɛ_a = |x_{i+1} - x_i| / |x_{i+1}| * 100%. Iterations stop when ɛ_a ≤ ɛ_threshold.

Figure 5: Newton-Raphson method to find the roots of an equation
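The formula and stopping rule can be sketched in a few lines (a minimal sketch; the function name `newton_raphson` and the default threshold are my choices, not from the notes):

```python
import math

def newton_raphson(f, fprime, x0, eps_threshold=1e-6, max_iter=50):
    """Newton-Raphson iteration x_{i+1} = x_i - f(x_i)/f'(x_i),
    stopped when the approximate relative error eps_a is small enough."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) / abs(x_new) < eps_threshold:
            return x_new
        x = x_new
    return x

f = lambda x: math.exp(x) - 3 * x
fp = lambda x: math.exp(x) - 3
print(round(newton_raphson(f, fp, -1.0), 4))  # 0.6191
```

Note that, unlike bisection, the user must supply the derivative f'(x) analytically.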

Example: find the root of e^x - 3x = 0.

Solution:

    f(x) = e^x - 3x,   f'(x) = e^x - 3

With these, the Newton-Raphson solution is updated as

    x_{i+1} = x_i - ( e^{x_i} - 3x_i ) / ( e^{x_i} - 3 )

Starting from x_0 = -1, the iterates are: -1, 0.2795, 0.5680, 0.6172, 0.6191, 0.6191. This converges much faster than the bisection method.

[Figure: Newton-Raphson iterations for f(x) = e^x - 3x, starting from x_0 = -1]

3.2 Errors and termination condition

The approximate percentage relative error is

    ɛ_a = |x_{i+1} - x_i| / |x_{i+1}| * 100%

The Newton-Raphson iteration can be terminated when ɛ_a is less than a certain threshold (e.g., 1%).

3.3 Error analysis of the Newton-Raphson method using Taylor series

Let x_t be the true solution of f(x) = 0. That is,

    f(x_t) = 0

According to Taylor's theorem, we have

    f(x_t) = f(x_i) + f'(x_i)(x_t - x_i) + ( f''(α) / 2 )(x_t - x_i)^2

where α is an unknown value between x_t and x_i. Since f(x_t) = 0, the above equation becomes

    f(x_i) + f'(x_i)(x_t - x_i) + ( f''(α) / 2 )(x_t - x_i)^2 = 0    (1)

In the Newton-Raphson method, we use the iteration

    x_{i+1} = x_i - f(x_i) / f'(x_i)

that is,

    f(x_i) + f'(x_i)(x_{i+1} - x_i) = 0    (2)

Subtracting (2) from (1), we have

    f'(x_i)(x_t - x_{i+1}) + ( f''(α) / 2 )(x_t - x_i)^2 = 0

Denoting by e_i = x_t - x_i the error in the i-th iteration, we have

    f'(x_i) e_{i+1} + ( f''(α) / 2 ) e_i^2 = 0

With convergence, x_i → x_t and α → x_t. Then

    e_{i+1} = - [ f''(x_t) / ( 2 f'(x_t) ) ] e_i^2

and

    |e_{i+1}| = | f''(x_t) / ( 2 f'(x_t) ) | |e_i|^2

The error in the current iteration is proportional to the square of the previous error. That is, we have quadratic convergence with the Newton-Raphson method: roughly, the number of correct decimal places doubles after each iteration.

Drawbacks of the Newton-Raphson method (see Fig. 6.6 in [Chapra]):
It handles repeated roots poorly.

The solution may diverge near a point of inflection.
The solution may oscillate near local minima or maxima.
With a near-zero slope, the solution may shoot far away or reach a different root.

Example: find the roots of f(x) = x^3 - 2x^2 + 0.25x + 0.75.

Solution: as before, factorize

    f(x) = x^3 - 2x^2 + 0.25x + 0.75 = (x - 1)(x^2 - x - 0.75) = (x - 1)(x - 1.5)(x + 0.5)

so x = 1, x = 1.5 and x = -0.5 are the exact roots of f(x). To find the roots using the Newton-Raphson method, write the iteration formula as

    x_{i+1} = x_i - f(x_i)/f'(x_i) = x_i - ( x_i^3 - 2x_i^2 + 0.25x_i + 0.75 ) / ( 3x_i^2 - 4x_i + 0.25 )

x_0 = -1: x_1 = -0.655, x_2 = -0.5221, x_3 = -0.5005, x_4 = -0.5000, x_5 = -0.5000.
x_0 = 0: x_1 = -3, x_2 = -1.8535, x_3 = -1.1328, x_4 = -0.7211, x_5 = -0.5410, x_6 = -0.5018, x_7 = -0.5000, x_8 = -0.5000.

Figure 6: Drawbacks of using the Newton-Raphson method

x_0 = 0.5: x_1 = 1, x_2 = 1 (converges to the root x = 1).
x_0 = 1.4: x_1 = 1.5434, x_2 = 1.5040, x_3 = 1.5000, x_4 = 1.5000.
x_0 = 1.25: x_1 = -0.5000, x_2 = -0.5000 (the near-zero slope at x_0 sends the iterate to a distant root).
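The sensitivity to the starting point can be reproduced directly from the iteration formula. A sketch: `nr_cubic` is a hypothetical helper, and the fixed iteration count stands in for a proper stopping test.

```python
def nr_cubic(x, iters=50):
    """Newton-Raphson for f(x) = x^3 - 2x^2 + 0.25x + 0.75."""
    for _ in range(iters):
        x = x - (x**3 - 2 * x**2 + 0.25 * x + 0.75) / (3 * x**2 - 4 * x + 0.25)
    return x

# x0 = 1.25 sits where f'(x) = -0.0625 is nearly zero, so the tangent
# shoots the iterate across to the distant root x = -0.5.
for x0 in (-1.0, 0.0, 0.5, 1.4, 1.25):
    print(x0, "->", round(nr_cubic(x0), 4))
```

Different starting points converge to different roots, and not always to the nearest one.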

4 Secant method

To implement the Newton-Raphson method, f'(x) must be found analytically and evaluated numerically. In some cases, the analytical (or numerical) evaluation may not be feasible or desirable. The secant method instead estimates the derivative on the fly using two-point differencing.

Figure 7: Secant method to find the roots of an equation

As shown in the figure,

    f'(x_i) ≈ ( f(x_{i-1}) - f(x_i) ) / ( x_{i-1} - x_i )

Then the Newton-Raphson iteration can be modified as

    x_{i+1} = x_i - f(x_i) / f'(x_i) ≈ x_i - f(x_i)( x_i - x_{i-1} ) / ( f(x_i) - f(x_{i-1}) )

which is the secant iteration. The performance of the secant iteration is typically inferior to that of the Newton-Raphson method.

Example: f(x) = e^x - 3x = 0, find x.

Solution: with x_0 = -1.1, x_1 = -1:

    x_2 = x_1 - f(x_1)(x_1 - x_0) / ( f(x_1) - f(x_0) ) = 0.2709,  ɛ_a = |x_2 - x_1| / |x_2| * 100% = 469.09%
    x_3 = x_2 - f(x_2)(x_2 - x_1) / ( f(x_2) - f(x_1) ) = 0.4917,  ɛ_a = |x_3 - x_2| / |x_3| * 100% = 44.90%
    x_4 = x_3 - f(x_3)(x_3 - x_2) / ( f(x_3) - f(x_2) ) = 0.5961,  ɛ_a = |x_4 - x_3| / |x_4| * 100% = 17.51%
    x_5 = 0.6170,  ɛ_a = 3.4%
    x_6 = 0.6190,  ɛ_a = 0.3%
    x_7 = 0.6191,  ɛ_a = 5.93e-3 %
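The secant iteration can be sketched as follows (a minimal sketch; the function name and default threshold are mine). Note that it needs two starting values but no derivative.

```python
import math

def secant(f, x0, x1, eps_threshold=1e-6, max_iter=50):
    """Secant iteration: like Newton-Raphson, but the derivative is
    replaced by the two-point difference through the last two iterates."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) / abs(x2) < eps_threshold:
            return x2
        x0, x1 = x1, x2          # slide the two-point window forward
    return x2

f = lambda x: math.exp(x) - 3 * x
print(round(secant(f, -1.1, -1.0), 4))  # 0.6191
```

With x_0 = -1.1 and x_1 = -1 this reproduces the iterate sequence of the example above.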

5 False position method

Figure 8: False position method to find the roots of an equation

Idea: if f(x_l) is closer to zero than f(x_u), then the root is likely to be closer to x_l than to x_u. (NOT always true!)

False position steps:
(1) Find x_l, x_u with x_l < x_u and f(x_l) f(x_u) < 0.
(2) Estimate x_r using similar triangles:

    f(x_l) / ( x_r - x_l ) = f(x_u) / ( x_r - x_u )

Solving for x_r gives

    x_r = ( [ f(x_l) - f(x_u) ] x_u + f(x_u)( x_u - x_l ) ) / ( f(x_l) - f(x_u) )

or, equivalently,

    x_r = x_u - f(x_u)( x_u - x_l ) / ( f(x_u) - f(x_l) )

(3) Determine the next iteration interval:
If f(x_l) f(x_r) < 0, the root lies in (x_l, x_r): set x_u = x_r and return to Step (2).
If f(x_u) f(x_r) < 0, the root lies in (x_r, x_u): set x_l = x_r and return to Step (2).
If f(x_u) f(x_r) = 0, the root has been found: set the solution x = x_r and terminate the computation.
(4) If ɛ_a < ɛ_threshold, accept x = x_r; otherwise, return to Step (2).

The false position method is one of the incremental search methods. In general, it performs better than the bisection method; however, special cases exist where it does not (see Fig. 5.14 in the textbook).
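The steps above differ from bisection only in how the new estimate is chosen, which a sketch makes plain (names and stopping rule are mine):

```python
import math

def false_position(f, xl, xu, eps_threshold=1e-6, max_iter=100):
    """False position: like bisection, but the new estimate is where the
    straight line through (xl, f(xl)) and (xu, f(xu)) crosses the x-axis."""
    xr_old = xl
    for _ in range(max_iter):
        xr = xu - f(xu) * (xu - xl) / (f(xu) - f(xl))   # Step (2)
        if abs(xr - xr_old) / abs(xr) < eps_threshold:
            return xr
        if f(xl) * f(xr) < 0:                            # Step (3)
            xu = xr
        else:
            xl = xr
        xr_old = xr
    return xr

f = lambda x: math.exp(x) - 3 * x
print(round(false_position(f, 0.0, 1.0), 4))  # 0.6191
```

Unlike the secant method, the bracket f(x_l) f(x_u) < 0 is preserved at every step, so the iteration cannot run away.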

Figure 9: Comparison between the false position and secant methods


6 Handling repeated (multiple) roots

Examples of multiple roots:
f(x) = x^2 - 4x + 3 = (x - 1)(x - 3) has unrepeated roots x = 1, x = 3.
f(x) = (x - 1)^2 (x - 3) has 2 repeated roots at x = 1.
f(x) = (x - 1)^3 (x - 3) has 3 repeated roots at x = 1.
f(x) = (x - 1)^4 (x - 3) has 4 repeated roots at x = 1.

Observations:
The sign of the function does not change around a root of even multiplicity, so the bisection and false position methods do not work.
Both f(x) and f'(x) go to zero at a multiple root, so the Newton-Raphson and secant methods may converge slowly or even diverge.

6.1 Modified Newton-Raphson method

Fact: f(x) and u(x) = f(x) / f'(x) have the same roots, but u(x) has better convergence properties.

Idea: find the roots of u(x) instead of f(x).

Figure 10: Repeated roots: plots of (x - 3)(x - 1), (x - 3)(x - 1)^2, (x - 3)(x - 1)^3 and (x - 3)(x - 1)^4

The Newton-Raphson iteration for u(x) is

    x_{i+1} = x_i - u(x_i) / u'(x_i)

With

    u'(x) = ( [ f'(x) ]^2 - f(x) f''(x) ) / [ f'(x) ]^2

this becomes

    x_{i+1} = x_i - f(x_i) f'(x_i) / ( [ f'(x_i) ]^2 - f(x_i) f''(x_i) )

The modified Newton-Raphson method has quadratic convergence even for multiple roots, because x_t is an unrepeated root of u(x) = 0. To see this, suppose f(x) has n repeated roots at x_t, i.e.,

    f(x) = g(x) ( x - x_t )^n

where g(x_t) ≠ 0. Then

    f'(x) = g'(x)( x - x_t )^n + g(x) n ( x - x_t )^{n-1} = ( x - x_t )^{n-1} [ g'(x)( x - x_t ) + n g(x) ]

and

    u(x) = f(x) / f'(x)
         = g(x)( x - x_t )^n / ( ( x - x_t )^{n-1} [ g'(x)( x - x_t ) + n g(x) ] )
         = g(x)( x - x_t ) / ( g'(x)( x - x_t ) + n g(x) )

Therefore, x_t is an unrepeated root of u(x) = 0.
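The modified iteration can be sketched as follows (a minimal sketch; the function name is mine, and f, f', f'' must all be supplied analytically):

```python
def modified_newton(f, fp, fpp, x0, eps_threshold=1e-8, max_iter=100):
    """Modified Newton-Raphson: iterate on u(x) = f(x)/f'(x), whose roots
    are simple even when f has repeated roots, restoring quadratic convergence."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) * fp(x) / (fp(x) ** 2 - f(x) * fpp(x))
        if abs(x_new - x) <= eps_threshold * max(1.0, abs(x_new)):
            return x_new
        x = x_new
    return x

# f(x) = (x - 1)^2 (x - 3): a double root at x = 1, where plain
# Newton-Raphson would slow to linear convergence.
f   = lambda x: (x - 1) ** 2 * (x - 3)
fp  = lambda x: 2 * (x - 1) * (x - 3) + (x - 1) ** 2
fpp = lambda x: 2 * (x - 3) + 4 * (x - 1)
print(round(modified_newton(f, fp, fpp, 0.0), 6))  # 1.0
```

The price is the extra derivative f''(x), and possible cancellation in the denominator very close to the root.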