Nonlinear Algebraic Equations Example



Nonlinear Algebraic Equations. Lectures INF2320 p. 1/88


Nonlinear Algebraic Equations Example

Continuous Stirred Tank Reactor (CSTR). Look for steady-state concentrations and temperature.

In: N_s species with concentrations c_i^(in), heat capacities c_p,i^(in) and temperature T^(in).
Inside: N_r reactions with reaction constants r_α and stoichiometric coefficients σ_α,i.
Out: N_s species with concentrations c_i (some c_i may be equal to zero), heat capacities c_p,i and temperature T.

November 00

Nonlinear Algebraic Equations Example

Mass balance for species i = 1, 2, ..., N_s:

    (F/V)(c_i^(in) - c_i) + Σ_{α=1..N_r} σ_α,i r_α = 0

Energy balance:

    (F/V)(Σ_{i=1..N_s} c_i^(in) c_p,i^(in) T^(in) - Σ_{i=1..N_s} c_i c_p,i T) + Σ_{α=1..N_r} (-ΔH_α) r_α = 0

Column of unknown variables: x = [c_1, c_2, ..., c_{N_s}, T]^T

Nonlinear Algebraic Equations Example

Each of the above equations may be written in the general form f_i(x_1, x_2, ..., x_N) = 0:

    f_1(x_1, x_2, ..., x_N) = 0
    f_2(x_1, x_2, ..., x_N) = 0
    ...
    f_N(x_1, x_2, ..., x_N) = 0

In vector form: f(x) = 0, where f(x) = [f_1(x), f_2(x), ..., f_N(x)]^T.

Let x* be the solution satisfying f(x*) = 0. We do not know x* and take x^[0] as initial guess.

Nonlinear Algebraic Equations

We need to form a sequence of estimates to the solution, x^[1], x^[2], x^[3], ..., that will hopefully converge to x*. Thus we want:

    lim_{m→∞} x^[m] = x*,   i.e.   lim_{m→∞} ||x^[m] - x*|| = 0

Unlike with linear equations, we can't say much about existence or uniqueness of solutions, even for a single equation.

Single Nonlinear Equation

We assume f(x) is infinitely differentiable at the solution x*. Then f(x) may be Taylor expanded around x*:

    f(x) = f(x*) + (x - x*)f'(x*) + (1/2)(x - x*)^2 f''(x*) + (1/3!)(x - x*)^3 f'''(x*) + ...

Assume x^[0] is close to x* so that the series may be truncated:

    f(x^[0]) ≈ (x^[0] - x*) f'(x^[0]),   or   x* ≈ x^[0] - f(x^[0])/f'(x^[0])

    x^[1] = x^[0] - f(x^[0])/f'(x^[0])    -- Newton's method

Example: f(x) = (x - 3)(x - 2)(x - 1) = x^3 - 6x^2 + 11x - 6
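The update above can be sketched in a few lines; the tolerance, iteration cap, and starting point x0 = 3.4 below are illustrative choices, not from the slides:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeat x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# The cubic from the example and its derivative
f  = lambda x: x**3 - 6*x**2 + 11*x - 6      # (x-3)(x-2)(x-1)
fp = lambda x: 3*x**2 - 12*x + 11

root = newton(f, fp, x0=3.4)                 # starting near the root x = 3
```

Starting at 3.4, the iterates move monotonically toward the nearby root x = 3.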

Single Nonlinear Equation

[Two figure-only slides; graphics not transcribed]

Single Nonlinear Equation

For f(x) = (x - 3)(x - 2)(x - 1) = x^3 - 6x^2 + 11x - 6 = 0, we see that Newton's method converges to the root at x = 2 only if 1.6 < x^[0] < 2.4. We may look at the direction of the first step and see why: x^[1] - x^[0] = -f(x^[0])/f'(x^[0]).

So, we have easy roots, x = 1 and x = 3, and a more difficult one, x = 2. We may factorize out the roots we already know.

Single Nonlinear Equation

Introduce

    g(x) = f(x) / ((x - 3)(x - 1))

    g'(x) = f'(x)/((x - 3)(x - 1)) - f(x)/((x - 3)^2 (x - 1)) - f(x)/((x - 3)(x - 1)^2)

We now use Newton's method to find the roots of g(x):

    x^[i+1] = x^[i] - g(x^[i])/g'(x^[i])

and get x = 2.
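Assuming g(x) = f(x)/((x - 3)(x - 1)) as on the slide, g reduces to x - 2, so Newton on g reaches x = 2 even from a distant starting point. A minimal sketch (the start x = 10 is an illustrative choice):

```python
def g(x):
    """Deflated function: divide out the known roots x = 1 and x = 3."""
    return (x**3 - 6*x**2 + 11*x - 6) / ((x - 3.0) * (x - 1.0))

def gprime(x):
    """Quotient-rule derivative of g."""
    fx, fpx = x**3 - 6*x**2 + 11*x - 6, 3*x**2 - 12*x + 11
    return (fpx / ((x - 3.0) * (x - 1.0))
            - fx / ((x - 3.0)**2 * (x - 1.0))
            - fx / ((x - 3.0) * (x - 1.0)**2))

x = 10.0                          # far outside the narrow basin of x = 2
for _ in range(20):
    x -= g(x) / gprime(x)         # Newton on the deflated function
```

Since g is (analytically) linear with slope 1, a single Newton step lands on x = 2 up to rounding.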

Systems of Nonlinear Equations

Let's extend the method to multiple equations:

    f_1(x_1, x_2, ..., x_N) = 0
    f_2(x_1, x_2, ..., x_N) = 0     =>   f(x) = 0
    ...
    f_N(x_1, x_2, ..., x_N) = 0

Start from an initial guess x^[0]. As before, expand each equation at the solution x* with f(x*) = 0:

    f_i(x) = f_i(x*) + Σ_j (∂f_i(x)/∂x_j)(x_j - x*_j) + (1/2) Σ_j Σ_k (∂²f_i(x)/∂x_j ∂x_k)(x_j - x*_j)(x_k - x*_k) + ...

Assume x^[0] is close to x* and discard quadratic terms:

    f_i(x) ≈ Σ_j (∂f_i(x)/∂x_j)(x_j - x*_j)

Systems of Nonlinear Equations

Let's define the Jacobian matrix J(x) with the elements:

    J(x)_ij = ∂f_i(x)/∂x_j

Then our approximate expansion may be written as:

    f_i(x) ≈ Σ_j J_ij(x)(x_j - x*_j)

This gives us the linear system:

    f(x) ≈ J(x)(x - x*)

Systems of Nonlinear Equations

Replacing the unknown solution x* by the next iterate x^[i+1]:

    f(x^[i]) ≈ J(x^[i])(x^[i] - x^[i+1])

Note: the Jacobian is evaluated at the position of an old iteration, not at the unknown solution.

Defining Δx = x^[i+1] - x^[i], we rewrite the equation as

    J(x^[i]) Δx = -f(x^[i]),   or just   J Δx = -f

The iterations are continued until some convergence criteria are met:

    relative error:  ||f|| ≤ δ_rel ||f^[0]||
    absolute error:  ||f|| ≤ δ_abs
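The iteration "solve J Δx = -f, update, stop when ||f|| is small" can be sketched as follows; the toy system, starting guess, and tolerance are illustrative choices, not from the slides:

```python
import numpy as np

def newton_system(f, jac, x0, delta_abs=1e-10, max_iter=50):
    """Newton's method for f(x) = 0: repeatedly solve J(x) dx = -f(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx, np.inf) < delta_abs:   # absolute criterion ||f|| < delta_abs
            break
        dx = np.linalg.solve(jac(x), -fx)            # J dx = -f
        x = x + dx
    return x

# Toy illustration: f = [x1^2 + x2^2 - 2, x1 - x2], with a root at (1, 1)
root = newton_system(
    lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]]),
    lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]]),
    [2.0, 0.5],
)
```

A relative criterion ||f|| ≤ δ_rel ||f^[0]|| can be added by recording ||f(x^[0])|| before the loop.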

Systems of Nonlinear Equations

Newton's method does not always converge. If it does, can we estimate the error?

Let our function f(x) be continuously differentiable in the vicinity of x, and let the segment connecting x and x + p lie entirely inside this vicinity. Then

    f(x + p) = f(x) + ∫_0^1 J(x + sp) p ds

using a path integral along the line from x to x + p, parametrized by s, where J(x + sp) is the Jacobian evaluated at x + sp.

Systems of Nonlinear Equations

Add and subtract J(x)p on the RHS:

    f(x + p) = f(x) + J(x)p + ∫_0^1 [J(x + sp) - J(x)] p ds

In Newton's method we ignore the integral term and choose p to estimate f(x + p). Thus the error in this case is:

    f(x + p) - f(x) - J(x)p = ∫_0^1 [J(x + sp) - J(x)] p ds

What is the upper bound on this error? f(x) and J(x) are continuous: ||J(x + sp) - J(x)|| → 0 as p → 0 for all 0 ≤ s ≤ 1.

Systems of Nonlinear Equations

The norm of a matrix is defined as:

    ||A|| = max_{v ≠ 0} ||Av|| / ||v||

so for any y, ||Ay||/||y|| ≤ ||A||, or ||Ay|| ≤ ||A|| ||y||. Therefore

    || ∫_0^1 [J(x + sp) - J(x)] p ds || ≤ ∫_0^1 ||J(x + sp) - J(x)|| ds ||p||

The error goes down at least as fast as ||p||, because for a continuous Jacobian ||J(x + sp) - J(x)|| → 0 as p → 0.
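The norm inequality above is easy to check numerically; this quick sketch (matrix, vector, and random seed are arbitrary choices) verifies ||Ay|| ≤ ||A|| ||y|| for the operator 2-norm:

```python
import numpy as np

rng = np.random.default_rng(0)          # fixed seed, arbitrary choice
A = rng.standard_normal((4, 4))
y = rng.standard_normal(4)

norm_A = np.linalg.norm(A, 2)           # operator 2-norm = largest singular value
lhs = np.linalg.norm(A @ y)             # ||Ay||
rhs = norm_A * np.linalg.norm(y)        # ||A|| ||y||
```

Equality holds only when y points along the top right singular vector of A; for a random y the bound is strict.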

Systems of Nonlinear Equations

If we assume that there exists some L > 0 such that

    ||J(y) - J(z)|| ≤ L ||y - z||

i.e. there is some upper bound on the "stretching" effect of J, then ||J(x + sp) - J(x)|| ≤ L s ||p||, so in this case

    || ∫_0^1 [J(x + sp) - J(x)] p ds || ≤ ∫_0^1 L s ||p|| ds ||p|| = (L/2) ||p||^2 = O(||p||^2)

Systems of Nonlinear Equations

Thus if we are at distance ||p|| from the solution, our error scales as ||p||^2. What about convergence? How does the error scale with the number of iterations? The answer is:

    ||x^[i+1] - x*|| = O(||x^[i] - x*||^2)

-- local quadratic convergence, a very fast one! Works when you are close enough to the solution.

Systems of Nonlinear Equations

    ||x^[i]   - x*|| : 0.1        = 10^-1
    ||x^[i+1] - x*|| : 0.01       = 10^-2
    ||x^[i+2] - x*|| : 0.0001     = 10^-4
    ||x^[i+3] - x*|| : 0.00000001 = 10^-8

Works if we are close enough to an isolated (non-singular) solution: det J(x*) ≠ 0.

Systems of Nonlinear Equations, Example

Let's examine how Newton's method works for a simple system:

    f_1 = 3x_1^3 + 4x_2^2 - 45 = 0
    f_2 = 4x_1^2 + x_2^3 - 8  = 0

The Jacobian is:

    J = [ ∂f_1/∂x_1   ∂f_1/∂x_2 ]  =  [ 9x_1^2   8x_2   ]
        [ ∂f_2/∂x_1   ∂f_2/∂x_2 ]     [ 8x_1     3x_2^2 ]

Systems of Nonlinear Equations, Example

At each step we have the following system to solve:

    J Δx = -f

    J = [ 9x_1^2   8x_2   ]        f = [ 3x_1^3 + 4x_2^2 - 45 ]
        [ 8x_1     3x_2^2 ]            [ 4x_1^2 + x_2^3 - 8   ]

    x^[i+1] = x^[i] + Δx

Systems of Nonlinear Equations, Example

Let us examine the performance of Newton's method with the convergence criterion

    ||f|| = max{ |f_1|, |f_2| } < δ_abs = 10^-10
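As a sketch of this experiment (the coefficients of f_1, f_2 are reconstructed from a garbled slide and should be treated as uncertain, and the starting guess (2, -2) is an assumption), the loop with this stopping test looks like:

```python
import numpy as np

# Example system as reconstructed above (coefficients uncertain)
def f(x):
    x1, x2 = x
    return np.array([3*x1**3 + 4*x2**2 - 45.0,
                     4*x1**2 + x2**3 - 8.0])

def jac(x):
    x1, x2 = x
    return np.array([[9*x1**2, 8*x2],
                     [8*x1,    3*x2**2]])

x = np.array([2.0, -2.0])                     # assumed starting guess
for _ in range(50):
    if np.max(np.abs(f(x))) < 1e-10:          # max(|f1|, |f2|) < delta_abs
        break
    x = x + np.linalg.solve(jac(x), -f(x))    # J dx = -f,  x <- x + dx
```

From this guess the iterates converge quadratically to a root near (2.09, -2.11) of the reconstructed system.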

Newton's Method

Newton's method works well close to the solution, but otherwise takes large erratic steps and shows poor performance and reliability. Let's try to avoid such large steps: employ the reduced-step Newton's algorithm.

The full Newton step gives x^[i+1] = x^[i] + p. We'll use only a fraction of the step:

    x^[i+1] = x^[i] + λ_i p,   0 < λ_i ≤ 1

Newton's Method

How do we choose λ_i? The simplest way is a weak line search:

- start with λ_i = 2^-m for m = 0, 1, 2, ...
- as we reduce the value of λ_i by a factor of 2 at each step, we accept the first one that satisfies a descent criterion:

    ||f(x^[i] + λ_i p)|| < ||f(x^[i])||

It can be proven that if the Jacobian is not singular, the correct solution will be found by the reduced-step Newton's algorithm.
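The reduced-step scheme can be sketched as below; the toy system, starting guess, tolerances, and the safeguard on tiny λ are illustrative choices, not from the slides:

```python
import numpy as np

def damped_newton(f, jac, x0, delta_abs=1e-10, max_iter=100):
    """Reduced-step Newton: accept the first lambda = 1, 1/2, 1/4, ...
    that decreases ||f|| (weak line search)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx, np.inf) < delta_abs:
            break
        p = np.linalg.solve(jac(x), -fx)          # full Newton step
        lam = 1.0
        while (np.linalg.norm(f(x + lam * p), np.inf)
               >= np.linalg.norm(fx, np.inf)):
            lam *= 0.5                            # halve the step
            if lam < 1e-10:                       # safeguard against stalling
                break
        x = x + lam * p
    return x

# Toy system (illustrative): f = [x1^2 + x2^2 - 2, x1 - x2], root at (1, 1)
root = damped_newton(
    lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]]),
    lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]]),
    [3.0, 3.0],
)
```

Near the solution λ_i = 1 is always accepted, so the method recovers plain Newton and its quadratic convergence.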