Nonlinear Algebraic Equations Example
Continuous Stirred Tank Reactor (CSTR). Look for steady-state concentrations and temperature.
In: N_s species with concentrations c_i^(in), heat capacities c_p,i and temperature T^(in).
Inside: N_r reactions with reaction rates r_α and stoichiometric coefficients σ_{α,i}.
Out: N_s species with concentrations c_i (some c_i may be equal to zero), heat capacities c_p,i and temperature T.
November 00
Nonlinear Algebraic Equations Example
Mass balance for species i = 1, 2, ..., N_s:
(F/V)(c_i^(in) − c_i) + Σ_{α=1}^{N_r} σ_{α,i} r_α = 0
Energy balance:
(F/V) Σ_{i=1}^{N_s} (c_i^(in) c_{p,i} T^(in) − c_i c_{p,i} T) − Σ_{α=1}^{N_r} ΔH_α r_α = 0
Column vector of unknown variables: x = [c_1, c_2, ..., c_{N_s}, T]^T
Nonlinear Algebraic Equations Example
Each of the above equations may be written in the general form f_i(x_1, x_2, ..., x_N) = 0:
f_1(x_1, x_2, ..., x_N) = 0
f_2(x_1, x_2, ..., x_N) = 0
...
f_N(x_1, x_2, ..., x_N) = 0
In vector form: f(x) = 0, with f(x) = [f_1(x), f_2(x), ..., f_N(x)]^T.
Let x* be the solution satisfying f(x*) = 0. We do not know x* and take x^[0] as an initial guess.
Nonlinear Algebraic Equations
We need to form a sequence of estimates to the solution, x^[1], x^[2], x^[3], ..., that will hopefully converge to x*. Thus we want:
lim_{m→∞} x^[m] = x*, i.e. lim_{m→∞} ||x^[m] − x*|| = 0
Unlike with linear equations, we can't say much about the existence or uniqueness of solutions, even for a single equation.
Single Nonlinear Equation
We assume f(x) is infinitely differentiable at the solution x*. Then f(x) may be Taylor expanded around x*:
f(x) = f(x*) + (x − x*) f'(x*) + (1/2!)(x − x*)^2 f''(x*) + (1/3!)(x − x*)^3 f'''(x*) + ...
Assume x^[0] is close enough to x* that the series may be truncated after the linear term:
0 = f(x*) ≈ f(x^[0]) + (x* − x^[0]) f'(x^[0])
Solving the truncated equation for the next estimate x^[1] in place of x*:
x^[1] = x^[0] − f(x^[0]) / f'(x^[0])   (Newton's method)
Example: f(x) = (x − 3)(x − 2)(x − 1) = x^3 − 6x^2 + 11x − 6
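The scalar Newton iteration above can be sketched in a few lines. This is a minimal illustration for the slide's example cubic; the function names and tolerances are choices made here, not part of the original notes.

```python
# Newton's method for a single equation, applied to the example
# f(x) = (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6.

def f(x):
    return x**3 - 6*x**2 + 11*x - 6

def fprime(x):
    return 3*x**2 - 12*x + 11

def newton(x0, tol=1e-12, max_iter=50):
    """Iterate x <- x - f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        x = x - f(x) / fprime(x)
        if abs(f(x)) < tol:
            break
    return x

print(newton(0.5))   # converges to the root x = 1
print(newton(3.5))   # converges to the root x = 3
```

Guesses well outside the cluster of roots slide in toward the nearest outer root; which root is found depends strongly on x^[0], as the next slides show.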
Single Nonlinear Equation
For f(x) = (x − 3)(x − 2)(x − 1) = x^3 − 6x^2 + 11x − 6 = 0, we see that Newton's method converges to the root at x = 2 only if x^[0] is sufficiently close to it (roughly 1.6 < x^[0] < 2.4). We may look at the direction of the first step and see why: x^[1] − x^[0] = −f(x^[0]) / f'(x^[0]).
So we have easy roots, x = 1 and x = 3, and a more difficult one, x = 2. We may factor out the roots we already know.
Single Nonlinear Equation
Introduce g(x) = f(x) / ((x − 3)(x − 1)). Then
g'(x) = f'(x)/((x − 3)(x − 1)) − f(x)/((x − 3)^2 (x − 1)) − f(x)/((x − 3)(x − 1)^2)
We now use Newton's method to find the roots of g(x):
x^[i+1] = x^[i] − g(x^[i]) / g'(x^[i])
and get x = 2.
Systems of Nonlinear Equations
Let's extend the method to multiple equations:
f_1(x_1, x_2, ..., x_N) = 0
f_2(x_1, x_2, ..., x_N) = 0
...
f_N(x_1, x_2, ..., x_N) = 0
=> f(x) = 0. Start from an initial guess x^[0].
As before, expand each equation at the solution x*, with f(x*) = 0:
f_i(x) = f_i(x*) + Σ_{j=1}^{N} ∂f_i(x*)/∂x_j (x_j − x_j*) + (1/2) Σ_{j=1}^{N} Σ_{k=1}^{N} ∂²f_i(x*)/∂x_j∂x_k (x_j − x_j*)(x_k − x_k*) + ...
Assume x^[0] is close to x* and discard the quadratic terms:
f_i(x) ≈ f_i(x*) + Σ_{j=1}^{N} ∂f_i(x*)/∂x_j (x_j − x_j*)
Systems of Nonlinear Equations
Let's define the Jacobian matrix J(x) with the elements:
J_ij(x) = ∂f_i(x)/∂x_j
Then our approximate expansion may be written as:
f_i(x) ≈ Σ_{j=1}^{N} J_ij(x*)(x_j − x_j*)
This gives us the linear system:
f(x) ≈ J(x*)(x − x*)
Systems of Nonlinear Equations [i+ ] f(x ) = J(x )(x x ) = J(x)(x x) i Note, Jacobian is evaluated at the position of an old iteration, not at an unknown solution [i+ ] Defining x x x, we rewrite the equation as J(x ) = x = f(x ) or just J x = f The iterations are continued untill some convergence criteria are met: relative error f δ f [0] rel absolute error November 00 f δ abs
Systems of Nonlinear Equations
Newton's method does not always converge. If it does, can we estimate the error?
Let our function f(x) be continuously differentiable in a vicinity of x, and let the segment connecting x and x + p lie entirely inside this vicinity. Then
f(x + p) = f(x) + ∫_0^1 J(x + sp) p ds
using a path integral along the line from x to x + p, parametrized by s ∈ [0, 1].
Systems of Nonlinear Equations
Add and subtract J(x)p on the RHS:
f(x + p) = f(x) + J(x)p + ∫_0^1 [J(x + sp) − J(x)] p ds
In Newton's method we ignore the integral term and choose p from the linearization to estimate f(x + p). Thus the error in this case is:
f(x + p) − [f(x) + J(x)p] = ∫_0^1 [J(x + sp) − J(x)] p ds
What is the upper bound on this error? f(x) and J(x) are continuous, so ||J(x + sp) − J(x)|| → 0 as ||p|| → 0 for all 0 ≤ s ≤ 1.
Systems of Nonlinear Equations
The norm of a matrix is defined as: ||A|| = max_{v ≠ 0} ||Av|| / ||v||,
so for any y, ||Ay|| / ||y|| ≤ ||A||, or ||Ay|| ≤ ||A|| ||y||. Therefore
||∫_0^1 [J(x + sp) − J(x)] p ds|| ≤ ∫_0^1 ||J(x + sp) − J(x)|| ds ||p||
The error goes down at least as fast as ||p||, because for a continuous Jacobian ||J(x + sp) − J(x)|| → 0 as ||p|| → 0.
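The defining inequality ||Ay|| ≤ ||A|| ||y|| is easy to check numerically. This tiny sketch (matrix and vector chosen arbitrarily here) uses the induced 2-norm, which for a matrix equals its largest singular value.

```python
import numpy as np

# induced matrix norm: ||A|| = max_{v != 0} ||A v|| / ||v||,
# hence ||A y|| <= ||A|| ||y|| for every y
A = np.array([[2.0, 1.0], [0.0, 3.0]])
normA = np.linalg.norm(A, 2)   # induced 2-norm = largest singular value
y = np.array([1.0, -2.0])
print(np.linalg.norm(A @ y) <= normA * np.linalg.norm(y))  # -> True
```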
Systems of Nonlinear Equations
Now suppose there exists some L > 0 such that ||J(y) − J(z)|| ≤ L ||y − z||, i.e. there is some upper bound on the "stretching" effect of J (a Lipschitz condition). Then
||∫_0^1 [J(x + sp) − J(x)] p ds|| ≤ ∫_0^1 ||J(x + sp) − J(x)|| ds ||p||
and ||J(x + sp) − J(x)|| ≤ L s ||p||, so in this case
||∫_0^1 [J(x + sp) − J(x)] p ds|| ≤ ∫_0^1 L s ||p|| ds ||p|| = (L/2) ||p||^2 = O(||p||^2)
Systems of Nonlinear Equations
Thus if we are at distance ||p|| from the solution, our error scales as ||p||^2. What about convergence? How does the error scale with the number of iterations? The answer is:
||x^[i+1] − x*|| = O(||x^[i] − x*||^2)
This is local quadratic convergence, a very fast rate! It works when you are close enough to the solution.
Systems of Nonlinear Equations x x : 0.= 0 [i+ ] x x : 0.0 = 0 [i+ ] 4 x x : 0.0000 = 0 November 00 [i+ 3] 8 x x : 0.00000000 = 0 Works if we are close enough to an isolated (non-singular) solution: det J( x ) 0.
Systems of Nonlinear Equations, Example
Let's examine how Newton's method works for a simple system:
f_1 = 3x_1^3 + 4x_2^2 − 145 = 0
f_2 = 4x_1^2 − x_2^3 + 28 = 0
The Jacobian is:
J = [ ∂f_1/∂x_1   ∂f_1/∂x_2 ]   [ 9x_1^2    8x_2    ]
    [ ∂f_2/∂x_1   ∂f_2/∂x_2 ] = [ 8x_1     −3x_2^2  ]
Systems of Nonlinear Equations, Example
At each step we have the following linear system to solve, J^[i] Δx^[i] = −f^[i]:
[ 9(x_1^[i])^2    8x_2^[i]      ] [ Δx_1^[i] ]      [ 3(x_1^[i])^3 + 4(x_2^[i])^2 − 145 ]
[ 8x_1^[i]       −3(x_2^[i])^2  ] [ Δx_2^[i] ]  = − [ 4(x_1^[i])^2 − (x_2^[i])^3 + 28   ]
x^[i+1] = x^[i] + Δx^[i]
Systems of Nonlinear Equations, Example
Let us examine the performance of Newton's method with the convergence criterion
max{ |f_1|, |f_2| } < δ_abs, with δ_abs = 10^−10
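A minimal sketch of this iteration, assuming the example system reads f_1 = 3x_1³ + 4x_2² − 145 = 0, f_2 = 4x_1² − x_2³ + 28 = 0 (which has the solution x_1 = 3, x_2 = 4); the starting guess is a choice made here.

```python
import numpy as np

# Newton iteration for the assumed example system:
#   f1 = 3*x1^3 + 4*x2^2 - 145 = 0
#   f2 = 4*x1^2 - x2^3  + 28  = 0
# Jacobian: [[9*x1^2, 8*x2], [8*x1, -3*x2^2]]

def f(x):
    return np.array([3*x[0]**3 + 4*x[1]**2 - 145.0,
                     4*x[0]**2 - x[1]**3 + 28.0])

def jac(x):
    return np.array([[9*x[0]**2,  8*x[1]],
                     [8*x[0],    -3*x[1]**2]])

x = np.array([2.0, 3.0])                  # initial guess (chosen here)
delta_abs = 1e-10
for _ in range(50):
    fx = f(x)
    if np.max(np.abs(fx)) < delta_abs:    # criterion max{|f1|,|f2|} < delta_abs
        break
    x = x + np.linalg.solve(jac(x), -fx)
print(x)  # -> approximately [3. 4.]
```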
Newton's Method
Newton's method works well close to the solution, but otherwise it takes large erratic steps and shows poor performance and reliability. Let's try to avoid such large steps by employing a reduced-step Newton's algorithm.
The full Newton step gives x^[i+1] = x^[i] + p^[i]. We'll use only a fraction of the step: x^[i+1] = x^[i] + λ_i p^[i], 0 < λ_i ≤ 1.
Newton's Method
How do we choose λ_i? The simplest way is a weak line search:
- start with λ_i = 2^−m for m = 0, 1, 2, ...
As we reduce the value of λ_i by a factor of 2 at each step, we accept the first one that satisfies a descent criterion:
||f(x^[i] + λ_i p^[i])|| < ||f(x^[i])||
It can be proven that if the Jacobian is not singular, the correct solution will be found by the reduced-step Newton's algorithm.
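The reduced-step scheme with the weak line search can be sketched as follows. The demo system (x² + y² = 2, x − y = 0) and the far-off starting guess are hypothetical choices made here to force the damping to activate; they are not from the notes.

```python
import numpy as np

def damped_newton(f, jac, x0, tol=1e-10, max_iter=100):
    """Reduced-step Newton: take x + lam*p with lam = 2**-m, m = 0, 1, 2, ...,
    accepting the first lam with ||f(x + lam*p)|| < ||f(x)|| (weak line search)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        p = np.linalg.solve(jac(x), -fx)   # full Newton step
        lam = 1.0
        while np.linalg.norm(f(x + lam*p)) >= np.linalg.norm(fx) and lam > 1e-10:
            lam *= 0.5                     # halve lam until the residual decreases
        x = x + lam*p
    return x

# hypothetical demo: x^2 + y^2 = 2, x - y = 0, from a deliberately poor guess
f = lambda v: np.array([v[0]**2 + v[1]**2 - 2.0, v[0] - v[1]])
jac = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
root = damped_newton(f, jac, [10.0, -5.0])
print(root)
```

From this starting point the full step initially increases the residual, so the search settles on λ < 1 for the first iterations; once the iterate is close to the solution, λ = 1 is accepted and the usual quadratic convergence takes over.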