Chapter 7 Nonlinear Systems

Nonlinear systems in $\mathbb{R}^n$:
$$X' = F(t, X), \qquad X = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, \qquad F(t, X) = \begin{pmatrix} F_1(t, x_1, \dots, x_n) \\ \vdots \\ F_n(t, x_1, \dots, x_n) \end{pmatrix}.$$
When $F(t, X) = F(X)$ is independent of $t$, the system is autonomous, and its flow is an example of a dynamical system.

Dynamical System:
Definition: Consider any smooth two-variable function $\phi(t, X): (t, X) \in \mathbb{R} \times \mathbb{R}^n \mapsto \mathbb{R}^n$. Let $\phi_t(X) = \phi(t, X)$ be a map $\mathbb{R}^n \mapsto \mathbb{R}^n$. We call the family of maps $\{\phi_t\}$ a smooth dynamical system if it satisfies the following two conditions:
1. $\phi_0$ is the identity map: $\phi_0(X) = \phi(0, X) = X$ for any $X \in \mathbb{R}^n$.
2. $\phi_t \circ \phi_s = \phi_{t+s}$: $\phi(t, \phi(s, X)) = \phi(t + s, X)$.

Flows of linear systems $X' = AX$ are dynamical systems. The solution is
$$\phi_t(X_0) = \phi(t, X_0) = e^{tA} X_0.$$
So
$$\phi_0(X_0) = e^{0 \cdot A} X_0 = X_0, \qquad \phi_t \circ \phi_s(X_0) = \phi_t\!\left(e^{sA} X_0\right) = e^{tA} e^{sA} X_0 = e^{(t+s)A} X_0 = \phi_{t+s}(X_0).$$

Flows of nonlinear systems $X' = F(X)$ are also dynamical systems (if $F$ is a smooth function).

Integral equation formulation:
$$X(t) = X_0 + \int_0^t F(X(s))\, ds.$$
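The two defining properties of a dynamical system can be checked concretely for a linear flow. A minimal sketch, assuming NumPy; the matrix $A$ and the test point are illustrative choices (not from the notes), picked so that $e^{tA}$ has a closed form: for the rotation generator, $e^{tA}$ is the rotation matrix through angle $t$.

```python
import numpy as np

# Sketch (choices of A and X0 are illustrative, not from the notes):
# for A = [[0,-1],[1,0]] the exponential e^{tA} is rotation by angle t,
# so we can check phi_0 = id and phi_t . phi_s = phi_{t+s} directly.
def expA(t):
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

X0 = np.array([2.0, -1.0])
t, s = 0.4, 1.1

assert np.allclose(expA(0.0) @ X0, X0)            # identity axiom
assert np.allclose(expA(t) @ (expA(s) @ X0),      # group law
                   expA(t + s) @ X0)
print("both dynamical-system axioms hold for e^{tA}")
```

The same two assertions hold for any matrix $A$ if `expA` is replaced by a general matrix exponential.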
Picard Iteration: To solve this system, we create an iterative sequence
$$X_0(t) = X_0 \ \text{(initial value)}, \qquad X_{n+1}(t) = X_0 + \int_0^t F(X_n(s))\, ds.$$

Existence (for the general system $X' = F(t, X)$): the Picard sequence $\{X_n(t)\}$ has a limit as $n \to \infty$, for a small time period $-\varepsilon < t < \varepsilon$ (local existence theorem); the limit $X(t)$ is a solution for $t$ in $(-\varepsilon, \varepsilon)$, because it satisfies the integral formulation.

For a linear (nonautonomous) system $X' = A(t)X$, the Picard sequence converges for all $t$ as long as $A(t)$ is defined and continuous. (Global existence)

Uniqueness (for the general system $X' = F(t, X)$): The solution is unique, i.e., there is only one solution for the same IVP.

Continuous dependence on initial data (for the general system $X' = F(t, X)$): For $Y_0$ close to $X_0$, the solution $Y(t)$ with initial data $Y_0$ also exists in $(-\varepsilon, \varepsilon)$. From the integral formulations
$$X(t) = X_0 + \int_0^t F(X(s))\, ds, \qquad Y(t) = Y_0 + \int_0^t F(Y(s))\, ds,$$
we see, if $|\nabla F| \le K$,
$$|X(t) - Y(t)| \le |X_0 - Y_0| + \left|\int_0^t \big(F(X(s)) - F(Y(s))\big)\, ds\right| \le |X_0 - Y_0| + \int_0^t |F(X(s)) - F(Y(s))|\, ds \le |X_0 - Y_0| + K \int_0^t |X(s) - Y(s)|\, ds.$$
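The Picard scheme is easy to run numerically by replacing the integral with a quadrature rule on a grid. A sketch, assuming NumPy; the field $F(x) = x$, the initial value, and the grid are illustrative choices, with exact solution $e^t$ for comparison.

```python
import numpy as np

# Picard iteration for x' = x, x(0) = 1 on a time grid; the exact
# solution is e^t. F, x0, and the grid are illustrative choices.
t = np.linspace(0.0, 1.0, 1001)

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y over t, starting at 0."""
    dt = np.diff(t)
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)
    return out

def picard(F, x0, t, n_iter=20):
    x = np.full_like(t, x0)            # X_0(t) = x0 (constant function)
    for _ in range(n_iter):
        x = x0 + cumtrapz(F(x), t)     # X_{n+1} = x0 + integral of F(X_n)
    return x

x = picard(lambda x: x, 1.0, t)
err = np.max(np.abs(x - np.exp(t)))
print(err)   # small; dominated by the trapezoid quadrature error
assert err < 1e-4
```

After a few iterations the remaining error comes from the quadrature rule, not from the Picard iteration itself, consistent with the factorial convergence rate of the sequence.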
By Gronwall's inequality, we arrive at
$$|X(t) - Y(t)| \le |X_0 - Y_0|\, e^{Kt}.$$
So if $Y_0 \to X_0$, then $Y(t) \to X(t)$.

Continuous dependence on parameters (for the general system $X' = F(t, X)$): Consider $X' = F_a(X)$. Let $X_a(t)$ be its solution, and $X_b(t)$ the solution of $X' = F_b(X)$ with the same initial data. Then
$$X_a(t) - X_b(t) = \int_0^t \big(F_a(X_a(s)) - F_b(X_b(s))\big)\, ds.$$
So if $F_b(X) \to F_a(X)$ as $b \to a$, then $X_b(t) \to X_a(t)$.

So the flow $\phi(t, X)$ exists, is unique, and depends continuously on the initial data $X$.

The flow is a dynamical system: By definition, $\phi(0, X) = X$. For any fixed $s$, set $Y(t) = \phi(t + s, X)$. Then $Y(t)$ solves
$$Y'(t) = \frac{\partial}{\partial t}\phi(t + s, X) = F(\phi(t + s, X)) = F(Y(t)), \qquad Y(0) = \phi(s, X).$$
Recall that $W(t) = \phi(t, X_0)$ solves
$$Y'(t) = F(Y(t)), \qquad Y(0) = X_0.$$
In particular, for $X_0 = \phi(s, X)$, $W(t) = \phi(t, \phi(s, X))$ solves
$$Y'(t) = F(Y(t)), \qquad Y(0) = \phi(s, X).$$
By the uniqueness theorem, $W(t) = Y(t)$, i.e.,
$$\phi(t, \phi(s, X)) = \phi(t + s, X).$$
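The Gronwall estimate $|X(t) - Y(t)| \le |X_0 - Y_0|\,e^{Kt}$ can be observed numerically for two nearby solutions. A sketch, assuming NumPy; the field $x' = \sin x$ (which has $|F'| = |\cos x| \le K = 1$), the initial points, and the hand-rolled RK4 stepper are all illustrative choices, not from the notes.

```python
import numpy as np

# Observe the Gronwall bound |X(t)-Y(t)| <= |X0-Y0| e^{Kt} for
# x' = sin(x), where |F'| = |cos x| <= K = 1.
def rk4_solve(F, x, h, n_steps):
    for _ in range(n_steps):
        k1 = F(x)
        k2 = F(x + 0.5 * h * k1)
        k3 = F(x + 0.5 * h * k2)
        k4 = F(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0, y0, T, n = 1.0, 1.01, 2.0, 2000
xT = rk4_solve(np.sin, x0, T / n, n)
yT = rk4_solve(np.sin, y0, T / n, n)
gap, bound = abs(xT - yT), abs(x0 - y0) * np.exp(1.0 * T)
print(gap, bound)
assert gap <= bound
```

For this field the actual separation stays well below the exponential bound, which is the worst case allowed by the Lipschitz constant.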
Integral formulation for the flow:
$$\phi_t(X) = X + \int_0^t F(\phi_s(X))\, ds.$$

All dynamical systems are flows of ODEs: Given $\phi_t(X)$, we compute
$$F(X) = \frac{\partial}{\partial t}\phi_t(X)\Big|_{t=0}.$$
Since $\phi_{t+s}(X) = \phi_t(\phi_s(X))$,
$$\frac{\partial}{\partial t}\phi_{t+s}(X) = \frac{\partial}{\partial t}\phi_t(\phi_s(X)).$$
At $t = 0$, the left-hand side is $\frac{\partial}{\partial s}\phi_s(X)$ and the right-hand side is $F(\phi_s(X))$, so $\phi_s(X)$ is a solution of $Y' = F(Y)$.

Example 1: Use the Picard iteration to solve
$$X' = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} X, \qquad X(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$
Sol: Recall that the solution is $X = \begin{pmatrix} \cos t \\ \sin t \end{pmatrix}$. We now generate the Picard sequence
$$U_{n+1} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} + \int_0^t \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} U_n(s)\, ds$$
and show it converges to $X$.
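For a linear system $X' = AX$, the $n$-th Picard iterate is exactly the degree-$n$ Taylor polynomial of $e^{tA}X_0$ (each iteration adds one more term of the series), so the convergence in Example 1 can be watched numerically. A sketch, assuming NumPy:

```python
import numpy as np

# Picard iterates for X' = AX, X(0) = (1,0): since
# U_{n+1} = X0 + integral of A U_n, U_n(t) is the degree-n partial sum
# of the series for e^{tA} X0, converging to (cos t, sin t).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
X0 = np.array([1.0, 0.0])

def picard_iterate(n, t):
    term, total = X0.copy(), X0.copy()
    for k in range(1, n + 1):
        term = (t / k) * (A @ term)   # next Taylor term (t^k A^k / k!) X0
        total = total + term
    return total

t = 1.0
U15 = picard_iterate(15, t)
print(U15, np.array([np.cos(t), np.sin(t)]))
assert np.allclose(U15, [np.cos(t), np.sin(t)], atol=1e-10)
```

Fifteen iterations already agree with $(\cos t, \sin t)$ to ten digits at $t = 1$, reflecting the $t^n/n!$ convergence rate.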
Jacobian matrix of mappings $F: \mathbb{R}^n \to \mathbb{R}^n$:
$$F(X) = \begin{pmatrix} f_1(x_1, \dots, x_n) \\ \vdots \\ f_n(x_1, \dots, x_n) \end{pmatrix}, \qquad DF(X) = \begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & & & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{pmatrix} = \begin{pmatrix} \nabla f_1 \\ \vdots \\ \nabla f_n \end{pmatrix}.$$

Example 2: (a) For $F(X) = X$ in $\mathbb{R}^n$, $DF(X) = I$. (b) In $\mathbb{R}^2$, for $F = (x_1 x_2, x_1^2)$,
$$DF = \begin{pmatrix} x_2 & x_1 \\ 2x_1 & 0 \end{pmatrix}.$$

Some formulas: the chain rule
$$D(F \circ G)(X) = DF(G(X))\, DG(X).$$

Differentiating
$$\frac{\partial}{\partial t}\phi(t, X) = F(\phi(t, X))$$
with respect to $X$, we see
$$\frac{\partial}{\partial t} D\phi(t, X) = DF(\phi(t, X))\, D\phi(t, X),$$
i.e., each column of $D\phi(t, X)$ solves $U' = DF(\phi(t, X))\, U$, or
$$D\phi_t(X) = I + \int_0^t DF(\phi_s(X))\, D\phi_s(X)\, ds.$$

The Variational Equation: Given $X_0$, let $\phi(t, X_0)$ be the solution of the IVP with the initial value $X_0$. Set $A(t) = DF(\phi(t, X_0))$, i.e., the Jacobian matrix of $F$ evaluated at $\phi(t, X_0)$. We call the following the variational equation along the solution $\phi(t, X_0)$:
$$U' = A(t)\, U.$$
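Hand-computed Jacobians like the one in Example 2(b) can be sanity-checked against centered finite differences. A sketch, assuming NumPy; the map is Example 2(b) as reconstructed above, and the test point is an arbitrary choice.

```python
import numpy as np

# Finite-difference check of a Jacobian; F is the planar map
# F(x1, x2) = (x1*x2, x1**2) from Example 2(b) as reconstructed.
def F(X):
    x1, x2 = X
    return np.array([x1 * x2, x1 ** 2])

def DF(X):
    x1, x2 = X
    return np.array([[x2,       x1],
                     [2.0 * x1, 0.0]])

def numeric_jacobian(F, X, h=1e-6):
    n = len(X)
    J = np.zeros((len(F(X)), n))
    for j in range(n):                 # centered difference, column j
        e = np.zeros(n); e[j] = h
        J[:, j] = (F(X + e) - F(X - e)) / (2.0 * h)
    return J

X = np.array([1.5, -0.7])
print(DF(X))
assert np.allclose(DF(X), numeric_jacobian(F, X), atol=1e-6)
```

The same `numeric_jacobian` helper can check $A(t) = DF(\phi(t, X_0))$ at any point along a computed solution.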
The variational equation is a (global) linearization along $\phi(t, X_0)$: If $\phi(t, X_0)$ exists for $t$ in $[0, T]$, then so does the entire flow $\psi(t, U_0)$ of the variational equation, i.e.,
$$\frac{\partial \psi(t, U_0)}{\partial t} = A(t)\, \psi(t, U_0).$$
Since
$$\frac{\partial}{\partial t} D\phi(t, X_0) = DF(\phi(t, X_0))\, D\phi(t, X_0),$$
each column of $D\phi(t, X_0)$ is a solution of the variational equation. Multiplying both sides by $U_0$:
$$\frac{\partial}{\partial t}\big[D\phi(t, X_0)\, U_0\big] = DF(\phi(t, X_0))\, \big[D\phi(t, X_0)\, U_0\big].$$
Since $D\phi(0, X_0)\, U_0 = U_0$, we get
$$\psi(t, U_0) = D\phi(t, X_0)\, U_0.$$
This means that, if we are able to solve the variational equation, then we can easily find $D\phi(t, X_0)$, since
$$D\phi(t, X_0) = \big[\psi(t, e_1),\ \psi(t, e_2),\ \dots,\ \psi(t, e_n)\big],$$
where $e_i$ is the standard basis.

If $\phi(t, X)$ is smooth, we expand $\phi(t, X)$ in the $X$ variable around $X_0$:
$$\phi(t, X) = \phi(t, X_0) + D\phi(t, X_0)(X - X_0) + O\big(|X - X_0|^2\big).$$
So if $X = X_0 + U_0$, then
$$\phi(t, X) = \phi(t, X_0) + \psi(t, U_0) + O\big(|X - X_0|^2\big).$$
This means that if we can solve the IVP at the initial value $X_0$ to get $\phi(t, X_0)$, then we can calculate $A(t) = DF(\phi(t, X_0))$ and thus the solution $\psi(t, U_0)$ of the variational equation, since it is linear. Then we can approximate the solution of the IVP for a nearby initial value $X = X_0 + U_0$ by $\phi(t, X_0) + \psi(t, U_0)$.
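The identity $\psi(t, U_0) = D\phi(t, X_0)\, U_0$ can be tested numerically by integrating the flow and the variational equation together (an augmented system) and comparing a column of the computed $D\phi$ with a finite difference of the flow in the initial data. A sketch, assuming NumPy; the planar field $F$, the base point, and the RK4 stepper below are illustrative choices, not from the notes.

```python
import numpy as np

# Solve Z' = (F(X), DF(X) M), where M tracks D phi(t, X0), then compare
# the first column of M with a centered difference of the flow in x0.
def F(X):
    x, y = X
    return np.array([x + y ** 2, -y])

def DF(X):
    x, y = X
    return np.array([[1.0, 2.0 * y],
                     [0.0, -1.0]])

def rhs(Z):
    X, M = Z[:2], Z[2:].reshape(2, 2)
    return np.concatenate([F(X), (DF(X) @ M).ravel()])

def rk4(G, z, h, n):
    for _ in range(n):
        k1 = G(z)
        k2 = G(z + 0.5 * h * k1)
        k3 = G(z + 0.5 * h * k2)
        k4 = G(z + h * k3)
        z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return z

def flow(X0, T=1.0, n=2000):
    z0 = np.concatenate([X0, np.eye(2).ravel()])
    return rk4(rhs, z0, T / n, n)

X0 = np.array([0.3, -0.2])
Dphi = flow(X0)[2:].reshape(2, 2)

h = 1e-5
col0 = (flow(X0 + [h, 0.0])[:2] - flow(X0 - [h, 0.0])[:2]) / (2.0 * h)
print(Dphi[:, 0], col0)
assert np.allclose(Dphi[:, 0], col0, atol=1e-4)
```

Solving the augmented system is how $D\phi(t, X_0)$ is computed in practice: the columns of $M$ are exactly $\psi(t, e_1), \dots, \psi(t, e_n)$.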
In general (with a weaker smoothness condition), let $X(t) = \phi(t, X_0)$ and let $U(t) = \psi(t, U_0)$ be the solution of the variational equation with initial data $U_0$; then $X(t) + U(t)$ is an approximation of $\phi(t, X_0 + U_0)$ of order higher than one:
$$\frac{|X(t) + U(t) - \phi(t, X_0 + U_0)|}{|U_0|} \to 0 \quad \text{as } |U_0| \to 0.$$
The convergence is uniform in $t$.

In particular, if $X_0$ is an equilibrium solution, i.e., $\phi(t, X_0) = X_0$, then
$$A(t) = DF(\phi(t, X_0)) = DF(X_0)$$
is constant, and the variational equation is
$$U' = DF(X_0)\, U.$$
Its solution $\psi(t, U_0)$ is such that $\phi(t, X_0) + \psi(t, U_0) = X_0 + \psi(t, U_0)$ is an approximate solution for $\phi(t, X_0 + U_0)$. So sometimes we call the variational equation the linearization.

Example 3: $x' = x^2$. The solution is
$$\phi(t, x_0) = \frac{x_0}{1 - x_0 t}, \qquad D\phi(t, x_0) = \frac{\partial \phi}{\partial x_0} = \frac{1}{(1 - x_0 t)^2}.$$
On the other hand, $DF = 2x$. So the variational equation along $x = \phi(t, x_0)$ is
$$u' = 2\phi(t, x_0)\, u = \frac{2x_0}{1 - x_0 t}\, u.$$
The solution of this variational equation with initial data $u_0$ is
$$\psi(t, u_0) = \frac{u_0}{(1 - x_0 t)^2}.$$
Apparently, this function is defined as long as $t < 1/x_0$, the same as for $\phi(t, x_0)$.
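The two expressions for $D\phi$ in Example 3 (differentiating the explicit solution vs. solving the variational equation) can be checked against each other numerically. A sketch, assuming NumPy; the values of $x_0$, $t$, and the difference step are illustrative choices.

```python
import numpy as np

# Example 3: x' = x^2 has phi(t, x0) = x0 / (1 - x0 t). Check that the
# finite-difference derivative in x0 matches the variational-equation
# solution psi(t, 1) = 1 / (1 - x0 t)^2.
def phi(t, x0):
    return x0 / (1.0 - x0 * t)

x0, t, h = 0.5, 1.0, 1e-6
dphi_fd = (phi(t, x0 + h) - phi(t, x0 - h)) / (2.0 * h)   # finite diff.
dphi_ve = 1.0 / (1.0 - x0 * t) ** 2                       # variational eq.
print(dphi_fd, dphi_ve)
assert abs(dphi_fd - dphi_ve) < 1e-5
```

Note that both expressions blow up as $t \to 1/x_0$, matching the common maximal interval of existence noted above.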
For $x = x_0 + u_0$,
$$\phi(t, x_0) + \psi(t, u_0) - \phi(t, x) = \frac{x_0}{1 - x_0 t} + \frac{u_0}{(1 - x_0 t)^2} - \frac{x}{1 - x t} = -\frac{u_0^2\, t}{(1 - x_0 t)^2 (1 - x t)}.$$
So as $u_0 \to 0$,
$$\frac{|\phi(t, x_0) + \psi(t, u_0) - \phi(t, x)|}{|u_0|} = \frac{|u_0|\, t}{(1 - x_0 t)^2 (1 - x t)} \to 0.$$

When $X_0$ is an equilibrium solution, the variational equation becomes an autonomous linear system. We call it in this case the linearization.

Example 5: Consider $X' = F(X)$, $F = (x + y^2, -y)$. $X_0 = 0$ is an equilibrium. So along this solution, the linearization is given by
$$DF(0) = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad U(t) = \begin{pmatrix} x_0 e^t \\ y_0 e^{-t} \end{pmatrix}.$$

Homework: 2, 7 (for the case (i) $A(t) = \operatorname{diag}(a_1(t), \dots, a_n(t))$; (ii) $n = 2$, general matrices that may not be diagonal).

Homework (additional): Find and solve the variational equations for $X' = F(X)$:
1. $F = (x^2 + xy,\ x + y^3)$, $X_0 = (\ \,,\ \,)$;
2. $x' = x^{4/3}$, along the solution $x(t) = \dfrac{27}{(3 - t)^3}$.

Homework (optional): Write (or find on the internet) a numerical program (in any language, such as C, Matlab, Mathematica, etc.) for (i) Euler's method, (ii) Runge-Kutta of order 4.
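For the optional homework, a minimal sketch of both methods in one language (Python with NumPy assumed), tested on $x' = x^2$, $x(0) = 1$ from Example 3, whose exact solution is $x(t) = 1/(1 - t)$:

```python
import numpy as np

# Euler and classical RK4 steppers for an autonomous system X' = F(X),
# tested on x' = x^2, x(0) = 1 up to t = 0.5 (exact value: 2).
def integrate(F, x0, t_end, n_steps, step):
    h = t_end / n_steps
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(n_steps):
        x = step(F, x, h)
    return x

def euler_step(F, x, h):
    return x + h * F(x)

def rk4_step(F, x, h):
    k1 = F(x)
    k2 = F(x + 0.5 * h * k1)
    k3 = F(x + 0.5 * h * k2)
    k4 = F(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

F = lambda x: x ** 2
exact = 1.0 / (1.0 - 0.5)                                   # phi(0.5, 1) = 2
e_err = abs(integrate(F, 1.0, 0.5, 1000, euler_step)[0] - exact)
r_err = abs(integrate(F, 1.0, 0.5, 1000, rk4_step)[0] - exact)
print(e_err, r_err)
assert r_err < 1e-8 and e_err < 1e-2 and r_err < e_err
```

With the same step size, RK4's error is smaller than Euler's by many orders of magnitude, reflecting the $O(h^4)$ vs. $O(h)$ global error rates.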