Chapter 7 Nonlinear Systems


Nonlinear systems in R^n:

X' = F(t, X), where X = (x_1, ..., x_n)^T and F(t, X) = (F_1(t, x_1, ..., x_n), ..., F_n(t, x_1, ..., x_n))^T.

When F(t, X) = F(X) is independent of t, the system is autonomous; its flow is an example of a dynamical system.

Dynamical System

Definition: Consider a smooth two-variable function φ(t, X): (t, X) ∈ R × R^n ↦ R^n. Let φ_t(X) = φ(t, X) be a map R^n ↦ R^n. We call the family of maps {φ_t} a smooth dynamical system if it satisfies the following two conditions:

1. φ_0 is the identity map: φ_0(X) = φ(0, X) = X for any X ∈ R^n.
2. φ_t ∘ φ_s = φ_{t+s}: φ(t, φ(s, X)) = φ(t + s, X).

Flows of linear systems X' = AX are dynamical systems. The solution is

φ_t(X_0) = φ(t, X_0) = e^{tA} X_0.

So

φ_0(X_0) = e^{0·A} X_0 = X_0,
φ_t ∘ φ_s(X_0) = φ_t(e^{sA} X_0) = e^{tA} e^{sA} X_0 = e^{(t+s)A} X_0 = φ_{t+s}(X_0).

Flows of nonlinear systems X' = F(X) are also dynamical systems (if F is a smooth function).

Integral equation formulation:

X(t) = X_0 + ∫_0^t F(X(s)) ds.

Picard Iteration: To solve this system, we create an iterative sequence

X_0(t) = X_0 (initial value),
X_{n+1}(t) = X_0 + ∫_0^t F(X_n(s)) ds.

Existence (for the general system X' = F(t, X)): the Picard sequence {X_n(t)} has a limit as n → ∞, for a small time period −ε < t < ε (local existence theorem); the limit X(t) is a solution for t in (−ε, ε), because it satisfies the integral formulation.

For a linear (nonautonomous) system X' = A(t) X, the Picard sequence converges for all t as long as A(t) is defined and continuous (global existence).

Uniqueness (for the general system X' = F(t, X)): the solution is unique, i.e., there is only one solution for the same IVP.

Continuous dependence on initial data (for the general system X' = F(t, X)): for Y_0 close to X_0, the solution Y(t) with the initial data Y_0 also exists in (−ε, ε). From the integral formulations

X(t) = X_0 + ∫_0^t F(X(s)) ds,  Y(t) = Y_0 + ∫_0^t F(Y(s)) ds,

we see

|X(t) − Y(t)| ≤ |X_0 − Y_0| + |∫_0^t [F(X(s)) − F(Y(s))] ds|
             ≤ |X_0 − Y_0| + ∫_0^t |F(X(s)) − F(Y(s))| ds.

So if |∇F| ≤ K,

|X(t) − Y(t)| ≤ |X_0 − Y_0| + K ∫_0^t |X(s) − Y(s)| ds.
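As a concrete illustration of the Picard iteration above, here is a minimal numerical sketch (Python with NumPy; the function name `picard`, the grid size, and the trapezoid-rule quadrature are my own choices, not part of the notes):

```python
import numpy as np

def picard(F, X0, t_end, n_iter=20, n_grid=2001):
    """Approximate the solution of X' = F(X), X(0) = X0 on [0, t_end]
    by iterating X_{n+1}(t) = X0 + integral_0^t F(X_n(s)) ds on a grid."""
    s = np.linspace(0.0, t_end, n_grid)
    X0 = np.asarray(X0, dtype=float)
    X = np.tile(X0, (n_grid, 1))                 # X_0(s) = X0 for all s
    for _ in range(n_iter):
        vals = np.array([F(x) for x in X])       # F(X_n(s)) on the grid
        # cumulative trapezoid integral from 0 to each grid point s
        integral = np.concatenate(
            [np.zeros((1, X.shape[1])),
             np.cumsum(0.5 * (vals[1:] + vals[:-1]) * np.diff(s)[:, None], axis=0)])
        X = X0 + integral                        # X_{n+1}(s)
    return X[-1]                                 # value at t_end

# scalar check: x' = x, x(0) = 1, so x(1) = e
approx = picard(lambda x: x, [1.0], 1.0)
print(approx)  # close to e = 2.71828...
```

With 20 iterations the Picard truncation error is far below the quadrature error, so the grid resolution dominates the accuracy.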

By Gronwall's inequality, we arrive at

|X(t) − Y(t)| ≤ |X_0 − Y_0| e^{Kt}.

So if Y_0 → X_0, then Y(t) → X(t).

Continuous dependence on parameters (for the general system X' = F(t, X)): consider X' = F_a(X). Let X_a(t) be the solution, and X_b(t) the solution for X' = F_b(X) with the same initial value. Then

X_a(t) − X_b(t) = ∫_0^t (F_a(X_a(s)) − F_b(X_b(s))) ds.

So if F_b(X) → F_a(X) as b → a, then X_b(t) → X_a(t).

So the flow φ(t, X) exists, is unique, and depends continuously on the initial data X.

The flow is a dynamical system: by definition, φ(0, X) = X. For any fixed s, set Y(t) = φ(t + s, X). Then Y(t) solves

Y'(t) = ∂/∂t φ(t + s, X) = F(φ(t + s, X)) = F(Y(t)),  Y(0) = φ(s, X).

Recall that W(t) = φ(t, X_0) solves

Y'(t) = F(Y(t)),  Y(0) = X_0.

In particular, for X_0 = φ(s, X), W(t) = φ(t, φ(s, X)) solves

Y'(t) = F(Y(t)),  Y(0) = φ(s, X).

By the uniqueness theory, W(t) = Y(t), i.e.,

φ(t, φ(s, X)) = φ(t + s, X).
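The Gronwall bound can be checked numerically. A minimal sketch (Python; the crude Euler integrator and the test field f(x) = sin x, which has Lipschitz constant K = 1, are illustrative choices, not part of the notes):

```python
import math

def integrate(f, x0, t_end, n=50000):
    """Crude forward-Euler integration of x' = f(x) on [0, t_end]."""
    x, dt = x0, t_end / n
    for _ in range(n):
        x += dt * f(x)
    return x

f = math.sin                 # |f'| = |cos| <= K = 1
x0, y0, T = 0.5, 0.51, 2.0
x_T = integrate(f, x0, T)
y_T = integrate(f, y0, T)
# Gronwall: |X(t) - Y(t)| <= |X0 - Y0| e^{Kt}
bound = abs(x0 - y0) * math.exp(1.0 * T)
print(abs(x_T - y_T), "<=", bound)
```

The observed separation stays well inside the exponential envelope, since the effective growth rate cos(x(t)) is strictly below K = 1 along this trajectory.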

Integral formulation for the flow:

φ_t(X) = X + ∫_0^t F(φ_s(X)) ds.

All dynamical systems are flows of ODEs: given φ_t(X), we compute

F(X) = ∂/∂t φ_t(X) |_{t=0}.

Since φ_{t+s}(X) = φ_t(φ_s(X)),

∂/∂t φ_{t+s}(X) = ∂/∂t φ_t(φ_s(X)).

At t = 0, the left side is ∂/∂s φ_s(X) and the right side is F(φ_s(X)), so φ_s(X) is a solution of Y' = F(Y).

Example 1: Use the Picard iteration to solve

X' = ( 0  −1 ) X,  X(0) = ( 1 ).
     ( 1   0 )            ( 0 )

Sol: Recall that the solution is X = (cos t, sin t)^T. We now generate the Picard sequence

U_0(t) = X(0),  U_{n+1}(t) = X(0) + ∫_0^t A U_n(s) ds,

and show it converges to X.
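For a linear system each Picard iterate is a vector-valued polynomial in t, so the integrals can be performed exactly on the coefficients. A sketch (Python with NumPy; it assumes the Example 1 system X' = AX with A = ((0, −1), (1, 0)) and X(0) = (1, 0), whose solution is (cos t, sin t)):

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])    # X' = AX
X0 = np.array([1.0, 0.0])                  # X(0)

# Represent U_n(t) as a polynomial: coeffs[k] is the vector coefficient of t^k.
coeffs = [X0]                              # U_0(t) = X0
for _ in range(25):
    # U_{n+1}(t) = X0 + integral_0^t A U_n(s) ds, done exactly term by term
    coeffs = [X0] + [A @ c / (k + 1) for k, c in enumerate(coeffs)]

t = 1.0
U = sum(c * t**k for k, c in enumerate(coeffs))
print(U)   # approaches (cos 1, sin 1)
```

After n iterations the coefficients are exactly the partial sums of e^{tA} X(0), i.e., coeffs[k] = A^k X(0)/k!, which makes the convergence to (cos t, sin t) visible directly.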

Jacobian matrix of mappings F: R^n → R^n. For

F(X) = (f_1(x_1, ..., x_n), ..., f_n(x_1, ..., x_n))^T,

the Jacobian is

DF(X) = ( ∂f_1/∂x_1  ∂f_1/∂x_2  ...  ∂f_1/∂x_n )
        (    ...        ...     ...     ...    )
        ( ∂f_n/∂x_1  ∂f_n/∂x_2  ...  ∂f_n/∂x_n ),

i.e., the rows of DF(X) are the gradients ∇f_1, ..., ∇f_n.

Example 2:
(a) For F(X) = X in R^n, DF(X) = I.
(b) In R^2, for F = (x_1 x_2, x_2^2),

DF = ( x_2  x_1  )
     ( 0    2x_2 ).

Some formulas (chain rule):

D(F ∘ φ)(X) = DF(φ(X)) Dφ(X).

Differentiating

∂/∂t φ(t, X) = F(φ(t, X))

with respect to X, we see that Dφ(t, X) solves

∂/∂t Dφ(t, X) = DF(φ(t, X)) Dφ(t, X),

i.e., each column of Dφ(t, X) solves

U' = DF(φ(t, X)) U,

or

Dφ_t(X) = I + ∫_0^t DF(φ_s(X)) Dφ_s(X) ds.

The Variational Equation

Given X_0, let φ(t, X_0) be the solution of the IVP with the initial value X_0. Set A(t) = DF(φ(t, X_0)), i.e., the Jacobian matrix of F evaluated at φ(t, X_0). We call the following the variational equation along the solution φ(t, X_0):

U' = A(t) U.
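A quick way to sanity-check a hand-computed Jacobian such as the one in Example 2(b) is a finite-difference approximation. A sketch (Python with NumPy; the helper `jacobian_fd`, the step size h, and the sample point are my own choices):

```python
import numpy as np

def jacobian_fd(F, X, h=1e-6):
    """Forward-difference approximation of the Jacobian DF(X)."""
    X = np.asarray(X, dtype=float)
    FX = np.asarray(F(X))
    J = np.zeros((FX.size, X.size))
    for j in range(X.size):
        Xh = X.copy()
        Xh[j] += h                       # perturb one coordinate
        J[:, j] = (np.asarray(F(Xh)) - FX) / h
    return J

# Example 2(b): F = (x1 x2, x2^2), so DF = ((x2, x1), (0, 2 x2))
F = lambda X: np.array([X[0] * X[1], X[1] ** 2])
X = np.array([2.0, 3.0])
J_fd = jacobian_fd(F, X)
J_exact = np.array([[X[1], X[0]], [0.0, 2 * X[1]]])
print(J_fd)      # approximately [[3, 2], [0, 6]]
print(J_exact)   # [[3, 2], [0, 6]]
```

The forward-difference error is O(h), so with h = 1e-6 the two matrices agree to several digits.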

The variational equation is a (global) linearization along φ(t, X_0): if φ(t, X_0) exists for t in [0, T], then so does the entire flow ψ(t, U_0) of the variational equation, i.e.,

∂/∂t ψ(t, U_0) = A(t) ψ(t, U_0).

Since

∂/∂t Dφ(t, X_0) = DF(φ(t, X_0)) Dφ(t, X_0),

each column of Dφ(t, X_0) is a solution of the variational equation. Multiplying both sides by U_0:

∂/∂t [Dφ(t, X_0) U_0] = DF(φ(t, X_0)) Dφ(t, X_0) U_0.

Since Dφ(0, X_0) U_0 = U_0,

ψ(t, U_0) = Dφ(t, X_0) U_0.

This means that, if we are able to solve the variational equation, then we can easily find Dφ(t, X_0), since

Dφ(t, X_0) = [ψ(t, e_1), ψ(t, e_2), ..., ψ(t, e_n)],

where the e_i are the standard basis vectors.

If φ(t, X) is smooth, we expand φ(t, X) in the X variable around X_0:

φ(t, X) = φ(t, X_0) + Dφ(t, X_0)(X − X_0) + O(|X − X_0|^2).

So if X = X_0 + U_0, then

φ(t, X) = φ(t, X_0) + ψ(t, U_0) + O(|X − X_0|^2).

This means that if we can solve the IVP with initial value X_0 to get φ(t, X_0), then we can calculate A(t) = DF(φ(t, X_0)) and thus the solution ψ(t, U_0) of the variational equation, since it is linear. Then we can approximate the solution of the IVP for a nearby initial value X = X_0 + U_0 by

φ(t, X_0) + ψ(t, U_0).

In general (with a weaker smoothness condition), let X(t) = φ(t, X_0) and U(t) = ψ(t, U_0); then X(t) + U(t) is an approximation of φ(t, X_0 + U_0) of order higher than one:

|X(t) + U(t) − φ(t, X_0 + U_0)| / |U_0| → 0 as |U_0| → 0.

The convergence is uniform in t.

In particular, if X_0 is an equilibrium solution, i.e., φ(t, X_0) = X_0, then

A(t) = DF(φ(t, X_0)) = DF(X_0),

and the variational equation is

U' = DF(X_0) U.

Its solution ψ(t, U_0) is such that φ(t, X_0) + ψ(t, U_0) = X_0 + ψ(t, U_0) is an approximate solution of φ(t, X_0 + U_0). So we sometimes call the variational equation the linearization.

Example 3: x' = x^2. The solution is

φ(t, x_0) = x_0 / (1 − x_0 t),  Dφ(t, x_0) = 1 / (1 − x_0 t)^2.

On the other hand, DF = 2x. So the variational equation along x = φ(t, x_0) is

u' = 2 φ(t, x_0) u = (2 x_0 / (1 − x_0 t)) u.

The solution of this variational equation with initial data u_0 is

ψ(t, u_0) = u_0 / (1 − x_0 t)^2.

Apparently, this function is defined as long as t < 1/x_0, the same as for φ(t, x_0).
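Example 3 can be verified numerically: the error of the linearized approximation φ(t, x_0) + ψ(t, u_0) should vanish faster than |u_0|. A sketch (Python; the sample values x_0 = 0.5 and t = 1 are arbitrary, chosen so that t < 1/x_0):

```python
phi = lambda t, x0: x0 / (1 - x0 * t)            # flow of x' = x^2
psi = lambda t, x0, u0: u0 / (1 - x0 * t) ** 2   # variational solution along phi(., x0)

x0, t = 0.5, 1.0     # valid since t < 1/x0 = 2
ratios = []
for u0 in (1e-1, 1e-2, 1e-3):
    err = abs(phi(t, x0) + psi(t, x0, u0) - phi(t, x0 + u0))
    ratios.append(err / u0)
print(ratios)   # error/|u0| decreases toward 0 as u0 shrinks
```

The ratios shrink roughly proportionally to |u_0|, matching the explicit error formula computed below.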

For x = x_0 + u_0,

φ(t, x_0) + ψ(t, u_0) − φ(t, x_0 + u_0)
= x_0/(1 − x_0 t) + u_0/(1 − x_0 t)^2 − (x_0 + u_0)/(1 − (x_0 + u_0) t)
= − u_0^2 t / [(1 − x_0 t)^2 (1 − (x_0 + u_0) t)].

So as u_0 → 0,

|φ(t, x_0) + ψ(t, u_0) − φ(t, x_0 + u_0)| / |u_0| = |u_0| t / [(1 − x_0 t)^2 (1 − (x_0 + u_0) t)] → 0.

When X_0 is an equilibrium solution, the variational equation becomes an autonomous linear system. In this case we call it the linearization.

Example 5: Consider X' = F(X), F = (x + y^2, −y). X_0 = 0 is an equilibrium. So along this solution, the linearization is U' = DF(0) U with

DF(0) = ( 1  0 ),  U(t) = ( x_0 e^t  ).
        ( 0 −1 )          ( y_0 e^{−t} )

Homework: 2, 7 (for the case (i) A(t) = diag(a_1(t), ..., a_n(t)); (ii) n = 2, general matrices that may not be diagonal).

Homework (additional): Find and solve the variational equations for X' = F(X):
1. F = (x^2 + xy, x + y^3), X_0 = ( , ).
2. x' = x^{4/3}, x(t) = 27/(3 − t)^3.

Homework (optional): Write (or find on the internet) a numerical program (in any language, such as C, Matlab, Mathematica, etc.) for (i) Euler's method, (ii) Runge-Kutta of order 4.
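As a starting point for the optional homework, here is a minimal sketch of both methods (Python; the function names and the test problem x' = x^2, whose exact flow appears in Example 3, are my own choices):

```python
def euler_step(f, t, x, h):
    """One forward Euler step for x' = f(t, x): first-order accurate."""
    return x + h * f(t, x)

def rk4_step(f, t, x, h):
    """One classical Runge-Kutta step (order 4) for x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def solve(step, f, x0, t0, t1, n):
    """March from t0 to t1 in n equal steps with the given one-step method."""
    t, x, h = t0, x0, (t1 - t0) / n
    for _ in range(n):
        x = step(f, t, x, h)
        t += h
    return x

# test on x' = x^2, x(0) = 0.5, whose exact solution is x(t) = 0.5/(1 - 0.5 t)
f = lambda t, x: x ** 2
x_euler = solve(euler_step, f, 0.5, 0.0, 1.0, 1000)
x_rk4 = solve(rk4_step, f, 0.5, 0.0, 1.0, 1000)
print(x_euler, x_rk4)   # both near the exact value x(1) = 1; RK4 far closer
```

With the same step count, RK4's error is smaller than Euler's by many orders of magnitude, reflecting the O(h^4) versus O(h) global accuracy.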