1. Introduction to Perturbation Theory & Asymptotic Expansions

Example. Consider
    x = 2 + ε cosh x    (1.1)
For ε ≠ 0 we cannot solve this in closed form. (Note: ε = 0 ⇒ x = 2.) The equation defines a function x : (−ε₀, ε₀) → ℝ (some range of ε either side of 0). We might look for a solution of the form x = x₀ + εx₁ + ε²x₂ + ⋯, and by substituting this into equation (1.1) we have
    x₀ + εx₁ + ε²x₂ + ⋯ = 2 + ε cosh(x₀ + εx₁ + ⋯)
Now for ε = 0, x₀ = 2, and so for suitably small ε,
    2 + εx₁ + ⋯ = 2 + ε cosh(2 + εx₁ + ⋯)  ⇒  x₁ = cosh(2),
so x(ε) = 2 + ε cosh(2) + ⋯. For a small numerical value of ε this two-term approximation already agrees closely with the exact root (which can be computed numerically).

1.1. Landau or Order Notation.
Definition. Let f and g be real functions defined on some open set containing zero, i.e. 0 ∈ D ⊆ ℝ. We say:
(1) f(x) = O(g(x)) as x → 0 ("big O") if there exist K > 0 and ε > 0 such that (−ε, ε) ⊆ D and |f(x)| ≤ K|g(x)| for all x ∈ (−ε, ε);
(2) f(x) = o(g(x)) as x → 0 ("little o") if lim_{x→0} f(x)/g(x) = 0;
(3) f(x) ∼ g(x) as x → 0 (asymptotically equivalent) if lim_{x→0} f(x)/g(x) = 1.
Remark. (1) We could define these for x → x₀ or x → ∞. (2) Abuse of notation: in, say, sin x = x + O(x³), strictly O(x³) should be an equivalence class of functions, i.e. sin x − x ∈ O(x³).
Lemma. If lim_{x→0} |f(x)/g(x)| = m < ∞ then f(x) = O(g(x)).
Proof. Suppose | |f(x)/g(x)| − m | < ε for |x| < δ. Then |f(x)|/|g(x)| < m + ε, so |f(x)| < (m + ε)|g(x)|. But this is just the definition of O with m + ε for K. □
Example. (i) x² = o(x) as x → 0, since x²/x = x → 0 as x → 0. (ii) 3x² + 5x⁴ = O(x²), since (3x² + 5x⁴)/x² → 3 as x → 0.
(iii) x = o(|x|^(1/2)) as x → 0, since x/|x|^(1/2) = ±|x|^(1/2) → 0.
(iv) x²/(1 + x²) = o(1), since x²/(1 + x²) → 0 as x → 0.
(v) x/(1 + x²) = O(x).
(vi) sin x = x + o(x) as x → 0, since lim_{x→0} (sin x − x)/x = lim_{x→0} (cos x − 1)/1 = 0 (L'Hôpital).
(vii) sin x = x + O(x³), since lim_{x→0} (sin x − x)/x³ = lim (cos x − 1)/(3x²) = lim (−sin x)/(6x) = lim (−cos x)/6 = −1/6.
(viii) sin x − x ∼ −x³/3!, since lim_{x→0} (sin x − x)/(−x³/3!) = 1.
(ix) sin x = x − x³/3! + O(x⁴).
The definition of f′(x) in o notation is: f(x + h) = f(x) + f′(x)h + o(h) as h → 0. The Taylor series for f(x + h) is given by
    f(x + h) = f(x) + f′(x)h + ⋯ + f⁽ⁿ⁾(x)hⁿ/n! + o(hⁿ).
If the Taylor series is a convergent power series then o(hⁿ) can be replaced by O(h^(n+1)). Any convergent power series satisfies
    Σ_{n=0}^∞ aₙxⁿ = Σ_{n=0}^N aₙxⁿ + O(x^(N+1)).
Examples:
    √(1 + x) = 1 + (1/2)x + (1/2)(−1/2)x²/2! + O(x³) = 1 + x/2 − x²/8 + O(x³)
    ln(1 + x) = x − x²/2 + O(x³)
    x² sin(1/x) = O(x²).
It is important to note that the last is big O where no limit exists: |x² sin(1/x)| ≤ x², though x² sin(1/x)/x² = sin(1/x), and this has no limit as x → 0.

1.2. The Fundamental Theorem of Perturbation Theory.
Theorem. If A₀ + A₁ε + ⋯ + A_N ε^N = O(ε^(N+1)) as ε → 0, then A₀ = A₁ = ⋯ = A_N = 0.
Proof. Suppose A₀ + A₁ε + ⋯ + A_N ε^N = O(ε^(N+1)) but not all A_k are zero; let A_M be the first nonzero one. Dividing by ε^M,
    A_M + A_{M+1}ε + ⋯ + A_N ε^(N−M) = O(ε^(N+1−M)) → 0 as ε → 0,
but the left-hand side tends to A_M ≠ 0 — a contradiction with big O. □
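Going back to the opening example x = 2 + ε cosh x, the two-term expansion can be sanity-checked against a numerically computed root. The fixed-point iteration and the value ε = 10⁻² below are illustrative choices, not from the notes:

```python
import math

def exact_root(eps, tol=1e-12):
    # fixed-point iteration x <- 2 + eps*cosh(x); contracts for small eps
    x = 2.0
    for _ in range(200):
        x_new = 2.0 + eps * math.cosh(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

eps = 1e-2
approx = 2.0 + eps * math.cosh(2.0)   # two-term perturbation expansion
root = exact_root(eps)
# the discrepancy is O(eps^2), as the expansion predicts
assert abs(root - approx) < 0.005
```

The leftover error is of size ε²·x₂, consistent with the truncated O(ε²) term.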
1.3. Perturbation Theory of Algebraic Equations.
Example. Consider x² − 3x + 2 + ε = 0. Assume the roots have the expansion x₀ + εx₁ + ε²x₂ + O(ε³); then by substitution
    (x₀ + εx₁ + ε²x₂ + ⋯)² − 3(x₀ + εx₁ + ε²x₂ + ⋯) + 2 + ε = O(ε³)
    ⇒ (x₀² − 3x₀ + 2) + ε(2x₀x₁ − 3x₁ + 1) + ε²(x₁² + 2x₀x₂ − 3x₂) = O(ε³)
Terms in ε⁰: x₀² − 3x₀ + 2 = 0 ⇒ x₀ = 1 or x₀ = 2.
Terms in ε¹: 2x₀x₁ − 3x₁ + 1 = 0 ⇒ if x₀ = 1 then x₁ = 1; otherwise if x₀ = 2 then x₁ = −1.
Terms in ε²: x₁² + 2x₀x₂ − 3x₂ = 0 ⇒ if x₀ = 1 (so x₁ = 1) then x₂ = 1; otherwise if x₀ = 2 (so x₁ = −1) then x₂ = −1.
    x = 1 + ε + ε² + O(ε³)  or  x = 2 − ε − ε² + O(ε³)
We can solve x² − 3x + 2 + ε = 0 directly to get x = (3 ± √(1 − 4ε))/2. Now √(1 − 4ε) = 1 − 2ε − 2ε² + O(ε³), and substituting this into (3 ± √(1 − 4ε))/2 gives x = (3 ± (1 − 2ε − 2ε²))/2, which is the same answer as above.

Example 1.3.2 (Singular Perturbation). Consider εx² − 2x + 1 = 0 and again assume there is an expansion x₀ + εx₁ + ε²x₂ + O(ε³). We get
terms in ε⁰: −2x₀ + 1 = 0 ⇒ x₀ = 1/2
terms in ε¹: x₀² − 2x₁ = 0 ⇒ x₁ = 1/8
terms in ε²: 2x₀x₁ − 2x₂ = 0 ⇒ x₂ = 1/16
and so we have x = 1/2 + ε/8 + ε²/16 + ⋯, which gives us one of the roots — but where is the second? The exact solution is given by x = (1 ± √(1 − ε))/ε, and the other root should be x = 2ε⁻¹ − 1/2 + O(ε).
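Both calculations above can be checked against the quadratic formula; a minimal sketch (the value of ε is an arbitrary small choice):

```python
import math

eps = 1e-3

# regular problem: x^2 - 3x + 2 + eps = 0
disc = math.sqrt(1 - 4*eps)
assert abs((3 - disc)/2 - (1 + eps + eps**2)) < 10*eps**3
assert abs((3 + disc)/2 - (2 - eps - eps**2)) < 10*eps**3

# singular problem: eps*x^2 - 2x + 1 = 0
d2 = math.sqrt(1 - eps)
root_small = (1 - d2)/eps          # the root the regular expansion finds
root_large = (1 + d2)/eps          # the "missing" root, of size O(1/eps)
assert abs(root_small - (0.5 + eps/8 + eps**2/16)) < 1e-9
assert abs(root_large - (2/eps - 0.5)) < 1e-3
```

The leftover errors are of the size of the first neglected term in each series, as expected.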
Last time, in Example 1.3.2, we did not find the other root of εx² − 2x + 1 = 0 using the expansion of the form x₀ + εx₁ + ε²x₂ + O(ε³). If instead we try ω = εx, then
    ω²/ε − 2ω/ε + 1 = 0  ⇒  ω² − 2ω + ε = 0.
This, we can assume, has an expansion in the usual way: ω₀ + εω₁ + ε²ω₂ + O(ε³), and:
Terms in ε⁰: ω₀² − 2ω₀ = 0 ⇒ ω₀ = 0 or ω₀ = 2
Terms in ε¹: 2ω₀ω₁ − 2ω₁ + 1 = 0 ⇒ ω₁ = 1/2 or ω₁ = −1/2
so that
    x = ω/ε = { 1/2 + O(ε) ;  2ε⁻¹ − 1/2 + O(ε) }.

Example (Non-Regular Expansions). Consider x² − (2 + ε)x + 1 = 0, and assume x = x₀ + εx₁ + ε²x₂ + O(ε³).
ε⁰: x₀² − 2x₀ + 1 = 0 ⇒ x₀ = 1 (twice)
ε¹: (1 + εx₁)² − (1 + εx₁)(2 + ε) + 1 = O(ε²) ⇒ 2εx₁ − 2εx₁ − ε = O(ε²) ⇒ −ε = O(ε²).
This contradicts the assumption that there was a regular expansion. The exact roots are
    x = (2 + ε ± √(4ε + ε²))/2 = 1 + ε/2 ± √ε √(1 + ε/4) = 1 ± ε^(1/2) + O(ε).
Try x = x₀ + ε^(1/2)x₁ + εx₂ + ⋯:
ε⁰: x₀ = 1 as before.
ε^(1/2): 2x₁ − 2x₁ = 0 ⇒ 0 = 0 (no information at this order).
ε¹: (1 + ε^(1/2)x₁ + εx₂)² − (1 + ε^(1/2)x₁ + εx₂)(2 + ε) + 1 = O(ε^(3/2)), which gives x₁² − 1 = 0 ⇒ x₁ = ±1,
so x = 1 ± ε^(1/2) + O(ε).

1.4. Perturbation Theory of ODEs.
Example (Regular Problem). Consider the following ODE:
    ẋ + x = εx²,  x(0) = 1.
We try an expansion of the form x(t) = x₀(t) + εx₁(t) + O(ε²), which leads to:
ε⁰: ẋ₀ + x₀ = 0, x₀(0) = 1 ⇒ x₀(t) = e^(−t)
ε¹: ẋ₁ + x₁ = x₀² = e^(−2t), x₁(0) = 0 (no ε in x(0) = 1) ⇒ x₁(t) = e^(−t) − e^(−2t)
and so x = e^(−t) + ε(e^(−t) − e^(−2t)) + O(ε²).
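The regular ODE problem happens to be a Bernoulli equation with closed form x(t) = 1/((1 − ε)eᵗ + ε), so the two-term expansion can be tested directly; the parameter value below is an illustrative choice:

```python
import math

eps = 0.05

def exact(t):
    # closed form of xdot + x = eps*x^2, x(0) = 1 (Bernoulli equation)
    return 1.0 / ((1 - eps)*math.exp(t) + eps)

# the two-term expansion x0 + eps*x1 is uniformly O(eps^2) accurate here
for t in (0.0, 0.5, 1.0, 3.0):
    approx = math.exp(-t) + eps*(math.exp(-t) - math.exp(-2*t))
    assert abs(exact(t) - approx) < eps**2
```

Unlike the singular examples that follow, the error stays O(ε²) for all t ≥ 0 — this is what "regular" means here.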
For t ∈ [0, T] the constants in the O definition can be chosen independent of t (the expansion is uniform). Sometimes we want only ε > 0 versions of o, O with one-sided limits lim_{ε→0⁺}. We may use a series 1, ε^(1/3), ε^(2/3), ε, … or in general an asymptotic sequence φ₀(ε), φ₁(ε), …, where φ_{n+1}(ε) = o(φₙ(ε)) is what is needed.

Example 1.4.2 (Singular Model Equation). Consider εẋ + x = 1, x(0) = 0, and suppose x = x₀ + εx₁ + O(ε²). Then
    ε(ẋ₀ + εẋ₁) + x₀ + εx₁ = 1 + O(ε²),  ε → 0⁺.
ε⁰: x₀ = 1, but x₀(0) = 0 — we cannot satisfy the initial condition. We can rescale time, i.e. t = ετ, which gives τ = t/ε and dx/dτ = (dx/dt)(dt/dτ) = εẋ, so
    dx/dτ + x = 1,  x(0) = 0.
Now use x = x₀ + εx₁ + O(ε²):
ε⁰: dx₀/dτ + x₀ = 1, x₀(0) = 0 ⇒ x₀ = 1 − e^(−τ) = 1 − e^(−t/ε)
ε¹: dx₁/dτ + x₁ = 0, x₁(0) = 0 ⇒ x₁ = 0 (similarly for ε², ε³ etc.).
Therefore the solution is x = 1 − e^(−t/ε).

Example ("Singular in the Domain"). Consider ẋ + εx² = 1, x(0) = 0, t > 0, and assume x = x₀ + εx₁ + ε²x₂ + O(ε³).
ε⁰: ẋ₀ = 1, x₀(0) = 0 ⇒ x₀ = t
ε¹: d(t + εx₁)/dt + ε(t + εx₁)² = 1 + O(ε²) ⇒ 1 + εẋ₁ + εt² = 1 + O(ε²) ⇒ ẋ₁ + t² = 0 ⇒ x₁ = −t³/3
(if we carry on we find the ε² term 2t⁵/15). The solution is
    x = t − εt³/3 + O(ε²).
This is not uniform (regular) for t ∈ [0, ∞).

Example (Damped Harmonic Motion with Small Damping (ε > 0)). Consider ẍ + εẋ + x = 0, x(0) = 0, ẋ(0) = 1, and assume x = x₀ + εx₁ + ε²x₂ + O(ε³).
ε⁰: ẍ₀ + x₀ = 0, x₀(0) = 0, ẋ₀(0) = 1 ⇒ x₀ = sin t
ε¹: ẍ₁ + ẋ₀ + x₁ = 0 ⇒ ẍ₁ + x₁ = −cos t, x₁(0) = 0, ẋ₁(0) = 0, and we should get x₁ = −(1/2) t sin t,
whereby the solution is x = sin t − (1/2) εt sin t + ⋯
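The non-uniformity of the damped-oscillator expansion can be seen numerically by comparing it with the exact solution written out on the next page; ε and the sample times below are illustrative choices:

```python
import math

eps = 0.05
beta = math.sqrt(1 - eps**2/4)

def exact(t):
    # exact solution of xddot + eps*xdot + x = 0, x(0)=0, xdot(0)=1
    return math.exp(-eps*t/2) * math.sin(beta*t) / beta

def two_term(t):
    # the straightforward expansion sin t - (eps/2) t sin t
    return math.sin(t) - 0.5*eps*t*math.sin(t)

# good for t = O(1) ...
assert abs(exact(2.0) - two_term(2.0)) < 2*eps**2
# ... but useless once t is of order 1/eps: the secular term has blown up
t_big = 4/eps
assert abs(exact(t_big) - two_term(t_big)) > 0.5
```

At t ∼ 1/ε the "small correction" εt sin t is the same size as the leading term, which is exactly why a two-time-scale treatment is needed later.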
This again is not uniform for t ∈ [0, ∞). The exact solution is
    x = (1 − ε²/4)^(−1/2) e^(−εt/2) sin(√(1 − ε²/4) t);
the expansion above is only good for small t.

1.5. Asymptotic Expansions.
Definition. A sequence of functions {φₙ}, n = 0, 1, 2, …, is called an asymptotic sequence as x → x₀ if
    lim_{x→x₀} φ_{n+1}(x)/φₙ(x) = 0  (i.e. φ_{n+1}(x) = o(φₙ(x))).
Note: we could have x₀ = ∞ or a one-sided limit x → x₀⁺.
Examples: (i) x⁻¹, 1, x, x², x³, … as x → 0⁺. (ii) 1, 1/x, ln(x)/x², 1/x², ln(x)/x³, … as x → ∞. (iii) tan x, (x − π)², (sin x)³, … as x → π.

Taylor's Theorem: f(x) = Σ_{n=0}^N f⁽ⁿ⁾(x₀)(x − x₀)ⁿ/n! + R_N(x) (the remainder), for f ∈ C^(N+1)[x₀ − r, x₀ + r], r > 0. There are remainder formulas to bound R_N(x), for example
    |R_N(x)| ≤ M_N r^(N+1)/(N + 1)!,  M_N > 0,  so R_N = O((x − x₀)^(N+1)) as x → x₀.
It is important to remember that Taylor's theorem does NOT say R_N → 0 as N → ∞: in fact we know plenty of power series that do not converge, e.g. Σ_{n=0}^∞ (−1)ⁿ n! xⁿ diverges for all x ≠ 0. Another famous non-convergent series is Stirling's formula for n!:
    ln n! = n ln n − n + (1/2) ln(2πn) + 1/(12n) − 1/(360n³) + O(1/n⁴).

Definition. Let {φₙ} be an asymptotic sequence as x → x₀. The sum Σ_{n=0}^N aₙφₙ(x) is called an asymptotic expansion of f with N terms if
    f(x) − Σ_{n=0}^N aₙφₙ(x) = o(φ_N(x)).
The aₙ are called the coefficients of the asymptotic expansion, and Σ_{n=0}^∞ aₙφₙ(x) is called an asymptotic series. Note: some people use the stronger definition O(φ_{N+1}(x)).
Notation: f(x) ∼ Σ_{n=0}^∞ aₙφₙ(x) as x → x₀.
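Stirling's series mentioned above is divergent but remarkably accurate at fixed n; a quick check against the log-gamma function (Python's `math.lgamma`, with ln n! = lgamma(n + 1)):

```python
import math

def stirling(n):
    # truncated Stirling series for ln n!
    return (n*math.log(n) - n + 0.5*math.log(2*math.pi*n)
            + 1/(12*n) - 1/(360*n**3))

# the truncation error behaves like 1/n^5, tiny even for modest n
for n in (5, 10, 20):
    assert abs(stirling(n) - math.lgamma(n + 1)) < 1/(100*n**5)
```

This is the characteristic behaviour of an asymptotic series: fixing the number of terms and letting n grow gives rapidly improving accuracy, even though fixing n and adding terms eventually diverges.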
Clearly any Taylor series is an asymptotic series.
Example. Find an asymptotic expansion for
    f(x) = ∫₀^∞ e^(−t)/(1 + xt) dt.
Now (1 + xt)^(−1) = 1 − xt + x²t² − ⋯, and it can be shown that ∫₀^∞ tⁿ e^(−t) dt = n!, hence (integrating term by term)
    f(x) ∼ 1 − x + 2!x² − 3!x³ + ⋯ = Σ (−1)ⁿ n! xⁿ,
which diverges by the ratio test: n!xⁿ/((n − 1)!x^(n−1)) = nx → ∞ as n → ∞ for x ≠ 0. It could still be an asymptotic expansion, however; we'd need to check
    f(x) − Σ_{n=0}^N (−1)ⁿ n! xⁿ = o(x^N).
This is a special case of:
Lemma (Watson's Lemma). Let f be a function with a convergent power series with radius of convergence R, and f(t) = O(e^(αt)) as t → ∞ (for some α > 0). Then
    ∫₀^∞ e^(−at) f(t) dt ∼ Σ_{n=0}^∞ f⁽ⁿ⁾(0)/a^(n+1)  as a → ∞.
In the last example, substituting u = xt gives f(x) = (1/x) ∫₀^∞ e^(−u/x)/(1 + u) du, which looks like Watson's lemma with a = 1/x.

Example (Incomplete Gamma Function). γ(a, x) = ∫₀ˣ t^(a−1) e^(−t) dt. As x → 0⁺,
    γ(a, x) = ∫₀ˣ t^(a−1) Σ_{n=0}^∞ (−t)ⁿ/n! dt = Σ_{n=0}^∞ ((−1)ⁿ/n!) ∫₀ˣ t^(n+a−1) dt = Σ_{n=0}^∞ ((−1)ⁿ/(n!(n + a))) x^(n+a).
Note: the power series under the integral is convergent, hence uniformly convergent; we have a convergent power series for γ(a, x) in x.

Example. Ei(x) = PV ∫_{−∞}^x (eᵗ/t) dt (the exponential integral).
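Returning to the divergent expansion of ∫₀^∞ e^(−t)/(1 + xt) dt above: the partial sums can be compared against a direct quadrature. The composite-Simpson rule, truncation point, and x value are all illustrative choices:

```python
import math

def f(x, n_steps=20000, t_max=60.0):
    # composite Simpson quadrature for \int_0^inf e^{-t}/(1+xt) dt
    # (the tail beyond t_max is below e^{-60}, i.e. negligible)
    h = t_max / n_steps
    g = lambda t: math.exp(-t) / (1 + x*t)
    s = g(0.0) + g(t_max)
    for k in range(1, n_steps):
        s += (4 if k % 2 else 2) * g(k*h)
    return s * h / 3

x = 0.1
partial = [sum((-1)**n * math.factorial(n) * x**n for n in range(N + 1))
           for N in range(51)]
val = f(x)
# a few terms approximate f well (error is bounded by the first omitted term) ...
assert abs(val - partial[4]) < 2e-3
assert abs(val - partial[10]) < 1e-3
# ... but the series ultimately diverges
assert abs(partial[50]) > 1e6
```

Optimal truncation occurs around N ≈ 1/x terms; beyond that, adding terms makes things worse.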
E₁(x) = ∫₁^∞ (e^(−tx)/t) dt; it turns out that E₁(x) = −Ei(−x) (with Ei taken as a Cauchy principal value). For small x,
    Ei(x) = γ + ln|x| + x + x²/(2·2!) + x³/(3·3!) + O(x⁴),
where γ = −∫₀^∞ e^(−x) ln x dx = lim_{N→∞} ( Σ_{k=1}^N 1/k − ln N ).

2. ODEs in the Plane
We consider systems of ODEs of the form
    ẋ = u(x, y),  ẏ = v(x, y),
where x(0) = x₀, y(0) = y₀. (Note: a 2nd-order ODE can be expressed as a coupled system of 1st-order ODEs, since if ẍ + f(x)ẋ + g(x) = 0 then, setting ẋ = y, it follows that ẏ = ẍ and so we get
    ẋ = y,  ẏ = −f(x)y − g(x),
where in this case u(x, y) = y, v(x, y) = −f(x)y − g(x).)

2.1. Linear Plane Autonomous Systems. ẋ = ax + by, ẏ = cx + dy. The 1-D case is easy: ẋ = ax gives x(t) = x(0)e^(at).
Example 2.1.1. Consider ẋ = 3x, ẏ = 2y. If we write x = (x, y)ᵀ then ẋ = [3 0; 0 2] x. We solve these two ODEs to get x(t) = x(0)e^(3t), y(t) = y(0)e^(2t), which may be written as
    x(t) = [e^(3t) 0; 0 e^(2t)] x(0).
We can construct a phase plot, with solutions being curves in the plane called trajectories or orbits. We could eliminate t as follows:
    y/y(0) = (x/x(0))^(2/3),  x(0) ≠ 0.
Figure 2.1.1.
Example 2.1.2. Similar to the above example, consider ẋ = x, ẏ = −y. We solve in both cases to get x(t) = x(0)eᵗ, y(t) = y(0)e^(−t), and eliminate t such that
    y = y(0)(x/x(0))^(−1),  i.e.  xy = x(0)y(0).
Noting that ẋ = [1 0; 0 −1] x, we end up with a phase plot of hyperbolas. Figure 2.1.2.

Example 2.1.3 (Simple Harmonic Motion: ẍ + x = 0). ẋ = y, ẏ = −x, i.e. ẋ = [0 1; −1 0] x.
    x(t) = A cos t + B sin t,  x(0) = A
    ẋ(t) = −A sin t + B cos t,  ẋ(0) = B
so
    (x, ẋ)ᵀ = (x(0)cos t + y(0)sin t, −x(0)sin t + y(0)cos t)ᵀ = [cos t sin t; −sin t cos t] (x(0), y(0))ᵀ,
a rotation matrix. The orbits are circles. Figure 2.1.3.

Example 2.1.4 (Damped Harmonic Motion: ẍ + bẋ + x = 0). ẋ = y, ẏ = −x − by, i.e. ẋ = [0 1; −1 −b] x. Try x(t) = e^(λt), giving the characteristic polynomial λ² + bλ + 1 = 0, so
    λ = −b/2 ± √(b²/4 − 1).
If b is small then b² < 4, so b²/4 − 1 < 0 and
    λ = −b/2 ± i√(1 − b²/4) = α + iβ,  x(t) = e^(αt)(A cos(βt) + B sin(βt)).
Figure 2.1.4.

Theorem 2.1.5. Let A ∈ ℝ^(2×2) be a real matrix with eigenvalues λ₁, λ₂. Then:
(i) If λ₁ ≠ λ₂ are real then there exists an invertible matrix P such that P⁻¹AP = [λ₁ 0; 0 λ₂].
(ii) If λ₁ = λ₂ = λ then either A is diagonal, A = λI, or A is not diagonal and there is a P such that P⁻¹AP = [λ 1; 0 λ].
(iii) If λ₁ = α + iβ, λ₂ = α − iβ, β ≠ 0, then there is a P such that P⁻¹AP = [α β; −β α].
How does this help? Put ẋ = Ax and let y = P⁻¹x; then x = Py, ẏ = P⁻¹ẋ = P⁻¹APy. This allows us to generalise the work we did above.

Example 2.1.6. For case (i) in the theorem, consider u̇ = P⁻¹AP u, i.e. u̇₁ = λ₁u₁, u̇₂ = λ₂u₂. Then uᵢ(t) = uᵢ(0)e^(λᵢt), so
    u(t) = [e^(λ₁t) 0; 0 e^(λ₂t)] u(0).
Now x = Pu, so x(t) = P [e^(λ₁t) 0; 0 e^(λ₂t)] P⁻¹ x(0). u₁ and u₂ are related by eliminating t: (u₁/u₁(0))^(1/λ₁) = eᵗ, so
    u₂ = u₂(0)(u₁/u₁(0))^(λ₂/λ₁).
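The recipe x(t) = P diag(e^(λ₁t), e^(λ₂t)) P⁻¹ x(0) can be sketched and verified numerically. The matrix below is a hand-picked diagonalizable example (chosen so that P is its own inverse), and the derivative check uses central finite differences:

```python
import math

# A = [[1, 2], [0, -1]] has eigenvalues 1 and -1 with eigenvectors
# (1, 0) and (1, -1); P = [[1, 1], [0, -1]] happens to satisfy P = P^{-1}.
P = [[1.0, 1.0], [0.0, -1.0]]
lam = (1.0, -1.0)
A = [[1.0, 2.0], [0.0, -1.0]]

def matvec(M, v):
    return (M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1])

def solution(t, x0):
    # x(t) = P diag(e^{l1 t}, e^{l2 t}) P^{-1} x(0)
    u0 = matvec(P, x0)                       # P^{-1} = P for this choice
    u = (u0[0]*math.exp(lam[0]*t), u0[1]*math.exp(lam[1]*t))
    return matvec(P, u)

x0 = (1.0, 1.0)
t, h = 0.7, 1e-6
x = solution(t, x0)
dxdt = [(a - b)/(2*h) for a, b in zip(solution(t + h, x0), solution(t - h, x0))]
Ax = matvec(A, x)
# the constructed x(t) really does satisfy xdot = A x
assert abs(dxdt[0] - Ax[0]) < 1e-6 and abs(dxdt[1] - Ax[1]) < 1e-6
```

Since the eigenvalues have opposite signs, this particular system is a saddle.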
Figure 2.1.5: λ₁ > λ₂ > 0 (node: source); λ₁ < λ₂ < 0 (node: sink).
For λ₁ ≠ λ₂ there are distinct eigenvectors v₁, v₂ with Avᵢ = λᵢvᵢ. Half-lines through the origin in the eigendirections are trajectories.
Figure 2.1.6: eigendirections, e.g. λ₁ > 0 > λ₂.
2.2. Phase Space Plots.
Eigenvalues real, different, same sign, positive — Node: source
Eigenvalues real, different, same sign, negative — Node: sink
Eigenvalues real, different, opposite sign — Saddle
Eigenvalues real, equal, λ > 0, A = λI — Source (star)
Eigenvalues real, equal, λ < 0, A = λI — Sink (star)
Eigenvalues real, equal, λ > 0, A ≠ λI — Degenerate source
Eigenvalues real, equal, λ < 0, A ≠ λI — Degenerate sink
Eigenvalues complex, λ₁ = α + iβ, λ₂ = α − iβ, α > 0, β ≠ 0 — Unstable spiral
Eigenvalues complex, λ₁ = α + iβ, λ₂ = α − iβ, α < 0, β ≠ 0 — Stable spiral
Eigenvalues purely imaginary, λ₁ = iβ, λ₂ = −iβ, β ≠ 0 — Centre (ellipses)
Figure 2.2.1.
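The table above can be condensed into a small trace–determinant classifier. This is a sketch: the equal-eigenvalue (star/degenerate) cases are lumped in with the nodes, since they need an extra check on A itself:

```python
def classify(a, b, c, d, tol=1e-12):
    # classify the linear system x' = A x, A = [[a, b], [c, d]]
    tr, det = a + d, a*d - b*c
    disc = tr*tr - 4*det                 # discriminant of lambda^2 - tr*lambda + det
    if det < 0:
        return "saddle"                  # real eigenvalues of opposite sign
    if abs(tr) < tol:
        return "centre"                  # purely imaginary (det > 0, tr = 0)
    if disc < 0:
        return "stable spiral" if tr < 0 else "unstable spiral"
    return "node: sink" if tr < 0 else "node: source"

assert classify(3, 0, 0, 2) == "node: source"   # Example 2.1.1
assert classify(1, 0, 0, -1) == "saddle"        # Example 2.1.2
assert classify(0, 1, -1, 0) == "centre"        # SHM
assert classify(0, 1, -1, -1) == "stable spiral"  # damped, b = 1
```

Both eigenvalues have the sign of the trace when det > 0, which is all the last line uses.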
Example 2.2.1. Consider ẋ = −3x + y, ẏ = 2x − 2y. The critical point of this system is at (0, 0)ᵀ, and the Jacobian is given by A = [−3 1; 2 −2]. We find the eigenvalues by setting det(A − λI) = 0, i.e. λ² + 5λ + 4 = 0 ⇒ λ = −4, −1: this corresponds to a node (sink). As for the associated eigenvectors:
    A + 4I = [1 1; 2 2],  so v = (1, −1)ᵀ is in the null space (hence an eigenvector);
    A + I = [−2 1; 2 −1],  so v = (1, 2)ᵀ is the other eigenvector.
We can get further information to help in curve sketching by considering isoclines:
    dy/dx = ẏ/ẋ = (2x − 2y)/(−3x + y).
Figure 2.2.2.

2.3. Linear Systems (Centres). A linear system for which the eigenvalues are wholly imaginary (λ = ±iβ) is called a centre. The characteristic equation of a general A ∈ ℝ^(2×2) is λ² − (trace A)λ + det A = 0. In this case (λ imaginary), λ² + β² = 0, so trace A = 0, det A > 0. Consider a simple case:
    ẋ = y,  ẏ = −cx  (c > 0).
Eliminate t to get
    dy/dx = −cx/y ⇒ y dy = −cx dx ⇒ y² + cx² = const.
This is the equation for an ellipse. To determine the direction of the arrows, set y = 0: then on the x-axis ẋ = 0, ẏ = −cx, and (ẋ, ẏ)ᵀ is a vector in the direction of solutions; for x positive, ẏ is negative, so the motion is clockwise.
The oscillatory nature of these graphs doesn't reflect many real-life situations. Figure 2.3.1: x(t), y(t).

2.4. Linear Approximations. Consider ẋ = u(x, y), ẏ = v(x, y). Critical points occur when u = v = 0. Let (x₀, y₀) be a critical point; put ξ = x − x₀, η = y − y₀ and Taylor expand about (x₀, y₀) to get (near the equilibrium point)
    u(x, y) = u(x₀, y₀) + ξ ∂u/∂x|₍ₓ₀,y₀₎ + η ∂u/∂y|₍ₓ₀,y₀₎ + O(ξ² + η²)  as (ξ, η) → (0, 0)
    v(x, y) = v(x₀, y₀) + ξ ∂v/∂x|₍ₓ₀,y₀₎ + η ∂v/∂y|₍ₓ₀,y₀₎ + O(ξ² + η²)  as (ξ, η) → (0, 0)
and so (u, v)ᵀ equals the Jacobian matrix applied to (ξ, η)ᵀ plus O(ξ² + η²). We can now make the approximation
    (ξ̇, η̇)ᵀ = [∂u/∂x ∂u/∂y; ∂v/∂x ∂v/∂y]|₍ₓ₀,y₀₎ (ξ, η)ᵀ,
which is a linear system.

Example 2.4.1 (Predator–Prey). We wish to model the dynamics between predators and prey. Without considering external and environmental variables, as the number of predators increases we expect the population growth rate of the prey to lessen; this should then result in a slow-down in the growth rate of predators (as there is more competition for fewer prey). Let x be a population of prey (e.g. rabbits) and y a population of predators (e.g. foxes), with x, y > 0. (Note: this model relies on large x and y so that we are able to talk about derivatives etc., since x, y are integers!) A simple model is
    ẋ = x(a − αy),  ẏ = y(−c + γx)  (a, c, α, γ > 0).
Then u(x, y) = x(a − αy), v(x, y) = y(−c + γx), and for an equilibrium u = v = 0. So for u = 0, either x = 0 or a − αy = 0 (y = a/α);
for v = 0, either y = 0 or −c + γx = 0 (x = c/γ). Therefore the critical points are at (0, 0) and (c/γ, a/α). More specifically, if we put a = 1, α = 1/2, c = 3/4, γ = 1/4, then
    ẋ = x(1 − y/2),  ẏ = y(−3/4 + x/4),
with critical points at (0, 0) and (3, 2).
Near (0, 0): (ξ̇, η̇)ᵀ = [1 0; 0 −3/4] (ξ, η)ᵀ, with corresponding eigenvalues and eigenvectors λ₁ = 1, v₁ = (1, 0)ᵀ and λ₂ = −3/4, v₂ = (0, 1)ᵀ. This is a saddle.
Near (3, 2): (ξ̇, η̇)ᵀ = [0 −3/2; 1/2 0] (ξ, η)ᵀ, with eigenvalues λ = ±i√3/2, i.e. ξ² + 3η² = const: ellipses. Now
    dy/dx = y(−3/4 + x/4) / (x(1 − y/2)),
which separates to give
    (3/4) ln x + ln y − y/2 − x/4 = const.
It is possible, albeit tricky, to show this is a closed curve. Figure 2.4.1: x(t), y(t).

Example 2.4.2 (Circular Pendulum). We consider a circular pendulum given (in non-dimensional units) by ẍ + sin x = 0, where x is an angle. Figure 2.4.2: pendulum of mass m under gravity mg.
Note that for small angles x, sin x ≈ x, and so we get ẍ + x = 0 (simple harmonic
motion). We shall solve ẍ + sin x = 0 qualitatively, since it can't easily be solved analytically. Let ẋ = y = u, ẏ = −sin x = v. The critical points are at ẋ = ẏ = 0, i.e. y = 0, x = nπ, n ∈ ℤ. The Jacobian is
    [∂u/∂x ∂u/∂y; ∂v/∂x ∂v/∂y] = [0 1; −cos x 0] = [0 1; (−1)^(n+1) 0]  at y = 0, x = nπ.
Near the critical point we consider (ξ̇, η̇)ᵀ = [0 1; (−1)^(n+1) 0] (ξ, η)ᵀ. The characteristic equation is λ² − (−1)^(n+1) = 0, i.e. λ² + (−1)ⁿ = 0. If n is even, λ² + 1 = 0 ⇒ λ = ±i; if n is odd, λ² − 1 = 0 ⇒ λ = ±1. For n odd we get eigenvectors v₁ = (1, 1)ᵀ, v₂ = (1, −1)ᵀ, which is a saddle. For n even we get a centre.
The centres correspond to small swings. The saddles correspond to swings just large enough that they stop at the top. Everywhere else corresponds to big swings where, with no damping, the pendulum doesn't stop. Figure 2.4.3.

2.5. Non-linear Oscillators. ẍ + x = 0 represents simple harmonic motion (SHM) with solution x(t) = A cos t + B sin t, a centre. Consider ẍ + 2βẋ + x = 0, making the substitution ẋ = y. The system of ODEs is ẋ = y, ẏ = −x − 2βy, and the roots of the resulting characteristic polynomial are
    λ = −β ± i√(1 − β²),
so for 0 < β < 1 we get a stable spiral.

Example 2.5.1 (Stiff Spring System). In a simple spring, force (and hence acceleration) is proportional to the extension of the spring (Hooke's law); instead we think of a stiff spring with force proportional to x + βx³. For small x this behaves like x, for large x like x³. So
    ẍ + x + βx³ = 0,  i.e.  ẋ = y, ẏ = −x − βx³,  u = y, v = −x − βx³.
The critical points are at u = v = 0: y = 0 and x + βx³ = 0, so x = 0 or 1 + βx² = 0 (no real solutions).
The only critical point is at (0, 0). The Jacobian
    [∂u/∂x ∂u/∂y; ∂v/∂x ∂v/∂y] = [0 1; −1 − 3βx² 0]
evaluated at (0, 0) is [0 1; −1 0], with λ = ±i: this is a centre.

Example 2.5.2 (Soft Spring). We could change the sign before β and simulate a soft spring, i.e. ẍ + x − βx³ = 0. The critical points in this case lie at (0, 0) and (±1/√β, 0), and the Jacobian is [0 1; −1 + 3βx² 0]. For x close to ±1/√β the linearization is [0 1; 2 0], with real eigenvalues ±√2: these equilibria are saddles. In physical terms the restoring force has failed there, and solutions can escape past them.

3. Limit Cycles
Orbits, that is, trajectories of a system of ODEs, cannot cross. Figure 3.0.1.
If x = (x(t), y(t))ᵀ and x(0) = x(T) for some T ≠ 0, then x(t) = x(t + T) for all t. When this happens we have a periodic orbit — e.g. a clock, an oscillator, a cycle in the economy, in biology, etc. Supposing we have such a cycle, what can be said about what happens around it? It could be the case that both outside and inside the orbit we spiral towards it; but then again, something else could happen instead. The equilibrium point at the centre doesn't give us this information. Figure 3.0.2.

Example (Cooked up). Consider a system of the contrived form:
    ẋ = −y − x(x² + y² − 1)    (1)
    ẏ = x − y(x² + y² − 1)    (2)
We see that by construction, when x, y satisfy x² + y² = 1 we obtain simple harmonic motion. There is only one equilibrium point, and it occurs at (0, 0). The Jacobian at this point is
    [∂f/∂x ∂f/∂y; ∂g/∂x ∂g/∂y]|₍₀,₀₎ = [1 −1; 1 1].
The characteristic equation is λ² − 2λ + 2 = 0, which has roots λ = 1 ± i. This is an unstable spiral. If instead, however, we look at this in polar coordinates, r² = x² + y², tan θ = y/x; differentiating implicitly we get
    rṙ = xẋ + yẏ,  θ̇ = (xẏ − yẋ)/(x² + y²).
If we multiply (1) and (2) by x and y respectively we get
    xẋ + yẏ = −(x² + y²)(r² − 1)  ⇒  rṙ = −r²(r² − 1)  ⇒  ṙ = −r(r² − 1).
This reveals that there is also an equilibrium at r = 1 (i.e. for r = 1 we stay on the circle). Furthermore for r > 1, ṙ < 0, so orbits spiral inwards towards the circle, whilst for r < 1, ṙ > 0, which is the unstable spiral we found above. Figure 3.0.3: ṙ > 0 inside, ṙ < 0 outside.
Now if we multiply (1) and (2) by −y and x respectively and add, we get
    xẏ − yẋ = x² − xy(r² − 1) + y² + xy(r² − 1) = x² + y² = r²  ⇒  θ̇ = r²/r² = 1.
The equilibrium point correctly told us that locally we have an unstable spiral, but it failed to illuminate the behaviour as we move further out. We shall establish a couple of results that allow us to determine when and where there exist no closed orbits. First recall:

Theorem (Divergence Theorem in the Plane). Let C be a closed curve, let A ⊂ ℝ² be the region it encloses, and let u, v be functions with continuous derivatives. Then
    ∬_A (∂u/∂x + ∂v/∂y) dx dy = ∮_C (u dy − v dx) = ∫ (u dy/ds − v dx/ds) ds,
where (x(s), y(s)) is a parameterisation of C.
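Before moving on, the polar-coordinate prediction for the contrived system — every orbit except the origin spirals onto the circle r = 1 — can be checked numerically. The RK4 integrator, step size, and starting points are my own choices for illustration:

```python
import math

def rhs(x, y):
    # the "cooked up" system (1), (2)
    r2m1 = x*x + y*y - 1
    return -y - x*r2m1, x - y*r2m1

def rk4_step(x, y, h):
    k1 = rhs(x, y); k2 = rhs(x + h/2*k1[0], y + h/2*k1[1])
    k3 = rhs(x + h/2*k2[0], y + h/2*k2[1]); k4 = rhs(x + h*k3[0], y + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

for x0 in (0.1, 2.0):              # start inside and outside the unit circle
    x, y = x0, 0.0
    for _ in range(20000):         # integrate to t = 20
        x, y = rk4_step(x, y, 1e-3)
    assert abs(math.hypot(x, y) - 1) < 1e-4
```

This matches ṙ = −r(r² − 1): r = 1 attracts from both sides, at rate e^(−2t) near the cycle.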
Theorem (Bendixson's Negative Criterion). Consider the system ẋ = u(x, y), ẏ = v(x, y), with u and v continuously differentiable. Let A ⊆ ℝ² be a region of the plane on which ∂u/∂x + ∂v/∂y does not change sign (and is not identically zero). Then there is no closed orbit contained within A.
Proof. Suppose for contradiction there exists a closed orbit in A. Then this orbit forms a closed curve C in A. Figure 3.0.4. Let A′ ⊆ A be the region enclosed by C (i.e. ∂A′ = C). Then, using ẋ = u and ẏ = v along the orbit, the divergence theorem gives
    ∬_{A′} (∂u/∂x + ∂v/∂y) dx dy = ∮_C (u dy − v dx) = ∫₀^T (u ẏ − v ẋ) dt = ∫₀^T (uv − vu) dt = 0.
But ∂u/∂x + ∂v/∂y is either > 0 or < 0 for all (x, y) ∈ A′, so the left-hand side is nonzero — a contradiction. □

Example (returned to the cooked example). For
    ẋ = u = −y − x(x² + y² − 1),  ẏ = v = x − y(x² + y² − 1),
we have ∂u/∂x = −3x² − y² + 1, ∂v/∂y = −x² − 3y² + 1, and so
    ∂u/∂x + ∂v/∂y = −4x² − 4y² + 2 = −4(x² + y²) + 2.
For values of x, y with x² + y² > 1/2 this fails to be always positive or always negative, and so we cannot say there exists no closed orbit. (Though we can be confident there is no such orbit inside x² + y² < 1/2.) Bendixson's criterion is not much use for answering the question "is there a closed orbit between r = 1/√2 and r = 1?"

Example (Damped Harmonic Motion). For ẍ + βẋ + x = 0 we have ẋ = u = y, ẏ = v = −βy − x. This is a stable spiral at (0, 0), and ∂u/∂x + ∂v/∂y = −β, which is constant. Hence there is no sign change for ∂u/∂x + ∂v/∂y, and so no closed orbits.
Example (General Damped Oscillator). This system is characterised by ẍ + f(x)ẋ + g(x) = 0 with f(x) > 0 (damping), which with the usual substitution ẋ = y yields
    ẋ = y,  ẏ = −f(x)y − g(x).
Now ∂u/∂x + ∂v/∂y = −f(x) is always negative, and so general damped systems of this form have no closed orbits.

Theorem (Poincaré–Bendixson Theorem). Given a bounded region A ⊆ ℝ² and an orbit C of a system of ODEs which remains in A for all t ≥ 0, C must approach either a limit cycle or an equilibrium.
Remark. (1) orbit = trajectory = solution curve. (2) limit cycle = closed orbit that nearby orbits approach. (3) We can use this result with time running backwards, i.e. orbits come from an unstable equilibrium or closed orbit.

3.2. Energy (brief). Consider an oscillator of the form ẋ = y, ẏ = −f(x), characterising ẍ + f(x) = 0. Since there is no damping we expect energy to be conserved. Consider
    E = ẋ²/2 + F(x) = y²/2 + F(x),
where y²/2 can be considered kinetic energy and F(x) the potential energy. Then
    dE(x, y)/dt = yẏ + F′(x)ẋ = (ẍ + F′(x))ẋ.
Set F′ = f; then dE/dt = 0 along solution curves. So E is constant on a solution (x(t), y(t)); we call this a first integral.

Example 3.2.1 (Duffing's Equation). This is the hard (stiff) spring system we met earlier: ẍ + ω²x + εx³ = 0. Here f(x) = ω²x + εx³, so F(x) = ω²x²/2 + εx⁴/4 (+ some constant we need not worry about in this context of constant solution curves). Then
    E(x, y) = y²/2 + ω²x²/2 + εx⁴/4
for ε > 0. As E(x, y) is constant, the solutions are bounded for all t (closed curves). We can say that
    x² + y² ≤ max(2, 2/ω²)(y²/2 + ω²x²/2 + εx⁴/4) = max(2, 2/ω²) E,
i.e. solutions stay in a circle. For constant E we can check that for y = 0 there are two solutions for x.
4. Lindstedt's Method
Example 4.0.2 (Duffing's Equation). ẍ + ω²x + εx³ = 0, 0 < ε ≪ 1. We know the solutions are periodic for y(0) = ẋ(0) ≠ 0; they will resemble slightly squashed ellipses. Figure 4.0.1.
Consider a straightforward expansion x = x₀ + εx₁ + ε²x₂ + O(ε³).
Terms in ε⁰: ẍ₀ + ω²x₀ = 0 ⇒ x₀ = a cos ωt. We may suppose without loss of generality that the initial conditions for this system are x(0) = a and ẋ(0) = 0, since we are just picking out a particular solution curve; we still get all of them.
Terms in ε¹: ẍ₁ + ω²x₁ + x₀³ = 0 ⇒ ẍ₁ + ω²x₁ = −a³ cos³(ωt). To deal with cos³(ωt) we use the identity cos³(ωt) = (3/4)cos(ωt) + (1/4)cos(3ωt), and so
    ẍ₁ + ω²x₁ = −a³((3/4)cos(ωt) + (1/4)cos(3ωt)).
Since cos(ωt) appears in the homogeneous solution, we would need to introduce a t sin(ωt) term (a secular term), and this is at odds with what we already know about this system: that it is bounded as t → ∞. The trick to get rid of the secular term is to introduce another series: τ = Ωt where Ω = Ω₀ + εΩ₁ + ⋯

Example (Lindstedt's Method). We try again with Duffing's equation (prefixing ε with a minus sign this time), using the above idea and expecting to see a solution that has periodic orbits for ε small enough. Without loss of generality, we rescale time (setting ω = 1) such that we try to solve
    ẍ + x − εx³ = 0.
We define τ = Ωt where Ω = Ω₀ + εΩ₁ + ⋯, and so x(t) becomes x(τ). Coupling this with the standard expansion we use for x, we have
    x(τ) = x₀((Ω₀ + εΩ₁ + ⋯)t) + εx₁((Ω₀ + εΩ₁ + ⋯)t) + ⋯
We want xᵢ(τ) = xᵢ(τ + 2π) (i.e. period 2π for each i). For ε = 0 we have Ω₀ = 1 (had we not set ω = 1 earlier then we'd have instead
Ω₀ = ω; actually we could choose what we want for Ω₀, and depending on how difficult we want to make things, this choice determines Ω₁ later. It makes sense to keep things simple and choose Ω₀ such that for ε = 0 we have x(τ) = x(ωt)).
Now dτ/dt = Ω, so dx/dt = (dx/dτ)(dτ/dt) = Ω dx/dτ (i.e. ẋ = Ωx′) and d²x/dt² = Ω² d²x/dτ². Returning to the ODE, we have Ω²x″ + x − εx³ = 0, and with Ω² = (Ω₀ + εΩ₁ + ⋯)² = 1 + 2εΩ₁ + ⋯,
    (1 + 2εΩ₁ + ⋯)(x₀ + εx₁ + ⋯)″ + (x₀ + εx₁ + ⋯) − ε(x₀ + εx₁ + ⋯)³ = 0.
Terms in ε⁰: x₀″ + x₀ = 0. As before we will assume initial conditions x(0) = a, ẋ(0) = 0 to pick a point on some curve. We know there exists a solution with these properties by assuming ellipsoidal-shaped orbits, and by varying a we get all the curves. Since there is no ε in the initial conditions, we get
    x₀(0) = a,  x₁(0) = 0,  x₀′(0) = x₁′(0) = 0.
Then the solution for x₀ is x₀ = a cos τ.
Terms in ε¹: x₁″ + 2Ω₁x₀″ + x₁ − x₀³ = 0, i.e.
    x₁″ + x₁ = −2Ω₁(−a cos τ) + (a cos τ)³ = (2Ω₁a + (3/4)a³) cos τ + (1/4)a³ cos 3τ.
The whole point of doing this was to eliminate the cos τ term, which would have introduced a secular term, and so we set
    2Ω₁a + (3/4)a³ = 0  ⇒  Ω₁ = −(3/8)a².
So it now remains to solve x₁″ + x₁ = (1/4)a³ cos(3τ). We try x₁ = A cos(3τ):
    −9A cos(3τ) + A cos(3τ) = (1/4)a³ cos(3τ)  ⇒  A = −a³/32.
The general solution for x₁ before applying the initial conditions is
    x₁ = α cos τ + β sin τ − (a³/32) cos 3τ.
Now x₁(0) = 0 ⇒ α = a³/32 and x₁′(0) = 0 ⇒ β = 0, giving
    x₁ = (a³/32)(cos τ − cos 3τ).
Thus
    x(t) = a cos(Ωt) + ε(a³/32)(cos(Ωt) − cos(3Ωt)) + O(ε²),  where Ω = 1 − (3/8)a²ε + ⋯
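The Lindstedt result can be compared against a direct numerical integration of ẍ + x − εx³ = 0. The RK4 scheme, ε, a, and time horizon below are my own illustrative choices; the point is that the naive expansion x ≈ a cos t drifts out of phase while the frequency-corrected one stays accurate:

```python
import math

eps, a = 0.01, 1.0

def rhs(x, v):
    return v, -x + eps*x**3          # xddot = -x + eps*x^3

def rk4_step(x, v, h):
    k1 = rhs(x, v); k2 = rhs(x + h/2*k1[0], v + h/2*k1[1])
    k3 = rhs(x + h/2*k2[0], v + h/2*k2[1]); k4 = rhs(x + h*k3[0], v + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            v + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

omega = 1 - 3*eps*a**2/8             # Lindstedt frequency
x, v, t, h = a, 0.0, 0.0, 1e-3
err_lindstedt = err_naive = 0.0
for _ in range(50000):               # integrate to t = 50 (~8 periods)
    x, v = rk4_step(x, v, h); t += h
    approx = (a*math.cos(omega*t)
              + eps*a**3/32*(math.cos(omega*t) - math.cos(3*omega*t)))
    err_lindstedt = max(err_lindstedt, abs(x - approx))
    err_naive = max(err_naive, abs(x - a*math.cos(t)))
assert err_lindstedt < 5e-3          # uniformly small: phase error is O(eps^2 t)
assert err_naive > 0.05              # secular drift: phase error is O(eps t)
```

The naive approximation's error grows linearly in t; the Lindstedt one only picks up an O(ε²t) phase error, which is why it stays accurate over many periods.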
5. Method of Multiple Scales
In the last section we used Lindstedt's method to take account of varying frequencies; now we develop a more general method for situations with two time scales. An example of where this is relevant is the damped circular pendulum ẍ + βẋ + sin x = 0. There are two things going on here: an oscillation, which is captured by one time scale, and a slow loss of energy due to the damping term, captured by another time scale. By considering two scales we are able to capture different features of the system. Figure 5.0.1: fast time t, slow time T.
Consider small oscillations and small damping: ẍ + εẋ + x = 0 (damped harmonic motion). If we did a standard expansion we'd end up with a solution x(t, ε) = sin t − (ε/2) t sin t + ⋯, and this is not a uniform expansion for the solution. The exact solution for this system (with x(0) = 0, ẋ(0) = 1) is
    x = (1 − ε²/4)^(−1/2) e^(−εt/2) sin(√(1 − ε²/4) t).

The Method.
(1) Introduce a new variable T = εt, and think of t as fast time and T as slow time.
(2) Treat t and T as independent variables for the function x(t, T, ε). Using the chain rule (and ∂ε/∂t = 0) we have
    dx/dt = ∂x/∂t + ε ∂x/∂T
    d²x/dt² = ∂²x/∂t² + 2ε ∂²x/∂t∂T + ε² ∂²x/∂T².
(3) Try an expansion x = x₀(t, T) + εx₁(t, T) + ⋯, where of course T = εt.
(4) Use the extra freedom of x depending on T to kill off any secular terms.

5.2. Application of Multiple Scales to Damped Harmonic Motion.
Example. We consider ẍ + εẋ + x = 0 with initial conditions x(0) = 0, ẋ(0) = 1. This becomes
    ∂²x/∂t² + 2ε ∂²x/∂t∂T + ε² ∂²x/∂T² + ε(∂x/∂t + ε ∂x/∂T) + x = 0.
Now let x = x₀ + εx₁ + ε²x₂ + ⋯:
Terms in ε⁰: ∂²x₀/∂t² + x₀ = 0. This is a PDE with solution
    x₀ = A₀(T) cos t + B₀(T) sin t,
where A₀(T), B₀(T) are some functions of T (as opposed to just constants). Using the initial conditions x(0) = 0, ẋ(0) = 1:
    x₀(0, 0) = A₀(0) = 0,  ∂x₀/∂t(0, 0) = B₀(0) = 1.
Terms in ε¹:
    ∂²x₁/∂t² + 2 ∂²x₀/∂t∂T + ∂x₀/∂t + x₁ = 0.
Now ∂x₀/∂t = −A₀(T) sin t + B₀(T) cos t and ∂²x₀/∂t∂T = −A₀′(T) sin t + B₀′(T) cos t, and so it remains to solve
    ∂²x₁/∂t² + x₁ = (2A₀′(T) + A₀(T)) sin t − (2B₀′(T) + B₀(T)) cos t.
The right-hand side contains terms in sin t, cos t which will induce secular terms; we therefore choose A₀(T) and B₀(T) such that they go away. In other words
    2A₀′ + A₀ = 0  ⇒  A₀ = A₀(0)e^(−T/2) = 0
    2B₀′ + B₀ = 0  ⇒  B₀ = B₀(0)e^(−T/2) = e^(−T/2)
and so
    x₀ = e^(−T/2) sin t = e^(−εt/2) sin t.
To get the x₁ term we would need higher-order terms to fully specify it. Notice that, with the exception of constants, the first-order part captures most of the features of the exact solution.
In general we would have multiple time scales T₀ = t, T₁ = εt, …, Tₙ = εⁿt. If we consider a series x₀(T₀, T₁, …, Tₙ) + εx₁(T₀, T₁, …, Tₙ) + ⋯ then
    d/dt = ∂/∂T₀ + ε ∂/∂T₁ + ε² ∂/∂T₂ + ⋯
Example 5.2.2 (Van der Pol's Equation). Consider ẍ + ε(x² − 1)ẋ + x = 0. Immediately we see that for x² > 1 we have damping, and for x² < 1 we have negative damping; so it would seem there is a tendency to head towards x² = 1. (Perhaps we could use energy methods; anyhow…) Substituting the two-scale derivatives and x = x₀ + εx₁ + ⋯:
Terms in ε⁰: ∂²x₀/∂t² + x₀ = 0 ⇒ x₀ = A(T) cos t + B(T) sin t. We can write this in complex form
    x₀ = A(T)e^(it) + Ā(T)e^(−it)
(noting that z + z̄ = 2 Re(z)). We can justify this step since, in general, if z = a + ib then
    (a + ib)e^(it) + (a − ib)e^(−it) = a(e^(it) + e^(−it)) + ib(e^(it) − e^(−it)) = 2a cos t − 2b sin t.
Terms in ε¹:
    ∂²x₁/∂t² + x₁ + 2 ∂²x₀/∂t∂T + (x₀² − 1) ∂x₀/∂t = 0,
i.e.
    ∂²x₁/∂t² + x₁ + 2(iA′e^(it) − iĀ′e^(−it)) + [(Ae^(it) + Āe^(−it))² − 1](iAe^(it) − iĀe^(−it)) = 0,
where A′ = A′(T) = dA/dT. If we wish to find A(T), as opposed to x₁, we need only consider secular terms. So we need to equate the coefficients of e^(it) (and e^(−it)) to zero and kill them off. Considering terms in e^(it) we have
    2iA′ + iA²Ā − iA = 0,  i.e.  2A′ = A − A²Ā.
Similarly, if we consider e^(−it) we get the conjugate of this expression; i.e. whatever kills off the e^(it) terms also kills off the e^(−it) terms. We should now use the polar form of A: as A²Ā = |A|²A, write A = |A|e^(iφ) with φ = arg(A) and |A| = a/2, so that
    x₀ = a cos(t + φ),
and we now wish to find a. Now
    dA/dT = (1/2)(da/dT)e^(iφ) + (1/2)a i(dφ/dT)e^(iφ),
and substituting into 2A′ = A − A²Ā and dividing through by e^(iφ) gives
    a′ + iaφ′ = a/2 − a³/8.
We now equate real and imaginary parts:
    a′ = (1/2)a − (1/8)a³,  φ′ = 0  ⇒  φ is constant.
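The slow-flow equation a′ = a/2 − a³/8 has a stable fixed point at a = 2, so the multiple-scales analysis predicts every orbit (other than the origin) approaches a limit cycle of amplitude 2. A numerical sketch of the full Van der Pol equation confirms this; the RK4 scheme, ε, step size, and amplitude estimate √(x² + ẋ²) are my own illustrative choices:

```python
import math

eps = 0.05

def rhs(x, v):
    # Van der Pol: xddot + eps*(x^2 - 1)*xdot + x = 0
    return v, -x - eps*(x*x - 1)*v

def rk4_step(x, v, h):
    k1 = rhs(x, v); k2 = rhs(x + h/2*k1[0], v + h/2*k1[1])
    k3 = rhs(x + h/2*k2[0], v + h/2*k2[1]); k4 = rhs(x + h*k3[0], v + h*k3[1])
    return (x + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            v + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

for x0 in (0.1, 4.0):                # start inside and outside the limit cycle
    x, v = x0, 0.0
    for _ in range(200000):          # t = 400, i.e. slow time T = eps*t = 20
        x, v = rk4_step(x, v, 2e-3)
    amp = math.hypot(x, v)           # ~ radius in the (x, xdot) phase plane
    assert abs(amp - 2.0) < 0.1      # amplitude 2 up to O(eps) waveform wobble
```

The amplitude relaxes to 2 on the slow T scale regardless of the starting radius, exactly as the a′ equation predicts.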
Foundations of Data Science John Hopcroft Ravindran Kannan Version /4/204 These notes are a first draft of a book being written by Hopcroft and Kannan and in many places are incomplete. However, the notes
More informationApproximating functions by Taylor Polynomials.
Chapter 4 Approximating functions by Taylor Polynomials. 4.1 Linear Approximations We have already seen how to approximate a function using its tangent line. This was the key idea in Euler s method. If
More informationFEEDBACK CONTROL OF A NONHOLONOMIC CARLIKE ROBOT
FEEDBACK CONTROL OF A NONHOLONOMIC CARLIKE ROBOT Alessandro De Luca Giuseppe Oriolo Dipartimento di Informatica e Sistemistica Università di Roma La Sapienza Via Eudossiana 8, 84 Rome, Italy {deluca,oriolo}@labrob.ing.uniroma.it
More informationFigure 2.1: Center of mass of four points.
Chapter 2 Bézier curves are named after their inventor, Dr. Pierre Bézier. Bézier was an engineer with the Renault car company and set out in the early 196 s to develop a curve formulation which would
More informationFeedback Control of a Nonholonomic Carlike Robot
Feedback Control of a Nonholonomic Carlike Robot A. De Luca G. Oriolo C. Samson This is the fourth chapter of the book: Robot Motion Planning and Control JeanPaul Laumond (Editor) Laboratoire d Analye
More informationONEDIMENSIONAL RANDOM WALKS 1. SIMPLE RANDOM WALK
ONEDIMENSIONAL RANDOM WALKS 1. SIMPLE RANDOM WALK Definition 1. A random walk on the integers with step distribution F and initial state x is a sequence S n of random variables whose increments are independent,
More informationWHICH SCORING RULE MAXIMIZES CONDORCET EFFICIENCY? 1. Introduction
WHICH SCORING RULE MAXIMIZES CONDORCET EFFICIENCY? DAVIDE P. CERVONE, WILLIAM V. GEHRLEIN, AND WILLIAM S. ZWICKER Abstract. Consider an election in which each of the n voters casts a vote consisting of
More informationA UNIQUENESS RESULT FOR THE CONTINUITY EQUATION IN TWO DIMENSIONS. Dedicated to Constantine Dafermos on the occasion of his 70 th birthday
A UNIQUENESS RESULT FOR THE CONTINUITY EQUATION IN TWO DIMENSIONS GIOVANNI ALBERTI, STEFANO BIANCHINI, AND GIANLUCA CRIPPA Dedicated to Constantine Dafermos on the occasion of his 7 th birthday Abstract.
More informationA NoNonsense Introduction to General Relativity
A NoNonsense Introduction to General Relativity Sean M. Carroll Enrico Fermi Institute and Department of Physics, University of Chicago, Chicago, IL, 60637 carroll@theory.uchicago.edu c 2001 1 1 Introduction
More informationI. Vectors and Geometry in Two and Three Dimensions
I. Vectors and Geometry in Two and Three Dimensions I.1 Points and Vectors Each point in two dimensions may be labeled by two coordinates (a,b) which specify the position of the point in some units with
More informationFACULTY OF SCIENCE SCHOOL OF MATHEMATICS AND STATISTICS FIRST YEAR MAPLE NOTES
FACULTY OF SCIENCE SCHOOL OF MATHEMATICS AND STATISTICS FIRST YEAR MAPLE NOTES 2015 CRICOS Provider Code 00098G c 2015, School of Mathematics and Statistics, UNSW These notes are copyright c the University
More informationOrthogonal Bases and the QR Algorithm
Orthogonal Bases and the QR Algorithm Orthogonal Bases by Peter J Olver University of Minnesota Throughout, we work in the Euclidean vector space V = R n, the space of column vectors with n real entries
More informationUnderstanding the FiniteDifference TimeDomain Method. John B. Schneider
Understanding the FiniteDifference TimeDomain Method John B. Schneider June, 015 ii Contents 1 Numeric Artifacts 7 1.1 Introduction...................................... 7 1. Finite Precision....................................
More informationTHE PROBLEM OF finding localized energy solutions
600 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 45, NO. 3, MARCH 1997 Sparse Signal Reconstruction from Limited Data Using FOCUSS: A Reweighted Minimum Norm Algorithm Irina F. Gorodnitsky, Member, IEEE,
More informationAn Elementary Introduction to Modern Convex Geometry
Flavors of Geometry MSRI Publications Volume 3, 997 An Elementary Introduction to Modern Convex Geometry KEITH BALL Contents Preface Lecture. Basic Notions 2 Lecture 2. Spherical Sections of the Cube 8
More informationON THE DISTRIBUTION OF SPACINGS BETWEEN ZEROS OF THE ZETA FUNCTION. A. M. Odlyzko AT&T Bell Laboratories Murray Hill, New Jersey ABSTRACT
ON THE DISTRIBUTION OF SPACINGS BETWEEN ZEROS OF THE ZETA FUNCTION A. M. Odlyzko AT&T Bell Laboratories Murray Hill, New Jersey ABSTRACT A numerical study of the distribution of spacings between zeros
More informationThe Backpropagation Algorithm
7 The Backpropagation Algorithm 7. Learning as gradient descent We saw in the last chapter that multilayered networks are capable of computing a wider range of Boolean functions than networks with a single
More informationRevised Version of Chapter 23. We learned long ago how to solve linear congruences. ax c (mod m)
Chapter 23 Squares Modulo p Revised Version of Chapter 23 We learned long ago how to solve linear congruences ax c (mod m) (see Chapter 8). It s now time to take the plunge and move on to quadratic equations.
More informationErgodicity and Energy Distributions for Some Boundary Driven Integrable Hamiltonian Chains
Ergodicity and Energy Distributions for Some Boundary Driven Integrable Hamiltonian Chains Peter Balint 1, Kevin K. Lin 2, and LaiSang Young 3 Abstract. We consider systems of moving particles in 1dimensional
More informationA First Course in General Relativity Bernard F Schutz. Solutions to Selected Exercises
A First Course in General Relativity Bernard F Schutz (2 nd Edition, Cambridge University Press, 2009) Solutions to Selected Exercises (Version 1.0, November 2009) To the user of these solutions: This
More information