Linear systems of ordinary differential equations
Linear systems of ordinary differential equations

(This is a draft and preliminary version of the lectures given by Prof. Colin Atkinson FRS on 21st, 22nd and 25th April 2008 at Tecnun.)

Introduction.

This chapter studies the solution of systems of ordinary differential equations. Problems of this kind appear in many physical or chemical models in which several variables depend on the same independent one. For instance, Newton's second law applied to a particle:

    m d^2 x_1/dt^2 = F_1(x_1, x_2, x_3, x_1', x_2', x_3', t)
    m d^2 x_2/dt^2 = F_2(x_1, x_2, x_3, x_1', x_2', x_3', t)    (1)
    m d^2 x_3/dt^2 = F_3(x_1, x_2, x_3, x_1', x_2', x_3', t)

Let us consider this system of ordinary differential equations:

    dx/dt = 3(y − x)
    dy/dt = rx − y − xz    (2)
    dz/dt = xy − z

where x, y, z are time-dependent variables and r is a parameter. This is a system of three non-linear ODEs. The interesting feature of this example is the strong dependence of the solution on the parameter r and on the initial conditions (t_0, x_0, y_0, z_0). For this reason the system is a standard example of chaos theory, a well-known branch of mathematics popularly associated with the "butterfly effect". These equations were posed by Lorenz in the study of meteorology. The phrase refers to the idea that a butterfly's wings might create tiny changes in the atmosphere that may ultimately alter the path of a tornado, or delay, accelerate or even prevent the occurrence of a tornado in a certain location. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events leading to large-scale alterations. Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different. While the butterfly does not cause the tornado, the flap of its wings is an essential part of the initial conditions resulting in one.
Recurrence, the approximate return of a system towards its initial conditions, together with sensitive dependence on initial conditions, are the two main ingredients of chaotic motion. They have the practical consequence of making complex systems, such as the weather, difficult to predict past a certain time range (approximately a week in the case of the weather).

© 2008 Tecnun (University of Navarra)
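The sensitive dependence just described is easy to observe numerically. The sketch below is an illustration, not part of the lecture: it uses the classical Lorenz parameters σ = 10, b = 8/3, r = 28 (an assumption, since the text fixes only the general form) and integrates two copies of the system whose initial conditions differ by 10⁻⁸.

```python
# Sketch: sensitive dependence on initial conditions in the Lorenz system.
# Parameters sigma = 10, b = 8/3, r = 28 are the classical chaotic choice
# (an assumption for illustration).

def lorenz(state, sigma=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), r * x - y - x * z, x * y - b * z)

def rk4_step(f, state, dt):
    # One classical fourth-order Runge-Kutta step.
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b2 + 2 * c + d)
                 for s, a, b2, c, d in zip(state, k1, k2, k3, k4))

def separation(t_final=25.0, dt=0.01, delta=1e-8):
    a = (1.0, 1.0, 1.0)
    b = (1.0 + delta, 1.0, 1.0)          # tiny perturbation: the "butterfly"
    for _ in range(int(t_final / dt)):
        a, b = rk4_step(lorenz, a, dt), rk4_step(lorenz, b, dt)
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
```

Both trajectories stay bounded on the attractor, yet their separation grows by many orders of magnitude; this is the quantitative content of the butterfly effect.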
There is an important relationship between systems of ODEs and ODEs of order higher than one. Indeed, an equation of order n,

    y^(n) = F(t, y, y', y'', ..., y^(n−1))    (3)

where y^(n) = d^n y/dt^n, can be converted into a system of n first-order equations. With the change of variables

    x_1 = y, x_2 = y', ..., x_n = y^(n−1)

we obtain

    x_1' = x_2
    x_2' = x_3
    ...
    x_{n−1}' = x_n
    x_n' = F(t, x_1, x_2, ..., x_n)    (4)

or, in general,

    x_1' = F_1(t, x_1, x_2, ..., x_n)
    x_2' = F_2(t, x_1, x_2, ..., x_n)
    ...
    x_n' = F_n(t, x_1, x_2, ..., x_n)    (5)

that is, n non-linear first-order ODEs. There are three questions to be answered: (1) What about the existence of solutions? (2) What about uniqueness? (3) What is the sensitivity to the initial conditions?

We are going to see in which cases we can assert that a system of ODEs has a solution and that it is unique. We must consider the following theorem.

Theorem. Assume that, in a region A of the space (t, x_1, x_2, ..., x_n), the functions F_1, F_2, ..., F_n and all the partial derivatives

    ∂F_i/∂x_j,  i, j = 1, 2, ..., n    (6)

are continuous, and that the point (t_0, x_1^0, x_2^0, ..., x_n^0) is an interior point of A. Then there exists an interval

    |t − t_0| < ε    (7)

(a local statement) in which there is a unique solution of the system given by eq. (5),

    x_1 = Φ_1(t), x_2 = Φ_2(t), ..., x_n = Φ_n(t)    (8)

that fulfils the initial conditions

    x_1^0 = Φ_1(t_0), x_2^0 = Φ_2(t_0), ..., x_n^0 = Φ_n(t_0)    (9)
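The change of variables (4) is exactly how higher-order equations are handed to numerical methods. As a sketch (the test equation y'' = −y is my own illustrative choice), the second-order equation becomes the system x_1' = x_2, x_2' = −x_1, which a standard Runge-Kutta integrator handles directly:

```python
import math

# Sketch: convert y'' = -y (an illustrative choice) into the first-order
# system x1' = x2, x2' = -x1, and integrate it with RK4.

def f(state):
    x1, x2 = state
    return (x2, -x1)       # x1 = y, x2 = y'

def rk4(f, state, dt, steps):
    for _ in range(steps):
        k1 = f(state)
        k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
        k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
        k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
        state = tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

y, yp = rk4(f, (1.0, 0.0), 0.001, 1000)   # integrate to t = 1
```

With y(0) = 1, y'(0) = 0 the exact solution is y = cos t, y' = −sin t, which the integration reproduces to high accuracy.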
A way to prove this theorem is to use the Taylor expansion of the functions.

Note. We must point out that this is a sufficient-condition theorem; by weakening the hypotheses one can state a stronger version of the theorem that still guarantees a unique solution.

Systems are classified in the same manner as single ODEs: they are linear or non-linear. If the functions F_1, F_2, ..., F_n can be written as

    x_i' = P_i1(t) x_1 + P_i2(t) x_2 + ... + P_in(t) x_n + q_i(t)    (10)

with i = 1, 2, ..., n, the system is called linear. If the q_i(t) are zero for all i, the system is called linear homogeneous; if not, non-homogeneous. For this kind of system the theorem of existence and uniqueness is simpler and, to some extent, more satisfactory: it has global character. Notice that in the general case the theorem holds in a neighbourhood of the initial conditions, which gives a local character to the existence and uniqueness of the solution.

Recall: if an equation is linear, we can add solutions together and still satisfy the differential equation. E.g., let x' = H(x) with H linear; if x_1' = H(x_1) and x_2' = H(x_2), then

    H(c_1 x_1 + c_2 x_2) = c_1 H(x_1) + c_2 H(x_2)    (11)

so

    (c_1 x_1 + c_2 x_2)' = c_1 x_1' + c_2 x_2' = c_1 H(x_1) + c_2 H(x_2) = H(c_1 x_1 + c_2 x_2)    (12)

where c_1, c_2 are constants. This is a nice property fulfilled when an equation is linear.

Basic theory of linear systems of ODEs

Let us consider a system of n linear first-order differential equations:

    x_1' = P_11(t) x_1 + P_12(t) x_2 + ... + P_1n(t) x_n + q_1(t)
    x_2' = P_21(t) x_1 + P_22(t) x_2 + ... + P_2n(t) x_n + q_2(t)
    ...
    x_n' = P_n1(t) x_1 + P_n2(t) x_2 + ... + P_nn(t) x_n + q_n(t)    (13)

We can write this in matrix form,

    x' = P(t) x + q(t)    (14)

where

    x = (x_1 x_2 ... x_n)^T,  q(t) = (q_1(t) q_2(t) ... q_n(t))^T,  P(t) = ( P_ij(t) ), i, j = 1, ..., n    (15)
If q(t) = 0, we have a homogeneous system and eq. (14) becomes

    x' = P(t) x    (16)

This notation emphasises the relationship between linear systems of ODEs and first-order linear differential equations.

Theorem. If x_1 and x_2 are solutions of eq. (16), then (c_1 x_1 + c_2 x_2) is a solution as well:

    x_1' = P(t) x_1, x_2' = P(t) x_2  ⟹  (c_1 x_1 + c_2 x_2)' = P(t) (c_1 x_1 + c_2 x_2)    (17)

The question to be answered is: how many independent solutions of eq. (16) are there? If x_1, x_2, ..., x_n are solutions of the system, consider the matrix Ψ(t), called the fundamental matrix, given by

    Ψ(t) = (x_1 x_2 ... x_n)    (18)

Its determinant,

    |Ψ(t)| = det( x_ij(t) ) = W(t)    (19)

is called the Wronskian of the system. These solutions are linearly independent at each point t of an interval (α, β) if

    W(t) ≠ 0 for all t in (α, β)    (20)

Example 1. We compute the Wronskian for a two-by-two system

    x' = [ P_11(t) P_12(t) ; P_21(t) P_22(t) ] x    (21)

whose solutions are x_1 = (x_11(t) x_21(t))^T and x_2 = (x_12(t) x_22(t))^T. They satisfy the system equations (see eq. 21):

    x_1' = P(t) x_1:  x_11' = P_11 x_11 + P_12 x_21,  x_21' = P_21 x_11 + P_22 x_21
    x_2' = P(t) x_2:  x_12' = P_11 x_12 + P_12 x_22,  x_22' = P_21 x_12 + P_22 x_22    (22)

The fundamental matrix, Ψ(t), is

    Ψ(t) = [ x_11(t) x_12(t) ; x_21(t) x_22(t) ]    (23)
and the Wronskian, W(t), is

    W(t) = x_11 x_22 − x_12 x_21    (24)

The derivative of W(t) with respect to t is

    W'(t) = x_11' x_22 + x_11 x_22' − x_12' x_21 − x_12 x_21'    (25)

and substituting x_11', x_12', x_21', x_22' as functions of x_11, x_12, x_21, x_22 (eq. 22), we obtain

    W'(t) = (P_11 + P_22)(x_11 x_22 − x_12 x_21) = (trace P(t)) W(t)    (26)

This is Abel's formula. Solving the differential equation,

    W(t) = W(t_0) exp( ∫_{t_0}^t trace P(s) ds )    (27)

As e^x is never zero, W(t) ≠ 0 for every finite value of t if trace P(t) is integrable and W(t_0) ≠ 0. The same happens in n dimensions; the result generalises to

    dW/dt = (P_11 + ... + P_nn) W(t),  W(t) = W(t_0) exp( ∫_{t_0}^t trace P(s) ds )

Example 2. A Sturm-Liouville equation has the form

    d/dt ( a(t) dy/dt ) + b(t) y = 0    (28)

This is a second-order differential equation of the form

    a_2(t) d^2y/dt^2 + a_1(t) dy/dt + a_0(t) y = 0

Equations of this kind were studied in the 18th/19th centuries and the early 20th century, in the classical cases where a_2(t), a_1(t) and a_0(t) are linear in t. For instance, the vibration of a plate is governed by such equations. We now write eq. (28) as a system. With

    x_1 = y    (29)
    x_2 = dy/dt    (30)

it gives

    x_1' = x_2    (31)
    x_2' = −( a'(t)/a(t) ) x_2 − ( b(t)/a(t) ) x_1    (32)

We could use our theory of systems to get the Wronskian connection between x_1 and x_2, independent solutions of the system. However, we consider eq. (28) directly, assuming that y_1 and y_2 are two possible solutions. So

    d/dt ( a(t) y_1' ) + b(t) y_1 = 0    (33)
    d/dt ( a(t) y_2' ) + b(t) y_2 = 0    (34)
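Abel's formula (27) can be verified numerically. In the sketch below the coefficient matrix P(t) = [0 1; −1 −t] is my own illustrative choice, with trace P = −t, so the formula predicts W(1) = W(0) e^{−1/2}:

```python
import math

# Sketch: verify Abel's formula W(t) = W(t0) * exp(integral of trace P)
# for the illustrative choice P(t) = [[0, 1], [-1, -t]], trace P = -t.

def P(t):
    return ((0.0, 1.0), (-1.0, -t))

def f(t, x):
    p = P(t)
    return tuple(sum(p[i][j] * x[j] for j in range(2)) for i in range(2))

def rk4(x, t0, t1, steps):
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + dt / 2, tuple(s + dt / 2 * k for s, k in zip(x, k1)))
        k3 = f(t + dt / 2, tuple(s + dt / 2 * k for s, k in zip(x, k2)))
        k4 = f(t + dt, tuple(s + dt * k for s, k in zip(x, k3)))
        x = tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                  for s, a, b, c, d in zip(x, k1, k2, k3, k4))
        t += dt
    return x

x1 = rk4((1.0, 0.0), 0.0, 1.0, 2000)        # solution with x1(0) = e1
x2 = rk4((0.0, 1.0), 0.0, 1.0, 2000)        # solution with x2(0) = e2
wronskian = x1[0] * x2[1] - x1[1] * x2[0]   # W(1); here W(0) = 1
abel = math.exp(-0.5)                       # exp(int_0^1 (-s) ds)
```

The numerically integrated Wronskian agrees with Abel's prediction to the accuracy of the integrator.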
We now multiply eq. (33) by y_2 and eq. (34) by y_1, and subtracting both expressions we get

    a(t) (y_1'' y_2 − y_1 y_2'') + a'(t) (y_1' y_2 − y_2' y_1) = 0    (35)

but

    d/dt ( y_2 y_1' − y_1 y_2' ) = y_2 y_1'' − y_1 y_2''    (36)

and W(t) = y_2 y_1' − y_1 y_2'. Then eq. (35) becomes

    a(t) dW/dt + a'(t) W = 0,  i.e.  d/dt ( a(t) W(t) ) = 0

hence

    dW/W = −( a'(t)/a(t) ) dt
    W(t) = W(t_0) exp( −∫_{t_0}^t a'(s)/a(s) ds )

Homogeneous linear systems with constant coefficients. Let us consider the system

    x' = A x    (37)

where A is a real, n×n constant matrix. As a solution, we will try

    x = e^{rt} a    (38)

where a is a constant vector. Then

    x' = r e^{rt} a    (39)

and substituting, r e^{rt} a = A e^{rt} a; we therefore have a solution provided that

    (A − rI) a = 0    (40)

and for a non-trivial solution (i.e. a ≠ 0), r must satisfy

    |A − rI| = 0    (41)

Procedure. Find the eigenvalues r_1, r_2, ..., r_n, solutions of |A − rI| = 0, and the corresponding eigenvectors a_1, a_2, ..., a_n. If the n eigenvectors are linearly independent, we have the general solution

    x = c_1 e^{r_1 t} a_1 + c_2 e^{r_2 t} a_2 + ... + c_n e^{r_n t} a_n    (42)
where c_1, c_2, ..., c_n are arbitrary constants. Recall a_i = (a_1i a_2i ... a_ni)^T and x_i = a_i e^{r_i t}, with i = 1, 2, ..., n. Then the Wronskian is

    W(t) = det( a_1 a_2 ... a_n ) e^{(r_1 + r_2 + ... + r_n) t} ≠ 0    (43)

since a_1, a_2, ..., a_n are linearly independent. In a large number of problems, getting the eigenvalues can be a very difficult task.

Example 3.

    x' = [ 1 1 ; 4 1 ] x    (44)

Consider x = a e^{rt}. Then we require (A − rI)(a_1 a_2)^T = (0 0)^T, with

    |A − rI| = | 1−r 1 ; 4 1−r | = 0  ⟹  (1−r)^2 − 4 = 0  ⟹  (1−r)^2 = 4  ⟹  1−r = ±2  ⟹  r = 3, −1

So the eigenvalues are 3 and −1. With r = 3,

    [ −2 1 ; 4 −2 ] (a_1 a_2)^T = (0 0)^T

So −2 a_1 + a_2 = 0 implies a_1 = 1, a_2 = 2. Hence the eigenvector is (1 2)^T. With r = −1,

    [ 2 1 ; 4 2 ] (a_1 a_2)^T = (0 0)^T

So 2 a_1 + a_2 = 0 implies a_1 = 1, a_2 = −2. Therefore the eigenvector is (1 −2)^T. The general solution is

    x = C_1 (1 2)^T e^{3t} + C_2 (1 −2)^T e^{−t}    (45)

Equation (45) is a family of solutions, since C_1 and C_2 are arbitrary (i.e., if x_1 and x_2 are known at t = 0, we can solve eq. (45) to get C_1 and C_2 for a
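A quick cross-check of Example 3 with NumPy (a sketch; NumPy normalises eigenvectors, so we check the eigenvector relations A a = r a directly):

```python
import numpy as np

# Cross-check of Example 3: A = [[1, 1], [4, 1]] should have eigenvalues
# 3 and -1, with eigenvectors (1, 2) and (1, -2).
A = np.array([[1.0, 1.0], [4.0, 1.0]])
eigvals = np.sort(np.linalg.eigvals(A).real)

a1 = np.array([1.0, 2.0])      # claimed eigenvector for r = 3
a2 = np.array([1.0, -2.0])     # claimed eigenvector for r = -1
```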
Figure 1: Phase plane of Example 3

specific solution). Note: eq. (44) does not involve time explicitly. It could be written as

    dx_1/dx_2 = (x_1 + x_2)/(4 x_1 + x_2)    (46)

So we can study the problem in the 2-d space (x_1, x_2). This is often called the phase plane. Note that eq. (46) defines dx_1/dx_2 uniquely except at points where the numerator and the denominator of the quotient vanish simultaneously. In general,

    dx_1/dt = a_11 x_1 + a_12 x_2
    dx_2/dt = a_21 x_1 + a_22 x_2    (47)

gives

    dx_2/dx_1 = (a_21 x_1 + a_22 x_2)/(a_11 x_1 + a_12 x_2)    (48)

with critical points where

    a_11 x_1 + a_12 x_2 = 0
    a_21 x_1 + a_22 x_2 = 0    (49)

In general, what about A?

1. If A is Hermitian (i.e., A^H = (conj A)^T = A, where conj A is the complex conjugate of the matrix), then the eigenvalues are real and we can find n linearly independent eigenvectors.
2. If A is non-Hermitian, we have the following possibilities:
(2.a) n real and distinct eigenvalues and n independent eigenvectors.
(2.b) Complex eigenvalues.
(2.c) Repeated eigenvalues.
Example 4. Consider

    x' = [ 1 1 ; 4 1 ] x    (50)

Put x = e^{rt} a to get

    | 1−r 1 ; 4 1−r | = 0    (51)

We obtain r_1 = −1 and r_2 = 3. We need the eigenvectors:

    r_1 = −1:  a_1 = (1 −2)^T,    r_2 = 3:  a_2 = (1 2)^T    (52)

We have two solutions,

    x_1 = (1 −2)^T e^{−t},  x_2 = (1 2)^T e^{3t}    (53)

To find the general solution, we construct a linear combination of these two linearly independent solutions:

    x = c_1 (1 −2)^T e^{−t} + c_2 (1 2)^T e^{3t}    (54)

c_1 and c_2 are arbitrary constants to be determined by initial conditions or other conditions on x. We can plot (see Figure 2) the family of solutions in the (x_1, x_2) plane with an arrow to signify the direction of increasing time. In components, the solution looks like

    x_1 = c_1 e^{−t} + c_2 e^{3t}
    x_2 = −2 c_1 e^{−t} + 2 c_2 e^{3t}

(In the same way that we can study the motion of a pendulum, we can use systems in order to study position and velocity together.)

Suppose c_2 ≠ 0 (and c_1 = 0), and we are interested in the point x_1 = x_2 = 0; note that x_1/x_2 = 1/2. We only reach (0, 0) as t → −∞, since we need e^{3t} → 0. If c_1 ≠ 0 and c_2 = 0, then x_1/x_2 = −1/2, and to get to (0, 0) we need t → +∞. So (0, 0) is a special point. Representing time by an arrow,

    t → −∞:  x_1 ≈ c_1 e^{−t},  x_2 ≈ −2 c_1 e^{−t}  ⟹  x_2 ≈ −2 x_1
    t → +∞:  x_1 ≈ c_2 e^{3t},  x_2 ≈ 2 c_2 e^{3t}  ⟹  x_2 ≈ 2 x_1    (55)

Point (0, 0) is a saddle point. Thinking ahead, we can observe that det A = −3 < 0.
Figure 2: Phase plane of Example 4

Example 5. Let us consider this problem:

    x' = [ −3 √2 ; √2 −2 ] x    (56)

In order to solve it, we obtain the eigenvalues from

    | −3−r √2 ; √2 −2−r | = 0    (57)

which gives r_1 = −4 and r_2 = −1. The eigenvectors are

    r_1 = −4:  a_1 = (−√2 1)^T,    r_2 = −1:  a_2 = (1 √2)^T    (58)

They are linearly independent. Hence the solutions are

    x_1 = (−√2 1)^T e^{−4t},  x_2 = (1 √2)^T e^{−t}    (59)

and the general solution is

    x = c_1 (−√2 1)^T e^{−4t} + c_2 (1 √2)^T e^{−t}    (60)

Plotting the paths on the phase plane (x_1, x_2),

    t → −∞:  x ≈ c_1 (−√2 1)^T e^{−4t}  ⟹  x_2 ≈ −x_1/√2
    t → +∞:  x_1 → 0, x_2 → 0,  (x_1, x_2) → (0, 0)    (61)

Point (0, 0) is a stable node. Thinking ahead again, det A = 4 > 0, trace A = −5 < 0 and det A < (trace A)^2/4.
Figure 3: Phase plane of Example 5

Example 6. Let us consider another system,

    x' = [ 0 1 1 ; 1 0 1 ; 1 1 0 ] x    (62)

where A is a Hermitian matrix:

    | −r 1 1 ; 1 −r 1 ; 1 1 −r | = 0    (63)

We obtain r_1 = −1 (double) and r_2 = 2. The eigenvectors are

    r_1 = −1:  a_1 = (1 0 −1)^T,  a_2 = (1 −2 1)^T
    r_2 = 2:   a_3 = (1 1 1)^T    (64)

They are linearly independent. Hence the solutions are

    x_1 = (1 0 −1)^T e^{−t},  x_2 = (1 −2 1)^T e^{−t},  x_3 = (1 1 1)^T e^{2t}    (65)

and the general solution is

    x = c_1 (1 0 −1)^T e^{−t} + c_2 (1 −2 1)^T e^{−t} + c_3 (1 1 1)^T e^{2t}    (66)

Complex eigenvalues. If A is non-Hermitian (but real) and has complex eigenvalues, then the determinant equation

    |A − rI| = 0    (67)

yields complex conjugate pairs of eigenvalues.
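A numerical cross-check of Example 6 (a sketch; treat the matrix below as the one reconstructed above):

```python
import numpy as np

# Cross-check of Example 6: the symmetric matrix with zero diagonal and
# ones elsewhere has eigenvalue -1 (double) and 2 (simple).
A = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
eigvals = np.linalg.eigvalsh(A)        # symmetric matrix: sorted, real

a1 = np.array([1.0, 0.0, -1.0])        # eigenvector for r = -1
a2 = np.array([1.0, -2.0, 1.0])        # eigenvector for r = -1
a3 = np.array([1.0, 1.0, 1.0])         # eigenvector for r = 2
```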
As A and I are real, taking the conjugate of eq. (67) we obtain

    |A − conj(r) I| = 0    (68)

This means that if r is an eigenvalue, its complex conjugate is an eigenvalue as well. Therefore the eigenvectors will be complex conjugates too:

    (A − rI) x = 0,  (A − conj(r) I) conj(x) = 0    (69)

The solution associated with r = λ + iμ and a = u + iv is

    x = e^{rt} a = e^{(λ+iμ)t} (u + iv)
      = e^{λt} (cos μt + i sin μt)(u + iv)
      = e^{λt} (u cos μt − v sin μt) + i e^{λt} (u sin μt + v cos μt)    (70)

and the other one is

    conj(x) = e^{λt} (u cos μt − v sin μt) − i e^{λt} (u sin μt + v cos μt)    (71)

We are looking for real solutions of this system. We know that a linear combination of these solutions is a solution as well; hence we can take the real and the imaginary parts:

    x = c_1 e^{λt} (u cos μt − v sin μt) + c_2 e^{λt} (u sin μt + v cos μt)    (72)

Example 7.

    x' = [ −1/2 1 ; −1 −1/2 ] x    (73)

where A is not symmetric. Solving

    | −1/2−r 1 ; −1 −1/2−r | = 0    (74)

we get r = −1/2 ± i. The eigenvector (for r = −1/2 + i) is

    a = (1 i)^T    (75)

Hence the solution is

    x = (1 i)^T e^{(−1/2+i)t} = (1 i)^T e^{−t/2} (cos t + i sin t)    (76)

Then

    x = e^{−t/2} (cos t + i sin t, −sin t + i cos t)^T = e^{−t/2} (cos t, −sin t)^T + i e^{−t/2} (sin t, cos t)^T    (77)
The general solution is

    x = c_1 e^{−t/2} (cos t, −sin t)^T + c_2 e^{−t/2} (sin t, cos t)^T    (78)

Plotting this family of curves on the phase plane (x_1, x_2):

Figure 4: Phase plane of Example 7

    t → +∞:  x_1 → 0, x_2 → 0,  (x_1, x_2) → (0, 0)    (79)

Let us study the points (x 0)^T and (0 y)^T. Substituting into eq. (73),

    x = (x 0)^T:  x' = (−x/2, −x)^T,  dy/dx = 2
    x = (0 y)^T:  x' = (y, −y/2)^T,  dy/dx = −1/2    (80)

Point (0, 0) is a stable spiral point. Thinking ahead once again, det A = 5/4 > 0, trace A = −1 < 0 and det A > (trace A)^2/4.

Repeated eigenvalues. Let us consider the system given by eq. (37), with A a real non-symmetric matrix. Assume that one of its eigenvalues, r_1, is repeated with multiplicity m > 1, and that there are only k < m linearly independent eigenvectors. We should then obtain (m − k) additional linearly independent solutions. How do we amend the procedure to deal with cases where there are not m linearly independent eigenvectors? The key tool is the exponential of a square matrix:

    e^{At} = I + At + A^2 t^2/2! + ... + A^n t^n/n! + ...    (81)
We recall the Cayley-Hamilton theorem. If f(λ) is the characteristic polynomial of a matrix A,

    f(λ) = det(A − λI)    (82)

the theorem says that f(A) = 0. For instance, for a 2×2 matrix,

    f(r_i) = r_i^2 − (trace A) r_i + det A = r_i^2 − p r_i + q = 0    (83)

Hence, by the Cayley-Hamilton theorem,

    f(A) = A^2 − pA + qI = 0    (84)

Using this theorem, any power-series function such as e^A can be reduced to a polynomial in A. This is a very useful theorem in problems of materials and continuum mechanics.

    A^2 = pA − qI
    A^3 = pA^2 − qA = (p^2 − q) A − pq I
    A^4 = pA^3 − qA^2 = p(p^2 − 2q) A − (p^2 − q) q I
    ...    (85)

So we can use this theorem to get a finite expansion of eq. (81). Differentiating that expression with respect to t,

    d(e^{At})/dt = A + A^2 t + A^3 t^2/2! + ... + A^n t^{n−1}/(n−1)! + ...
                 = A ( I + At + ... + A^{n−1} t^{n−1}/(n−1)! + ... ) = A e^{At}    (86)

Hence

    x = e^{At} v = ( I + At + A^2 t^2/2! + ... + A^n t^n/n! + ... ) v    (87)

where v is a constant vector; then

    dx/dt = A e^{At} v = A x    (88)

So it looks all right. Now, if v = a_i is an eigenvector of A with eigenvalue r_i,

    A v = A a_i = r_i a_i,  A^2 v = r_i A a_i = r_i^2 a_i,  ...,  A^n v = r_i^n a_i    (89)

Hence, substituting into eq. (87),

    x = ( 1 + r_i t + r_i^2 t^2/2! + ... + r_i^n t^n/n! + ... ) a_i = e^{r_i t} a_i    (90)
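Both results on this page can be checked numerically: the Cayley-Hamilton identity (84), and the fact (90) that the series (81), applied to an eigenvector, collapses to e^{r_i t} a_i. A sketch reusing the matrix of Example 3:

```python
import numpy as np

A = np.array([[1.0, 1.0], [4.0, 1.0]])        # matrix of Example 3
p, q = np.trace(A), np.linalg.det(A)          # p = 2, q = -3

# Cayley-Hamilton: A^2 - pA + qI = 0
ch = A @ A - p * A + q * np.eye(2)

# Truncated series for e^{At} applied to the eigenvector a = (1, 2), r = 3:
t, a = 0.5, np.array([1.0, 2.0])
x = np.zeros(2)
term = a.copy()
for k in range(1, 40):
    x = x + term
    term = (A @ term) * t / k      # next term (A t)^k a / k!
# By eq. (90), x should equal e^{3t} a = e^{1.5} (1, 2)^T.
```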
Note that

    e^{(A+B)t} = e^{At} e^{Bt}  ⟺  AB = BA    (91)

Let us write

    e^{At} v = e^{(A−λI)t} e^{λIt} v,  for any λ    (92)

We can do this for all λ since

    (A − λI)(λI) = λA − λ^2 I = (λI)(A − λI)    (93)

Coming back to eq. (87),

    e^{λIt} v = ( I + λIt + λ^2 I^2 t^2/2! + ... + λ^n I^n t^n/n! + ... ) v = e^{λt} v    (94)

hence

    e^{At} v = e^{λt} e^{(A−λI)t} v    (95)

But x = e^{At} v is a solution of the system. Moreover, if v = a_i, where a_i is an eigenvector of r_i, then

    x = e^{r_i t} ( I + t(A − r_i I) + (t^2/2!)(A − r_i I)^2 + ... ) a_i
      = e^{r_i t} ( a_i + t (A − r_i I) a_i + (t^2/2!) (A − r_i I)^2 a_i + ... )    (96)

But, as a_i is an eigenvector, it satisfies

    (A − r_i I) a_i = 0 = (A − r_i I)^2 a_i = (A − r_i I)^3 a_i = ...    (97)

Then eq. (96) becomes

    x = e^{r_i t} a_i    (98)

We can look for another vector v such that

    (A − r_i I) v ≠ 0  and  (A − r_i I)^2 v = 0 = (A − r_i I)^3 v = ...    (99)

Lemma. Assume that the characteristic polynomial of A (non-Hermitian, of order n) has repeated roots r_1, r_2, ..., r_k, k < n, of multiplicities m_1, m_2, ..., m_k (m_1 + m_2 + ... + m_k = n), so that

    f(λ) = (λ − r_1)^{m_1} (λ − r_2)^{m_2} ... (λ − r_k)^{m_k}    (100)

If A has only n_j < m_j eigenvectors of the eigenvalue r_j (i.e., (A − r_j I) v = 0 has n_j independent solutions), then (A − r_j I)^2 v = 0 has at least n_j + 1 independent solutions. In general, if (A − r_j I)^m v = 0 has n_j < m_j independent solutions, then (A − r_j I)^{m+1} v = 0 has at least n_j + 1 independent solutions.
Example 8. Let the system be

    x' = [ 1 1 ; 0 1 ] x    (101)

In order to solve it, we take

    | 1−r 1 ; 0 1−r | = 0    (102)

The root is r = 1, double, with only one eigenvector, obtained from (A − rI) a = 0:

    r = 1:  a_1 = (1 0)^T    (103)

We cannot obtain the second solution we need this way. So far we have

    x_1 = (1 0)^T e^t    (104)

We must determine another vector a_2 such that

    (A − rI) a_2 ≠ 0    (105)
    (A − rI)^2 a_2 = 0 = (A − rI)^3 a_2 = ...    (106)

As (A − rI)^2 = (A − rI)^3 = ... = 0, any vector a_2 satisfies eq. (106). However, according to the inequality given by eq. (105), a_2 cannot be linearly dependent on a_1 (eq. 103). So take a_2 = (0 1)^T:

    (A − rI) a_2 = [ 0 1 ; 0 0 ] (0 1)^T = (1 0)^T ≠ 0    (107)

From eq. (96), we obtain

    x_2 = e^t ( a_2 + t (A − rI) a_2 + (t^2/2!) (A − rI)^2 a_2 ) = e^t ( (0 1)^T + t (1 0)^T ) = e^t (t 1)^T    (108)

The general solution is

    x = c_1 e^t (1 0)^T + c_2 e^t (t 1)^T    (109)

Thinking of the next paragraph, det A = 1 > 0, trace A = 2 > 0 and det A = (trace A)^2/4.
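The second solution of Example 8 can be verified directly: for x_2 = e^t (t, 1)^T both x_2' and A x_2 equal e^t (t+1, 1)^T. A numerical sketch of that check:

```python
import math

# Verify that x2(t) = e^t (t, 1) solves x' = A x for A = [[1, 1], [0, 1]]
# (Example 8: repeated eigenvalue r = 1 with one ordinary eigenvector).

def x2(t):
    return (math.exp(t) * t, math.exp(t))

def deriv(t, h=1e-6):
    # central finite-difference approximation of x2'(t)
    return tuple((p - m) / (2 * h) for p, m in zip(x2(t + h), x2(t - h)))

A = ((1.0, 1.0), (0.0, 1.0))
t = 0.7
Ax = tuple(sum(A[i][j] * x2(t)[j] for j in range(2)) for i in range(2))
dx = deriv(t)
```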
Figure 5: Phase plane of Example 8

Résumé of the study of a system of two homogeneous equations with constant coefficients. Let us consider the system

    x' = a x + b y
    y' = c x + d y    (110)

Its characteristic polynomial is

    f(r) = r^2 − p r + q    (111)

where p = a + d (the trace) and q = ad − bc (the determinant). Its eigenvalues are

    r = ( p ± sqrt(p^2 − 4q) ) / 2    (112)

The study of the eigenvalues and eigenvectors is very useful in order to classify the critical point (0, 0) and to know the trajectories on the phase plane.

1. q < 0. One eigenvalue is positive and the other negative. Saddle point.
2. q > 0.
2.a. p^2 > 4q. Both eigenvalues are either positive or negative.
- Stable node, if p < 0.
- Unstable node, if p > 0.
2.b. p^2 < 4q. The eigenvalues are complex conjugates.
- Stable spiral, if p < 0.
- Unstable spiral, if p > 0.
- Center, if p = 0.
2.c. p^2 = 4q. The eigenvalue is a double root of the characteristic polynomial.
- Stable node, if p < 0 and there is only one eigenvector.
- Unstable node, if p > 0 and there is only one eigenvector.
- Sink point, if p < 0 and there are two independent eigenvectors.
- Source point, if p > 0 and there are two independent eigenvectors.
3. q = 0. The matrix rank is 1 and therefore one row can be obtained by multiplying the other by a constant (c/a = d/b = k). Then (0, 0) is not an isolated critical point: there is a line y = −a x/b, b ≠ 0, of critical points. The trajectories on the phase plane are y = k x + E, E being a constant.
- If p > 0, paths start at the critical points and go to infinity.
- If p < 0, conversely, the trajectories end at the critical points.

It is very convenient and useful to know the trace-determinant plane (p, q).

Fundamental matrix of a system

Let us consider the homogeneous system given by eq. (16) and let x_1, x_2, ..., x_n be a set of its independent solutions. We know that we can build the general solution via a linear combination of them. We call fundamental matrix, Ψ(t), the matrix whose columns are these solution vectors (eq. 18). The determinant of this matrix is not zero (eq. 19); this determinant is called the Wronskian (eq. 20). Assume we are looking for the solution x such that x(t_0) = x^0. Then

    x^0 = c_1 x_1 + c_2 x_2 + ... + c_n x_n = Ψ(t_0) c    (113)

where c = (c_1 c_2 ... c_n)^T. As |Ψ(t)| ≠ 0 for all t, c can be obtained using the inverse matrix of Ψ(t_0):

    c = Ψ^{−1}(t_0) x^0    (114)

and the solution will be

    x = Ψ(t) Ψ^{−1}(t_0) x^0    (115)

This matrix is especially useful when

    Ψ(t_0) = I    (116)

This special set of solutions builds the matrix Φ(t), which satisfies

    x = Φ(t) x^0    (117)

We obtain Φ(t) = Ψ(t) Ψ^{−1}(t_0). Moreover, with constant coefficients:

(1) e^{At} (eq. 81) is a fundamental matrix of solutions, since it satisfies eq. (86) and e^{A·0} = I.
(2) If we know two fundamental matrices of the system, Ψ_1 and Ψ_2, there is always a constant matrix C such that Ψ_2 = Ψ_1 C, since each column of Ψ_2 can be obtained as a linear combination of the columns of Ψ_1.
(3) It can be shown that e^{At} = Ψ(t) Ψ^{−1}(0) (see eq. 115).
According to paragraphs (1) and (2), there exists a constant matrix C such that

    e^{At} = Ψ(t) C    (118)

Evaluating this expression at t = 0,

    I = Ψ(0) C  ⟹  C = Ψ^{−1}(0)    (119)
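Property (3) can be checked numerically with the fundamental matrix of Example 3, whose columns are (1, 2)^T e^{3t} and (1, −2)^T e^{−t}. A sketch comparing Ψ(t) Ψ^{−1}(0) with a truncated series for e^{At}:

```python
import numpy as np

A = np.array([[1.0, 1.0], [4.0, 1.0]])       # matrix of Example 3

def Psi(t):
    # columns: (1, 2)^T e^{3t} and (1, -2)^T e^{-t}
    return np.array([[np.exp(3 * t), np.exp(-t)],
                     [2 * np.exp(3 * t), -2 * np.exp(-t)]])

def expm_series(M, terms=40):
    # truncated power series sum_{k} M^k / k!
    out, term = np.zeros_like(M), np.eye(len(M))
    for k in range(1, terms + 1):
        out = out + term
        term = term @ M / k
    return out

t = 0.3
lhs = expm_series(A * t)                     # e^{At} via the series (81)
rhs = Psi(t) @ np.linalg.inv(Psi(0.0))       # Psi(t) Psi^{-1}(0)
```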
Nonhomogeneous systems. We have

    x' = P(t) x + q(t)    (120)

We assume that we have solved x' = P(t) x; we actually have a procedure for P(t) = A, a constant matrix. Consider two special cases.

(1) If P(t) = A and A has n independent eigenvectors, the procedure is to build the matrix T with the eigenvectors of A:

    T = (a_1 a_2 ... a_n)    (121)

Then we change variables, x = T y, with x' = T y'. Going back to the system,

    x' = T y' = A x + q = A T y + q    (122)

As T is built with n independent eigenvectors, it is a regular matrix (det T ≠ 0), so we can work out T^{−1}, the inverse of T. Hence

    y' = T^{−1} A T y + T^{−1} q = D y + h    (123)

where D is the diagonal matrix of the eigenvalues of A. Therefore,

    y_i'(t) = r_i y_i(t) + h_i(t),  i = 1, 2, ..., n    (124)

Hence

    y_i(t) = e^{r_i t} ∫ e^{−r_i t} h_i(t) dt + c_i e^{r_i t},  i = 1, 2, ..., n    (125)

After obtaining y, we can get x = T y. This method is only possible if there are n linearly independent eigenvectors, i.e., if A is a diagonalizable constant matrix. Because we can reduce the matrix to its diagonal form, the above procedure works. In cases where we do not have n independent eigenvectors, the matrix A can only be reduced to its Jordan canonical form.

(2) Variation of parameters. Let us consider the system of eq. (14), and suppose we know the solution of the associated homogeneous system (eq. 16). Then we can build the fundamental matrix of the system, Ψ(t) (eq. 18), whose columns are the linearly independent solutions of the homogeneous system. We look for a solution of the form

    x = Ψ(t) u(t)    (126)
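For a constant forcing q, the diagonalization recipe above yields a constant particular solution. A sketch (the forcing vector is my own illustrative choice) with the matrix of Example 3:

```python
import numpy as np

# Solve x' = A x + q for constant q via diagonalization (A from Example 3;
# the forcing q = (1, 0) is an illustrative assumption).
A = np.array([[1.0, 1.0], [4.0, 1.0]])
q = np.array([1.0, 0.0])

r, T = np.linalg.eig(A)          # D = diag(r); columns of T are eigenvectors
h = np.linalg.solve(T, q)        # h = T^{-1} q, as in eq. (123)

# Each scalar equation y_i' = r_i y_i + h_i has the constant particular
# solution y_i = -h_i / r_i (valid here since no eigenvalue is zero).
y_p = -h / r
x_p = T @ y_p                    # back to the original variables

residual = A @ x_p + q           # should vanish for a constant solution
```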
where u(t) is a vector to be determined so that eq. (126) is a solution of eq. (14). Substituting,

    Ψ'(t) u(t) + Ψ(t) u'(t) = P(t) Ψ(t) u(t) + q(t)    (127)

As we know that Ψ'(t) = P(t) Ψ(t),

    Ψ(t) u'(t) = q(t)  ⟹  u(t) = ∫ Ψ^{−1}(t) q(t) dt + c    (128)

where c is a constant vector; Ψ^{−1}(t) exists since the n columns of Ψ(t) are linearly independent (eq. 20). The general solution is

    x = Ψ(t) ∫ Ψ^{−1}(t) q(t) dt + Ψ(t) c    (129)

© 2008 Tecnun (University of Navarra)
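The formula (129) can be checked numerically. In the sketch below, Ψ(t) is the fundamental matrix of Example 3 and the forcing q(t) = (sin t, 0)^T is my own illustrative choice; the particular solution x = Ψ(t) ∫ Ψ^{−1} q dt should satisfy x' = P x + q, which we test with a finite-difference derivative.

```python
import numpy as np

A = np.array([[1.0, 1.0], [4.0, 1.0]])       # matrix of Example 3

def Psi(t):
    # fundamental matrix: columns (1, 2)^T e^{3t} and (1, -2)^T e^{-t}
    return np.array([[np.exp(3 * t), np.exp(-t)],
                     [2 * np.exp(3 * t), -2 * np.exp(-t)]])

def q(t):
    return np.array([np.sin(t), 0.0])        # illustrative forcing

def u(t, n=20000):
    # u(t) = integral_0^t Psi^{-1}(s) q(s) ds by the trapezoidal rule
    s = np.linspace(0.0, t, n + 1)
    vals = np.array([np.linalg.solve(Psi(si), q(si)) for si in s])
    w = np.full(n + 1, 1.0)
    w[0] = w[-1] = 0.5
    return (w[:, None] * vals).sum(axis=0) * (t / n)

def x(t):
    return Psi(t) @ u(t)                     # particular solution, x(0) = 0

t, h = 0.5, 1e-4
dx = (x(t + h) - x(t - h)) / (2 * h)         # numerical derivative x'(t)
residual = dx - (A @ x(t) + q(t))            # should be ~0 by eq. (129)
```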
More informationSecond Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients. y + p(t) y + q(t) y = g(t), g(t) 0.
Second Order Linear Nonhomogeneous Differential Equations; Method of Undetermined Coefficients We will now turn our attention to nonhomogeneous second order linear equations, equations with the standard
More informationLINEAR ALGEBRA W W L CHEN
LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +
More informationUniversity of Lille I PC first year list of exercises n 7. Review
University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients
More informationInner Product Spaces and Orthogonality
Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,
More informationa 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.
Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given
More informationMATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.
MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar
More informationLinear algebra and the geometry of quadratic equations. Similarity transformations and orthogonal matrices
MATH 30 Differential Equations Spring 006 Linear algebra and the geometry of quadratic equations Similarity transformations and orthogonal matrices First, some things to recall from linear algebra Two
More informationApplied Linear Algebra I Review page 1
Applied Linear Algebra Review 1 I. Determinants A. Definition of a determinant 1. Using sum a. Permutations i. Sign of a permutation ii. Cycle 2. Uniqueness of the determinant function in terms of properties
More informationEigenvalues, Eigenvectors, and Differential Equations
Eigenvalues, Eigenvectors, and Differential Equations William Cherry April 009 (with a typo correction in November 05) The concepts of eigenvalue and eigenvector occur throughout advanced mathematics They
More informationFactorization Theorems
Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization
More informationIntroduction to Matrix Algebra
Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary
More information4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION
4: EIGENVALUES, EIGENVECTORS, DIAGONALIZATION STEVEN HEILMAN Contents 1. Review 1 2. Diagonal Matrices 1 3. Eigenvectors and Eigenvalues 2 4. Characteristic Polynomial 4 5. Diagonalizability 6 6. Appendix:
More informationInner Product Spaces
Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and
More informationNotes on Orthogonal and Symmetric Matrices MENU, Winter 2013
Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,
More informationBindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8
Spaces and bases Week 3: Wednesday, Feb 8 I have two favorite vector spaces 1 : R n and the space P d of polynomials of degree at most d. For R n, we have a canonical basis: R n = span{e 1, e 2,..., e
More informationGeneral Theory of Differential Equations Sections 2.8, 3.1-3.2, 4.1
A B I L E N E C H R I S T I A N U N I V E R S I T Y Department of Mathematics General Theory of Differential Equations Sections 2.8, 3.1-3.2, 4.1 Dr. John Ehrke Department of Mathematics Fall 2012 Questions
More informationLecture 7: Finding Lyapunov Functions 1
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS by A. Megretski Lecture 7: Finding Lyapunov Functions 1
More informationVector and Matrix Norms
Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty
More information1 Sets and Set Notation.
LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most
More informationLecture L3 - Vectors, Matrices and Coordinate Transformations
S. Widnall 16.07 Dynamics Fall 2009 Lecture notes based on J. Peraire Version 2.0 Lecture L3 - Vectors, Matrices and Coordinate Transformations By using vectors and defining appropriate operations between
More information7 Gaussian Elimination and LU Factorization
7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method
More information13 MATH FACTS 101. 2 a = 1. 7. The elements of a vector have a graphical interpretation, which is particularly easy to see in two or three dimensions.
3 MATH FACTS 0 3 MATH FACTS 3. Vectors 3.. Definition We use the overhead arrow to denote a column vector, i.e., a linear segment with a direction. For example, in three-space, we write a vector in terms
More informationLecture 8 : Dynamic Stability
Lecture 8 : Dynamic Stability Or what happens to small disturbances about a trim condition 1.0 : Dynamic Stability Static stability refers to the tendency of the aircraft to counter a disturbance. Dynamic
More informationMethods of Solution of Selected Differential Equations Carol A. Edwards Chandler-Gilbert Community College
Methods of Solution of Selected Differential Equations Carol A. Edwards Chandler-Gilbert Community College Equations of Order One: Mdx + Ndy = 0 1. Separate variables. 2. M, N homogeneous of same degree:
More informationSolving Linear Systems, Continued and The Inverse of a Matrix
, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing
More informationRecall the basic property of the transpose (for any A): v A t Aw = v w, v, w R n.
ORTHOGONAL MATRICES Informally, an orthogonal n n matrix is the n-dimensional analogue of the rotation matrices R θ in R 2. When does a linear transformation of R 3 (or R n ) deserve to be called a rotation?
More informationA characterization of trace zero symmetric nonnegative 5x5 matrices
A characterization of trace zero symmetric nonnegative 5x5 matrices Oren Spector June 1, 009 Abstract The problem of determining necessary and sufficient conditions for a set of real numbers to be the
More informationASEN 3112 - Structures. MDOF Dynamic Systems. ASEN 3112 Lecture 1 Slide 1
19 MDOF Dynamic Systems ASEN 3112 Lecture 1 Slide 1 A Two-DOF Mass-Spring-Dashpot Dynamic System Consider the lumped-parameter, mass-spring-dashpot dynamic system shown in the Figure. It has two point
More information3. Let A and B be two n n orthogonal matrices. Then prove that AB and BA are both orthogonal matrices. Prove a similar result for unitary matrices.
Exercise 1 1. Let A be an n n orthogonal matrix. Then prove that (a) the rows of A form an orthonormal basis of R n. (b) the columns of A form an orthonormal basis of R n. (c) for any two vectors x,y R
More informationOrthogonal Diagonalization of Symmetric Matrices
MATH10212 Linear Algebra Brief lecture notes 57 Gram Schmidt Process enables us to find an orthogonal basis of a subspace. Let u 1,..., u k be a basis of a subspace V of R n. We begin the process of finding
More informationLecture 5 Principal Minors and the Hessian
Lecture 5 Principal Minors and the Hessian Eivind Eriksen BI Norwegian School of Management Department of Economics October 01, 2010 Eivind Eriksen (BI Dept of Economics) Lecture 5 Principal Minors and
More information1 Determinants and the Solvability of Linear Systems
1 Determinants and the Solvability of Linear Systems In the last section we learned how to use Gaussian elimination to solve linear systems of n equations in n unknowns The section completely side-stepped
More informationDERIVATIVES AS MATRICES; CHAIN RULE
DERIVATIVES AS MATRICES; CHAIN RULE 1. Derivatives of Real-valued Functions Let s first consider functions f : R 2 R. Recall that if the partial derivatives of f exist at the point (x 0, y 0 ), then we
More informationChapter 20. Vector Spaces and Bases
Chapter 20. Vector Spaces and Bases In this course, we have proceeded step-by-step through low-dimensional Linear Algebra. We have looked at lines, planes, hyperplanes, and have seen that there is no limit
More informationLectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain
Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal
More informationNonlinear Systems of Ordinary Differential Equations
Differential Equations Massoud Malek Nonlinear Systems of Ordinary Differential Equations Dynamical System. A dynamical system has a state determined by a collection of real numbers, or more generally
More informationChapter 19. General Matrices. An n m matrix is an array. a 11 a 12 a 1m a 21 a 22 a 2m A = a n1 a n2 a nm. The matrix A has n row vectors
Chapter 9. General Matrices An n m matrix is an array a a a m a a a m... = [a ij]. a n a n a nm The matrix A has n row vectors and m column vectors row i (A) = [a i, a i,..., a im ] R m a j a j a nj col
More informationThe Heat Equation. Lectures INF2320 p. 1/88
The Heat Equation Lectures INF232 p. 1/88 Lectures INF232 p. 2/88 The Heat Equation We study the heat equation: u t = u xx for x (,1), t >, (1) u(,t) = u(1,t) = for t >, (2) u(x,) = f(x) for x (,1), (3)
More informationName: Section Registered In:
Name: Section Registered In: Math 125 Exam 3 Version 1 April 24, 2006 60 total points possible 1. (5pts) Use Cramer s Rule to solve 3x + 4y = 30 x 2y = 8. Be sure to show enough detail that shows you are
More informationx + y + z = 1 2x + 3y + 4z = 0 5x + 6y + 7z = 3
Math 24 FINAL EXAM (2/9/9 - SOLUTIONS ( Find the general solution to the system of equations 2 4 5 6 7 ( r 2 2r r 2 r 5r r x + y + z 2x + y + 4z 5x + 6y + 7z 2 2 2 2 So x z + y 2z 2 and z is free. ( r
More informationBrief Introduction to Vectors and Matrices
CHAPTER 1 Brief Introduction to Vectors and Matrices In this chapter, we will discuss some needed concepts found in introductory course in linear algebra. We will introduce matrix, vector, vector-valued
More information1. First-order Ordinary Differential Equations
Advanced Engineering Mathematics 1. First-order ODEs 1 1. First-order Ordinary Differential Equations 1.1 Basic concept and ideas 1.2 Geometrical meaning of direction fields 1.3 Separable differential
More informationCITY UNIVERSITY LONDON. BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION
No: CITY UNIVERSITY LONDON BEng Degree in Computer Systems Engineering Part II BSc Degree in Computer Systems Engineering Part III PART 2 EXAMINATION ENGINEERING MATHEMATICS 2 (resit) EX2005 Date: August
More informationNetwork Traffic Modelling
University of York Dissertation submitted for the MSc in Mathematics with Modern Applications, Department of Mathematics, University of York, UK. August 009 Network Traffic Modelling Author: David Slade
More informationDifferentiation of vectors
Chapter 4 Differentiation of vectors 4.1 Vector-valued functions In the previous chapters we have considered real functions of several (usually two) variables f : D R, where D is a subset of R n, where
More information160 CHAPTER 4. VECTOR SPACES
160 CHAPTER 4. VECTOR SPACES 4. Rank and Nullity In this section, we look at relationships between the row space, column space, null space of a matrix and its transpose. We will derive fundamental results
More informationChapter 6. Orthogonality
6.3 Orthogonal Matrices 1 Chapter 6. Orthogonality 6.3 Orthogonal Matrices Definition 6.4. An n n matrix A is orthogonal if A T A = I. Note. We will see that the columns of an orthogonal matrix must be
More informationMicroeconomic Theory: Basic Math Concepts
Microeconomic Theory: Basic Math Concepts Matt Van Essen University of Alabama Van Essen (U of A) Basic Math Concepts 1 / 66 Basic Math Concepts In this lecture we will review some basic mathematical concepts
More informationEigenvalues and Eigenvectors
Chapter 6 Eigenvalues and Eigenvectors 6. Introduction to Eigenvalues Linear equations Ax D b come from steady state problems. Eigenvalues have their greatest importance in dynamic problems. The solution
More informationNOV - 30211/II. 1. Let f(z) = sin z, z C. Then f(z) : 3. Let the sequence {a n } be given. (A) is bounded in the complex plane
Mathematical Sciences Paper II Time Allowed : 75 Minutes] [Maximum Marks : 100 Note : This Paper contains Fifty (50) multiple choice questions. Each question carries Two () marks. Attempt All questions.
More informationThe Fourth International DERIVE-TI92/89 Conference Liverpool, U.K., 12-15 July 2000. Derive 5: The Easiest... Just Got Better!
The Fourth International DERIVE-TI9/89 Conference Liverpool, U.K., -5 July 000 Derive 5: The Easiest... Just Got Better! Michel Beaudin École de technologie supérieure 00, rue Notre-Dame Ouest Montréal
More informationOscillations. Vern Lindberg. June 10, 2010
Oscillations Vern Lindberg June 10, 2010 You have discussed oscillations in Vibs and Waves: we will therefore touch lightly on Chapter 3, mainly trying to refresh your memory and extend the concepts. 1
More information8 Square matrices continued: Determinants
8 Square matrices continued: Determinants 8. Introduction Determinants give us important information about square matrices, and, as we ll soon see, are essential for the computation of eigenvalues. You
More informationUnderstanding Poles and Zeros
MASSACHUSETTS INSTITUTE OF TECHNOLOGY DEPARTMENT OF MECHANICAL ENGINEERING 2.14 Analysis and Design of Feedback Control Systems Understanding Poles and Zeros 1 System Poles and Zeros The transfer function
More informationT ( a i x i ) = a i T (x i ).
Chapter 2 Defn 1. (p. 65) Let V and W be vector spaces (over F ). We call a function T : V W a linear transformation form V to W if, for all x, y V and c F, we have (a) T (x + y) = T (x) + T (y) and (b)
More informationRecall that two vectors in are perpendicular or orthogonal provided that their dot
Orthogonal Complements and Projections Recall that two vectors in are perpendicular or orthogonal provided that their dot product vanishes That is, if and only if Example 1 The vectors in are orthogonal
More informationLinear Algebra: Determinants, Inverses, Rank
D Linear Algebra: Determinants, Inverses, Rank D 1 Appendix D: LINEAR ALGEBRA: DETERMINANTS, INVERSES, RANK TABLE OF CONTENTS Page D.1. Introduction D 3 D.2. Determinants D 3 D.2.1. Some Properties of
More informationAn Introduction to Partial Differential Equations
An Introduction to Partial Differential Equations Andrew J. Bernoff LECTURE 2 Cooling of a Hot Bar: The Diffusion Equation 2.1. Outline of Lecture An Introduction to Heat Flow Derivation of the Diffusion
More informationMean value theorem, Taylors Theorem, Maxima and Minima.
MA 001 Preparatory Mathematics I. Complex numbers as ordered pairs. Argand s diagram. Triangle inequality. De Moivre s Theorem. Algebra: Quadratic equations and express-ions. Permutations and Combinations.
More information6. Define log(z) so that π < I log(z) π. Discuss the identities e log(z) = z and log(e w ) = w.
hapter omplex integration. omplex number quiz. Simplify 3+4i. 2. Simplify 3+4i. 3. Find the cube roots of. 4. Here are some identities for complex conjugate. Which ones need correction? z + w = z + w,
More informationReal Roots of Univariate Polynomials with Real Coefficients
Real Roots of Univariate Polynomials with Real Coefficients mostly written by Christina Hewitt March 22, 2012 1 Introduction Polynomial equations are used throughout mathematics. When solving polynomials
More informationa 1 x + a 0 =0. (3) ax 2 + bx + c =0. (4)
ROOTS OF POLYNOMIAL EQUATIONS In this unit we discuss polynomial equations. A polynomial in x of degree n, where n 0 is an integer, is an expression of the form P n (x) =a n x n + a n 1 x n 1 + + a 1 x
More information1 Solving LPs: The Simplex Algorithm of George Dantzig
Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.
More informationSeparable First Order Differential Equations
Separable First Order Differential Equations Form of Separable Equations which take the form = gx hy or These are differential equations = gxĥy, where gx is a continuous function of x and hy is a continuously
More informationNumerical Solution of Differential Equations
Numerical Solution of Differential Equations Dr. Alvaro Islas Applications of Calculus I Spring 2008 We live in a world in constant change We live in a world in constant change We live in a world in constant
More information