Four Lectures on Differential-Algebraic Equations


Steffen Schulz, Humboldt-Universität zu Berlin, June 3, 2003

Abstract. Differential-algebraic equations (DAEs) arise in a variety of applications, so their analysis and numerical treatment play an important role in modern mathematics. This paper gives an introduction to the topic of DAEs. Examples of DAEs are considered which show their importance for practical problems. Several well-known index concepts are introduced. In the context of the tractability index, existence and uniqueness of solutions for low-index linear DAEs is proved. Numerical methods applied to these equations are studied.

Mathematics Subject Classification: 34A09, 65L80
Keywords: differential-algebraic equations, numerical integration methods

Introduction

In this report we consider implicit differential equations

    f(x'(t), x(t), t) = 0

on an interval I ⊆ R. If the partial derivative f_x' is nonsingular, it is possible to formally solve for x' and obtain an ordinary differential equation. If f_x' is singular, this is no longer possible and the solution x has to satisfy certain algebraic constraints. Equations where f_x' is singular are therefore referred to as differential-algebraic equations, or DAEs.

These notes aim at giving an introduction to differential-algebraic equations and are based on four lectures given by the author during his stay at the University of Auckland. The first section deals with examples of DAEs. Problems from different kinds of applications are considered in order to stress the importance of DAEs when modelling practical problems. In the second section each DAE is assigned a number, the index, measuring its complexity with respect to both theoretical and numerical treatment. Several index notions are introduced, each of them stressing different aspects of the DAE considered. Special emphasis is given to the tractability index for linear DAEs.
The definition of the tractability index in the second section gives rise to a detailed analysis concerning existence and uniqueness of solutions. The main tool is a procedure to decouple the DAE into its dynamical and algebraic parts. In section three this analysis is carried out for linear DAEs with low index, as established by März [25]. The results obtained, especially the decoupling procedure, are used in the fourth section to study the behaviour of numerical methods when applied to linear DAEs. The material presented in this section is mainly taken from [8].

1 Examples of differential-algebraic equations

Modelling with differential-algebraic equations plays a vital role for, among others, constrained mechanical systems, electrical circuits and chemical reaction kinetics. In this section we give examples of how DAEs are obtained in these fields. We point out important characteristics of differential-algebraic equations that distinguish them from ordinary differential equations. More information about differential-algebraic equations can be found in [2, 5] but also in [32].

1.1 Constrained mechanical systems

Consider the mathematical pendulum in figure 1.1: a mass m is attached to a rod of length l [5]. In order to describe the pendulum in Cartesian coordinates we write down the potential energy

    U(x, y) = mgh = mgl - mgy,    (1.1)

where (x(t), y(t)) is the position of the moving mass at time t, g is the acceleration of gravity and h is the pendulum's height. Denoting the derivatives of x and y by ẋ and ẏ respectively, the kinetic energy is given by

    T(ẋ, ẏ) = (1/2) m (ẋ² + ẏ²).    (1.2)

[Figure 1.1: The mathematical pendulum]

The term ẋ² + ẏ² describes the pendulum's velocity. The constraint is found to be

    0 = g(x, y) = x² + y² - l².    (1.3)

These quantities are used to form the Lagrange function

    L(q, q̇) = T(ẋ, ẏ) - U(x, y) - λ g(x, y).

Here q denotes the vector q = (x, y, λ). Note that λ serves as a Lagrange multiplier. The equations of motion are now given by Euler's equations

    d/dt (∂L/∂q̇_k) - ∂L/∂q_k = 0,    k = 1, 2, 3.

We arrive at the system

    m ẍ + 2λx = 0,
    m ÿ - mg + 2λy = 0,    (1.4)
    g(x, y) = 0.

By introducing additional variables u = ẋ and v = ẏ we see that (1.4) is indeed of the form considered in the introduction.
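As a quick numerical illustration of the system just derived, the multiplier λ can be eliminated by differentiating the constraint (1.3) twice, which gives λ = m(u² + v² + gy)/(2l²) and turns (1.4) into an ordinary differential equation. The following sketch integrates that index-reduced form; the values m = l = 1, g = 9.81, the hand-rolled RK4 stepper and the initial angle are illustrative assumptions, not taken from the text.

```python
import math

# Index reduction for the pendulum DAE (1.4): differentiating the
# constraint x^2 + y^2 - l^2 = 0 twice yields
#     lambda = m (u^2 + v^2 + g y) / (2 l^2),
# so (1.4) becomes an explicit ODE for (x, y, u, v).
m, l, grav = 1.0, 1.0, 9.81

def rhs(s):
    x, y, u, v = s
    lam = m * (u * u + v * v + grav * y) / (2.0 * l * l)
    return [u, v, -2.0 * lam * x / m, grav - 2.0 * lam * y / m]

def rk4_step(s, h):
    k1 = rhs(s)
    k2 = rhs([si + 0.5 * h * ki for si, ki in zip(s, k1)])
    k3 = rhs([si + 0.5 * h * ki for si, ki in zip(s, k2)])
    k4 = rhs([si + h * ki for si, ki in zip(s, k3)])
    return [si + h / 6.0 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

# consistent initial values: position on the circle, tangential velocity
theta = 0.3
state = [l * math.sin(theta), l * math.cos(theta), 0.0, 0.0]
h, steps = 1e-3, 3000
for _ in range(steps):
    state = rk4_step(state, h)

drift = abs(state[0] ** 2 + state[1] ** 2 - l * l)
print(drift)  # constraint drift stays small but is not exactly zero
```

Note that the constraint is only enforced implicitly here, so the computed solution slowly drifts off the circle x² + y² = l²; this drift-off effect is one reason why the DAE formulation itself matters.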

3 When solving.4 as an initial value problem, we observe that each initial value xt0, yt 0 = x 0, y 0 has to satisfy the constraint.3 consistent initialization. No initial condition can be posed for λ, as λ is determined implicitly by.4. Of course the pendulum can be modeled by the second order ordinary differential equation ϕ = g l sin ϕ when the angle ϕ is used as the dependent variable. However for practical problems a formulation in terms of a system of ordinary differential equations is often not that obvious, if not impossible..2 Electrical circuits Modern simulation of electrical networks is based on modelling techniques that allow an automatic generation of the model equations. One of the techniques most widely used is the modified nodal analysis MNA [7, 8]..2. A simple example To see how the modified nodal analysis works, consider the simple circuit in figure.2 taken from [39]. It consists of a voltage source v V = vt, a resistor with conductance G and a capacitor with capacitance C > 0. The layout of the circuit can be described by 0 A a = 0, 0 e U V G C e 2 Figure.2: A simple circuit where the columns of A a correspond to the voltage, resistive and capacitive branches respectively. The rows represent the network s nodes, so that and indicate the nodes that are connected by each branch under consideration. Thus A a assigns a polarity to each branch. By construction the rows of A a are linearly dependent. However, after deleting one row the remaining rows describe a set of linearly independent equations, The node corresponding to the deleted row will be denoted as the ground node. The matrix 0 A = 0 is called the incidence matrix. It is now possible to formulate basic physical laws in terms of the incidence matrix A [20]. Denote with i and v the vector of branch currents and voltage drops respectively and introduce the vector e of node potentials. For each node the node potential is it s voltage with respect to the ground node. 
Kirchhoff's Current Law (KCL): For each node the sum of all currents is zero:   A i = 0.
Kirchhoff's Voltage Law (KVL): For each loop the sum of all voltages is zero:   v = A^T e.

For the circuit in figure 1.2, KCL and KVL read

    i_V + i_G = 0,    -i_G + i_C = 0    (1.5a)

and

    v_V = e_1,    v_G = e_1 - e_2,    v_C = e_2    (1.5b)

respectively. If we assume ideal linear devices, the equations modelling the resistor and the capacitor are

    i_G = G v_G,    i_C = C dv_C/dt.    (1.5c)

Finally we have

    v_V = v(t)    (1.5d)

for the independent source, which is thought of as the input signal driving the system. The system (1.5) is called the sparse tableau. The equations of the modified nodal analysis are obtained from the sparse tableau by expressing voltages in terms of node potentials via (1.5b) and currents, where possible, by the device equations (1.5c):

    i_V + G(e_1 - e_2) = 0,
    -G(e_1 - e_2) + C de_2/dt = 0,
    e_1 = v(t),

or, in matrix form,

    [0 0 0] [e_1]'    [ G -G  1] [e_1]   [  0  ]
    [0 C 0] [e_2]  +  [-G  G  0] [e_2] = [  0  ]    (1.6)
    [0 0 0] [i_V]     [ 1  0  0] [i_V]   [v(t)].

The MNA equations reveal typical properties of DAEs:

(i) Only certain parts of x = (e_1, e_2, i_V)^T need to be differentiable. It is sufficient if e_1 and i_V are continuous.

(ii) Any initial condition x(t_0) = x_0 needs to be consistent, i.e. there must be a solution passing through x_0. Here this means that we can pose an initial condition for e_2 only.

To solve (1.6) it is sufficient to solve the ordinary differential equation

    e_2'(t) = (G/C) ( v(t) - e_2(t) ).

e_2(t) can be thought of as the output signal. The remaining components of the solution are uniquely determined as

    e_1(t) = v(t),    i_V(t) = -G( e_1(t) - e_2(t) ).

Another important feature that distinguishes DAEs from ordinary differential equations is that the solution process often involves differentiation rather than integration. This is illustrated in the next example.
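The DAE (1.6) can also be attacked directly by an implicit time-stepping scheme. The sketch below applies the implicit Euler method to it; the parameter values G = C = 1 and the input v(t) = sin t are illustrative assumptions, for which the inherent ODE e_2' = v - e_2 with e_2(0) = 0 has the exact solution e_2(t) = (sin t - cos t + e^{-t})/2.

```python
import math

# Implicit Euler applied directly to the DAE (1.6).  At each step the
# three equations are solved at t_{n+1}; only e2 carries a derivative,
# e1 and iV are recovered algebraically.  G = C = 1, v(t) = sin t are
# illustrative choices, not values from the text.
G, C = 1.0, 1.0
v = math.sin

def implicit_euler(h, t_end):
    t, e2 = 0.0, 0.0
    while t < t_end - 1e-12:
        t += h
        # C (e2_new - e2_old)/h = G (e1_new - e2_new), with e1_new = v(t):
        e2 = (e2 + h * G / C * v(t)) / (1.0 + h * G / C)
        e1 = v(t)             # algebraic: third equation of (1.6)
        iV = -G * (e1 - e2)   # algebraic: first equation of (1.6)
    return e1, e2, iV

e1, e2, iV = implicit_euler(1e-3, 1.0)
exact = (math.sin(1.0) - math.cos(1.0) + math.exp(-1.0)) / 2.0
print(abs(e2 - exact))  # first-order error, O(h)
```

Because this is an index 1 problem, the implicit Euler method behaves exactly as it would on the inherent ODE; later sections make this observation precise.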

1.2.2 Another simple example

If we replace the independent voltage source in figure 1.2 by a current source i_I = i(t) and the capacitor by an inductor with inductance L, we arrive at the circuit in figure 1.3. The sparse tableau now reads

    -i_I + i_G = 0,    -i_G + i_L = 0,    (1.7a)
    v_I = e_1,    v_G = e_1 - e_2,    v_L = e_2,    (1.7b)
    i_G = G v_G,    v_L = L di_L/dt,    (1.7c)
    i_I = i(t).    (1.7d)

Thus the modified nodal analysis leads to

    G(e_1 - e_2) = i(t),
    -G(e_1 - e_2) + i_L = 0,    (1.8)
    L di_L/dt - e_2 = 0.

[Figure 1.3: Another simple circuit]

The solution is given by

    i_L = i(t),
    e_2 = L di_L/dt = L di(t)/dt,
    e_1 = e_2 + (1/G) i(t) = L di(t)/dt + (1/G) i(t),

under the assumption that the current i(t) is differentiable. Notice that all solution components are completely determined; no initial condition can be chosen freely. To solve for e_2 we need to differentiate the current i.

1.3 A transistor amplifier

We will now present a more substantial example adapted from [6]. Consider the transistor amplifier circuit in figure 1.4. P. Rentrop received this example from K. Glashoff and H.J. Oberle and documented it in [34]. The circuit consists of eight nodes, U_e(t) = 0.1 sin(200πt) is an arbitrary 100 Hz input signal and e_8, the node potential of the 8th node, is the amplified output. The circuit contains two transistors. We model the behaviour of these semiconductor devices by voltage-controlled current sources

    I_gate = (1-α) g(e_gate - e_source),
    I_drain = α g(e_gate - e_source),
    I_source = -g(e_gate - e_source)

with a constant α = 0.99; g is the nonlinear function

    g : R → R,    g(v) = β ( exp(v/U_F) - 1 ),    β = 10⁻⁶,    U_F = 0.026.

[Figure 1.4: Circuit diagram for the transistor amplifier]

It is also possible to use PDE models (partial differential equations) for the semiconductor devices. This approach leads to abstract differential-algebraic systems studied in [23, 35, 40].

The modified nodal analysis can now be carried out as in the previous examples. Consider for instance the second node. KCL implies that

    0 = i_C1 - i_R1 + i_R2 - i_gate,2
      = C_1 v_C1' - G_1 v_G1 + G_2 v_G2 - (1-α) g(e_2 - e_3)
      = C_1 (e_1 - e_2)' - G_1 e_2 + G_2 (U_b - e_2) - (1-α) g(e_2 - e_3).

U_b = 6 is the operating voltage of the circuit and the remaining constant parameters of the model are chosen to be

    G_0 = 10⁻³,    G_k = (1/9)·10⁻³, k = 1, ..., 9,    C_k = k·10⁻⁶, k = 1, ..., 5.

A similar derivation for the other nodes leads to the quasi-linear system

    A ( D x(t) )' = b( x(t) )    (1.9)

with

    A = [-C_1   0     0     0     0  ]        D = [1 -1 0 0  0 0 0  0]
        [ C_1   0     0     0     0  ]            [0  0 1 0  0 0 0  0]
        [  0  -C_2    0     0     0  ]            [0  0 0 1 -1 0 0  0]
        [  0    0   -C_3    0     0  ]            [0  0 0 0  0 1 0  0]
        [  0    0    C_3    0     0  ]            [0  0 0 0  0 0 1 -1],
        [  0    0     0   -C_4    0  ]
        [  0    0     0     0   -C_5 ]
        [  0    0     0     0    C_5 ]

    b(x) = ( -U_e(t) G_0 + e_1 G_0,
             -U_b G_2 + e_2 (G_1 + G_2) + (1-α) g(e_2 - e_3),
             -g(e_2 - e_3) + e_3 G_3,
             -U_b G_4 + e_4 G_4 + α g(e_2 - e_3),
             -U_b G_6 + e_5 (G_5 + G_6) + (1-α) g(e_5 - e_6),
             -g(e_5 - e_6) + e_6 G_7,
             -U_b G_8 + e_7 G_8 + α g(e_5 - e_6),
             e_8 G_9 )^T.

A numerical solution of (1.9) can be calculated using Dassl or Radau5, see [6, 4].

A mathematically more general version of (1.9) is

    A( x(t), t ) ( D(t) x(t) )' = b( x(t), t )    (1.10)

with a solution-dependent matrix A. We identified x_i with the node potential e_i. Let us assume that N_0(t) = ker A(x(t), t) D(t) does not depend on x. We will follow [6] and investigate (1.10) in more detail. With

    f(y, x, t) = A(x, t) y - b(x, t),

(1.10) can be written as

    f( (D(t) x(t))', x(t), t ) = 0.    (1.11)

Denote B(y, x, t) = f_x(y, x, t) and let Q(t) be a continuous projector function onto N_0(t). Calculate

    G_1(y, x, t) = A(x, t) D(t) + B(y, x, t) Q(t).

For the transistor amplifier in figure 1.4 this matrix is always nonsingular. We want to use this matrix in conjunction with the implicit function theorem to derive an ordinary differential equation that determines the dynamical flow of (1.10). Let D⁻(t) be defined by

    D D⁻ D = D,    D⁻ D D⁻ = D⁻,    D D⁻ = I_5,    D⁻ D = P := I_8 - Q.

I_k denotes the identity in R^k, and D⁻(t) is a reflexive generalized inverse of D(t). For more information on generalized matrix inverses see section 2.3.1. For a solution x of (1.11) define

    u(t) = D(t) x(t),    w(t) = D⁻(t) u'(t) + Q(t) x(t).

Observe that A (Dx)' = A D w and x = Px + Qx = D⁻Dx + Qx = D⁻u + Qw. Thus it holds that

    F(w, u, t) := f( Dw, D⁻u + Qw, t ) = 0.

Note that

    u' = R'u + Dw,    since    Dw = D D⁻ u' + D Q x = R u' = u' - R'u.

The mapping F can be studied without requiring x to be a solution. Let (y_0, x_0, t_0) ∈ R^{5+8+1} be such that f(y_0, x_0, t_0) = 0. For

    w_0 = D⁻(t_0) y_0 + Q(t_0) x_0,    u_0 = D(t_0) x_0

it follows that

    F(w_0, u_0, t_0) = f(y_0, x_0, t_0) = 0,
    F_w(w_0, u_0, t_0) = G_1(y_0, x_0, t_0) is nonsingular.

Due to the implicit function theorem there are ϱ > 0 and a smooth mapping ω : B_ϱ(u_0, t_0) → R^8 satisfying

    ω(u_0, t_0) = w_0,    F( ω(u, t), u, t ) = 0    for all (u, t) ∈ B_ϱ(u_0, t_0).

We use ω to define

    x(t) = D⁻(t) u(t) + Q(t) ω(u(t), t),    t ∈ I,

where u is the solution of the ordinary differential equation

    u'(t) = R'(t) u(t) + D(t) ω(u(t), t),    u(t_0) = D(t_0) x_0.    (1.12)

x is indeed a solution of (1.10), since

    f( (D(t)x(t))', x(t), t ) = f( u'(t), D⁻(t)u(t) + Q(t)ω(u(t), t), t ) = F( ω(u(t), t), u(t), t ) = 0.

This example shows that there is a formulation of the problem in terms of an ordinary differential equation (1.12), as was the case for the mathematical pendulum in the first example. However, (1.12) is available only theoretically, as it was obtained using the implicit function theorem. Thus we have to deal directly with the DAE formulation (1.10) when solving the problem. Nevertheless, (1.12) will play a vital part in analyzing (1.10) and in analyzing numerical methods applied to (1.10). In section 3 it will be shown how (1.12) can be obtained explicitly for linear DAEs. Section 4 is devoted to showing that there are numerical methods that, when applied directly to (1.10), behave as if they were integrating (1.12), provided that (1.10) satisfies some additional properties. In this case results concerning convergence and order of numerical methods can be transferred directly from ODE theory to DAEs.

1.4 The Akzo Nobel problem

The last example originates from the Akzo Nobel Central Research in Arnhem, the Netherlands, and is again taken from [6]. It describes a chemical process in which two species, FLB and ZHU, are mixed while carbon dioxide is continuously added. The resulting species of importance is ZLA. The reaction equations are given in [5]:

    2 FLB + (1/2) CO_2  →(k_1)  FLBT + H_2O,
    ZLA + FLB  ⇄(k_2, k_2/K)  FLBT + ZHU,
    FLB + 2 ZHU + CO_2  →(k_3)  LB + nitrate,
    FLB.ZHU + (1/2) CO_2  →(k_4)  ZLA + H_2O,
    FLB + ZHU  ⇄  FLB.ZHU.

The last equation describes an equilibrium, where the constant

    K_s = [FLB.ZHU] / ( [FLB] [ZHU] )

plays a role in parameter estimation. Square brackets denote concentrations.

The chemical process is appropriately described by the reaction velocities

    r_1 = k_1 [FLB]⁴ [CO_2]^{1/2},
    r_2 = k_2 [FLBT] [ZHU],
    r_3 = (k_2 / K) [FLB] [ZLA],
    r_4 = k_3 [FLB] [ZHU]²,
    r_5 = k_4 [FLB.ZHU]² [CO_2]^{1/2},

see [6] for details. The inflow of carbon dioxide per volume unit is denoted by F_in and satisfies

    F_in = klA ( p(CO_2)/H - [CO_2] ).

klA is the mass transfer coefficient, H the Henry constant and p(CO_2) the partial carbon dioxide pressure [6]. It is assumed that p(CO_2) is independent of [CO_2]. The various constants are given by

    k_1 = 18.7,    k_2 = 0.58,    k_3 = 0.09,    k_4 = 0.42,
    K = 34.4,    klA = 3.3,    K_s = 115.83,    p(CO_2) = 0.9,    H = 737.

If we identify the concentrations [FLB], [CO_2], [FLBT], [ZHU], [ZLA], [FLB.ZHU] with x_1, ..., x_6 respectively, we obtain the differential-algebraic equation

    x_1' = -2 r_1 + r_2 - r_3 - r_4,
    x_2' = -(1/2) r_1 - r_4 - (1/2) r_5 + F_in,
    x_3' = r_1 - r_2 + r_3,    (1.13)
    x_4' = -r_2 + r_3 - 2 r_4,
    x_5' = r_2 - r_3 + r_5,
    0 = K_s x_1 x_4 - x_6.

This DAE can be analyzed in a similar way as the previous example. The matrix

    G_1 = AD + BQ = [1 0 0 0 0     0        ]
                    [0 1 0 0 0  k_4 x_6 √x_2]
                    [0 0 1 0 0     0        ]
                    [0 0 0 1 0     0        ]
                    [0 0 0 0 1 -2 k_4 x_6 √x_2]
                    [0 0 0 0 0     1        ]

is always nonsingular. Here, A = D = diag(1, 1, 1, 1, 1, 0) was chosen.
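The structure of (1.13) is easy to probe numerically. The sketch below implements the residual of the DAE with the constants given above; the chosen state vector is an illustrative assumption whose sixth component is made consistent with the algebraic equation.

```python
import math

# Residual of the Akzo Nobel DAE (1.13): five differential equations
# and one algebraic constraint.  The sample state below is illustrative;
# x6 is set from the constraint x6 = Ks*x1*x4 to obtain consistency.
k1, k2, k3, k4 = 18.7, 0.58, 0.09, 0.42
K, klA, Ks, pCO2, H = 34.4, 3.3, 115.83, 0.9, 737.0

def residual(x, xdot):
    x1, x2, x3, x4, x5, x6 = x
    r1 = k1 * x1**4 * math.sqrt(x2)
    r2 = k2 * x3 * x4
    r3 = (k2 / K) * x1 * x5
    r4 = k3 * x1 * x4**2
    r5 = k4 * x6**2 * math.sqrt(x2)
    Fin = klA * (pCO2 / H - x2)
    f = [-2*r1 + r2 - r3 - r4,
         -0.5*r1 - r4 - 0.5*r5 + Fin,
         r1 - r2 + r3,
         -r2 + r3 - 2*r4,
         r2 - r3 + r5,
         Ks * x1 * x4 - x6]
    # differential part: xdot_i - f_i for i = 1..5; algebraic part: f_6
    return [xdot[i] - f[i] for i in range(5)] + [f[5]]

x = [0.444, 0.00123, 0.0, 0.007, 0.0, 0.0]
x[5] = Ks * x[0] * x[3]        # consistent with the constraint
res = residual(x, [0.0] * 6)
print(abs(res[5]))             # algebraic residual vanishes by construction
```

Only x_1, ..., x_5 may be assigned initial values; x_6 is then fixed by the constraint, mirroring the consistency discussion for the earlier circuit examples.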

2 Index concepts for DAEs

In the last section we saw that DAEs differ in many ways from ordinary differential equations. For instance, the circuit in figure 1.3 led to a DAE whose solution involves a differentiation process. This differentiation needs to be carried out numerically, which is an unstable operation. Thus some problems are to be expected when solving these systems. In this section we try to measure the difficulties arising in the theoretical and numerical treatment of a given DAE.

2.1 The Kronecker index

Let us take linear differential-algebraic equations with constant coefficients as a starting point. These equations are given as

    E x'(t) + F x(t) = q(t),    t ∈ I,    (2.1)

with E, F ∈ L(R^m). Even for (2.1), existence and uniqueness of solutions is not a priori clear.

Example 2.1 For the DAE

    [1 0] x'(t) + [0 -1] x(t) = 0
    [0 0]         [0  0]

a solution x = (x_1, x_2) is given by x_2(t) = g(t) and x_1(t) = ∫_{t_0}^{t} g(s) ds, where the function g ∈ C(I, R) can be chosen arbitrarily.

In order to exclude examples like example 2.1 we consider the matrix pencil λE + F. The pair (E, F) is said to form a regular matrix pencil if there is a λ such that det(λE + F) ≠ 0. A simultaneous transformation of E and F into Kronecker normal form makes a solution of (2.1) possible.

Theorem 2.2 (Kronecker [9]) Let (E, F) form a regular matrix pencil. Then there exist nonsingular matrices U and V such that

    U E V = [I 0]    U F V = [C 0]
            [0 N],           [0 I],

where N = diag(N_1, ..., N_k) is a block-diagonal matrix of Jordan blocks N_i to the eigenvalue 0.

The proof can be found in [9] or [5]. Notice that due to the special structure of N there is µ ∈ N such that N^{µ-1} ≠ 0 but N^µ = 0. µ is known as N's index of nilpotency. It does not depend on the special choice of U and V. We solve (2.1) by introducing the transformation

    x = V (u, v)^T,    U q(t) = ( a(t), b(t) )^T.

Thus (2.1) is equivalent to

    U E V (V⁻¹ x)' + U F V (V⁻¹ x) = U q,

i.e.

    u'(t) + C u(t) = a(t),    N v'(t) + v(t) = b(t).    (2.2)

The first equation is an ordinary differential equation for the u component. The second equation reads

    v(t) = b(t) - N v'(t)
         = b(t) - N b'(t) + N² v''(t)
         = ...
         = Σ_{i=0}^{µ-1} (-1)^i N^i b^{(i)}(t),    (2.3)

determining the v component completely by repeated differentiation of the right-hand side b. Since numerical differentiation is an unstable process, the index µ is a measure of numerical difficulty when solving (2.1).

Definition 2.3 Let (E, F) form a regular matrix pencil. The Kronecker index of (2.1) is 0 if E is nonsingular and µ, i.e. N's index of nilpotency, otherwise.

2.2 The differentiation index

How can definition 2.3 be generalized to the case of time-dependent coefficients or even to nonlinear DAEs? If we consider (2.3) again, it turns out that

    v'(t) = Σ_{i=0}^{µ-1} (-1)^i N^i b^{(i+1)}(t),

meaning that exactly µ differentiations transform (2.2) into a system of explicit ordinary differential equations. This idea was generalized by Gear, Petzold and Campbell [4, 10, 11]. The following definition is taken from [5].

Definition 2.4 The nonlinear DAE

    f( x'(t), x(t), t ) = 0    (2.4)

has differentiation index µ if µ is the minimal number of differentiations

    f(x'(t), x(t), t) = 0,    (d/dt) f(x'(t), x(t), t) = 0,    ...,    (d^µ/dt^µ) f(x'(t), x(t), t) = 0    (2.5)

such that the equations (2.5) allow to extract an explicit ordinary differential system x'(t) = ϕ(x(t), t) using only algebraic manipulations.
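The solution formula (2.3) for the v component can be checked on a concrete case. In the sketch below, N is a single nilpotent Jordan block with index of nilpotency µ = 2 and b(t) = (t², t³) is an arbitrary smooth choice; both are assumptions for illustration.

```python
# Check of formula (2.3): with N^2 = 0 the equation N v' + v = b
# should be solved by v = b - N b'.  Here N = [[0, 1], [0, 0]], so
# N (w1, w2) = (w2, 0), and b(t) = (t^2, t^3).

def b(t):
    return (t * t, t ** 3)

def bprime(t):
    return (2 * t, 3 * t * t)

def v(t):
    # v = b - N b'
    b1, b2 = b(t)
    return (b1 - bprime(t)[1], b2)

def vprime(t):
    # v(t) = (t^2 - 3 t^2, t^3) = (-2 t^2, t^3), differentiated by hand
    return (-4 * t, 3 * t * t)

for t in (0.0, 0.5, 1.3, -2.0):
    v1, v2 = v(t)
    vp1, vp2 = vprime(t)
    res1 = vp2 + v1 - b(t)[0]   # first row of N v' + v - b
    res2 = v2 - b(t)[1]         # second row (second row of N is zero)
    assert abs(res1) < 1e-12 and abs(res2) < 1e-12
print("N v' + v = b verified at sample points")
```

Note that v involves b', so the solution really does consume one derivative of the data, as the discussion of numerical differentiation above suggests.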

We now want to look at four examples to get a feeling of how to calculate the differentiation index. We always assume that the functions involved are smooth enough to apply definition 2.4.

Example 2.5 Linear DAEs with constant coefficients forming a regular matrix pencil have differentiation index µ if and only if the Kronecker index is µ.

Example 2.6 Consider the system

    x' = f(x, y),    (2.6a)
    0 = g(x, y).    (2.6b)

The second equation yields

    0 = (d/dt) g(x, y) = g_x(x, y) x' + g_y(x, y) y'.

If g_y(x, y) is nonsingular in a neighbourhood of the solution, (2.6) is transformed to

    x' = f(x, y),
    y' = -g_y(x, y)⁻¹ g_x(x, y) x' = -g_y(x, y)⁻¹ g_x(x, y) f(x, y),

and the differentiation index is µ = 1.

The DAE (1.6) modelling the circuit in figure 1.2 is of the form (2.6) with

    x = e_2,    y = (e_1, i_V)^T,    f(x, y) = (G/C)(e_1 - e_2)

and

    g(x, y) = ( G(e_1 - e_2) + i_V, e_1 - v_V )^T.

Note that

    g_y(x, y) = [G 1]
                [1 0]

is nonsingular, so that (1.6) is an index 1 equation.

Example 2.7 The system

    x' = f(x, y),    (2.8a)
    0 = g(x)    (2.8b)

can be studied in a similar way. (2.8b) gives

    0 = (d/dt) g(x) = g_x(x) x' = g_x(x) f(x, y) =: h(x, y).    (2.8b')

Comparing with example 2.6 we see that (2.8a), (2.8b') is an index 1 system if h_y(x, y) remains nonsingular in a neighbourhood of the solution. If this condition holds, (2.8) is of index 2, as two differentiations produce

    x' = f(x, y),
    y' = -h_y(x, y)⁻¹ h_x(x, y) f(x, y)
       = -( g_x(x) f_y(x, y) )⁻¹ ( g_xx(x)( f(x,y), f(x,y) ) + g_x(x) f_x(x,y) f(x,y) ).

(2.8b') defines the hidden constraint of the index 2 equation (2.8).

13 The DAE.8 modelling the circuit in figure.3 can be written as i L = L e 2 = fi L, e 2 2.0a 0 = i L i I = gi L. 2.0b The remaining variable e is determined by e = e 2 + G i I, where i I is the input current. 2.0 is of the form 2.8 with x = i L and y = e 2. h y x, y = g x f y = L is nonsingular and the index is 2. Example 2.8 Finally take a look at the system x = fx, y y = gx, y, z 2.a 2.b 0 = hx. 2.c Differentiation of 2.c yields and 0 = dhx dt x = x y = 0 = = h x xx = h x xfx, y = ĥx, y 2.c fx,y = fx, y 2.a and 2.b 2.2a gx,y,z ĥx, y = gx 2.c 2.2b is of the form 2.8 with x = x y and y = z. Define hx, y = gxxfx, y and compare with 2.8b to find that 2.2 is of the index 2 if hyx, y = gxxfyx, y = f z ĥ x ĥ y g z = 0 h xx f, + h x f x h xy f, + h x f y = h g x f y g z z remains nonsingular. This shows that 2. is an index 3 system if the matrix h x xf y x, yg z x, y, z is invertible in a neighbourhood of the solution x, y, z. Hidden constraints are given by 2.c but also by hx, y = gxxfx, y = h xx f, f + h x f x f + h x f y g = 0, which is condition 2.8b in terms of the index 2 system 2.2. Consider again the mathematical pendulum from section. in the formulation x = u = f x, y, u, v y = v = f 2 x, y, u, v u = 2 m λx = g x, y, u, v, λ v = +g 2 m λy = g 2x, y, u, v, λ 0 = x 2 + y 2 l 2 = hx, y. 2.3 For l > 0 the value h x,y f u,v g λ = 4 m x2 + y 2 is always nonsingular so that 2.3 is an index 3 problem. 3

2.3 The tractability index

In definition 2.4 the function f is assumed to be smooth enough to calculate the derivatives (2.5). In applications this smoothness is often not given. For instance, in circuit simulation input signals are continuous but often not differentiable. In this section we study the tractability index introduced by Griepentrog and März [3]. In fact we consider the generalization of the tractability index proposed by März [25]. The idea is to replace the smoothness requirements on the coefficients by the requirement that certain subspaces be smooth.

To define the tractability index we introduce linear DAEs with properly stated leading terms. A second matrix D(t) is used when formulating the DAE as

    A(t) ( D(t) x(t) )' + B(t) x(t) = q(t).    (2.14)

In contrast to the standard formulation

    E(t) x'(t) + F(t) x(t) = q(t),    (2.15)

the leading term in (2.14) figures out precisely which derivatives are actually involved. The formulation (2.14) was first used in [1] to study linear DAEs and their adjoint equations. For (2.15) the adjoint equation

    -( E*y )' + F*y = p

is of a different type. For the more general formulation (2.14) the adjoint equation fits nicely into this general form:

    -D*( A*y )' + B*y = p.

In this section we consider linear DAEs (2.14) with matrix coefficients

    A ∈ C(I, L(R^n, R^m)),    D ∈ C(I, L(R^m, R^n)),    B ∈ C(I, L(R^m)).

Neither A nor D needs to be a projector function. Note that A(t) and D(t) are rectangular matrices in general. However, A and D are assumed to be well matched in the following sense.

Definition 2.9 The leading term of (2.14) is properly stated if

    ker A(t) ⊕ im D(t) = R^n,    t ∈ I,

and there is a continuously differentiable projector function R ∈ C¹(I, L(R^n)) with

    im R(t) = im D(t),    ker R(t) = ker A(t),    t ∈ I.

By definition A(t) and D(t) have a common constant rank if the leading term is properly stated [25].

Definition 2.10 A function x : I → R^m is said to be a solution of (2.14) if

    x ∈ C_D¹(I, R^m) = { x ∈ C(I, R^m) | Dx ∈ C¹(I, R^n) }

satisfies (2.14) pointwise. Let us point out that a solution x is merely a continuous function, but the part Dx : I → R^n is differentiable.

We now define a sequence of matrix functions and possibly time-varying subspaces. All relations are meant pointwise for t ∈ I. Let G_0 = AD, B_0 = B and for i ≥ 0

    N_i = ker G_i,
    S_i = { z ∈ R^m | B_i z ∈ im G_i } = { z ∈ R^m | B z ∈ im G_i },
    Q_i = Q_i²,  im Q_i = N_i,  P_i = I - Q_i,
    G_{i+1} = G_i + B_i Q_i,    (2.16)
    B_{i+1} = B_i P_i - G_{i+1} D⁻ C_{i+1}' D P_0 ··· P_i,    C_{i+1} = D P_0 ··· P_{i+1} D⁻.

Here, D⁻ : I → L(R^n, R^m) denotes the reflexive generalized inverse of D such that

    D D⁻ D = D,    D⁻ D D⁻ = D⁻,    D D⁻ = R,    D⁻ D = P_0.    (2.17)

Note that D⁻ is uniquely determined by (2.17) and depends only on the choice of Q_0. Section 2.3.1 contains more details about generalized matrix inverses.

Definition 2.11 The DAE (2.14) with properly stated leading term is said to be a regular DAE with tractability index µ on the interval I if there is a sequence (2.16) such that

    G_i has constant rank r_i on I,
    Q_i ∈ C(I, L(R^m)),  D P_0 ··· P_i D⁻ ∈ C¹(I, L(R^n)),  i ≥ 0,
    Q_{i+1} Q_j = 0,  j = 0, ..., i,  i ≥ 0,
    0 ≤ r_0 ≤ ··· ≤ r_{µ-1} < m  and  r_µ = m.

(2.14) is said to be a regular DAE if it is regular with some index µ. This index criterion does not depend on the special choice of the projector functions Q_i [28]. As proposed in [24] the sequence (2.16) can be calculated automatically. Thus the index can be determined without the use of derivative arrays [27].

Example 2.12 Consider the DAE

    [1] ( (-t  1) x(t) )' + [ 0  0] x(t) = 0,
    [t]                     [-t  1]

taken from [25]. With ker A(t) = {0} and im D(t) = R the leading term is properly stated. Calculate

    G_0(t) = A(t) D(t) = [-t   1]
                         [-t²  t]

and N_0(t) = { z ∈ R² | ∃α ∈ R: z = α (1, t)^T } to find that N_0(t) ⊆ ker B(t). Independently of the choice of Q_0 in (2.16) we have

    G_1(t) = G_0(t) + B(t) Q_0(t) = G_0(t).

Similarly it follows that G_i(t) = G_0(t) for every i ≥ 0. This is not a regular DAE in the sense of definition 2.11. Note that for every γ ∈ C¹(I, R) a solution is given by

    x(t) = γ(t) (1, t)^T.

Solutions are therefore not uniquely determined. This is the case in spite of the fact that for every t the local matrix pencil λ A D + ( B + A D' ) of the reformulated DAE

    0 = A(t) D(t) x'(t) + ( B(t) + A(t) D'(t) ) x(t)

is regular.

The following lemma shows that definition 2.11 is indeed a generalization of the Kronecker index, i.e. in the case of constant coefficients the Kronecker index and the tractability index for regular DAEs coincide. To show this, define the subspaces

    S_EF = { z ∈ R^m | F z ∈ im E },    N_E = ker E

for given matrices E, F ∈ L(R^m). Obviously, for fixed t ∈ I we have N_i(t) = N_{G_i(t)} and S_i(t) = S_{G_i(t) B_i(t)} in the sequence (2.16).

Lemma 2.13 For matrices E, F ∈ L(R^m) the following statements are equivalent:

    (1) N_E ∩ S_EF = {0}.
    (2) For every projector Q_E onto N_E the matrix E + F Q_E is nonsingular.
    (3) N_E ⊕ S_EF = R^m.
    (4) (E, F) form a regular matrix pencil with Kronecker index ≤ 1.

Proof: (1) ⇒ (2): (E + F Q_E) z = 0 implies F Q_E z = -E z ∈ im E, i.e. Q_E z ∈ S_EF. Since Q_E z ∈ N_E too, we have Q_E z ∈ N_E ∩ S_EF = {0} and Q_E z = 0. Thus 0 = E z + F Q_E z = E z and z ∈ N_E = im Q_E. Therefore z = Q_E z = 0.

(2) ⇒ (3): G_EF = E + F Q_E is nonsingular. Show that Q = Q_E G_EF⁻¹ F is the projector onto N_E along S_EF.

(3) ⇒ (4): There is exactly one projector Q onto N_E along S_EF. Since (3) implies (2), we find Q = Q_E G_EF⁻¹ F with G_EF = E + F Q_E. Let P = I - Q. Show that λE + F is nonsingular whenever -λ lies outside the spectrum of P G_EF⁻¹ F, so that (E, F) form a regular matrix pencil. Due to theorem 2.2 there are nonsingular matrices U, V ∈ GL(R^m) such that

    U E V = [I 0] =: Ē,    U F V = [C 0] =: F̄.
            [0 N]                  [0 I]

It follows that N_Ē = ker Ē = V⁻¹ N_E and S_ĒF̄ = { z ∈ R^m | F̄ z ∈ im Ē } = V⁻¹ S_EF, so that

    N_Ē ∩ S_ĒF̄ = V⁻¹ ( N_E ∩ S_EF ) = {0}.    (2.18)

On the other hand,

    N_Ē = { z = (z_1, z_2) ∈ R^m | z_1 = 0, z_2 ∈ ker N }

and

    S_ĒF̄ = { z = (z_1, z_2) ∈ R^m | F̄z = (C z_1, z_2) ∈ im Ē } = { z = (z_1, z_2) ∈ R^m | z_2 ∈ im N },

meaning that im N ∩ ker N = {0}. Since N is nilpotent, this forces N = 0, and the Kronecker index is ≤ 1.

(4) ⇒ (1): Kronecker index ≤ 1 gives N = 0 and therefore N_Ē ∩ S_ĒF̄ = {0}. Use (2.18) to see that N_E ∩ S_EF = V ( N_Ē ∩ S_ĒF̄ ) = {0}.

As in the previous section we now want to calculate the index of the DAEs modelling the electrical circuits in figures 1.2 and 1.3.

Example 2.14 For (1.6) we calculate

    G_0 = [0 0 0]
          [0 C 0]
          [0 0 0]

and N_0 = R × {0} × R. Choose Q_0 = diag(1, 0, 1) to find that

    G_1 = G_0 + B Q_0 = [ G 0 1]
                        [-G C 0]
                        [ 1 0 0]

is nonsingular. For the circuit in figure 1.2 we therefore have index 1.

Example 2.15 Equation (1.8) can be written as

    [0]                        [ G -G 0]        [i(t)]
    [0] ( (0 0 1) x(t) )' +    [-G  G 1] x(t) = [ 0  ]
    [L]                        [ 0 -1 0]        [ 0  ]

leading to

    G_0 = [0 0 0]
          [0 0 0],    N_0 = R × R × {0}.
          [0 0 L]

With Q_0 = diag(1, 1, 0) it turns out that

    G_1 = [ G -G 0]
          [-G  G 0]
          [ 0 -1 L]

is singular and N_1 = { z ∈ R³ | ∃α ∈ R: z_1 = z_2 = αL, z_3 = α }.

    Q_1 = [0 0 L]
          [0 0 L]
          [0 0 1]

is a projector onto N_1 satisfying Q_1 Q_0 = 0. Finally

    G_2 = [ G -G 0]
          [-G  G 1]
          [ 0 -1 L]

is nonsingular. Thus the index is 2. Note that the terms C_{i+1} disappear from (2.16) here, as the projectors chosen do not depend on t.

Nevertheless, in general the derivatives of C_{i+1} appearing in the definition of B_{i+1} in the sequence (2.16) are necessary in order to determine the index correctly. We will illustrate this in the next example, which can be found in [25] as well.

Example 2.16 The DAE

    x_2' = q_1 - x_1 =: f(x_1),    (2.19a)
    x_3' = q_2 + η(t) x_1 - (1 + η'(t)) x_2 - η(t) q_1 =: g(x_1, x_2, x_3),    (2.19b)
    0 = q_3 - η(t) x_2 - x_3 =: h(x_2, x_3)    (2.19c)

is easily checked to have differentiation index 3, as repeated differentiation of (2.19c) yields

    0 = q_3' - q_2 + x_2,
    0 = q_3'' - q_2' + x_2' = q_3'' - q_2' + q_1 - x_1,
    x_1 = q_3'' - q_2' + q_1.

The index does not depend on the value of η. We now write (2.19) as

    [1 0]                            [1  0    0]          [q_1]
    [0 1] ( [0  1    0] x(t) )' +    [0  1    0] x(t) =   [q_2]
    [0 0]   [0  η(t) 1]              [0  η(t) 1]          [q_3]

with a properly stated leading term and calculate the sequence (2.16). One finds det G_3 = 1, so (2.19) is a regular DAE with index 3 independently of η.

However, if we dropped the terms C_{i+1} in (2.16) and defined

    G̃_{i+1} = G̃_i + B̃_i Q̃_i,    B̃_{i+1} = B̃_i P̃_i

with G̃_0 = AD and B̃_0 = B, we would obtain det G̃_3 = 1 + η, showing that G̃_3 is singular for η = -1. Thus the use of the simpler version of B_i would lead to an index criterion not recognizing the index properly.

The previous example gives rise to investigating the relationship between G̃_i and G_i further. Due to G_{i+1} P_i = G_i, the matrix G̃_{i+1} may be written as

    G̃_{i+1} = G_{i+1} ( I + P_i D⁻ C_i' D P_0 ··· P_{i-1} Q_i ).

For low indices we thus find G̃_0 = G_0, G̃_1 = G_1 and

    G̃_2 = G_2 ( I + P_1 D⁻ C_1' D P_0 Q_1 )

with the nonsingular factor I + P_1 D⁻ C_1' D P_0 Q_1. The matrices G̃_2 and G_2 therefore have common rank, and we had to choose an index 3 problem in example 2.16 to show the necessity of the second term in the definition of B_{i+1}.

We do not have to restrict ourselves to linear DAEs (2.14). Nonlinear DAEs

    A( x(t), t ) ( d(x(t), t) )' + b( x(t), t ) = 0    (2.20)

can also be considered. For (2.20) the index µ is defined in such a way that all linearizations along solutions have the same index µ in the sense of definition 2.11. The index 1 case is studied extensively in [6]. We have already made use of this approach when investigating the transistor amplifier example in section 1.3. More information on nonlinear DAEs can be found in [27, 29].

2.3.1 Some technical details

In order to define the sequence (2.16) we introduced the generalized reflexive inverse D⁻ of D. Here we want to provide a short summary of the properties of generalized matrix inverses [4]. For a rectangular matrix M ∈ L(R^m, R^n), a matrix M⁻ ∈ L(R^n, R^m) is called a generalized inverse of M if

    M M⁻ M = M.

If the condition M⁻ M M⁻ = M⁻ holds as well, then M⁻ is called a reflexive generalized inverse of M. Observe that for any reflexive generalized inverse M⁻ of M the matrices

    ( M⁻M )² = M⁻ M M⁻ M = M⁻ M,    ( M M⁻ )² = M M⁻ M M⁻ = M M⁻

are projectors. Reflexive generalized inverses are not uniquely determined. Uniqueness is obtained if we require M⁻M and MM⁻ to be special projectors. We could, for instance, require them to be ortho-projectors,

    ( M⁻M )^T = M⁻M,    ( MM⁻ )^T = MM⁻.

In this case M⁻ is called the Moore-Penrose inverse of M, often denoted by M⁺.

In the case of DAEs with properly stated leading terms we used the projectors P_0(t) ∈ L(R^m) and R(t) ∈ L(R^n) to determine D⁻(t) ∈ L(R^n, R^m) uniquely: D⁻(t) is the reflexive generalized inverse of D(t) defined by

    D D⁻ D = D,    D⁻ D D⁻ = D⁻,    D D⁻ = R,    D⁻ D = P_0.    (2.21)

If D̃⁻ were another generalized inverse satisfying (2.21), then

    D̃⁻ = D̃⁻ D D̃⁻ = D̃⁻ R = D̃⁻ D D⁻ = P_0 D⁻ = D⁻ D D⁻ = D⁻.

In definition 2.11 the condition

    Q_{i+1} Q_j = 0,    j = 0, ..., i,    i ≥ 0,

is required. We will show briefly that the projectors Q_i in the sequence (2.16) can always be chosen to satisfy this condition. If for a given DAE (2.14) there were an index i such that N_{i+1} ∩ N_i ≠ {0}, then (2.14) would not be a regular DAE, as all subsequent matrices G_j would be singular. Thus N_0 ∩ N_1 = {0} is a necessary condition for a regular DAE, and the projector Q_1 onto N_1 can be chosen such that N_0 ⊆ ker Q_1. For an index i let the projectors Q_j for j = 1, ..., i satisfy Q_j Q_k = 0, k = 0, ..., j-1. Then N_{i+1} ∩ N_i = {0} implies N_{i+1} ∩ N_j = {0} for j = 1, ..., i, and Q_{i+1} can be chosen such that

    N_0 ⊕ N_1 ⊕ ··· ⊕ N_i ⊆ ker Q_{i+1}.

2.4 Other index concepts

As seen in the previous sections a DAE can be assigned an index in several ways. In the case of linear equations with constant coefficients all index notions coincide with the Kronecker index. Apart from that, each index definition stresses different aspects of the DAE under consideration.
While the differentiation index aims at finding possible reformulations in terms of ordinary differential equations, the tractability index is used to study DAEs without the use of derivative arrays. There are several other index concepts available. Here we want to introduce some of them briefly.

2.4.1 The perturbation index

The perturbation index was introduced for nonlinear DAEs

    f( x'(t), x(t) ) = 0    (2.23)

by Hairer, Lubich and Roche in [4]. Equation (2.23) has perturbation index µ along a solution x on I = [0, T] if µ is the smallest integer such that, for all functions x̂ having a defect

    f( x̂'(t), x̂(t) ) = δ(t),

there exists on I an estimate

    | x̂(t) - x(t) | ≤ C ( | x̂(0) - x(0) | + max_{0≤ξ≤t} |δ(ξ)| + ··· + max_{0≤ξ≤t} |δ^{(µ-1)}(ξ)| )

whenever the expression on the right-hand side is sufficiently small. Here C denotes a constant which depends only on f and the length of I. The perturbation index measures the sensitivity of solutions with respect to perturbations of the given problem [5].

2.4.2 The geometric index

Here we present the geometric index as it is introduced in [38]. Consider the autonomous DAE

    f( x'(t), x(t) ) = 0    (2.24)

and assume that M_0 = f⁻¹(0) is a smooth submanifold of R^m × R^m. Then the DAE (2.24) can be written as (x', x) ∈ M_0. Each solution has to satisfy x ∈ W_0 = π(M_0), where π : R^m × R^m → R^m is the canonical projection onto the second component. If W_0 is a submanifold of R^m, then (x', x) belongs to the tangent bundle T W_0 of W_0. In other words, (x', x) ∈ M_1 = M_0 ∩ T W_0. M_1 is called the first reduction of M_0. Iterating this process, we obtain a sequence M_0, M_1, M_2, ... of manifolds, where M_{i+1} is the first reduction of M_i and (x', x) ∈ ∩_{i≥0} M_i. The geometric index is defined as the smallest integer µ such that M_µ = M_{µ+1}. This index notion was introduced in [33] and studied extensively in [3] by Rabier and Rheinboldt.
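The perturbation index can be made tangible with the circuit DAE (1.8). Its solution contains e_2 = L di/dt, so perturbing the input current by ε sin(ωt) changes e_2 by εLω cos(ωt): the error depends on δ', not just on δ, which is the signature of perturbation index 2. The sketch below evaluates this exact error expression; L = 1, ε and the sample frequencies are illustrative assumptions.

```python
import math

# Perturbation sensitivity of the circuit DAE (1.8): a defect
# eps*sin(omega*t) in the input current i(t) perturbs the component
# e2 = L di/dt by exactly eps*L*omega*cos(omega*t).
L, eps = 1.0, 1e-6

def e2_error(omega, t):
    # exact difference between perturbed and unperturbed e2
    return eps * L * omega * math.cos(omega * t)

grid = [k * 0.01 for k in range(100)]
small = max(abs(e2_error(1e2, t)) for t in grid)
large = max(abs(e2_error(1e6, t)) for t in grid)
print(small, large)  # the error scales with omega, not with eps alone
```

Although the defect has the same tiny amplitude ε in both cases, the solution error grows without bound as the perturbation frequency increases; a bound in terms of max |δ| alone is impossible, so the index is at least 2.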

2.4.3 The strangeness index

This index notion is a generalization of the Kronecker index to DAEs

E(t) x'(t) + F(t) x(t) = q(t), t ∈ I ⊆ R, (2.25)

with time-dependent coefficients. It is due to Kunkel and Mehrmann [22]. The matrices U and V in theorem 2.2 now depend on t, i.e. (2.25) is transformed to

U E V y' + (U F V + U E V') y = U q ⇔ Ê y' + F̂ y = q̂.

The pairs of matrix functions (E, F) and (Ê, F̂) are said to be globally equivalent. For fixed t ∈ I define matrices T(t), T̂(t), Z(t) and V(t) such that the column vectors of T(t), T̂(t), Z(t) and V(t) span the subspaces ker E(t), im E(t)^T, ker E(t)^T and the orthogonal complement of im (Z(t)^T F(t) T(t)), respectively. Use these matrices to define

r(t) = rank E(t), a(t) = rank (Z(t)^T F(t) T(t)), s(t) = rank (V(t)^T Z(t)^T F(t) T̂(t)), d(t) = r(t) - s(t), u(t) = m - r(t) - a(t) - s(t).

We assume that the functions r, s and a are constant on I. Then (E, F) is globally equivalent to the pair

Ẽ = [ I_s 0 0 0 0 ; 0 I_d 0 0 0 ; 0 0 0 0 0 ; 0 0 0 0 0 ; 0 0 0 0 0 ],
F̃ = [ 0 F_12 0 F_14 F_15 ; 0 0 0 F_24 F_25 ; 0 0 I_a 0 0 ; I_s 0 0 0 0 ; 0 0 0 0 0 ],

with block row and column sizes s, d, a, s and u. The proof can be found in [2]. The value s is called the strangeness of the pair (E, F). Denote (E, F) by (E_0, F_0) and s_0 = s. From the canonical form a new pair (E_1, F_1) is derived, and similarly we define the strangeness s_1 of the pair (E_1, F_1). If we repeat the procedure described above, we arrive at a sequence of pairs (E_i, F_i), i >= 0, each having strangeness s_i. The strangeness index or s-index is then defined by

μ = min { i = 0, 1, 2, ... | s_i = 0 }.

Relations between the tractability index and the strangeness index are given in [36].
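A sketch of how the rank quantities r, a and s might be computed for a constant pair (E, F); the concrete pair below is this author's example (a classical strangeness-1 pair), not taken from the notes:

```python
import numpy as np
from scipy.linalg import null_space, orth

# The pair E = [[0,1],[0,0]], F = I corresponds to the DAE
#   x2' + x1 = q1,  x2 = q2,
# whose solution x1 = q1 - q2', x2 = q2 involves a derivative of q.
E = np.array([[0.0, 1.0], [0.0, 0.0]])
F = np.eye(2)

T    = null_space(E)      # columns span ker E
That = orth(E.T)          # columns span im E^T
Z    = null_space(E.T)    # columns span ker E^T
r = np.linalg.matrix_rank(E)
a = np.linalg.matrix_rank(Z.T @ F @ T)
V = null_space((Z.T @ F @ T).T)   # spans the complement of im(Z^T F T)
s = np.linalg.matrix_rank(V.T @ Z.T @ F @ That)
d, u = r - s, E.shape[0] - r - a - s
print(r, a, s, d, u)      # 1 0 1 0 0: the pair has strangeness s = 1
```

With s = 1 and d = u = 0 there is no dynamic component at all; one further step of the procedure removes the strangeness.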

3 Solvability of linear DAEs with properly stated leading term

In this section we consider linear differential-algebraic equations

A(t) (D(t) x(t))' + B(t) x(t) = q(t), t ∈ I (3.1)

with properly stated leading terms as in definition 2.9. A, B and D are continuous matrix functions with

D(t) ∈ L(R^m, R^n), A(t) ∈ L(R^n, R^m), B(t) ∈ L(R^m, R^m), q(t) ∈ R^m.

A function x : I → R^m is said to be a solution of (3.1) if x ∈ C^1_D(I, R^m) = { x ∈ C(I, R^m) | Dx ∈ C^1(I, R^n) } satisfies (3.1) pointwise. As in the previous section we define for t ∈ I pointwise G_0 = AD, B_0 = B and for i >= 0

N_i = ker G_i, S_i = { z ∈ R^m | B_i z ∈ im G_i } = { z ∈ R^m | B z ∈ im G_i },
Q_i = Q_i^2, im Q_i = N_i, P_i = I - Q_i, (3.2)
G_{i+1} = G_i + B_i Q_i, B_{i+1} = B_i P_i - G_{i+1} D^- C_{i+1}' D P_0 ··· P_i, C_{i+1} = D P_0 ··· P_{i+1} D^-.

D^- is again the reflexive generalized inverse of D from section 2.3. For completeness we repeat the definition of the index μ from the previous section.

Definition 3.1 The DAE (3.1) with properly stated leading term is said to be a regular DAE with tractability index μ on the interval I if there is a sequence (3.2) such that G_i has constant rank r_i on I,

Q_i ∈ C(I, L(R^m)), D P_0 ··· P_i D^- ∈ C^1(I, L(R^n)), i >= 0,
Q_{i+1} Q_j = 0, j = 0, ..., i, i >= 0,

and 0 <= r_0 <= ... <= r_{μ-1} < m, r_μ = m. (3.1) is said to be a regular DAE if it is regular with some index μ.

The material presented here is mainly taken from [25], [1] and [26].
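For constant coefficients the derivative term in B_{i+1} vanishes, so the sequence (3.2) can be formed directly. A sketch on a small index-2 example (the concrete matrices are this author's choice, not from the notes):

```python
import numpy as np

# The DAE  x2' + x1 = q1, x2 = q2,  written as A(Dx)' + Bx = q with a
# properly stated leading term.  With constant coefficients B_1 = B P_0.
A = np.array([[1.0], [0.0]])
D = np.array([[0.0, 1.0]])
B = np.eye(2)

G0 = A @ D                                 # rank 1, N_0 = span{e_1}
Q0 = np.diag([1.0, 0.0]); P0 = np.eye(2) - Q0
G1 = G0 + B @ Q0                           # [[1,1],[0,0]], still singular
Q1 = np.array([[0.0, -1.0],
               [0.0,  1.0]])               # onto N_1 = span{(1,-1)}, Q1 @ Q0 = 0
G2 = G1 + (B @ P0) @ Q1                    # [[1,1],[0,1]], nonsingular
ranks = [np.linalg.matrix_rank(G) for G in (G0, G1, G2)]
print(ranks)                               # [1, 1, 2] -> tractability index 2
```

The constant ranks r_0 = r_1 = 1 < m = 2 = r_2 exhibit definition 3.1 with μ = 2.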

3.1 Decoupling of linear index-1 DAEs

Let (3.1) be a regular index-1 DAE with properly stated leading term. Due to definition 3.1 the matrix G_1 is nonsingular.

Lemma 3.2 The matrices of sequence (3.2) satisfy

(a) P_0 = D^- D, P_0 D^- = D^-, D P_0 = D, D P_0 D^- = D D^- = R,
(b) R D = D D^- D = D,
(c) A = A R = A D D^-,
(d) Q_0 = G_1^{-1} B Q_0,
(e) P_0 = G_1^{-1} A D,
(f) P_0 x = P_0 y ⇔ D P_0 x = D P_0 y, i.e. by (a), ⇔ D x = D y.

Proof: (a) and (b) are just the properties of the reflexive generalized inverse D^-. Remember that R ∈ C^1(I, L(R^n)) is the smooth projector function realizing the decomposition ker A(t) ⊕ im D(t) = R^n provided by the properly stated leading term. ker A = ker R implies (c). G_1 Q_0 = A D Q_0 + B Q_0^2 = B Q_0 proves (d). Similarly G_1 P_0 = A D P_0 + B Q_0 P_0 = A D shows (e). For (f) we only have to show one direction. If D P_0 z = 0, then P_0 z ∈ ker D = ker A D = ker P_0 and thus P_0 z = 0. ker D = ker A D holds due to the properly stated leading term.

Let us assume that x is a solution of the DAE (3.1). Scaling with G_1^{-1} yields

A (Dx)' + B x = q ⇔ G_1^{-1} A (Dx)' + G_1^{-1} B x = G_1^{-1} q. (3.3)

Note that

G_1^{-1} A (Dx)' = G_1^{-1} A R (Dx)' = G_1^{-1} A D D^- (Dx)' = P_0 D^- (Dx)' (by (c), (e)),
G_1^{-1} B x = G_1^{-1} B P_0 x + G_1^{-1} B Q_0 x = G_1^{-1} B P_0 x + Q_0 x (by (d)).

Thus multiplication of (3.3) by P_0 and Q_0 from the left shows that

A (Dx)' + B x = q
⇔ { P_0 D^- (Dx)' + P_0 G_1^{-1} B P_0 x = P_0 G_1^{-1} q ; Q_0 G_1^{-1} B P_0 x + Q_0 x = Q_0 G_1^{-1} q }
⇔ { D P_0 D^- (Dx)' + D P_0 G_1^{-1} B P_0 x = D P_0 G_1^{-1} q ; Q_0 G_1^{-1} B P_0 x + Q_0 x = Q_0 G_1^{-1} q } (by (f))
⇔ { R (Dx)' + D G_1^{-1} B P_0 x = D G_1^{-1} q ; Q_0 G_1^{-1} B P_0 x + Q_0 x = Q_0 G_1^{-1} q } (by (a), (b))
⇔ { (Dx)' - R' Dx + D G_1^{-1} B P_0 x = D G_1^{-1} q ; Q_0 G_1^{-1} B P_0 x + Q_0 x = Q_0 G_1^{-1} q } (since Dx = R Dx gives (Dx)' = R' Dx + R (Dx)')
⇔ { (Dx)' = R' Dx - D G_1^{-1} B D^- Dx + D G_1^{-1} q ; Q_0 x = - Q_0 G_1^{-1} B D^- Dx + Q_0 G_1^{-1} q }.

Every solution x can therefore be written as

x = P_0 x + Q_0 x = D^- Dx + Q_0 x = D^- Dx - Q_0 G_1^{-1} B D^- Dx + Q_0 G_1^{-1} q
  = (I - Q_0 G_1^{-1} B) D^- u + Q_0 G_1^{-1} q (3.4)

where u = Dx is a solution of the ODE

u' = R' u - D G_1^{-1} B D^- u + D G_1^{-1} q. (3.5)

Definition 3.3 The explicit ordinary differential equation (3.5) is called the inherent regular ODE of the index-1 equation (3.1).

Lemma 3.4 (i) im D is a time varying invariant subspace of (3.5). (ii) (3.5) is independent of the choice of Q_0.

Proof: (i) Because of im D = im R = ker (I - R), multiplication of (3.5) by I - R gives (I - R) u' = (I - R) R' u = (I - R) R' R u, and v = (I - R) u satisfies the ODE v' = (I - R)' v. If there is t_* ∈ I such that u(t_*) = R(t_*) u(t_*) ∈ im D(t_*), then v(t_*) = 0. This means v(t) = 0 and thus u(t) = R(t) u(t) for every t.

(ii) Let Q̂_0 be another projector with im Q̂_0 = N_0 and let P̂_0, D̂^- be defined as in (2.6). Then Ĝ_1 = G_1 (I + Q_0 Q̂_0 P_0) implies Ĝ_1^{-1} = (I - Q_0 Q̂_0 P_0) G_1^{-1} and D Ĝ_1^{-1} = D G_1^{-1}. Finally note that

D Ĝ_1^{-1} B D̂^- = D G_1^{-1} B P̂_0 D^- = D G_1^{-1} B D^- - D G_1^{-1} B Q̂_0 D^- = D G_1^{-1} B D^- - D Q_0 Q̂_0 D^- = D G_1^{-1} B D^- (by (a), (d)).

The decoupling procedure above and lemma 3.4 enable us to prove existence and uniqueness of solutions for the index-1 DAE (3.1).

Theorem 3.5 Let (3.1) be a regular index-1 DAE. For each d ∈ im D(t_0), t_0 ∈ I, the initial value problem

A(t) (D(t) x(t))' + B(t) x(t) = q(t), D(t_0) x(t_0) = d (3.6)

is uniquely solvable in C^1_D(I, R^m).

Proof: There is exactly one solution u ∈ C^1(I, R^n) of the inherent ODE u' = R' u - D G_1^{-1} B D^- u + D G_1^{-1} q satisfying the initial condition u(t_0) = d. Lemma 3.4 shows that u(t) = R(t) u(t) for every t. Therefore x = (I - Q_0 G_1^{-1} B) D^- u + Q_0 G_1^{-1} q ∈ C^1_D(I, R^m) is a solution of (3.6) satisfying Dx = u. The decoupling process shows the uniqueness. Note that the initial condition D(t_0) x(t_0) = d for d ∈ im D(t_0) can be replaced by D(t_0) x(t_0) = D(t_0) x_0, x_0 ∈ R^m.
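A sketch of theorem 3.5 on a minimal index-1 example; the concrete data (A, D, B, q) are this author's choice, not taken from the notes:

```python
import numpy as np
from scipy.integrate import solve_ivp

# The DAE  x1' + x1 = q1(t), x2 = q2(t)  as A(Dx)' + Bx = q:
A = np.array([[1.0], [0.0]]); D = np.array([[1.0, 0.0]]); B = np.eye(2)
q1, q2 = np.sin, np.cos

# Here Q0 = diag(0,1), G1 = A@D + B@Q0 = I is nonsingular (index 1), and
# the inherent regular ODE (3.5) reduces to u' = -u + q1(t) with u = Dx = x1.
sol = solve_ivp(lambda t, u: -u + q1(t), (0.0, 5.0), [0.3],
                rtol=1e-10, atol=1e-12, dense_output=True)

# x = (I - Q0 G1^{-1} B) D^- u + Q0 G1^{-1} q reduces to x1 = u, x2 = q2(t):
# the algebraic component is read off from q, the dynamic one from u.
u_exact = lambda t: 0.5 * (np.sin(t) - np.cos(t)) + 0.8 * np.exp(-t)  # u(0) = 0.3
t = 5.0
x = np.array([sol.sol(t)[0], q2(t)])
assert abs(x[0] - u_exact(t)) < 1e-7
print("x(5) =", x)
```

Only the initial value d = D(t_0) x(t_0) = x_1(t_0) may be prescribed; the algebraic component x_2 is fixed by q, exactly as the theorem states.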

3.2 Decoupling of linear index-2 DAEs

We now want to repeat the same argument for linear index-2 differential-algebraic equations. We assume that (3.1) is an index-2 DAE with properly stated leading term. Due to definition 3.1 and lemma 2.3 we have N_1(t) ⊕ S_1(t) = R^m. In this section we choose Q_1 to be the canonical projector onto N_1 along S_1. Lemma 2.3 also implies Q_1 Q_0 = Q_1 G_2^{-1} B_1 Q_0 = 0, as required in definition 3.1. For the sequence (3.2) to make sense we have to assume D P_1 D^- ∈ C^1(I, L(R^n)). Then D Q_1 D^- = - D P_1 D^- + D D^- = - D P_1 D^- + R is also smooth. Note that D Q_1 D^- and D P_1 D^- are projector functions. In addition to (a), (b), (c) and (f) from lemma 3.2 we now have

Lemma 3.6
(g) Q_1 = Q_1 G_2^{-1} B_1,
(h) G_2^{-1} A D = P_1 P_0,
(i) G_2^{-1} B = G_2^{-1} B P_0 P_1 + P_1 D^- (D P_1 D^-)' D Q_1 + Q_1 + Q_0,
(j) Q_1 x = Q_1 y ⇔ D Q_1 x = D Q_1 y,
(k) Ω Ω' Ω = 0 for every projector function Ω ∈ C^1(I, L(R^n)).

Proof: (g) follows from lemma 2.3, (h) can be proved similar to (e) in lemma 3.2, and (i) is a consequence of B = B P_0 + B Q_0 = B P_0 P_1 + B P_0 Q_1 + B Q_0 and

G_2 Q_0 = B Q_0, G_2 Q_1 = B_1 Q_1, B P_0 Q_1 = B_1 Q_1 + G_2 P_1 D^- (D P_1 D^-)' D Q_1.

To show (j) assume that D Q_1 z = 0. Then Q_1 z ∈ ker D = ker P_0 and Q_1 z = Q_1^2 z = Q_1 P_0 Q_1 z = 0. Finally, 0 = ((I - Ω) Ω)' implies 0 = (I - Ω)' Ω + (I - Ω) Ω' = - Ω' Ω + (I - Ω) Ω'; multiplying by Ω from the left yields (k).

In order to decouple (3.1) in the index-2 case we again assume that x is a solution of the DAE. Since G_2 is nonsingular, we find

A (Dx)' + B x = q ⇔ G_2^{-1} A (Dx)' + G_2^{-1} B x = G_2^{-1} q (3.7)
⇔ P_1 P_0 D^- (Dx)' + G_2^{-1} B P_0 P_1 x + P_1 D^- (D P_1 D^-)' D Q_1 x + Q_1 x + Q_0 x = G_2^{-1} q

using (a), (c), (h) and (i). Due to I = P_0 P_1 + Q_0 P_1 + Q_1 we can decouple (3.7) by multiplying with P_0 P_1, Q_0 P_1 and Q_1 respectively. (3.1) is therefore equivalent to the system

P_0 P_1 D^- (Dx)' + P_0 P_1 G_2^{-1} B P_0 P_1 x + P_0 P_1 D^- (D P_1 D^-)' D Q_1 x = P_0 P_1 G_2^{-1} q, (3.8a)
Q_0 P_1 D^- (Dx)' + Q_0 P_1 G_2^{-1} B P_0 P_1 x + Q_0 P_1 D^- (D P_1 D^-)' D Q_1 x + Q_0 x = Q_0 P_1 G_2^{-1} q, (3.8b)
Q_1 G_2^{-1} B P_0 P_1 x + Q_1 x = Q_1 G_2^{-1} q. (3.8c)

With (a) and (f) equation (3.8a) takes the form

D P_1 D^- (Dx)' + D P_1 G_2^{-1} B P_0 P_1 x + D P_1 D^- (D P_1 D^-)' D Q_1 x = D P_1 G_2^{-1} q.

Use the product rule of differentiation to find D P_1 D^- (Dx)' = (D P_1 x)' - (D P_1 D^-)' Dx. On the other hand D P_1 D^- (D P_1 D^-)' D Q_1 = (D P_1 D^-)' D Q_1, since (I - D P_1 D^-)(D P_1 D^-)' = (D P_1 D^-)' D P_1 D^- and D P_1 D^- D Q_1 = D P_1 P_0 Q_1 = 0, so that (3.8a) is equivalent to

(D P_1 x)' - (D P_1 D^-)' D P_1 x + D P_1 G_2^{-1} B D^- D P_1 x = D P_1 G_2^{-1} q. (3.8a')

A similar analysis involving (g), (j) and (k) from lemma 3.6 yields

- Q_0 Q_1 D^- (D Q_1 x)' + Q_0 Q_1 D^- (D Q_1 D^-)' D P_1 x + Q_0 P_1 G_2^{-1} B D^- D P_1 x + Q_0 x = Q_0 P_1 G_2^{-1} q, (3.8b')
D Q_1 x = D Q_1 G_2^{-1} q. (3.8c')

Each solution x of (3.1) can thus be written as

x = P_0 x + Q_0 x = D^- Dx + Q_0 x = D^- D P_1 x + D^- D Q_1 x + Q_0 x (3.9)
  = K D^- u - Q_0 Q_1 D^- (D Q_1 D^-)' u + (Q_0 P_1 + P_0 Q_1) G_2^{-1} q + Q_0 Q_1 D^- (D Q_1 G_2^{-1} q)'

where K = I - Q_0 P_1 G_2^{-1} B and u = D P_1 x satisfies the ordinary differential equation

u' - (D P_1 D^-)' u + D P_1 G_2^{-1} B D^- u = D P_1 G_2^{-1} q.

As in the index-1 case this ODE will be referred to as the inherent regular ODE.

Definition 3.7 The explicit ordinary differential equation

u' = (D P_1 D^-)' u - D P_1 G_2^{-1} B D^- u + D P_1 G_2^{-1} q (3.10)

is called the inherent regular ODE of the index-2 equation (3.1).

For the index-2 case we now prove the lemma corresponding to lemma 3.4 from the previous section.

Lemma 3.8 (i) im D P_1 is a time varying invariant subspace of (3.10). (ii) (3.10) is independent of the choice of Q_0 and thus uniquely determined by the problem data.

Proof: To prove (i), carry out a similar analysis as in the proof of lemma 3.4, but with R replaced by D P_1 D^-. To see (ii), consider another projector Q̂_0 with im Q̂_0 = N_0 and the relation Ĝ_1 = G_1 (I + Q_0 Q̂_0 P_0). The subspaces N̂_1 = (I - Q_0 Q̂_0 P_0) N_1 and Ŝ_1 = S_1 are given in terms of N_1 and S_1, so that Q̂_1 = (I - Q_0 Q̂_0 P_0) Q_1 is the canonical projector onto N̂_1 along Ŝ_1. This implies D P̂_1 D̂^- = D P_1 D^-. A direct computation relating Ĝ_2 to G_2 then shows that D P_1 Ĝ_2^{-1} = D P_1 G_2^{-1} and D P_1 Ĝ_2^{-1} B D̂^- = D P_1 G_2^{-1} B D^-, i.e. (3.10) is independent of the choice of Q_0.

As in the previous section we are now able to prove existence and uniqueness of solutions for regular index-2 DAEs with properly stated leading terms. We make use of the function space

C^1_{D Q_1 G_2^{-1}}(I, R^m) = { z ∈ C(I, R^m) | D Q_1 G_2^{-1} z ∈ C^1(I, R^n) }.

Theorem 3.9 Let (3.1) be a regular index-2 DAE with q ∈ C^1_{D Q_1 G_2^{-1}}(I, R^m). For each d ∈ im D(t_0) P_1(t_0), t_0 ∈ I, the initial value problem

A(t) (D(t) x(t))' + B(t) x(t) = q(t), D(t_0) P_1(t_0) x(t_0) = d (3.11)

is uniquely solvable in C^1_D(I, R^m).

Proof: Solve the inherent regular ODE (3.10) with initial value u(t_0) = d. Lemma 3.8 yields u(t) ∈ im D(t) P_1(t) for every t, and

x = K D^- u - Q_0 Q_1 D^- (D Q_1 D^-)' u + (Q_0 P_1 + P_0 Q_1) G_2^{-1} q + Q_0 Q_1 D^- (D Q_1 G_2^{-1} q)'

is the desired solution of (3.11). The initial condition D(t_0) P_1(t_0) x(t_0) = d can be replaced by D(t_0) P_1(t_0) x(t_0) = D(t_0) P_1(t_0) x_0 for x_0 ∈ R^m.

3.3 Remarks

In sections 1.3 and 1.4 we presented examples of nonlinear differential-algebraic equations f((Dx)', x, t) = 0, where the solution could be expressed as

x(t) = D(t)^- u(t) + Q(t) ω(u(t), t), t ∈ I.

u was the solution of

u'(t) = R'(t) u(t) + D(t) ω(u(t), t), u(t_0) = D(t_0) x_0 (3.12)

and ω was implicitly defined by F(ω, u, t) = f(D ω, D^- u + Q ω, t) = 0. The ordinary differential equation (3.12) is thus only available theoretically.
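To illustrate the index-2 decoupling of section 3.2 concretely, the canonical projector Q_1 and property (g) can be checked on a small constant-coefficient example (this author's example, not from the notes): for the DAE x2' + x1 = q1, x2 = q2 one has N_1 = span{(1,-1)^T} and S_1 = span{e_1}.

```python
import numpy as np

# Matrix sequence for A = [[1],[0]], D = [0,1], B = I (constant coefficients,
# so B_1 = B P_0 and no derivative terms appear).
B  = np.eye(2)
Q0 = np.diag([1.0, 0.0]); P0 = np.eye(2) - Q0
B1 = B @ P0
G1 = np.array([[1.0, 1.0], [0.0, 0.0]])      # G1 = A@D + B@Q0, singular
Q1 = np.array([[0.0, -1.0],
               [0.0,  1.0]])                 # canonical projector onto N_1 along S_1
G2 = G1 + B1 @ Q1                            # nonsingular: index 2

assert np.allclose(Q1 @ Q1, Q1)                               # projector
assert np.allclose(Q1 @ np.array([1.0, -1.0]), [1.0, -1.0])   # im Q1 = N_1
assert np.allclose(Q1 @ np.array([1.0, 0.0]), 0.0)            # S_1 = ker Q1
assert np.allclose(Q1 @ Q0, 0.0)                              # Q1 Q0 = 0
assert np.allclose(Q1, Q1 @ np.linalg.inv(G2) @ B1)           # property (g)
print("canonical Q1 verified")
```

With this Q_1 the term Q_0 Q_1 D^- (D Q_1 G_2^{-1} q)' in the solution representation produces the derivative -q_2' entering x_1 = q_1 - q_2', which is why theorem 3.9 needs the extra smoothness of q.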

In this section we made use of the sequence (3.2) established in the context of the tractability index in order to perform a refined analysis of linear DAEs with properly stated leading terms. We were able to find explicit expressions of (3.2) for these equations with index 1 and 2. This detailed analysis led us to results about existence and uniqueness of solutions for DAEs with low index. We were able to figure out precisely what initial conditions are to be posed, namely D(t_0) x(t_0) = D(t_0) x_0 and D(t_0) P_1(t_0) x(t_0) = D(t_0) P_1(t_0) x_0 in the index-1 and index-2 case respectively. These initial conditions guarantee that solutions u of the inherent regular ODEs (3.5) and (3.10) lie in the corresponding invariant subspace. Let us stress that only those solutions of the inherent regular ODE that lie in the invariant subspace are relevant for the DAE. Even if this subspace varies with t, we know the dynamical degree of freedom to be rank G_0 and rank G_0 + rank G_1 - m for index 1 and 2 respectively [25].

The results presented can be generalized for arbitrary index μ. The inherent regular ODE for an index-μ DAE with properly stated leading term is given in [25]. There it is also proved that the index μ is invariant under linear transformations and refactorizations of the original DAE and that the inherent regular ODE remains unchanged.

Finally let us point out that we assumed A, D and B to be continuous only. The required smoothness of the coefficients in the standard formulation

E x' + F x = q (3.13)

was replaced by the requirement on certain subspaces to be spanned by smooth functions. Namely, the projectors R, D P_1 D^- and D Q_1 D^- are differentiable if D N_1 and D S_1 are spanned by continuously differentiable functions [1]. However, if the DAE

A (Dx)' + B x = q (3.14)

is given with smooth coefficients and we orient on C^1-solutions, then comparisons with concepts for (3.13) can be made via A D x' + (B + A D') x = q.
On the other hand, if E has constant rank on I and P_E ∈ C^1(I, L(R^m)) is a projector function along ker E (i.e. ker P_E = ker E), we can reformulate (3.13) as E (P_E x)' + (F - E P_E') x = q with a properly stated leading term.

4 Numerical methods for linear DAEs with properly stated leading term

The last part is devoted to studying the application of numerical methods to linear DAEs of index μ = 1 and μ = 2. From the previous section we know that (3.4) and (3.9) are representations of the exact solution in the index-1 and index-2 case, respectively. In fact, it turns out that (3.4) is just a special case of (3.9). To see this, observe that for μ = 1 the matrix G_1 is nonsingular so that Q_1 = 0, P_1 = I and G_2 = G_1. We therefore treat index-1 and index-2 equations simultaneously in this section. We will show how to apply Runge-Kutta methods to DAEs

A(t) (D(t) x(t))' + B(t) x(t) = q(t) (4.1)

with properly stated leading terms. Results presented here follow the lines of [7, 8, 26]. Runge-Kutta methods for DAEs are also studied in [4].

Consider the s-stage Runge-Kutta method given by the Butcher tableau (c | A ; β^T) with A = (α_ij) ∈ L(R^s), c = A e, β ∈ R^s, e = (1, ..., 1)^T ∈ R^s. When using it to solve an ordinary differential equation

x'(t) = F(x(t), t) (4.2)

numerically with stepsize h, an approximation x_l to the exact solution x(t_l) is used to calculate the approximation x_{l+1} to x(t_{l+1}) = x(t_l + h) via

x_{l+1} = x_l + h Σ_{i=1}^s β_i X'_{li}, (4.3a)

where X'_{li} is defined by

X'_{li} = F(X_{li}, t_{li}), i = 1, ..., s, (4.3b)

and t_{li} = t_l + c_i h are intermediate timesteps. The internal stages X_{li} are given by

X_{li} = x_l + h Σ_{j=1}^s α_ij X'_{lj}. (4.3c)

Observe that (4.3a) and (4.3c) depend on the method and only (4.3b) depends on the equation (4.2). If the ODE (4.2) is replaced by the DAE f(x'(t), x(t), t) = 0, we also replace (4.3b) by

f(X'_{li}, X_{li}, t_{li}) = 0, i = 1, ..., s (4.3b')

in the Runge-Kutta scheme. The matrix f_{x'} is singular. Therefore some components of the increments X'_{li} need to be calculated from (4.3c), as seen in the following trivial example.

Example 4.1 If f(x', x, t) = x - q(t), then x(t) = q(t). The numerical method (4.3a), (4.3b'), (4.3c) now reads

x_{l+1} = x_l + h Σ_{i=1}^s β_i X'_{li}, q(t_{li}) = X_{li} = x_l + h Σ_{j=1}^s α_ij X'_{lj}.

This system can be solved if and only if A is nonsingular. In the following, we always assume A to be nonsingular. This leads to an expression of X'_{li} in terms of X_{lj}.

Lemma 4.2 Let A = (α_ij) be nonsingular with A^{-1} = (ᾱ_ij). Then

X_{li} = x_l + h Σ_{j=1}^s α_ij X'_{lj}, i = 1, ..., s ⇔ X'_{li} = (1/h) Σ_{j=1}^s ᾱ_ij (X_{lj} - x_l), i = 1, ..., s.

Proof: If ⊗ denotes the Kronecker product and e = (1, ..., 1)^T ∈ R^s, then

(X_{l1}, ..., X_{ls})^T = e ⊗ x_l + h (A ⊗ I_m) (X'_{l1}, ..., X'_{ls})^T ⇔ (X'_{l1}, ..., X'_{ls})^T = (1/h) (A^{-1} ⊗ I_m) ((X_{l1}, ..., X_{ls})^T - e ⊗ x_l).

Now consider the linear DAE (4.1) with continuous matrix functions A(t) ∈ L(R^n, R^m), D(t) ∈ L(R^m, R^n), B(t) ∈ L(R^m, R^m) and a properly stated leading term. When applying the numerical scheme (4.3a), (4.3b'), (4.3c) we don't want to lose the additional information provided by the properly stated leading term. According to lemma 4.2 we therefore replace (4.3c) by

[DX]'_{li} = (1/h) Σ_{j=1}^s ᾱ_ij (D_{lj} X_{lj} - D_l x_l) (4.3c')

and solve the system

A_{li} [DX]'_{li} + B_{li} X_{li} = q_{li}, i = 1, ..., s (4.3b'')

for X_{li}. Here we write D_l = D(t_l), D_{li} = D(t_{li}), A_{li} = A(t_{li}) and so on. Using this ansatz the output value

x_{l+1} = x_l + h Σ_{i=1}^s β_i (1/h) Σ_{j=1}^s ᾱ_ij (X_{lj} - x_l) = (1 - β^T A^{-1} e) x_l + Σ_{i=1}^s Σ_{j=1}^s β_i ᾱ_ij X_{lj}

is computed. For RadauIIA methods this expression simplifies considerably.

Definition 4.3 The s-stage RadauIIA method is uniquely determined by requiring the simplifying conditions C(s), D(s), c_s = 1 and choosing c_1, ..., c_s to be the zeros of p_s - p_{s-1}, where p_s denotes the Gauss-Legendre polynomial shifted to [0, 1]. For the conditions C(s), D(s) see [3]. The Gauss-Legendre polynomial p_s is orthogonal to every polynomial of degree less than s. RadauIIA methods are A- and L-stable and have order p = 2s - 1. The last row of A coincides with β^T [3, 5].
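The defining properties can be checked numerically for s = 2; the tableau values below are the standard 2-stage RadauIIA coefficients (order 2s - 1 = 3):

```python
import numpy as np

# 2-stage RadauIIA Butcher tableau.
A = np.array([[5/12, -1/12],
              [3/4,   1/4]])
beta = np.array([3/4, 1/4])
c = A @ np.ones(2)                       # c = A e = (1/3, 1)

assert np.allclose(c, [1/3, 1.0])        # c_s = 1
assert np.allclose(A[-1], beta)          # last row of A equals beta^T
assert np.isclose(1 - beta @ np.linalg.inv(A) @ np.ones(2), 0.0)
print("1 - beta^T A^{-1} e = 0 verified")
```

The last assertion is exactly the quantity appearing in the output formula above; it vanishes because beta^T A^{-1} = (0, ..., 0, 1).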

Lemma 4.4 For the s-stage RadauIIA method 1 - β^T A^{-1} e = 0 holds and the output value computed by (4.3a), (4.3b''), (4.3c') is given by the last stage X_{ls}.

Proof: β^T A^{-1} = e_s^T A A^{-1} = (0, ..., 0, 1), since β^T is the last row of A. Hence 1 - β^T A^{-1} e = 1 - 1 = 0 and

x_{l+1} = (1 - β^T A^{-1} e) x_l + Σ_{i=1}^s Σ_{j=1}^s β_i ᾱ_ij X_{lj} = ((0, ..., 0, 1) ⊗ I_m) (X_{l1}, ..., X_{ls})^T = X_{ls}.

To summarize these results we present the following algorithm for solving the DAE (4.1) using RadauIIA methods.

Algorithm 4.5 Given an approximation x_l to the exact solution x(t_l) and a stepsize h, solve

A_{li} [DX]'_{li} + B_{li} X_{li} = q_{li}, i = 1, ..., s (4.3b'')

for X_{li}, where [DX]'_{li} is given by

[DX]'_{li} = (1/h) Σ_{j=1}^s ᾱ_ij (D_{lj} X_{lj} - D_l x_l). (4.3c')

Return the output value x_{l+1} = X_{ls} as an approximation to x(t_{l+1}) = x(t_l + h).

The exact solution x of (4.1) satisfies

x(t) ∈ M_0(t) = { z ∈ R^m | B(t) z - q(t) ∈ im A(t) D(t) } for every t.

Since X_{li} ∈ M_0(t_{li}) for every i and c_s = 1, we have x_{l+1} = X_{ls} ∈ M_0(t_{ls}) = M_0(t_{l+1}) for every RadauIIA method. Thus the RadauIIA approximation satisfies the algebraic constraint, and RadauIIA methods are especially suited for solving DAEs [4, 5].

4.1 Decoupling of the discretized equation

Algorithm 4.5 replaces the DAE A (Dx)' + B x = q (4.1) by the discretized problem

A_{li} [DX]'_{li} + B_{li} X_{li} = q_{li}, i = 1, ..., s. (4.4)

As seen in section 3.2, the analytic solution x of index-1 and index-2 equations (4.1) can be represented as

x = K D^- u - Q_0 Q_1 D^- (D Q_1 D^-)' u + (Q_0 P_1 + P_0 Q_1) G_2^{-1} q + Q_0 Q_1 D^- (D Q_1 G_2^{-1} q)'. (4.5)
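Algorithm 4.5 can be sketched in a few lines with the 1-stage RadauIIA method (implicit Euler, A = [1], β = 1, c_1 = 1); the index-1 example data are this author's choice, not taken from the notes:

```python
import numpy as np

# Index-1 DAE  x1' + x1 = q1, x2 = q2  as  A(Dx)' + Bx = q.
A = np.array([[1.0], [0.0]]); D = np.array([[1.0, 0.0]]); B = np.eye(2)
q = lambda t: np.array([np.sin(t), np.cos(t)])

h = 1e-3
x = np.array([0.0, 1.0])                  # consistent initial value at t = 0
for l in range(1000):                     # integrate up to t = 1
    t_stage = (l + 1) * h                 # c_1 = 1: the stage sits at t_{l+1}
    # solve  A (D X - D x_l)/h + B X = q(t_stage)  for the stage X = x_{l+1}
    M = (A @ D) / h + B
    x = np.linalg.solve(M, q(t_stage) + (A @ D @ x) / h)

x1_exact = 0.5 * (np.sin(1.0) - np.cos(1.0) + np.exp(-1.0))  # with x1(0) = 0
print(x)
assert abs(x[1] - np.cos(1.0)) < 1e-12    # constraint x2 = q2 met exactly
assert abs(x[0] - x1_exact) < 5e-3        # O(h) accuracy in the dynamic part
```

The algebraic component lies on M_0(t_{l+1}) in every step, exactly as stated above, while the dynamic component carries the usual discretization error of the method.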


More information

α = u v. In other words, Orthogonal Projection

α = u v. In other words, Orthogonal Projection Orthogonal Projection Given any nonzero vector v, it is possible to decompose an arbitrary vector u into a component that points in the direction of v and one that points in a direction orthogonal to v

More information

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 10

Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 10 Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 10 Boundary Value Problems for Ordinary Differential Equations Copyright c 2001. Reproduction

More information

15.062 Data Mining: Algorithms and Applications Matrix Math Review

15.062 Data Mining: Algorithms and Applications Matrix Math Review .6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

More information

Lecture 7 Circuit analysis via Laplace transform

Lecture 7 Circuit analysis via Laplace transform S. Boyd EE12 Lecture 7 Circuit analysis via Laplace transform analysis of general LRC circuits impedance and admittance descriptions natural and forced response circuit analysis with impedances natural

More information

Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear

More information

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

More information

University of Lille I PC first year list of exercises n 7. Review

University of Lille I PC first year list of exercises n 7. Review University of Lille I PC first year list of exercises n 7 Review Exercise Solve the following systems in 4 different ways (by substitution, by the Gauss method, by inverting the matrix of coefficients

More information

Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

More information

Separable First Order Differential Equations

Separable First Order Differential Equations Separable First Order Differential Equations Form of Separable Equations which take the form = gx hy or These are differential equations = gxĥy, where gx is a continuous function of x and hy is a continuously

More information

SIXTY STUDY QUESTIONS TO THE COURSE NUMERISK BEHANDLING AV DIFFERENTIALEKVATIONER I

SIXTY STUDY QUESTIONS TO THE COURSE NUMERISK BEHANDLING AV DIFFERENTIALEKVATIONER I Lennart Edsberg, Nada, KTH Autumn 2008 SIXTY STUDY QUESTIONS TO THE COURSE NUMERISK BEHANDLING AV DIFFERENTIALEKVATIONER I Parameter values and functions occurring in the questions belowwill be exchanged

More information

Lecture 1: Schur s Unitary Triangularization Theorem

Lecture 1: Schur s Unitary Triangularization Theorem Lecture 1: Schur s Unitary Triangularization Theorem This lecture introduces the notion of unitary equivalence and presents Schur s theorem and some of its consequences It roughly corresponds to Sections

More information

THREE DIMENSIONAL GEOMETRY

THREE DIMENSIONAL GEOMETRY Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,

More information

4 Lyapunov Stability Theory

4 Lyapunov Stability Theory 4 Lyapunov Stability Theory In this section we review the tools of Lyapunov stability theory. These tools will be used in the next section to analyze the stability properties of a robot controller. We

More information

MATH 425, PRACTICE FINAL EXAM SOLUTIONS.

MATH 425, PRACTICE FINAL EXAM SOLUTIONS. MATH 45, PRACTICE FINAL EXAM SOLUTIONS. Exercise. a Is the operator L defined on smooth functions of x, y by L u := u xx + cosu linear? b Does the answer change if we replace the operator L by the operator

More information

A Brief Review of Elementary Ordinary Differential Equations

A Brief Review of Elementary Ordinary Differential Equations 1 A Brief Review of Elementary Ordinary Differential Equations At various points in the material we will be covering, we will need to recall and use material normally covered in an elementary course on

More information

Notes on Symmetric Matrices

Notes on Symmetric Matrices CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

More information

FACTORING POLYNOMIALS IN THE RING OF FORMAL POWER SERIES OVER Z

FACTORING POLYNOMIALS IN THE RING OF FORMAL POWER SERIES OVER Z FACTORING POLYNOMIALS IN THE RING OF FORMAL POWER SERIES OVER Z DANIEL BIRMAJER, JUAN B GIL, AND MICHAEL WEINER Abstract We consider polynomials with integer coefficients and discuss their factorization

More information

1 Review of Least Squares Solutions to Overdetermined Systems

1 Review of Least Squares Solutions to Overdetermined Systems cs4: introduction to numerical analysis /9/0 Lecture 7: Rectangular Systems and Numerical Integration Instructor: Professor Amos Ron Scribes: Mark Cowlishaw, Nathanael Fillmore Review of Least Squares

More information

Mechanics 1: Conservation of Energy and Momentum

Mechanics 1: Conservation of Energy and Momentum Mechanics : Conservation of Energy and Momentum If a certain quantity associated with a system does not change in time. We say that it is conserved, and the system possesses a conservation law. Conservation

More information

Numerical Methods for Differential Equations

Numerical Methods for Differential Equations Numerical Methods for Differential Equations Course objectives and preliminaries Gustaf Söderlind and Carmen Arévalo Numerical Analysis, Lund University Textbooks: A First Course in the Numerical Analysis

More information

FOUNDATIONS OF ALGEBRAIC GEOMETRY CLASS 22

FOUNDATIONS OF ALGEBRAIC GEOMETRY CLASS 22 FOUNDATIONS OF ALGEBRAIC GEOMETRY CLASS 22 RAVI VAKIL CONTENTS 1. Discrete valuation rings: Dimension 1 Noetherian regular local rings 1 Last day, we discussed the Zariski tangent space, and saw that it

More information

Simulating Stochastic Differential Equations

Simulating Stochastic Differential Equations Monte Carlo Simulation: IEOR E473 Fall 24 c 24 by Martin Haugh Simulating Stochastic Differential Equations 1 Brief Review of Stochastic Calculus and Itô s Lemma Let S t be the time t price of a particular

More information

Differentiation of vectors

Differentiation of vectors Chapter 4 Differentiation of vectors 4.1 Vector-valued functions In the previous chapters we have considered real functions of several (usually two) variables f : D R, where D is a subset of R n, where

More information

v w is orthogonal to both v and w. the three vectors v, w and v w form a right-handed set of vectors.

v w is orthogonal to both v and w. the three vectors v, w and v w form a right-handed set of vectors. 3. Cross product Definition 3.1. Let v and w be two vectors in R 3. The cross product of v and w, denoted v w, is the vector defined as follows: the length of v w is the area of the parallelogram with

More information

Equations, Inequalities & Partial Fractions

Equations, Inequalities & Partial Fractions Contents Equations, Inequalities & Partial Fractions.1 Solving Linear Equations 2.2 Solving Quadratic Equations 1. Solving Polynomial Equations 1.4 Solving Simultaneous Linear Equations 42.5 Solving Inequalities

More information

CS3220 Lecture Notes: QR factorization and orthogonal transformations

CS3220 Lecture Notes: QR factorization and orthogonal transformations CS3220 Lecture Notes: QR factorization and orthogonal transformations Steve Marschner Cornell University 11 March 2009 In this lecture I ll talk about orthogonal matrices and their properties, discuss

More information

II. Linear Systems of Equations

II. Linear Systems of Equations II. Linear Systems of Equations II. The Definition We are shortly going to develop a systematic procedure which is guaranteed to find every solution to every system of linear equations. The fact that such

More information

Second Order Linear Partial Differential Equations. Part I

Second Order Linear Partial Differential Equations. Part I Second Order Linear Partial Differential Equations Part I Second linear partial differential equations; Separation of Variables; - point boundary value problems; Eigenvalues and Eigenfunctions Introduction

More information

Controllability and Observability

Controllability and Observability Controllability and Observability Controllability and observability represent two major concepts of modern control system theory These concepts were introduced by R Kalman in 1960 They can be roughly defined

More information

1 Lecture 3: Operators in Quantum Mechanics

1 Lecture 3: Operators in Quantum Mechanics 1 Lecture 3: Operators in Quantum Mechanics 1.1 Basic notions of operator algebra. In the previous lectures we have met operators: ˆx and ˆp = i h they are called fundamental operators. Many operators

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J. Olver 6. Eigenvalues and Singular Values In this section, we collect together the basic facts about eigenvalues and eigenvectors. From a geometrical viewpoint,

More information

General Theory of Differential Equations Sections 2.8, 3.1-3.2, 4.1

General Theory of Differential Equations Sections 2.8, 3.1-3.2, 4.1 A B I L E N E C H R I S T I A N U N I V E R S I T Y Department of Mathematics General Theory of Differential Equations Sections 2.8, 3.1-3.2, 4.1 Dr. John Ehrke Department of Mathematics Fall 2012 Questions

More information

The continuous and discrete Fourier transforms

The continuous and discrete Fourier transforms FYSA21 Mathematical Tools in Science The continuous and discrete Fourier transforms Lennart Lindegren Lund Observatory (Department of Astronomy, Lund University) 1 The continuous Fourier transform 1.1

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

Let s first see how precession works in quantitative detail. The system is illustrated below: ...

Let s first see how precession works in quantitative detail. The system is illustrated below: ... lecture 20 Topics: Precession of tops Nutation Vectors in the body frame The free symmetric top in the body frame Euler s equations The free symmetric top ala Euler s The tennis racket theorem As you know,

More information

ASEN 3112 - Structures. MDOF Dynamic Systems. ASEN 3112 Lecture 1 Slide 1

ASEN 3112 - Structures. MDOF Dynamic Systems. ASEN 3112 Lecture 1 Slide 1 19 MDOF Dynamic Systems ASEN 3112 Lecture 1 Slide 1 A Two-DOF Mass-Spring-Dashpot Dynamic System Consider the lumped-parameter, mass-spring-dashpot dynamic system shown in the Figure. It has two point

More information

Duality in General Programs. Ryan Tibshirani Convex Optimization 10-725/36-725

Duality in General Programs. Ryan Tibshirani Convex Optimization 10-725/36-725 Duality in General Programs Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: duality in linear programs Given c R n, A R m n, b R m, G R r n, h R r : min x R n c T x max u R m, v R r b T

More information

24. The Branch and Bound Method

24. The Branch and Bound Method 24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no

More information

Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013

Notes on Orthogonal and Symmetric Matrices MENU, Winter 2013 Notes on Orthogonal and Symmetric Matrices MENU, Winter 201 These notes summarize the main properties and uses of orthogonal and symmetric matrices. We covered quite a bit of material regarding these topics,

More information

Mathematical Physics, Lecture 9

Mathematical Physics, Lecture 9 Mathematical Physics, Lecture 9 Hoshang Heydari Fysikum April 25, 2012 Hoshang Heydari (Fysikum) Mathematical Physics, Lecture 9 April 25, 2012 1 / 42 Table of contents 1 Differentiable manifolds 2 Differential

More information

1. First-order Ordinary Differential Equations

1. First-order Ordinary Differential Equations Advanced Engineering Mathematics 1. First-order ODEs 1 1. First-order Ordinary Differential Equations 1.1 Basic concept and ideas 1.2 Geometrical meaning of direction fields 1.3 Separable differential

More information

Lecture 5 Least-squares

Lecture 5 Least-squares EE263 Autumn 2007-08 Stephen Boyd Lecture 5 Least-squares least-squares (approximate) solution of overdetermined equations projection and orthogonality principle least-squares estimation BLUE property

More information

More than you wanted to know about quadratic forms

More than you wanted to know about quadratic forms CALIFORNIA INSTITUTE OF TECHNOLOGY Division of the Humanities and Social Sciences More than you wanted to know about quadratic forms KC Border Contents 1 Quadratic forms 1 1.1 Quadratic forms on the unit

More information

8.2. Solution by Inverse Matrix Method. Introduction. Prerequisites. Learning Outcomes

8.2. Solution by Inverse Matrix Method. Introduction. Prerequisites. Learning Outcomes Solution by Inverse Matrix Method 8.2 Introduction The power of matrix algebra is seen in the representation of a system of simultaneous linear equations as a matrix equation. Matrix algebra allows us

More information

Numerical Methods for Differential Equations

Numerical Methods for Differential Equations Numerical Methods for Differential Equations Chapter 1: Initial value problems in ODEs Gustaf Söderlind and Carmen Arévalo Numerical Analysis, Lund University Textbooks: A First Course in the Numerical

More information

1 Lecture: Integration of rational functions by decomposition

1 Lecture: Integration of rational functions by decomposition Lecture: Integration of rational functions by decomposition into partial fractions Recognize and integrate basic rational functions, except when the denominator is a power of an irreducible quadratic.

More information

3. INNER PRODUCT SPACES

3. INNER PRODUCT SPACES . INNER PRODUCT SPACES.. Definition So far we have studied abstract vector spaces. These are a generalisation of the geometric spaces R and R. But these have more structure than just that of a vector space.

More information

Envelope Theorem. Kevin Wainwright. Mar 22, 2004

Envelope Theorem. Kevin Wainwright. Mar 22, 2004 Envelope Theorem Kevin Wainwright Mar 22, 2004 1 Maximum Value Functions A maximum (or minimum) value function is an objective function where the choice variables have been assigned their optimal values.

More information

Linear Programming. March 14, 2014

Linear Programming. March 14, 2014 Linear Programming March 1, 01 Parts of this introduction to linear programming were adapted from Chapter 9 of Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest and Stein [1]. 1

More information