# An Introduction to Mathematical Optimal Control Theory Version 0.2


An Introduction to Mathematical Optimal Control Theory, Version 0.2

By Lawrence C. Evans, Department of Mathematics, University of California, Berkeley

Chapter 1: Introduction
Chapter 2: Controllability, bang-bang principle
Chapter 3: Linear time-optimal control
Chapter 4: The Pontryagin Maximum Principle
Chapter 5: Dynamic programming
Chapter 6: Game theory
Chapter 7: Introduction to stochastic control theory
Appendix: Proofs of the Pontryagin Maximum Principle
Exercises
References

PREFACE

These notes build upon a course I taught at the University of Maryland during the fall semester. My great thanks go to Martino Bardi, who took careful notes, saved them all these years and recently mailed them to me. Faye Yeager typed up his notes into a first draft of these lectures as they now appear. Scott Armstrong read over the notes and suggested many improvements: thanks, Scott. Stephen Moye of the American Math Society helped me a lot with AMSTeX versus LaTeX issues. My thanks also to Atilla Yilmaz for spotting lots of typos and errors, which I have corrected.

I have radically modified much of the notation (to be consistent with my other writings), updated the references, added several new examples, and provided a proof of the Pontryagin Maximum Principle. As this is a course for undergraduates, I have dispensed in certain proofs with various measurability and continuity issues, and as compensation have added various critiques as to the lack of total rigor.

This current version of the notes is not yet complete, but meets I think the usual high standards for material posted on the internet. Please email me with any corrections or comments.

CHAPTER 1: INTRODUCTION

1.1 The basic problem
1.2 Some examples
1.3 A geometric solution
1.4 Overview

1.1 THE BASIC PROBLEM.

DYNAMICS. We open our discussion by considering an ordinary differential equation (ODE) having the form

(1.1)  ẋ(t) = f(x(t))  (t > 0),  x(0) = x⁰.

We are here given the initial point x⁰ ∈ ℝⁿ and the function f : ℝⁿ → ℝⁿ. The unknown is the curve x : [0, ∞) → ℝⁿ, which we interpret as the dynamical evolution of the state of some system.

CONTROLLED DYNAMICS. We generalize a bit and suppose now that f depends also upon some "control" parameters belonging to a set A ⊆ ℝᵐ, so that f : ℝⁿ × A → ℝⁿ. Then if we select some value a ∈ A and consider the corresponding dynamics

ẋ(t) = f(x(t), a)  (t > 0),  x(0) = x⁰,

we obtain the evolution of our system when the parameter is constantly set to the value a.

The next possibility is that we change the value of the parameter as the system evolves. For instance, suppose we define the function α : [0, ∞) → A this way:

α(t) = a₁  (0 ≤ t ≤ t₁),  α(t) = a₂  (t₁ < t ≤ t₂),  α(t) = a₃  (t₂ < t ≤ t₃),  etc.

for times 0 < t₁ < t₂ < t₃ < … and parameter values a₁, a₂, a₃, … ∈ A; and we then solve the dynamical equation

(1.2)  ẋ(t) = f(x(t), α(t))  (t > 0),  x(0) = x⁰.

The picture illustrates the resulting evolution. The point is that the system may behave quite differently as we change the control parameters.
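The piecewise-constant control dynamics (1.2) can be sketched numerically. Below is a minimal, hedged example: the dynamics f(x, a) = a·x, the switching time, and the parameter values are illustrative choices (not taken from the notes), integrated by the explicit Euler method.

```python
# Minimal sketch of the controlled dynamics (1.2): xdot = f(x, alpha(t))
# with a piecewise-constant control alpha.  All parameter choices below
# are illustrative, not from the notes.

def euler_solve(f, alpha, x0, T, steps=10_000):
    """Integrate xdot = f(x, alpha(t)), x(0) = x0, by explicit Euler steps."""
    dt = T / steps
    x, t = x0, 0.0
    for _ in range(steps):
        x = x + dt * f(x, alpha(t))
        t += dt
    return x

# Scalar example: f(x, a) = a * x, so each constant control value a
# produces exponential growth (or decay) at rate a.
def f(x, a):
    return a * x

# Piecewise-constant control: a1 = 1 on [0, 1], a2 = -1 afterwards.
def alpha(t):
    return 1.0 if t <= 1.0 else -1.0

x_end = euler_solve(f, alpha, x0=1.0, T=2.0)
# Exact answer: e^{1} * e^{-1} = 1, so x_end should be close to 1.
```

The same skeleton works for any f : ℝⁿ × A → ℝⁿ once the state is a vector; only the function f and the control schedule change.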

[Figure: Controlled dynamics]

More generally, we call a function α : [0, ∞) → A a control. Corresponding to each control, we consider the ODE

(ODE)  ẋ(t) = f(x(t), α(t))  (t > 0),  x(0) = x⁰,

and regard the trajectory x(·) as the corresponding response of the system.

NOTATION. (i) We will write

f(x, a) = (f¹(x, a), …, fⁿ(x, a))ᵀ

to display the components of f, and similarly put

x(t) = (x¹(t), …, xⁿ(t))ᵀ.

We will therefore write vectors as columns in these notes and use boldface for vector-valued functions, the components of which have superscripts.

(ii) We also introduce

A = {α : [0, ∞) → A | α(·) measurable}

to denote the collection of all admissible controls, where

α(t) = (α¹(t), …, αᵐ(t))ᵀ.

Note very carefully that our solution x(·) of (ODE) depends upon α(·) and the initial condition. Consequently our notation would be more precise, but more complicated, if we were to write

x(·) = x(·, α(·), x⁰),

displaying the dependence of the response x(·) upon the control and the initial value.

PAYOFFS. Our overall task will be to determine what is the "best" control for our system. For this we need to specify a specific payoff (or reward) criterion. Let us define the payoff functional

(P)  P[α(·)] := ∫₀ᵀ r(x(t), α(t)) dt + g(x(T)),

where x(·) solves (ODE) for the control α(·). Here r : ℝⁿ × A → ℝ and g : ℝⁿ → ℝ are given, and we call r the running payoff and g the terminal payoff. The terminal time T > 0 is given as well.

THE BASIC PROBLEM. Our aim is to find a control α*(·) which maximizes the payoff. In other words, we want

P[α*(·)] ≥ P[α(·)]  for all controls α(·) ∈ A.

Such a control α*(·) is called optimal.

This task presents us with these mathematical issues:

(i) Does an optimal control exist?
(ii) How can we characterize an optimal control mathematically?
(iii) How can we construct an optimal control?

These turn out to be sometimes subtle problems, as the following collection of examples illustrates.

1.2 EXAMPLES

EXAMPLE 1: CONTROL OF PRODUCTION AND CONSUMPTION. Suppose we own, say, a factory whose output we can control. Let us begin to construct a mathematical model by setting

x(t) = amount of output produced at time t ≥ 0.

We suppose that we consume some fraction of our output at each time, and likewise can reinvest the remaining fraction. Let us denote

α(t) = fraction of output reinvested at time t ≥ 0.

This will be our control, and is subject to the obvious constraint that 0 ≤ α(t) ≤ 1 for each time t ≥ 0. Given such a control, the corresponding dynamics are provided by the ODE

ẋ(t) = kα(t)x(t),  x(0) = x⁰,

the constant k > 0 modelling the growth rate of our reinvestment. Let us take as a payoff functional

P[α(·)] = ∫₀ᵀ (1 − α(t))x(t) dt.

The meaning is that we want to maximize our total consumption of the output, our consumption at a given time t being (1 − α(t))x(t). This model fits into our general framework for n = m = 1, once we put

A = [0, 1],  f(x, a) = kax,  r(x, a) = (1 − a)x,  g ≡ 0.

[Figure: A bang-bang control]

As we will see later in 4.4.2, an optimal control α*(·) is given by

α*(t) = 1 if 0 ≤ t ≤ t*,  α*(t) = 0 if t* < t ≤ T

for an appropriate switching time 0 ≤ t* ≤ T. In other words, we should reinvest all the output (and therefore consume nothing) up until time t*, and afterwards, we should consume everything (and therefore reinvest nothing). The switchover time t* will have to be determined. We call α*(·) a bang-bang control.

EXAMPLE 2: REPRODUCTIVE STRATEGIES IN SOCIAL INSECTS

The next example is from Chapter 2 of the book Caste and Ecology in Social Insects, by G. Oster and E. O. Wilson [O-W]. We attempt to model how social insects, say a population of bees, determine the makeup of their society.

Let us write T for the length of the season, and introduce the variables

w(t) = number of workers at time t
q(t) = number of queens
α(t) = fraction of colony effort devoted to increasing work force.

The control α is constrained by our requiring that 0 ≤ α(t) ≤ 1.

We continue to model by introducing dynamics for the numbers of workers and the number of queens. The worker population evolves according to

ẇ(t) = −μw(t) + bs(t)α(t)w(t),  w(0) = w⁰.

Here μ is a given constant (a death rate), b is another constant, and s(t) is the known rate at which each worker contributes to the bee economy. We suppose also that the population of queens changes according to

q̇(t) = −νq(t) + c(1 − α(t))s(t)w(t),  q(0) = q⁰,

for constants ν and c.

Our goal (or rather the bees') is to maximize the number of queens at time T:

P[α(·)] = q(T).

So in terms of our general notation, we have x(t) = (w(t), q(t))ᵀ and x⁰ = (w⁰, q⁰)ᵀ. We are taking the running payoff to be r ≡ 0, and the terminal payoff g(w, q) = q. The answer will again turn out to be a bang-bang control, as we will explain later.

EXAMPLE 3: A PENDULUM. We look next at a hanging pendulum, for which

θ(t) = angle at time t.

If there is no external force, then we have the equation of motion

θ̈(t) + λθ̇(t) + ω²θ(t) = 0,  θ(0) = θ₁,  θ̇(0) = θ₂,

the solution of which is a damped oscillation, provided λ > 0.
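The damped-oscillation claim for Example 3 can be checked numerically by rewriting the second-order equation θ̈ + λθ̇ + ω²θ = 0 as a first-order system in (θ, θ̇). The constants λ = 0.5, ω = 2 and the initial data below are illustrative choices, not from the notes.

```python
# Uncontrolled pendulum as a first-order system: x1 = theta, x2 = theta',
# so x1' = x2 and x2' = -lam*x2 - omega^2*x1.  With lam > 0 the motion is
# a damped oscillation and the state decays toward rest.
lam, omega = 0.5, 2.0     # illustrative damping and frequency

def f(x):
    x1, x2 = x
    return (x2, -lam * x2 - omega**2 * x1)

def simulate(x0, T=20.0, steps=200_000):
    dt = T / steps
    x = x0
    for _ in range(steps):
        dx = f(x)
        x = (x[0] + dt * dx[0], x[1] + dt * dx[1])   # explicit Euler step
    return x

theta_end, dtheta_end = simulate((1.0, 0.0))
# After T = 20 the damping factor e^{-lam*T/2} is about 0.007, so both
# theta and theta' should be very close to zero.
```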

Now let α(·) denote an applied torque, subject to the physical constraint that |α| ≤ 1. Our dynamics now become

θ̈(t) + λθ̇(t) + ω²θ(t) = α(t),  θ(0) = θ₁,  θ̇(0) = θ₂.

Define x¹(t) = θ(t), x²(t) = θ̇(t), and x(t) = (x¹(t), x²(t))ᵀ. Then we can write the evolution as the system

ẋ(t) = (ẋ¹, ẋ²)ᵀ = (θ̇, θ̈)ᵀ = (x², −λx² − ω²x¹ + α(t))ᵀ = f(x, α).

We introduce as well

P[α(·)] = −∫₀^τ 1 dt = −τ,

for

τ = τ(α(·)) = first time that x(τ) = 0  (that is, θ(τ) = θ̇(τ) = 0).

We want to maximize P[·], meaning that we want to minimize the time it takes to bring the pendulum to rest.

Observe that this problem does not quite fall within the general framework described in 1.1, since the terminal time is not fixed, but rather depends upon the control. This is called a fixed endpoint, free time problem.

EXAMPLE 4: A MOON LANDER. This model asks us to bring a spacecraft to a soft landing on the lunar surface, using the least amount of fuel. We introduce the notation

h(t) = height at time t
v(t) = velocity = ḣ(t)
m(t) = mass of spacecraft (changing as fuel is burned)
α(t) = thrust at time t.

We assume that 0 ≤ α(t) ≤ 1, and Newton's law tells us that

mḧ = −gm + α,

the right hand side being the difference of the gravitational force and the thrust of the rocket. This system is modeled by the ODE

v̇(t) = −g + α(t)/m(t)
ḣ(t) = v(t)
ṁ(t) = −kα(t).
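A hedged numerical sketch of the moon-lander ODE above: the constants g, k and the initial data are arbitrary illustrative choices, and a constant full thrust α ≡ 1 is applied throughout (no claim of optimality).

```python
# Moon-lander dynamics: vdot = -g + alpha/m, hdot = v, mdot = -k*alpha,
# integrated by explicit Euler under constant full thrust alpha = 1.
# All constants are illustrative, not from the notes.
g, k = 1.62, 0.5                  # gravity and fuel-burn constant (illustrative)
h, v, m = 100.0, 0.0, 10.0        # height, velocity, mass at t = 0
alpha = 1.0                       # constant full thrust

dt, T = 1e-4, 2.0
for _ in range(int(T / dt)):
    dv = -g + alpha / m
    h, v, m = h + dt * v, v + dt * dv, m - dt * k * alpha

# Since mdot = -k*alpha is constant, the mass equation integrates exactly:
m_expected = 10.0 - k * alpha * T
# Thrust alpha/m is at most 0.1 here, far below g, so the craft descends.
```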

[Figure: A spacecraft landing on the moon]

We summarize these equations in the form

ẋ(t) = f(x(t), α(t))

for x(t) = (v(t), h(t), m(t)). We want to minimize the amount of fuel used up, that is, to maximize the amount remaining once we have landed. Thus

P[α(·)] = m(τ),

where τ denotes the first time that h(τ) = v(τ) = 0. This is a variable endpoint problem, since the final time is not given in advance. We have also the extra constraints

h(t) ≥ 0,  m(t) ≥ 0.

EXAMPLE 5: ROCKET RAILROAD CAR. Imagine a railroad car powered by rocket engines on each side. We introduce the variables

q(t) = position at time t
v(t) = q̇(t) = velocity at time t
α(t) = thrust from rockets,

where −1 ≤ α(t) ≤ 1,

[Figure: A rocket car on a train track]

the sign depending upon which engine is firing. We want to figure out how to fire the rockets, so as to arrive at the origin with zero velocity in a minimum amount of time. Assuming the car has mass m, the law of motion is

mq̈(t) = α(t).

We rewrite (taking m = 1) by setting x(t) = (q(t), v(t))ᵀ. Then

ẋ(t) = [0 1; 0 0] x(t) + (0, 1)ᵀ α(t),  x(0) = x⁰ = (q⁰, v⁰)ᵀ.

Since our goal is to steer to the origin (0, 0) in minimum time, we take

P[α(·)] = −∫₀^τ 1 dt = −τ,  for τ = first time that q(τ) = v(τ) = 0.

1.3 A GEOMETRIC SOLUTION. To illustrate how actually to solve a control problem, in this last section we introduce some ad hoc calculus and geometry methods for the rocket car problem, Example 5 above.

First of all, let us guess that to find an optimal solution we will need only to consider the cases a = 1 or a = −1. In other words, we will focus our attention only upon those controls for which at each moment of time either the left or the right rocket engine is fired at full power. (We will later see in Chapter 2 some theoretical justification for looking only at such controls.)

CASE 1: Suppose first that α ≡ 1 for some time interval, during which

q̇ = v,  v̇ = 1.

Then vv̇ = q̇,

and so

(1/2)(v²)˙ = q̇.

Let t₀ belong to the time interval where α ≡ 1 and integrate from t₀ to t:

v²(t)/2 − v²(t₀)/2 = q(t) − q(t₀).

Consequently

(1.1)  v²(t) = 2q(t) + (v²(t₀) − 2q(t₀)) =: 2q(t) + b.

In other words, so long as the control is set for α ≡ 1, the trajectory stays on the curve v² = 2q + b for some constant b.

[Figure: the parabola v² = 2q + b]

CASE 2: Suppose now α ≡ −1 on some time interval. Then as above

q̇ = v,  v̇ = −1,

and hence

(1/2)(v²)˙ = −q̇.

Let t₁ belong to an interval where α ≡ −1 and integrate:

(1.2)  v²(t) = −2q(t) + (2q(t₁) + v²(t₁)) =: −2q(t) + c.

Consequently, as long as the control is set for α ≡ −1, the trajectory stays on the curve v² = −2q + c for some constant c.
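Formula (1.1) can be sanity-checked numerically: integrate q̇ = v, v̇ = 1 with small Euler steps and monitor the quantity v² − 2q, which should stay (nearly) equal to its initial value b. The starting point and step size are illustrative choices.

```python
# Numerical check of (1.1): with alpha = 1 the rocket-car trajectory
# (q(t), v(t)) stays on the parabola v^2 = 2q + b, b = v(0)^2 - 2 q(0).
q, v = 3.0, -1.0          # illustrative starting point
b = v**2 - 2*q            # the conserved constant in (1.1)

dt, drift = 1e-5, 0.0
for _ in range(100_000):  # one time unit of qdot = v, vdot = 1
    q, v = q + dt * v, v + dt * 1.0
    drift = max(drift, abs(v**2 - 2*q - b))
# Euler introduces only an O(dt) drift in v^2 - 2q, so drift stays tiny.
```

The identical check with v̇ = −1 confirms the Case 2 invariant v² + 2q = c.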

[Figure: the parabola v² = −2q + c]

GEOMETRIC INTERPRETATION. Formula (1.1) says that if α ≡ 1, then (q(t), v(t)) lies on a parabola of the form

v² = 2q + b.

Similarly, (1.2) says that if α ≡ −1, then (q(t), v(t)) lies on a parabola

v² = −2q + c.

Now we can design an optimal control α*(·), which causes the trajectory to jump between the families of right- and left-pointing parabolas, as drawn. Say we start at the black dot, and wish to steer to the origin. This we accomplish by first setting the control to the value α = −1, causing us to move down along the second family of parabolas. We then switch to the control α = 1, and thereupon move to a parabola from the first family, along which we move up and to the left, ending up at the origin. See the picture.

1.4 OVERVIEW. Here are the topics we will cover in this course:

Chapter 2: Controllability, bang-bang principle. In this chapter, we introduce the simplest class of dynamics, those linear in both the state x(·) and the control α(·), and derive algebraic conditions ensuring that the system can be steered into a given terminal state. We introduce as well some abstract theorems from functional analysis and employ them to prove the existence of so-called bang-bang optimal controls.

Chapter 3: Time-optimal control.

[Figure: How to get to the origin in minimal time]

In Chapter 3 we continue to study linear control problems, and turn our attention to finding optimal controls that steer our system into a given state as quickly as possible. We introduce a maximization principle useful for characterizing an optimal control, and will later recognize this as a first instance of the Pontryagin Maximum Principle.

Chapter 4: Pontryagin Maximum Principle. Chapter 4's discussion of the Pontryagin Maximum Principle and its variants is at the heart of these notes. We postpone proof of this important insight to the Appendix, preferring instead to illustrate its usefulness with many examples with nonlinear dynamics.

Chapter 5: Dynamic programming. Dynamic programming provides an alternative approach to designing optimal controls, assuming we can solve a nonlinear partial differential equation, called the Hamilton-Jacobi-Bellman equation. This chapter explains the basic theory, works out some examples, and discusses connections with the Pontryagin Maximum Principle.

Chapter 6: Game theory. We discuss briefly two-person, zero-sum differential games and how dynamic programming and maximum principle methods apply.

Chapter 7: Introduction to stochastic control theory. This chapter provides a very brief introduction to the control of stochastic differential equations by dynamic programming techniques. The Itô stochastic calculus tells us how the random effects modify the corresponding Hamilton-Jacobi-Bellman equation.

Appendix: Proof of the Pontryagin Maximum Principle. We provide here the proof of this important assertion, discussing clearly the key ideas.

CHAPTER 2: CONTROLLABILITY, BANG-BANG PRINCIPLE

2.1 Definitions
2.2 Quick review of linear ODE
2.3 Controllability of linear equations
2.4 Observability
2.5 Bang-bang principle
2.6 References

2.1 DEFINITIONS. We firstly recall from Chapter 1 the basic form of our controlled ODE:

(ODE)  ẋ(t) = f(x(t), α(t)),  x(0) = x⁰.

Here x⁰ ∈ ℝⁿ, f : ℝⁿ × A → ℝⁿ, α : [0, ∞) → A is the control, and x : [0, ∞) → ℝⁿ is the response of the system.

This chapter addresses the following basic

CONTROLLABILITY QUESTION: Given the initial point x⁰ and a "target" set S ⊆ ℝⁿ, does there exist a control steering the system to S in finite time?

For the time being we will therefore not introduce any payoff criterion that would characterize an optimal control, but instead will focus on the question as to whether or not there exist controls that steer the system to a given goal. In this chapter we will mostly consider the problem of driving the system to the origin S = {0}.

DEFINITION. We define the reachable set for time t to be

C(t) = set of initial points x⁰ for which there exists a control such that x(t) = 0,

and the overall reachable set

C = set of initial points x⁰ for which there exists a control such that x(t) = 0 for some finite time t.

Note that

C = ∪_{t≥0} C(t).

Hereafter, let Mⁿˣᵐ denote the set of all n × m matrices. We assume for the rest of this and the next chapter that our ODE is linear in both the state x(·) and the control α(·), and consequently has the form

(ODE)  ẋ(t) = Mx(t) + Nα(t)  (t > 0),  x(0) = x⁰,

where M ∈ Mⁿˣⁿ and N ∈ Mⁿˣᵐ. We assume the set A of control parameters is a cube in ℝᵐ:

A = [−1, 1]ᵐ = {a ∈ ℝᵐ | |aᵢ| ≤ 1, i = 1, …, m}.

2.2 QUICK REVIEW OF LINEAR ODE. This section records for later reference some basic facts about linear systems of ordinary differential equations.

DEFINITION. Let X(·) : ℝ → Mⁿˣⁿ be the unique solution of the matrix ODE

Ẋ(t) = MX(t)  (t ∈ ℝ),  X(0) = I.

We call X(·) a fundamental solution, and sometimes write

X(t) = e^{tM} := Σ_{k=0}^∞ tᵏMᵏ/k!,

the last formula being the definition of the exponential e^{tM}. Observe that

X⁻¹(t) = X(−t).

THEOREM 2.1 (SOLVING LINEAR SYSTEMS OF ODE). (i) The unique solution of the homogeneous system of ODE

ẋ(t) = Mx(t),  x(0) = x⁰

is

x(t) = X(t)x⁰ = e^{tM}x⁰.

(ii) The unique solution of the nonhomogeneous system

ẋ(t) = Mx(t) + f(t),  x(0) = x⁰

is

x(t) = X(t)x⁰ + X(t) ∫₀ᵗ X⁻¹(s)f(s) ds.

This expression is the variation of parameters formula.

2.3 CONTROLLABILITY OF LINEAR EQUATIONS.

According to the variation of parameters formula, the solution of (ODE) for a given control α(·) is

x(t) = X(t)x⁰ + X(t) ∫₀ᵗ X⁻¹(s)Nα(s) ds,

where X(t) = e^{tM}. Furthermore, observe that x⁰ ∈ C(t) if and only if

(2.1)  there exists a control α(·) ∈ A such that x(t) = 0,

if and only if

(2.2)  0 = X(t)x⁰ + X(t) ∫₀ᵗ X⁻¹(s)Nα(s) ds for some control α(·) ∈ A,

if and only if

(2.3)  x⁰ = −∫₀ᵗ X⁻¹(s)Nα(s) ds for some control α(·) ∈ A.

We make use of these formulas to study the reachable set:

THEOREM 2.2 (STRUCTURE OF REACHABLE SET). (i) The reachable set C is symmetric and convex.
(ii) Also, if x⁰ ∈ C(t̄), then x⁰ ∈ C(t) for all times t ≥ t̄.

DEFINITIONS. (i) We say a set S is symmetric if x ∈ S implies −x ∈ S.
(ii) The set S is convex if x, x̂ ∈ S and 0 ≤ λ ≤ 1 imply λx + (1 − λ)x̂ ∈ S.

Proof. 1. (Symmetry) Let t ≥ 0 and x⁰ ∈ C(t). Then x⁰ = −∫₀ᵗ X⁻¹(s)Nα(s) ds for some admissible control α ∈ A. Therefore −x⁰ = −∫₀ᵗ X⁻¹(s)N(−α(s)) ds, and −α ∈ A since the set A is symmetric. Therefore −x⁰ ∈ C(t), and so each set C(t) is symmetric. It follows that C is symmetric.

2. (Convexity) Take x⁰, x̂⁰ ∈ C, so that x⁰ ∈ C(t), x̂⁰ ∈ C(t̂) for appropriate times t, t̂ ≥ 0. Assume t ≤ t̂. Then

x⁰ = −∫₀ᵗ X⁻¹(s)Nα(s) ds for some control α ∈ A,
x̂⁰ = −∫₀^t̂ X⁻¹(s)Nα̂(s) ds for some control α̂ ∈ A.

Define a new control

α̃(s) := α(s) if 0 ≤ s ≤ t,  α̃(s) := 0 if s > t.

Then

x⁰ = −∫₀^t̂ X⁻¹(s)Nα̃(s) ds,

and hence x⁰ ∈ C(t̂). Now let 0 ≤ λ ≤ 1, and observe

λx⁰ + (1 − λ)x̂⁰ = −∫₀^t̂ X⁻¹(s)N(λα̃(s) + (1 − λ)α̂(s)) ds.

Therefore λx⁰ + (1 − λ)x̂⁰ ∈ C(t̂) ⊆ C.

3. Assertion (ii) follows from the foregoing argument, which shows that x⁰ ∈ C(t) implies x⁰ ∈ C(t̂) whenever t̂ ≥ t. □

A SIMPLE EXAMPLE. Let n = 2 and m = 1, A = [−1, 1], and write x(t) = (x¹(t), x²(t))ᵀ. Suppose

ẋ¹ = 0,  ẋ² = α(t).

This is a system of the form ẋ = Mx + Nα, for

M = [0 0; 0 0],  N = (0, 1)ᵀ.

Clearly C = {(x¹, x²) | x¹ = 0}, the x²-axis.

We next wish to establish some general algebraic conditions ensuring that C contains a neighborhood of the origin.

DEFINITION. The controllability matrix is

G = G(M, N) := [N, MN, M²N, …, M^{n−1}N],

an n × (mn) matrix.

THEOREM 2.3 (CONTROLLABILITY MATRIX). We have

rank G = n  if and only if  0 ∈ C°.

NOTATION. We write C° for the interior of the set C. Remember that

rank of G = number of linearly independent rows of G = number of linearly independent columns of G.

Clearly rank G ≤ n.
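The rank condition of Theorem 2.3 is easy to compute for small systems. A hedged sketch for n = 2, m = 1 with the double-integrator (rocket-car) matrices, where the rank of the square 2×2 matrix G = [N, MN] can be read off from its determinant:

```python
# Building the controllability matrix G = [N, MN] for a system with
# n = 2, m = 1 and checking rank G = n via the determinant (sufficient
# for a square 2x2 matrix).  Matrices are the double-integrator example.
M = [[0.0, 1.0], [0.0, 0.0]]
N = [0.0, 1.0]                    # the single column of N, as a vector

MN = [sum(M[i][k] * N[k] for k in range(2)) for i in range(2)]   # M @ N
G = [[N[0], MN[0]], [N[1], MN[1]]]                               # columns N, MN

det_G = G[0][0] * G[1][1] - G[0][1] * G[1][0]
full_rank = det_G != 0            # rank G = 2 = n: 0 lies in the interior of C
```

For larger n one would assemble [N, MN, …, M^{n−1}N] the same way and use any standard rank routine instead of a determinant.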

Proof. 1. Suppose first that rank G < n. This means that the linear span of the columns of G has dimension less than or equal to n − 1. Thus there exists a vector b ∈ ℝⁿ, b ≠ 0, orthogonal to each column of G. This implies bᵀG = 0, and so

bᵀN = bᵀMN = ⋯ = bᵀM^{n−1}N = 0.

2. We claim next that in fact

(2.4)  bᵀMᵏN = 0 for all nonnegative integers k.

To confirm this, recall that

p(λ) := det(λI − M)

is the characteristic polynomial of M. The Cayley-Hamilton Theorem states that

p(M) = 0.

So if we write

p(λ) = λⁿ + β_{n−1}λ^{n−1} + ⋯ + β₁λ + β₀,

then

p(M) = Mⁿ + β_{n−1}M^{n−1} + ⋯ + β₁M + β₀I = 0.

Therefore

Mⁿ = −β_{n−1}M^{n−1} − β_{n−2}M^{n−2} − ⋯ − β₁M − β₀I,

and so

bᵀMⁿN = bᵀ(−β_{n−1}M^{n−1} − ⋯)N = 0.

Similarly, bᵀM^{n+1}N = bᵀ(−β_{n−1}Mⁿ − ⋯)N = 0, etc. The claim (2.4) is proved.

Now notice that

bᵀX⁻¹(s)N = bᵀe^{−sM}N = bᵀ Σ_{k=0}^∞ (−s)ᵏMᵏ/k! N = Σ_{k=0}^∞ (−s)ᵏ/k! bᵀMᵏN = 0,

according to (2.4).

3. Assume next that x⁰ ∈ C(t). This is equivalent to having

x⁰ = −∫₀ᵗ X⁻¹(s)Nα(s) ds for some control α(·) ∈ A.

Then

b · x⁰ = −∫₀ᵗ bᵀX⁻¹(s)Nα(s) ds = 0.

This says that b is orthogonal to x⁰. In other words, C must lie in the hyperplane orthogonal to b. Consequently C° = ∅.

4. Conversely, assume 0 ∉ C°. Thus 0 ∉ C°(t) for all t > 0. Since C(t) is convex, there exists a supporting hyperplane to C(t) through 0. This means that there exists b ≠ 0 such that b · x⁰ ≤ 0 for all x⁰ ∈ C(t).

Choose any x⁰ ∈ C(t). Then

x⁰ = −∫₀ᵗ X⁻¹(s)Nα(s) ds

for some control α, and therefore

0 ≥ b · x⁰ = −∫₀ᵗ bᵀX⁻¹(s)Nα(s) ds.

Thus

∫₀ᵗ bᵀX⁻¹(s)Nα(s) ds ≥ 0 for all controls α(·).

We assert that therefore

(2.5)  bᵀX⁻¹(s)N ≡ 0,

a proof of which follows as a lemma below. We rewrite (2.5) as

(2.6)  bᵀe^{−sM}N ≡ 0.

Let s = 0 to see that bᵀN = 0. Next differentiate (2.6) with respect to s, to find that

bᵀ(−M)e^{−sM}N ≡ 0.

For s = 0 this says

bᵀMN = 0.

We repeatedly differentiate, to deduce

bᵀMᵏN = 0 for all k = 0, 1, …,

and so bᵀG = 0. This implies rank G < n, since b ≠ 0. □

LEMMA 2.4 (INTEGRAL INEQUALITIES). Assume that

(2.7)  ∫₀ᵗ bᵀX⁻¹(s)Nα(s) ds ≥ 0

for all α(·) ∈ A. Then

bᵀX⁻¹(s)N ≡ 0.

Proof. Replacing α by −α in (2.7), we see that

∫₀ᵗ bᵀX⁻¹(s)Nα(s) ds = 0

for all α(·) ∈ A. Define

v(s) := bᵀX⁻¹(s)N.

If v ≢ 0, then v(s₀) ≠ 0 for some s₀. Then there exists an interval I such that s₀ ∈ I and v ≠ 0 on I. Now define α(·) ∈ A this way:

α(s) = 0 (s ∉ I),  α(s) = (1/√n) v(s)ᵀ/|v(s)| (s ∈ I),

where |v| := (Σᵢ (vⁱ)²)^{1/2}. Then

0 = ∫₀ᵗ v(s) · α(s) ds = (1/√n) ∫_I (v(s) · v(s))/|v(s)| ds = (1/√n) ∫_I |v(s)| ds.

This implies the contradiction that v ≡ 0 in I. □

DEFINITION. We say the linear system (ODE) is controllable if C = ℝⁿ.

THEOREM 2.5 (CRITERION FOR CONTROLLABILITY). Let A be the cube [−1, 1]ᵐ in ℝᵐ. Suppose as well that rank G = n, and Re λ < 0 for each eigenvalue λ of the matrix M. Then the system (ODE) is controllable.

Proof. Since rank G = n, Theorem 2.3 tells us that C contains some ball B centered at 0. Now take any x⁰ ∈ ℝⁿ and consider the evolution

ẋ(t) = Mx(t),  x(0) = x⁰;

in other words, take the control α(·) ≡ 0. Since Re λ < 0 for each eigenvalue λ of M, the origin is asymptotically stable. So there exists a time T such that x(T) ∈ B. Thus x(T) ∈ B ⊆ C; and hence there exists a control α(·) ∈ A steering x(T) into 0 in finite time. □

EXAMPLE. We once again consider the rocket railroad car, from 1.2, for which n = 2, m = 1, A = [−1, 1], and

ẋ = [0 1; 0 0] x + (0, 1)ᵀ α.

Then

G = [N, MN] = [0 1; 1 0].

22 Therefore rankg = 2 = n. Also, the characteristic polynomial of the matrix M is ( ) λ 1 p(λ) = det(λi M) = det = λ 2. λ Since the eigenvalues are both, we fail to satisfy the hypotheses of Theorem 2.5. This example motivates the following extension of the previous theorem: THEOREM 2.6 (IMPROVED CRITERION FOR CONTROLLABIL- ITY). Assume rankg = n and Re λ for each eigenvalue λ of M. Then the system (ODE) is controllable. Proof. 1. If C R n, then the convexity of C implies that there exist a vector b and a real number µ such that (2.8) b x µ for all x C. Indeed, in the picture we see that b (x z ) ; and this implies (2.8) for µ := b z. We will derive a contradiction. 2. Given b, µ R, our intention is to find x C so that (2.8) fails. Recall x C if and only if there exist a time t > and a control α( ) A such that Then Define x = t X 1 (s)nα(s) ds. t b x = b T X 1 (s)nα(s) ds v(s) := b T X 1 (s)n 22

3. We assert that

(2.9)  v ≢ 0.

To see this, suppose instead that v ≡ 0. Then k times differentiate the expression bᵀX⁻¹(s)N with respect to s and set s = 0, to discover

bᵀMᵏN = 0 for k = 0, 1, 2, ….

This implies b is orthogonal to the columns of G, and so rank G < n. This is a contradiction to our hypothesis, and therefore (2.9) holds.

4. Next, define α(·) this way:

α(s) := −v(s)ᵀ/|v(s)| if v(s) ≠ 0,  α(s) := 0 if v(s) = 0.

Then

b · x⁰ = −∫₀ᵗ v(s)α(s) ds = ∫₀ᵗ |v(s)| ds.

We want to find a time t > 0 so that ∫₀ᵗ |v(s)| ds > μ. In fact, we assert that

(2.10)  ∫₀^∞ |v(s)| ds = +∞.

To begin the proof of (2.10), suppose instead that ∫₀^∞ |v(s)| ds < ∞, and introduce the function

φ(t) := ∫_t^∞ v(s) ds.

We will find an ODE φ satisfies. Take p(·) to be the characteristic polynomial of M. Then

p(−d/dt) v(t) = p(−d/dt) [bᵀe^{−tM}N] = bᵀ(p(M)e^{−tM})N = 0,

since p(M) = 0, according to the Cayley-Hamilton Theorem. But since p(−d/dt)v(t) ≡ 0 and φ̇ = −v, it follows that

(d/dt) p(−d/dt) φ(t) = −p(−d/dt) v(t) = 0.

Hence φ solves the (n + 1)th order ODE

(d/dt) p(−d/dt) φ(t) = 0.

We also know φ(·) ≢ constant, since φ̇ = −v ≢ 0. Let μ₁, …, μ_{n+1} be the solutions of μ p(−μ) = 0. According to ODE theory, we can write

φ(t) = sum of terms of the form pᵢ(t)e^{μᵢt}

for appropriate polynomials pᵢ(·). Furthermore, we see that μ_{n+1} = 0 and μ_k = −λ_k, where λ₁, …, λ_n are the eigenvalues of M. By assumption Re μ_k ≥ 0, for k = 1, …, n.

On the other hand, since ∫₀^∞ |v(s)| ds < ∞, we have |φ(t)| ≤ ∫_t^∞ |v(s)| ds → 0 as t → ∞; that is, φ(t) → 0 as t → ∞. This is a contradiction to the representation formula φ(t) = Σ pᵢ(t)e^{μᵢt}, with Re μᵢ ≥ 0 and φ nonconstant. Assertion (2.10) is proved.

5. Consequently, given any μ, there exists t > 0 such that

b · x⁰ = ∫₀ᵗ |v(s)| ds > μ,

a contradiction to (2.8). Therefore C = ℝⁿ. □

2.4 OBSERVABILITY. We again consider the linear system of ODE

(ODE)  ẋ(t) = Mx(t),  x(0) = x⁰,

where M ∈ Mⁿˣⁿ.

In this section we address the observability problem, modeled as follows. We suppose that we can observe

(O)  y(t) := Nx(t)  (t ≥ 0),

for a given matrix N ∈ Mᵐˣⁿ. Consequently, y(t) ∈ ℝᵐ. The interesting situation is when m ≪ n and we interpret y(·) as low-dimensional "observations" or "measurements" of the high-dimensional dynamics x(·).

OBSERVABILITY QUESTION: Given the observations y(·), can we in principle reconstruct x(·)? In particular, do observations of y(·) provide enough information for us to deduce the initial value x⁰ for (ODE)?

DEFINITION. The pair (ODE), (O) is called observable if the knowledge of y(·) on any time interval [0, t] allows us to compute x⁰. More precisely, (ODE), (O) is observable if for all solutions x¹(·), x²(·), Nx¹(·) ≡ Nx²(·) on a time interval [0, t] implies x¹(0) = x²(0).

TWO SIMPLE EXAMPLES. (i) If N ≡ 0, then clearly the system is not observable.

(ii) On the other hand, if m = n and N is invertible, then clearly x(t) = N⁻¹y(t); the system is observable.

The interesting cases lie between these extremes.

THEOREM 2.7 (OBSERVABILITY AND CONTROLLABILITY). The system

(2.11)  ẋ(t) = Mx(t),  y(t) = Nx(t)

is observable if and only if the system

(2.12)  ż(t) = Mᵀz(t) + Nᵀα(t),  A = ℝᵐ,

is controllable, meaning that C = ℝⁿ.

INTERPRETATION. This theorem asserts that somehow "observability" and "controllability" are dual concepts for linear systems.

Proof. 1. Suppose (2.11) is not observable. Then there exist points x¹ ≠ x² in ℝⁿ, such that

ẋ¹(t) = Mx¹(t),  x¹(0) = x¹,
ẋ²(t) = Mx²(t),  x²(0) = x²,

but y(t) := Nx¹(t) ≡ Nx²(t) for all times t ≥ 0. Let

x(t) := x¹(t) − x²(t),  x⁰ := x¹ − x².

Then

ẋ(t) = Mx(t),  x(0) = x⁰ ≠ 0,

but

Nx(t) = 0  (t ≥ 0).

Now

x(t) = X(t)x⁰ = e^{tM}x⁰.

Thus

Ne^{tM}x⁰ = 0  (t ≥ 0).

Let t = 0, to find Nx⁰ = 0. Then differentiate this expression k times in t and let t = 0, to discover as well that

NMᵏx⁰ = 0 for k = 0, 1, 2, ….

Hence (x⁰)ᵀ(Mᵏ)ᵀNᵀ = 0, and hence (x⁰)ᵀ(Mᵀ)ᵏNᵀ = 0. This implies

(x⁰)ᵀ[Nᵀ, MᵀNᵀ, …, (Mᵀ)^{n−1}Nᵀ] = 0.

Since x⁰ ≠ 0, rank [Nᵀ, …, (Mᵀ)^{n−1}Nᵀ] < n. Thus problem (2.12) is not controllable. Consequently, (2.12) controllable implies (2.11) is observable.
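The rank computation at the heart of this duality is concrete. A hedged sketch for a 2×2 example of observing only the first coordinate (the matrices are illustrative choices, not from the notes): the system is observable iff [Nᵀ, MᵀNᵀ] has rank n.

```python
# Duality check for Theorem 2.7 on a 2x2 example: xdot = Mx with the
# shift matrix M, observing only the first component y = x1.  We test
# whether [N^T, M^T N^T] has full rank 2 via its determinant.
M = [[0.0, 1.0], [0.0, 0.0]]
N = [1.0, 0.0]        # y(t) = x1(t): observe the first component only

MT = [[M[j][i] for j in range(2)] for i in range(2)]          # M transposed
MTNT = [sum(MT[i][k] * N[k] for k in range(2)) for i in range(2)]  # M^T N^T

K = [[N[0], MTNT[0]], [N[1], MTNT[1]]]    # columns N^T and M^T N^T
observable = (K[0][0] * K[1][1] - K[0][1] * K[1][0]) != 0
```

Here observability makes intuitive sense: since ẋ¹ = x², watching x¹ over any interval also reveals x², so the whole initial state can be recovered.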

2. Assume now (2.12) is not controllable. Then rank [Nᵀ, …, (Mᵀ)^{n−1}Nᵀ] < n, and consequently according to Theorem 2.3 there exists x⁰ ≠ 0 such that

(x⁰)ᵀ[Nᵀ, …, (Mᵀ)^{n−1}Nᵀ] = 0.

That is, NMᵏx⁰ = 0 for all k = 0, 1, 2, …, n − 1.

We want to show that y(t) = Nx(t) ≡ 0, where

ẋ(t) = Mx(t),  x(0) = x⁰.

According to the Cayley-Hamilton Theorem, we can write

Mⁿ = −β_{n−1}M^{n−1} − ⋯ − β₀I

for appropriate constants. Consequently NMⁿx⁰ = 0. Likewise,

M^{n+1} = M(−β_{n−1}M^{n−1} − ⋯ − β₀I) = −β_{n−1}Mⁿ − ⋯ − β₀M;

and so NM^{n+1}x⁰ = 0. Similarly, NMᵏx⁰ = 0 for all k. Now

x(t) = X(t)x⁰ = e^{tM}x⁰ = Σ_{k=0}^∞ tᵏMᵏ/k! x⁰;

and therefore

Nx(t) = N Σ_{k=0}^∞ tᵏMᵏ/k! x⁰ = 0.

We have shown that if (2.12) is not controllable, then (2.11) is not observable. □

2.5 BANG-BANG PRINCIPLE. For this section, we will again take A to be the cube [−1, 1]ᵐ in ℝᵐ.

DEFINITION. A control α(·) ∈ A is called bang-bang if for each time t ≥ 0 and each index i = 1, …, m, we have |αⁱ(t)| = 1, where

α(t) = (α¹(t), …, αᵐ(t))ᵀ.

THEOREM 2.8 (BANG-BANG PRINCIPLE). Let t > 0 and suppose x⁰ ∈ C(t), for the system

ẋ(t) = Mx(t) + Nα(t).

Then there exists a bang-bang control α(·) which steers x⁰ to 0 at time t.

To prove the theorem we need some tools from functional analysis, among them the Krein-Milman Theorem, expressing the geometric fact that every bounded convex set has an extreme point.

2.5.1 SOME FUNCTIONAL ANALYSIS. We will study the "geometry" of certain infinite dimensional spaces of functions.

NOTATION:

L^∞ = L^∞(0, t; ℝᵐ) = {α(·) : (0, t) → ℝᵐ | sup_{0≤s≤t} |α(s)| < ∞},
‖α‖_{L^∞} = sup_{0≤s≤t} |α(s)|.

DEFINITION. Let αₙ ∈ L^∞ for n = 1, … and α ∈ L^∞. We say αₙ converges to α in the weak* sense, written

αₙ ⇀* α,

provided

∫₀ᵗ αₙ(s) · v(s) ds → ∫₀ᵗ α(s) · v(s) ds  as n → ∞,

for all v(·) : [0, t] → ℝᵐ satisfying ∫₀ᵗ |v(s)| ds < ∞.

We will need the following useful weak* compactness theorem for L^∞:

ALAOGLU'S THEOREM. Let αₙ ∈ A, n = 1, …. Then there exists a subsequence αₙₖ and α ∈ A, such that

αₙₖ ⇀* α.

DEFINITIONS. (i) The set K is convex if for all x, x̂ ∈ K and all real numbers 0 ≤ λ ≤ 1,

λx + (1 − λ)x̂ ∈ K.

(ii) A point z ∈ K is called extreme provided there do not exist points x, x̂ ∈ K and 0 < λ < 1 such that

z = λx + (1 − λ)x̂.

KREIN-MILMAN THEOREM. Let K be a convex, nonempty subset of L^∞, which is compact in the weak* topology. Then K has at least one extreme point.

2.5.2 APPLICATION TO BANG-BANG CONTROLS. The foregoing abstract theory will be useful for us in the following setting. We will take K to be the set of controls which steer x⁰ to 0 at time t, prove it satisfies the hypotheses of the Krein-Milman Theorem, and finally show that an extreme point is a bang-bang control.

So consider again the linear dynamics

(ODE)  ẋ(t) = Mx(t) + Nα(t),  x(0) = x⁰.

Take x⁰ ∈ C(t) and write

K = {α(·) ∈ A | α(·) steers x⁰ to 0 at time t}.

LEMMA 2.9 (GEOMETRY OF SET OF CONTROLS). The collection K of admissible controls satisfies the hypotheses of the Krein-Milman Theorem.

Proof. Since x⁰ ∈ C(t), we see that K ≠ ∅.

Next we show that K is convex. For this, recall that α(·) ∈ K if and only if

x⁰ = −∫₀ᵗ X⁻¹(s)Nα(s) ds.

Now take also α̂ ∈ K and 0 ≤ λ ≤ 1. Then

x⁰ = −∫₀ᵗ X⁻¹(s)Nα̂(s) ds;

and so

x⁰ = −∫₀ᵗ X⁻¹(s)N(λα(s) + (1 − λ)α̂(s)) ds.

Hence λα + (1 − λ)α̂ ∈ K.

Lastly, we confirm the compactness. Let αₙ ∈ K for n = 1, …. According to Alaoglu's Theorem there exist nₖ and α ∈ A such that αₙₖ ⇀* α. We need to show that α ∈ K. Now αₙₖ ∈ K implies

x⁰ = −∫₀ᵗ X⁻¹(s)Nαₙₖ(s) ds → −∫₀ᵗ X⁻¹(s)Nα(s) ds

by definition of weak* convergence. Hence α ∈ K. □

We can now apply the Krein-Milman Theorem to deduce that there exists an extreme point α* ∈ K. What is interesting is that such an extreme point corresponds to a bang-bang control.

THEOREM 2.10 (EXTREMALITY AND BANG-BANG PRINCIPLE). The control α*(·) is bang-bang.
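To close the chapter where it began: a hedged sketch of a bang-bang control in action, steering the rocket car of Section 1.3 from the illustrative start (q, v) = (1, 0) to the origin. The switch time t = 1 and stopping time t = 2 come from intersecting the two parabola families of Section 1.3 for this particular starting point; the step size is an arbitrary choice.

```python
# Bang-bang steering for the rocket car: qdot = v, vdot = alpha, with
# alpha = -1 first (decelerate/reverse) and alpha = +1 after the switch.
# From (q, v) = (1, 0), the parabolas v^2 = -2q + 2 and v^2 = 2q meet at
# (1/2, -1), giving switch time 1 and arrival time 2.
q, v = 1.0, 0.0
dt = 1e-4
for i in range(int(2.0 / dt)):
    t = i * dt
    a = -1.0 if t < 1.0 else 1.0   # bang-bang: switch sign at t = 1
    q, v = q + dt * v, v + dt * a
# (q, v) should now be near (0, 0): the car arrives at the origin at rest.
```

This is exactly the kind of control whose existence Theorem 2.8 guarantees, and (for this system) the time-optimal one constructed geometrically in Chapter 1.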


### CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e.

CHAPTER II THE LIMIT OF A SEQUENCE OF NUMBERS DEFINITION OF THE NUMBER e. This chapter contains the beginnings of the most important, and probably the most subtle, notion in mathematical analysis, i.e.,

### The Heat Equation. Lectures INF2320 p. 1/88

The Heat Equation Lectures INF232 p. 1/88 Lectures INF232 p. 2/88 The Heat Equation We study the heat equation: u t = u xx for x (,1), t >, (1) u(,t) = u(1,t) = for t >, (2) u(x,) = f(x) for x (,1), (3)

### Copyrighted Material. Chapter 1 DEGREE OF A CURVE

Chapter 1 DEGREE OF A CURVE Road Map The idea of degree is a fundamental concept, which will take us several chapters to explore in depth. We begin by explaining what an algebraic curve is, and offer two

### Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number

### THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS

THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS KEITH CONRAD 1. Introduction The Fundamental Theorem of Algebra says every nonconstant polynomial with complex coefficients can be factored into linear

### Duality of linear conic problems

Duality of linear conic problems Alexander Shapiro and Arkadi Nemirovski Abstract It is well known that the optimal values of a linear programming problem and its dual are equal to each other if at least

### 1 if 1 x 0 1 if 0 x 1

Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

### 1 VECTOR SPACES AND SUBSPACES

1 VECTOR SPACES AND SUBSPACES What is a vector? Many are familiar with the concept of a vector as: Something which has magnitude and direction. an ordered pair or triple. a description for quantities such

### Inner Product Spaces

Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

### OPTIMAL CONTROL OF A COMMERCIAL LOAN REPAYMENT PLAN. E.V. Grigorieva. E.N. Khailov

DISCRETE AND CONTINUOUS Website: http://aimsciences.org DYNAMICAL SYSTEMS Supplement Volume 2005 pp. 345 354 OPTIMAL CONTROL OF A COMMERCIAL LOAN REPAYMENT PLAN E.V. Grigorieva Department of Mathematics

### 3.7 Non-autonomous linear systems of ODE. General theory

3.7 Non-autonomous linear systems of ODE. General theory Now I will study the ODE in the form ẋ = A(t)x + g(t), x(t) R k, A, g C(I), (3.1) where now the matrix A is time dependent and continuous on some

### ME128 Computer-Aided Mechanical Design Course Notes Introduction to Design Optimization

ME128 Computer-ided Mechanical Design Course Notes Introduction to Design Optimization 2. OPTIMIZTION Design optimization is rooted as a basic problem for design engineers. It is, of course, a rare situation

### Some Notes on Taylor Polynomials and Taylor Series

Some Notes on Taylor Polynomials and Taylor Series Mark MacLean October 3, 27 UBC s courses MATH /8 and MATH introduce students to the ideas of Taylor polynomials and Taylor series in a fairly limited

### 15 Kuhn -Tucker conditions

5 Kuhn -Tucker conditions Consider a version of the consumer problem in which quasilinear utility x 2 + 4 x 2 is maximised subject to x +x 2 =. Mechanically applying the Lagrange multiplier/common slopes

### A QUICK GUIDE TO THE FORMULAS OF MULTIVARIABLE CALCULUS

A QUIK GUIDE TO THE FOMULAS OF MULTIVAIABLE ALULUS ontents 1. Analytic Geometry 2 1.1. Definition of a Vector 2 1.2. Scalar Product 2 1.3. Properties of the Scalar Product 2 1.4. Length and Unit Vectors

### Chapter 15 Introduction to Linear Programming

Chapter 15 Introduction to Linear Programming An Introduction to Optimization Spring, 2014 Wei-Ta Chu 1 Brief History of Linear Programming The goal of linear programming is to determine the values of

### Similarity and Diagonalization. Similar Matrices

MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

### Solutions for Review Problems

olutions for Review Problems 1. Let be the triangle with vertices A (,, ), B (4,, 1) and C (,, 1). (a) Find the cosine of the angle BAC at vertex A. (b) Find the area of the triangle ABC. (c) Find a vector

### The Ideal Class Group

Chapter 5 The Ideal Class Group We will use Minkowski theory, which belongs to the general area of geometry of numbers, to gain insight into the ideal class group of a number field. We have already mentioned

### DERIVATIVES AS MATRICES; CHAIN RULE

DERIVATIVES AS MATRICES; CHAIN RULE 1. Derivatives of Real-valued Functions Let s first consider functions f : R 2 R. Recall that if the partial derivatives of f exist at the point (x 0, y 0 ), then we

### By W.E. Diewert. July, Linear programming problems are important for a number of reasons:

APPLIED ECONOMICS By W.E. Diewert. July, 3. Chapter : Linear Programming. Introduction The theory of linear programming provides a good introduction to the study of constrained maximization (and minimization)

### 24. The Branch and Bound Method

24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no

### Lecture 12 Basic Lyapunov theory

EE363 Winter 2008-09 Lecture 12 Basic Lyapunov theory stability positive definite functions global Lyapunov stability theorems Lasalle s theorem converse Lyapunov theorems finding Lyapunov functions 12

### Increasing for all. Convex for all. ( ) Increasing for all (remember that the log function is only defined for ). ( ) Concave for all.

1. Differentiation The first derivative of a function measures by how much changes in reaction to an infinitesimal shift in its argument. The largest the derivative (in absolute value), the faster is evolving.

### The Method of Lagrange Multipliers

The Method of Lagrange Multipliers S. Sawyer October 25, 2002 1. Lagrange s Theorem. Suppose that we want to maximize (or imize a function of n variables f(x = f(x 1, x 2,..., x n for x = (x 1, x 2,...,

### Rolle s Theorem. q( x) = 1

Lecture 1 :The Mean Value Theorem We know that constant functions have derivative zero. Is it possible for a more complicated function to have derivative zero? In this section we will answer this question

### (a) We have x = 3 + 2t, y = 2 t, z = 6 so solving for t we get the symmetric equations. x 3 2. = 2 y, z = 6. t 2 2t + 1 = 0,

Name: Solutions to Practice Final. Consider the line r(t) = 3 + t, t, 6. (a) Find symmetric equations for this line. (b) Find the point where the first line r(t) intersects the surface z = x + y. (a) We

### THE FUNDAMENTAL THEOREM OF ARBITRAGE PRICING

THE FUNDAMENTAL THEOREM OF ARBITRAGE PRICING 1. Introduction The Black-Scholes theory, which is the main subject of this course and its sequel, is based on the Efficient Market Hypothesis, that arbitrages

### We call this set an n-dimensional parallelogram (with one vertex 0). We also refer to the vectors x 1,..., x n as the edges of P.

Volumes of parallelograms 1 Chapter 8 Volumes of parallelograms In the present short chapter we are going to discuss the elementary geometrical objects which we call parallelograms. These are going to

### Lecture 7: Finding Lyapunov Functions 1

Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS by A. Megretski Lecture 7: Finding Lyapunov Functions 1

### FIRST YEAR CALCULUS. Chapter 7 CONTINUITY. It is a parabola, and we can draw this parabola without lifting our pencil from the paper.

FIRST YEAR CALCULUS WWLCHENW L c WWWL W L Chen, 1982, 2008. 2006. This chapter originates from material used by the author at Imperial College, University of London, between 1981 and 1990. It It is is

### Course 221: Analysis Academic year , First Semester

Course 221: Analysis Academic year 2007-08, First Semester David R. Wilkins Copyright c David R. Wilkins 1989 2007 Contents 1 Basic Theorems of Real Analysis 1 1.1 The Least Upper Bound Principle................

### Parametric Curves. (Com S 477/577 Notes) Yan-Bin Jia. Oct 8, 2015

Parametric Curves (Com S 477/577 Notes) Yan-Bin Jia Oct 8, 2015 1 Introduction A curve in R 2 (or R 3 ) is a differentiable function α : [a,b] R 2 (or R 3 ). The initial point is α[a] and the final point

### MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set.

MATH 304 Linear Algebra Lecture 9: Subspaces of vector spaces (continued). Span. Spanning set. Vector space A vector space is a set V equipped with two operations, addition V V (x,y) x + y V and scalar

### MATH1231 Algebra, 2015 Chapter 7: Linear maps

MATH1231 Algebra, 2015 Chapter 7: Linear maps A/Prof. Daniel Chan School of Mathematics and Statistics University of New South Wales danielc@unsw.edu.au Daniel Chan (UNSW) MATH1231 Algebra 1 / 43 Chapter

### Understanding Basic Calculus

Understanding Basic Calculus S.K. Chung Dedicated to all the people who have helped me in my life. i Preface This book is a revised and expanded version of the lecture notes for Basic Calculus and other

### System of First Order Differential Equations

CHAPTER System of First Order Differential Equations In this chapter, we will discuss system of first order differential equations. There are many applications that involving find several unknown functions

### CHAPTER 7 APPLICATIONS TO MARKETING. Chapter 7 p. 1/54

CHAPTER 7 APPLICATIONS TO MARKETING Chapter 7 p. 1/54 APPLICATIONS TO MARKETING State Equation: Rate of sales expressed in terms of advertising, which is a control variable Objective: Profit maximization

### Figure 2.1: Center of mass of four points.

Chapter 2 Bézier curves are named after their inventor, Dr. Pierre Bézier. Bézier was an engineer with the Renault car company and set out in the early 196 s to develop a curve formulation which would

### The Not-Formula Book for C1

Not The Not-Formula Book for C1 Everything you need to know for Core 1 that won t be in the formula book Examination Board: AQA Brief This document is intended as an aid for revision. Although it includes

### 2.5 Gaussian Elimination

page 150 150 CHAPTER 2 Matrices and Systems of Linear Equations 37 10 the linear algebra package of Maple, the three elementary 20 23 1 row operations are 12 1 swaprow(a,i,j): permute rows i and j 3 3

### Summary of week 8 (Lectures 22, 23 and 24)

WEEK 8 Summary of week 8 (Lectures 22, 23 and 24) This week we completed our discussion of Chapter 5 of [VST] Recall that if V and W are inner product spaces then a linear map T : V W is called an isometry

### Vector Spaces II: Finite Dimensional Linear Algebra 1

John Nachbar September 2, 2014 Vector Spaces II: Finite Dimensional Linear Algebra 1 1 Definitions and Basic Theorems. For basic properties and notation for R N, see the notes Vector Spaces I. Definition

### Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain

Lectures notes on orthogonal matrices (with exercises) 92.222 - Linear Algebra II - Spring 2004 by D. Klain 1. Orthogonal matrices and orthonormal sets An n n real-valued matrix A is said to be an orthogonal

### Lectures 5-6: Taylor Series

Math 1d Instructor: Padraic Bartlett Lectures 5-: Taylor Series Weeks 5- Caltech 213 1 Taylor Polynomials and Series As we saw in week 4, power series are remarkably nice objects to work with. In particular,

### THREE DIMENSIONAL GEOMETRY

Chapter 8 THREE DIMENSIONAL GEOMETRY 8.1 Introduction In this chapter we present a vector algebra approach to three dimensional geometry. The aim is to present standard properties of lines and planes,

### Lecture 31: Second order homogeneous equations II

Lecture 31: Second order homogeneous equations II Nathan Pflueger 21 November 2011 1 Introduction This lecture gives a complete description of all the solutions to any differential equation of the form

### 2.3 Convex Constrained Optimization Problems

42 CHAPTER 2. FUNDAMENTAL CONCEPTS IN CONVEX OPTIMIZATION Theorem 15 Let f : R n R and h : R R. Consider g(x) = h(f(x)) for all x R n. The function g is convex if either of the following two conditions

### The Mathematics of Origami

The Mathematics of Origami Sheri Yin June 3, 2009 1 Contents 1 Introduction 3 2 Some Basics in Abstract Algebra 4 2.1 Groups................................. 4 2.2 Ring..................................

### Calculus C/Multivariate Calculus Advanced Placement G/T Essential Curriculum

Calculus C/Multivariate Calculus Advanced Placement G/T Essential Curriculum UNIT I: The Hyperbolic Functions basic calculus concepts, including techniques for curve sketching, exponential and logarithmic

### Inner Product Spaces and Orthogonality

Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,

### Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.

This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra

### A First Course in Elementary Differential Equations. Marcel B. Finan Arkansas Tech University c All Rights Reserved

A First Course in Elementary Differential Equations Marcel B. Finan Arkansas Tech University c All Rights Reserved 1 Contents 1 Basic Terminology 4 2 Qualitative Analysis: Direction Field of y = f(t, y)

### (Refer Slide Time: 00:00:56 min)

Numerical Methods and Computation Prof. S.R.K. Iyengar Department of Mathematics Indian Institute of Technology, Delhi Lecture No # 3 Solution of Nonlinear Algebraic Equations (Continued) (Refer Slide

### PROBLEM SET. Practice Problems for Exam #2. Math 2350, Fall Nov. 7, 2004 Corrected Nov. 10 ANSWERS

PROBLEM SET Practice Problems for Exam #2 Math 2350, Fall 2004 Nov. 7, 2004 Corrected Nov. 10 ANSWERS i Problem 1. Consider the function f(x, y) = xy 2 sin(x 2 y). Find the partial derivatives f x, f y,

### max cx s.t. Ax c where the matrix A, cost vector c and right hand side b are given and x is a vector of variables. For this example we have x

Linear Programming Linear programming refers to problems stated as maximization or minimization of a linear function subject to constraints that are linear equalities and inequalities. Although the study

### 1 Orthogonal projections and the approximation

Math 1512 Fall 2010 Notes on least squares approximation Given n data points (x 1, y 1 ),..., (x n, y n ), we would like to find the line L, with an equation of the form y = mx + b, which is the best fit

### 1 Local Brouwer degree

1 Local Brouwer degree Let D R n be an open set and f : S R n be continuous, D S and c R n. Suppose that the set f 1 (c) D is compact. (1) Then the local Brouwer degree of f at c in the set D is defined.

### PUTNAM TRAINING POLYNOMIALS. Exercises 1. Find a polynomial with integral coefficients whose zeros include 2 + 5.

PUTNAM TRAINING POLYNOMIALS (Last updated: November 17, 2015) Remark. This is a list of exercises on polynomials. Miguel A. Lerma Exercises 1. Find a polynomial with integral coefficients whose zeros include

### Two vectors are equal if they have the same length and direction. They do not

Vectors define vectors Some physical quantities, such as temperature, length, and mass, can be specified by a single number called a scalar. Other physical quantities, such as force and velocity, must

### Lecture 3. Mathematical Induction

Lecture 3 Mathematical Induction Induction is a fundamental reasoning process in which general conclusion is based on particular cases It contrasts with deduction, the reasoning process in which conclusion

### ORDINARY DIFFERENTIAL EQUATIONS

ORDINARY DIFFERENTIAL EQUATIONS GABRIEL NAGY Mathematics Department, Michigan State University, East Lansing, MI, 48824. SEPTEMBER 4, 25 Summary. This is an introduction to ordinary differential equations.

### correct-choice plot f(x) and draw an approximate tangent line at x = a and use geometry to estimate its slope comment The choices were:

Topic 1 2.1 mode MultipleSelection text How can we approximate the slope of the tangent line to f(x) at a point x = a? This is a Multiple selection question, so you need to check all of the answers that

### τ θ What is the proper price at time t =0of this option?

Now by Itô s formula But Mu f and u g in Ū. Hence τ θ u(x) =E( Mu(X) ds + u(x(τ θ))) 0 τ θ u(x) E( f(x) ds + g(x(τ θ))) = J x (θ). 0 But since u(x) =J x (θ ), we consequently have u(x) =J x (θ ) = min

### Taylor and Maclaurin Series

Taylor and Maclaurin Series In the preceding section we were able to find power series representations for a certain restricted class of functions. Here we investigate more general problems: Which functions

### Metric Spaces. Chapter 7. 7.1. Metrics

Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some

### Overview of Math Standards

Algebra 2 Welcome to math curriculum design maps for Manhattan- Ogden USD 383, striving to produce learners who are: Effective Communicators who clearly express ideas and effectively communicate with diverse

### LS.6 Solution Matrices

LS.6 Solution Matrices In the literature, solutions to linear systems often are expressed using square matrices rather than vectors. You need to get used to the terminology. As before, we state the definitions

### 4: SINGLE-PERIOD MARKET MODELS

4: SINGLE-PERIOD MARKET MODELS Ben Goldys and Marek Rutkowski School of Mathematics and Statistics University of Sydney Semester 2, 2015 B. Goldys and M. Rutkowski (USydney) Slides 4: Single-Period Market

PYTHAGOREAN TRIPLES KEITH CONRAD 1. Introduction A Pythagorean triple is a triple of positive integers (a, b, c) where a + b = c. Examples include (3, 4, 5), (5, 1, 13), and (8, 15, 17). Below is an ancient

### The Fourth International DERIVE-TI92/89 Conference Liverpool, U.K., 12-15 July 2000. Derive 5: The Easiest... Just Got Better!

The Fourth International DERIVE-TI9/89 Conference Liverpool, U.K., -5 July 000 Derive 5: The Easiest... Just Got Better! Michel Beaudin École de technologie supérieure 00, rue Notre-Dame Ouest Montréal

### Ideal Class Group and Units

Chapter 4 Ideal Class Group and Units We are now interested in understanding two aspects of ring of integers of number fields: how principal they are (that is, what is the proportion of principal ideals

### DRAFT. Algebra 1 EOC Item Specifications

DRAFT Algebra 1 EOC Item Specifications The draft Florida Standards Assessment (FSA) Test Item Specifications (Specifications) are based upon the Florida Standards and the Florida Course Descriptions as

### PowerTeaching i3: Algebra I Mathematics

PowerTeaching i3: Algebra I Mathematics Alignment to the Common Core State Standards for Mathematics Standards for Mathematical Practice and Standards for Mathematical Content for Algebra I Key Ideas and

### General Theory of Differential Equations Sections 2.8, 3.1-3.2, 4.1

A B I L E N E C H R I S T I A N U N I V E R S I T Y Department of Mathematics General Theory of Differential Equations Sections 2.8, 3.1-3.2, 4.1 Dr. John Ehrke Department of Mathematics Fall 2012 Questions

### 1. R In this and the next section we are going to study the properties of sequences of real numbers.

+a 1. R In this and the next section we are going to study the properties of sequences of real numbers. Definition 1.1. (Sequence) A sequence is a function with domain N. Example 1.2. A sequence of real

### Chapter 4, Arithmetic in F [x] Polynomial arithmetic and the division algorithm.

Chapter 4, Arithmetic in F [x] Polynomial arithmetic and the division algorithm. We begin by defining the ring of polynomials with coefficients in a ring R. After some preliminary results, we specialize

### Date: April 12, 2001. Contents

2 Lagrange Multipliers Date: April 12, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 12 2.3. Informative Lagrange Multipliers...........

### x1 x 2 x 3 y 1 y 2 y 3 x 1 y 2 x 2 y 1 0.

Cross product 1 Chapter 7 Cross product We are getting ready to study integration in several variables. Until now we have been doing only differential calculus. One outcome of this study will be our ability

### MATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.

MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column

### Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Chapter 10 Boundary Value Problems for Ordinary Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

### Thnkwell s Homeschool Precalculus Course Lesson Plan: 36 weeks

Thnkwell s Homeschool Precalculus Course Lesson Plan: 36 weeks Welcome to Thinkwell s Homeschool Precalculus! We re thrilled that you ve decided to make us part of your homeschool curriculum. This lesson

### Math 4310 Handout - Quotient Vector Spaces

Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable

### BANACH AND HILBERT SPACE REVIEW

BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but

### Lecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 10

Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 10 Boundary Value Problems for Ordinary Differential Equations Copyright c 2001. Reproduction