Sampled-Data Model Predictive Control for Constrained Continuous Time Systems


Rolf Findeisen, Tobias Raff, and Frank Allgöwer
Institute for Systems Theory and Automatic Control, University of Stuttgart, Germany

Summary. Typically one desires to control a nonlinear dynamical system in an optimal way while taking constraints on the states and inputs directly into account. Classically this problem falls into the field of optimal control. Often, however, it is difficult, if not impossible, to find a closed-form solution of the corresponding Hamilton-Jacobi-Bellman equation. One control strategy that overcomes this problem is model predictive control. In model predictive control the solution of the Hamilton-Jacobi-Bellman equation is avoided by repeatedly solving an open-loop optimal control problem for the current state, which is a considerably simpler task, and applying the resulting control open-loop for a short time. The purpose of this paper is to provide an introduction to and overview of the field of model predictive control for continuous time systems. Specifically we consider the so-called sampled-data nonlinear model predictive control approach. After a short review of the main principles of model predictive control, some of the theoretical, computational and implementation aspects of this control strategy are discussed and illustrated using two example systems.

Key words: Model predictive control, constrained systems, sampled-data

1 Introduction

Many methods for the control of dynamical systems exist. Besides the question of stability, the achieved performance as well as the satisfaction of constraints on the states and inputs are often of paramount importance. One classical approach that takes these points into account is the design of an optimal feedback controller. As is well known, however, it is often very hard, if not impossible, to derive a closed-form solution for the corresponding feedback controller.
One possible approach to overcome this problem is the application of model predictive control (MPC), often also referred to as receding horizon control or moving horizon control. Basically, in model predictive control the optimal control problem is solved repeatedly at specific sampling instants for the current, fixed system state. The first part of the resulting open-loop input is applied to the system until the next sampling instant, at which the optimal control problem is solved again for the new system state. Since the optimal control problem is solved at every sampling instant only for one fixed initial condition, its solution is much easier to obtain than a closed-form solution of the Hamilton-Jacobi-Bellman partial differential equation (for all possible initial conditions) of the original optimal control problem. In general one distinguishes between linear and nonlinear model predictive control (NMPC). Linear MPC refers to MPC schemes that are based on linear dynamical models of the system and in which linear constraints on the states and inputs and a quadratic cost function are employed. NMPC refers to MPC schemes that use nonlinear models for the prediction of the system behavior and that allow non-quadratic cost functions and nonlinear constraints on the states and inputs to be considered. By now linear MPC is widely used in industrial applications [40, 41, 75, 77, 78]. For example, [78] reports more than 4500 applications spanning a wide range from the chemical to the aerospace industry. Many theoretical and implementation issues of linear MPC have also been studied [55, 68, 75]. Many systems are, however, inherently nonlinear, and the application of linear MPC schemes can lead to poor closed-loop performance. Driven by this shortcoming and the desire to directly use first-principles based nonlinear models, there is a steadily increasing interest in the theory and application of NMPC. Over recent years much progress in the area of NMPC has been made (see for example [1, 17, 68, 78]).
However, there remains a series of open questions and hurdles that must be overcome before a theoretically well-founded practical application of NMPC is possible. In this paper we focus on an introduction to and overview of NMPC for continuous time systems with sampled state information, i.e. we consider the stabilization of continuous time systems by repeatedly applying input trajectories that are obtained from the solution of an open-loop optimal control problem at discrete sampling instants. In the following we refer to this for short as sampled-data NMPC. In comparison to NMPC for discrete time systems (see e.g. [1, 17, 68]) or instantaneous NMPC [68], where the optimal input is recalculated at all times (no open-loop input signal is applied to the system), the inter-sampling behavior of the system while the open-loop input is applied must be taken into account, see e.g. [25, 27, 44, 45, 62]. In Section 2 we review the basic principle of NMPC. Before we focus on the theoretical questions, we briefly outline in Section 2.3 how the resulting open-loop optimal control problem can be solved. Section 3 contains a discussion of how stability in sampled-data NMPC can be achieved. Section 4 discusses robustness issues in NMPC and Section 5 considers the output feedback problem for NMPC. Before concluding in Section 8, we consider in Section 6 the sampled-data NMPC control of a simple nonlinear example system and in Section 7 the pendulum benchmark example considered throughout this book.

2 Principles of Sampled-Data Model Predictive Control

In model predictive control the input applied to the system (1) is given by the repeated solution of a (finite) horizon open-loop optimal control problem subject to the system dynamics, state and input constraints: Based on measurements obtained at a sampling time (in the following denoted by t_i), the controller predicts the dynamic behavior of the system over the so-called control/prediction horizon T_p and determines the input such that an open-loop performance objective is minimized. Under the assumption that the prediction horizon spans to infinity and that there are no disturbances and no model-plant mismatch, one could apply the resulting input open-loop to the system and achieve (under certain assumptions) convergence to the origin. However, due to external disturbances, model-plant mismatch and the use of finite prediction horizons, the predicted state and the true system state differ. Thus, to counteract this deviation and to suppress the disturbances it is necessary to incorporate feedback. In model predictive control this is achieved by applying the obtained optimal open-loop input only until the next sampling instant, at which the whole prediction and optimization process is repeated (compare Figure 1), thus moving the prediction horizon forward.

Fig. 1. Principle of model predictive control: open-loop input ū and predicted state x̄ over the horizon [t_i, t_i + T_p], together with the closed-loop input u and state x, at two consecutive sampling instants t_i and t_{i+1}.

The whole procedure can be summarized by the following steps:
1. Obtain estimates of the current state of the system.
2. Obtain an admissible optimal input by minimizing the desired cost function over the prediction horizon, using the system model and the current state estimate for prediction.
3. Implement the obtained optimal input until the next sampling instant.
4. Continue with 1.

Considering this control strategy, various questions arise, such as closed-loop stability, robustness to disturbances/model uncertainties and the efficient solution of the resulting open-loop optimal control problem.
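The four steps above can be sketched in a few lines of Python. The scalar system (ẋ = u), the cost weights, the grid of candidate inputs and all function names below are illustrative choices, not taken from the paper; the constant-input parameterization and grid search merely stand in for a real dynamic optimizer.

```python
# Toy sampled-data MPC loop for the scalar system xdot = u with U = [-1, 1].
# Constant-input parameterization and grid search are illustrative stand-ins
# for the open-loop optimal control problem of step 2.

DT = 0.05        # Euler step used for prediction
T_P = 1.0        # prediction horizon T_p
DELTA = 0.1      # sampling interval t_{i+1} - t_i
U_GRID = [k / 10.0 for k in range(-10, 11)]   # candidate inputs in U

def open_loop_cost(x0, u):
    """Cost of holding u constant over [0, T_p], with F(x, u) = x^2 + 0.1 u^2."""
    x, cost = x0, 0.0
    for _ in range(round(T_P / DT)):
        cost += (x * x + 0.1 * u * u) * DT
        x += u * DT                       # Euler step of the prediction model
    return cost

def nmpc_input(x):
    """Step 2: (crudely) minimize the open-loop cost for the measured state x."""
    return min(U_GRID, key=lambda u: open_loop_cost(x, u))

x = 1.0                                   # step 1: current (measured) state
for _ in range(50):                       # repeat at every sampling instant
    u_star = nmpc_input(x)                # step 2: solve the open-loop problem
    x += u_star * DELTA                   # step 3: apply the input until t_{i+1}
final_state = x                           # step 4: the loop above continues with 1
print(final_state)
```

Note how the closed loop stalls in a small neighborhood of the origin: the coarse input grid and the finite horizon make it cheaper to apply u = 0 than to fight the residual offset, a small-scale illustration of the finite-horizon effects discussed below.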

2.1 Mathematical Formulation of Sampled-Data NMPC

Throughout the paper we consider the stabilization of time-invariant nonlinear systems of the form

ẋ(t) = f(x(t), u(t)), a.e. t ≥ 0, x(0) = x_0, (1)

where x ∈ R^n denotes the system state and u ∈ R^m is the control or input to the system. We assume that the vector field f : R^n × R^m → R^n is locally Lipschitz continuous with f(0, 0) = 0. The objective is to (optimally) stabilize the system subject to the input and state constraints

u(t) ∈ U ⊂ R^m, x(t) ∈ X ⊆ R^n, ∀t ≥ 0,

where U ⊂ R^m is assumed to be compact and X ⊆ R^n is assumed to be simply connected with (0, 0) ∈ X × U.

Remark 1. (Rate constraints on the inputs) If rate constraints

u̇(t) ∈ U̇, ∀t ≥ 0, (2)

on the inputs must be considered, they can be transformed to the given form by adding integrators in the system before the inputs, see for example Section 7. Note, however, that this transforms the input constraint u ∈ U to constraints on the integrator states.

We denote the solution of (1) (if it exists), starting at a time t_1 from a state x(t_1) and applying a (piecewise continuous) input u : [t_1, t_2] → R^m, by x(τ; u(·), x(t_1)), τ ∈ [t_1, t_2]. In sampled-data NMPC an open-loop optimal control problem is solved at the discrete sampling instants t_i. We assume that these sampling instants are given by a partition π of the time axis:

Definition 1. (Partition) A partition is a series π = (t_i), i ∈ N, of (finite) positive real numbers such that t_0 = 0, t_i < t_{i+1} and t_i → ∞ for i → ∞. Furthermore, π̄ := sup_{i∈N}(t_{i+1} − t_i) denotes the upper diameter of π and π̲ := inf_{i∈N}(t_{i+1} − t_i) denotes the lower diameter of π.

Whenever t and t_i occur together, t_i should be taken as the closest previous sampling instant with t_i < t. The input applied in between the sampling instants, i.e.
in the interval [t_i, t_{i+1}), in NMPC is given by the solution of the open-loop optimal control problem

min_{ū(·)} J(x(t_i), ū(·)) (3a)
subject to: d x̄(τ)/dτ = f(x̄(τ), ū(τ)), x̄(t_i) = x(t_i), (3b)
ū(τ) ∈ U, x̄(τ) ∈ X, τ ∈ [t_i, t_i + T_p], (3c)
x̄(t_i + T_p) ∈ E. (3d)

Here the bar denotes predicted variables, i.e. x̄(·) is the solution of (3b) driven by the input ū(·) : [t_i, t_i + T_p] → U with the initial condition x(t_i). The distinction between the real system state x of (1) and the predicted state x̄ in the controller is necessary since, due to the moving horizon nature, even in the nominal case the predicted states will differ from the real states at least after one sampling instant. As the cost functional J minimized over the control horizon T_p ≥ π̄ > 0 we consider

J(x(t_i), ū(·)) := ∫_{t_i}^{t_i+T_p} F(x̄(τ), ū(τ)) dτ + E(x̄(t_i + T_p)), (4)

where the stage cost F : X × U → R is assumed to be continuous, satisfies F(0, 0) = 0, and is lower bounded by a positive semidefinite function α_F, i.e. α_F(x) ≤ F(x, u) for all (x, u) ∈ X × U. We furthermore assume that the autonomous system ẋ = f(x, 0) is zero-state detectable via α_F(x), i.e. for all x_0 ∈ X, α_F(x(τ; x_0)) → 0 implies x(τ; x_0) → 0 as τ → ∞, where x(τ; x_0) denotes the solution of the system ẋ = f(x, 0) starting from x(0) = x_0. The so-called terminal region constraint E and the so-called terminal penalty term E are typically used to enforce stability or to increase the performance of the closed-loop, see Section 3. The solution of the optimal control problem (3) is denoted by ū*(·; x(t_i)). It defines the open-loop input that is applied to the system until the next sampling instant t_{i+1}:

u(t; x(t_i)) = ū*(t; x(t_i)), t ∈ [t_i, t_{i+1}). (5)

As noted above, the control u(t; x(t_i)) is a feedback, since it is recalculated at each sampling instant using the new state measurement. We limit the presentation to input signals that are piecewise continuous and refer to an admissible input as:

Definition 2. (Admissible Input) An input u : [0, T_p] → R^m for a state x_0 is called admissible, if it is: a) piecewise continuous, b) u(τ) ∈ U ∀τ ∈ [0, T_p], c) x(τ; u(·), x_0) ∈ X ∀τ ∈ [0, T_p], d) x(T_p; u(·), x_0) ∈ E.
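The cost functional (4) and the admissibility conditions of Definition 2 can be evaluated numerically for a given input signal. The sketch below uses Euler quadrature on an illustrative scalar model ẋ = −x + u; the concrete sets U, X, E, the weights and all names are assumptions for illustration only.

```python
# Evaluate J(x0, u(.)) from (4) by Euler quadrature and check admissibility
# per Definition 2: input in U, state in X along the trajectory, terminal
# state in E. The model, sets and cost weights are illustrative stand-ins.

DT = 0.001
T_P = 1.0

def f(x, u):                  # illustrative dynamics xdot = -x + u
    return -x + u

def stage_cost(x, u):         # F(x, u), continuous and positive semidefinite
    return x * x + u * u

def terminal_penalty(x):      # terminal penalty term E(x)
    return x * x

def cost_and_admissibility(x0, u_of_t):
    """Return (J, admissible) for the input signal u_of_t on [0, T_p]."""
    x, J, admissible = x0, 0.0, True
    for k in range(round(T_P / DT)):
        u = u_of_t(k * DT)
        admissible &= (-1.0 <= u <= 1.0)      # b) u(tau) in U = [-1, 1]
        admissible &= (abs(x) <= 10.0)        # c) x(tau) in X = [-10, 10]
        J += stage_cost(x, u) * DT            # quadrature of the stage cost
        x += f(x, u) * DT                     # Euler step of (3b)
    J += terminal_penalty(x)                  # add E(x(T_p))
    admissible &= (abs(x) <= 0.5)             # d) x(T_p) in E = [-0.5, 0.5]
    return J, admissible

J0, ok = cost_and_admissibility(1.0, lambda t: 0.0)
print(J0, ok)
```

For u ≡ 0 the model decays as e^{-t}, so J ≈ (1 − e^{-2})/2 + e^{-2} ≈ 0.57 and the terminal state e^{-1} lies inside the (illustrative) terminal region.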
We furthermore consider an admissible set of problem (3) as:

Definition 3. (Admissible Set) A set X̃ ⊆ X is called admissible, if for all x_0 ∈ X̃ there exists a piecewise continuous input ũ : [0, T_p] → U such that a) x(τ; x_0, ũ(·)) ∈ X, τ ∈ [0, T_p], and b) x(T_p; x_0, ũ(·)) ∈ E.

Without further (possibly very strong) restrictions it is often not clear whether, for a given x, an admissible input exists, nor whether the minimum of (3) is attained. While the existence of an admissible input is related to constrained controllability, the existence of an optimal solution of (3) is in general nontrivial to establish. For simplicity of presentation we assume in the following that the set R denotes an admissible set that admits an optimal solution of (3), i.e. we impose the following assumption:

Assumption 1 (Set R) There exists an admissible set R such that (3) admits an optimal (not necessarily unique) solution for all x_0 ∈ R.

It is possible to derive existence results for (3) considering measurable inputs and imposing certain convexity and compactness conditions, see for example [36, 37, 73] and [4, 35, 82]. However, often it is not possible to check the necessary conditions a priori. The main reason for imposing Assumption 1 is the requirement that an optimal/feasible solution at one sampling instant should guarantee (under certain assumptions) the existence of an optimal/feasible solution at the next sampling instant (see Section 3). The optimal value of the cost functional (4) plays an important role in many considerations. It is typically referred to as the value function:

Definition 4. (Value function) The value function V(x) is defined as the minimal value of the cost for the state x: V(x) = J(x, ū*(·; x)).

The value function is for example used in the proof of convergence and stability. It often serves as a Lyapunov function/decreasing function candidate, see Section 3 and [1, 68]. In comparison to sampled-data NMPC for continuous time systems, in instantaneous NMPC the input is defined by the solution of the optimal control problem (3) at all times: u(x(t)) = ū*(t; x(t)), i.e. no open-loop input is applied, see e.g. [67, 68]. Considering that the solution of the open-loop optimal control problem requires an often non-negligible time, this approach cannot be applied in practice. Besides the continuous time considerations, results for NMPC of discrete time systems are also available (see e.g. [1, 17, 68]). We do not go into further details here.

Remark 2. (Hybrid nature of sampled-data predictive control) Note that in sampled-data NMPC the input applied in between the recalculation instants t_i and t_{i+1} is given by the solution of the open-loop optimal control problem (3) at time t_i, i.e.
the closed-loop is given by

ẋ(t) = f(x(t), u(t; x(t_i))). (6)

Thus, strictly speaking, the behavior of the system is not only defined by the current state. Rigorously one has to consider a hybrid system [43, 46, 74, 84] consisting of the discrete state x(t_i) and the continuous state x(t). This is especially important for the stability considerations in Section 3, since the discrete memory x(t_i) must be taken into account.

2.2 Inherent Characteristics and Problems of NMPC

One of the key problems in predictive control schemes is that the actual closed-loop input and states differ from the predicted open-loop ones, even if no model-plant mismatch and no disturbances are present. This stems from the fact that at the next sampling instant the (finite) prediction horizon moves

forward, allowing more information to be considered and thus leading to a mismatch between the trajectories. The difference between the predicted and the closed-loop trajectories has two immediate consequences. Firstly, the actual goal, namely to compute a feedback such that the performance objective of the closed-loop over an often desired infinite horizon is minimized, is not achieved. Secondly, there is in general no guarantee that the closed-loop system will be stable at all. It is indeed easy to construct examples for which the closed-loop becomes unstable if a short finite horizon is chosen. Hence, when using finite prediction horizons special attention is required to guarantee stability (see Section 3). Summarizing, the key characteristics and properties of NMPC are:
- NMPC allows the direct use of nonlinear models for prediction.
- NMPC allows the explicit consideration of state and input constraints.
- In NMPC a time domain performance criterion is minimized on-line.
- In NMPC the predicted behavior is in general different from the closed-loop behavior.
- For the application of NMPC an open-loop optimal control problem must be solved on-line.
- To perform the prediction the system states must be measured or estimated.

Remark 3. In this paper we mainly focus on NMPC for the stabilization of time-invariant continuous time nonlinear systems. However, note that NMPC is also applicable to a large class of other systems, e.g. discrete time systems, delay systems, time-varying systems, and distributed parameter systems; for more details see for example [1, 17, 68]. Furthermore, NMPC is also well suited for tracking problems or problems where one has to transfer the system optimally between different steady states, see e.g. [28, 58, 70]. Before we summarize the available stability results for sampled-data NMPC, we comment in the next section on the numerical solution of the open-loop optimal control problem.
2.3 Numerical Aspects of Sampled-Data NMPC

Predictive control circumvents the solution of the Hamilton-Jacobi-Bellman equation by solving the open-loop optimal control problem at every sampling instant only for the currently measured system state. An often intractable problem is replaced by a tractable one. In linear MPC the solution of the optimal control problem (3) can often be cast as a convex quadratic program, which can be solved efficiently. This is one of the main reasons for the practical success of linear MPC. In NMPC, however, at every sampling instant a general nonlinear open-loop optimal control problem (3) must be solved on-line. Thus one important precondition for the application of NMPC is the availability of reliable and efficient numerical dynamic optimization algorithms for the optimal control problem (3). Solving (3) numerically efficiently and fast is,

however, not a trivial task and has attracted much research interest in recent years (see e.g. [2, 5, 6, 18, 22 24, 56, 64 66, 81, 83]). Typically so-called direct solution methods [6, 7, 76] are used, i.e. the original infinite dimensional problem is turned into a finite dimensional one by discretizing the input (and possibly also the state). Basically this is done by parameterizing the input (and possibly the states) finitely and solving/approximating the differential equations during the optimization. We do not go into further details and instead refer to [7, 22, 66]. However, we note that recent studies have shown that the use of special dynamic optimizers and tailored NMPC schemes allows NMPC to be applied to practically relevant problems (see e.g. [2, 24, 29, 34, 65, 81]), even with today's computational power.

Remark 4. (Suboptimality and NMPC) Since the optimal control problem (3) is typically nonconvex, it is questionable whether the globally minimizing input can be found at all. While the use of a non-optimal admissible input might lead to an increase in the cost, it is not crucial to find the global minimum for stability of the closed-loop, as outlined in the next section.

3 Nominal Stability of Sampled-Data NMPC

As outlined, one elementary question in NMPC is whether a finite horizon NMPC strategy does guarantee stability of the closed-loop. While a finite prediction and control horizon is desirable from an implementation point of view, the difference between the predicted state trajectory and the resulting closed-loop behavior can lead to instability. Here we review some central ideas of how stability can be achieved. No attempt is made to cover all existing approaches and methods, especially those which consider instantaneous or discrete time NMPC. We also only consider the nominal case, i.e.
it is assumed that no external disturbances act on the system and that there is no model mismatch between the system model used for prediction and the real system.

Stability by an infinite prediction horizon: The most intuitive way to achieve stability/convergence to the origin is to use an infinite horizon cost, i.e. T_p in the optimal control problem (3) is set to ∞. In this case the open-loop input and state trajectories resulting from (3) at a specific sampling instant coincide with the closed-loop trajectories of the nonlinear system due to Bellman's principle of optimality [3]. Thus, the remaining parts of the trajectories at the next sampling instant are still optimal (end pieces of optimal trajectories are optimal). Since the first part of the optimal trajectory has already been implemented, the cost for the remaining part, and thus the value function, is decreasing, which implies under mild conditions convergence of the states. Detailed derivations can for example be found in [51, 52, 67, 68].

Stability for finite prediction horizons: In the case of finite horizons the stability of the closed-loop is not guaranteed a priori if no precautions are taken. By now a series of approaches exist that achieve closed-loop stability. In most of these approaches the terminal penalty E and the terminal region constraint E are chosen suitably to guarantee stability, or the standard NMPC formulation is modified to achieve stability. The additional terms are not motivated by physical restrictions or performance requirements; they have the sole purpose of enforcing stability. Therefore, they are usually called stability constraints.

Stability via a zero terminal constraint: One possibility to enforce stability with a finite prediction horizon is to add the so-called zero terminal equality constraint

x̄(t_i + T_p) = 0 (7)

at the end of the prediction horizon, i.e. (7) is added to the optimal control problem (3) [9, 52, 67, 69]. This leads to stability of the closed-loop, if the optimal control problem has a solution at t = 0. Similar to the infinite horizon case, feasibility at one sampling instant implies feasibility at the following sampling instants and a decrease in the value function. One disadvantage of a zero terminal constraint is that the predicted system state is forced to reach the origin in finite time. This leads to feasibility problems for short prediction/control horizon lengths, i.e. to small regions of attraction. Furthermore, from a computational point of view, exact satisfaction of a zero terminal equality constraint in general requires an infinite number of iterations in the optimization and is thus not desirable. The main advantages of a zero terminal constraint are its straightforward application and its conceptual simplicity.
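A direct (single-shooting) treatment of problem (3) with a zero terminal constraint of the kind discussed above can be sketched as follows. The input is parameterized as piecewise constant, the equality constraint is softened into a large quadratic penalty, and a crude coordinate search replaces a real NLP solver; the system ẋ = x + u, the weights and all names are illustrative assumptions.

```python
# Single shooting for problem (3) with a softened zero terminal constraint:
# piecewise-constant input u_0..u_{N-1}, Euler simulation of the dynamics,
# and a simple coordinate search. Everything here is an illustrative stand-in
# for a proper direct-method NLP solver.

DT = 0.01
T_P = 1.0
N = 5                          # number of piecewise-constant input pieces
STEPS = round(T_P / DT / N)    # Euler steps per piece
MU = 1000.0                    # penalty weight pushing x(T_p) toward 0

def shoot(x0, u_seq):
    """Simulate xdot = x + u and return stage cost + terminal penalty."""
    x, cost = x0, 0.0
    for u in u_seq:
        for _ in range(STEPS):
            cost += (x * x + u * u) * DT
            x += (x + u) * DT
    return cost + MU * x * x

def solve(x0):
    u = [0.0] * N
    best = shoot(x0, u)
    for step in (0.5, 0.25, 0.1, 0.05, 0.01):   # shrinking search steps
        for _ in range(100):                    # bounded number of sweeps
            improved = False
            for j in range(N):
                for cand in (u[j] - step, u[j] + step):
                    cand = max(-1.0, min(1.0, cand))    # keep u_j in U
                    trial = u[:j] + [cand] + u[j + 1:]
                    c = shoot(x0, trial)
                    if c < best:
                        u, best, improved = trial, c, True
            if not improved:
                break
    return u, best

x0 = 0.3
u_opt, cost_opt = solve(x0)
cost_zero = shoot(x0, [0.0] * N)
print(u_opt, cost_opt, cost_zero)
```

Since the dynamics are linear and the cost is quadratic, this particular instance is convex, so even the naive coordinate search reliably drives the terminal state close to zero; for a genuinely nonlinear model only a local solution could be expected, as Remark 4 points out.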
Dual-mode control: One of the first sampled-data NMPC approaches avoiding an infinite horizon or a zero terminal constraint is the so-called dual-mode NMPC approach [71]. Dual-mode is based on the assumption that a local (linear) controller is available for the nonlinear system. Based on this local linear controller, a terminal region and a quadratic terminal penalty term are determined and added to the open-loop optimal control problem, similarly to E and E, such that: 1.) the terminal region is invariant under the local control law, 2.) the terminal penalty term E enforces a decrease in the value function. Furthermore, the prediction horizon is considered as an additional degree of freedom in the optimization. The terminal penalty term E can be seen as an approximation of the infinite horizon cost inside the terminal region E under the local linear control law. Note that dual-mode control is not strictly a pure NMPC controller, since the open-loop optimal control problem is only repeatedly solved until the system state enters the terminal set E, which is achieved in finite time. Once the system state is inside E, the control is switched to the local control law u = Kx, hence the name dual-mode NMPC. The local control is thus utilized to establish asymptotic stability, while the NMPC feedback is used to increase the region of attraction of the local control law.
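The dual-mode idea can be mimicked on a toy example: a predictive controller is used only until the state enters the terminal set E, after which the local law u = Kx takes over. The system ẋ = x + u, the gains, the sets and the grid-search "optimizer" below are all illustrative assumptions, not the construction of [71].

```python
# Dual-mode sketch for xdot = x + u: predictive control outside the terminal
# set E = [-0.2, 0.2], local linear law u = -K x inside (E is invariant under
# it, since then xdot = -x). All numbers are illustrative.

DT = 0.05
T_P = 1.0
DELTA = 0.1
K = 2.0                                     # local law u = -K x stabilizes x + u
U_GRID = [k / 10.0 for k in range(-10, 11)]

def horizon_cost(x0, u):
    """Cost of holding u constant over the horizon, F = x^2 + 0.1 u^2."""
    x, cost = x0, 0.0
    for _ in range(round(T_P / DT)):
        cost += (x * x + 0.1 * u * u) * DT
        x += (x + u) * DT                   # Euler step of the model
    return cost

x, switched = 0.8, False
for _ in range(100):
    if abs(x) <= 0.2:
        switched = True                     # state entered E: use the local law
    u = -K * x if switched else min(U_GRID, key=lambda c: horizon_cost(x, c))
    u = max(-1.0, min(1.0, u))              # respect U = [-1, 1]
    x += (x + u) * DELTA                    # apply input until the next instant
print(x, switched)
```

The predictive part steers the state into E in finite time; once switched, the loop never leaves E, matching the invariance requirement 1.) above.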

Based on the results in [71], it is shown in [12] that switching to the local control law is not necessary to establish stability.

Control Lyapunov function approaches: In the case that E is a global control Lyapunov function for the system, the terminal region constraint x̄(t_i + T_p) ∈ E is actually not necessary. Even if the control Lyapunov function is not globally valid, convergence to the origin can be achieved [50], and it can be established that for increasing prediction horizon length the region of attraction of the infinite horizon NMPC controller is recovered [48, 50]. Approaches using a control Lyapunov function as terminal penalty term and no terminal region constraint are typically referred to as control Lyapunov function based NMPC approaches.

Unified conditions for convergence: Besides the outlined approaches there exists a series of approaches [11, 12, 14, 61, 71] that are based on the consideration of a (virtual) local control law that is able to stabilize the system inside the terminal region, and where the terminal penalty E provides an upper bound on the optimal infinite horizon cost. The following theorem covers most of the existing stability results. It establishes conditions for the convergence of the closed-loop states under sampled-data NMPC. It is a slight modification of the results given in [10, 11, 36]. The proof idea is sketched below, since it conveys the general approach by which convergence and stability are established in NMPC.

Theorem 1. (Convergence of sampled-data NMPC) Suppose that
(a) the terminal region E ⊆ X is closed with 0 ∈ E, and the terminal penalty E(x) ∈ C^1 is positive semi-definite,
(b) for all x ∈ E there exists an (admissible) input u_E : [0, π̄] → U such that x(τ) ∈ E and

∂E/∂x f(x(τ), u_E(τ)) + F(x(τ), u_E(τ)) ≤ 0, ∀τ ∈ [0, π̄], (8)

(c) x(0) ∈ R.
Then for the closed-loop system (1), (5), x(t) → 0 for t → ∞.

Proof. See [26].

Loosely speaking, E is an F-conform local control Lyapunov function in the terminal set E.
The terminal region constraint enforces feasibility at the next sampling instant and allows one, similarly to the infinite horizon case, to show that the value function is strictly decreasing. Thus stability can be established. Note that this result is nonlocal in nature, i.e. there exists a region of attraction R of at least the size of E. Various ways to determine a suitable terminal penalty term and terminal region exist. Examples are the use of a control Lyapunov function as terminal penalty E [49, 50] or the use of a local nonlinear or linear control law to determine a suitable terminal penalty E and a terminal region E [11, 12, 14, 61, 71].
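The decrease condition (8) can be checked numerically for a concrete choice of E, F and a local control law. The scalar system ẋ = x − x³ + u, the stage cost F = x² + u² and the local law u_E = −px below are illustrative assumptions; p solves the scalar Riccati equation of the linearization ẋ = x + u, for which condition (8) then reduces to −2px⁴ ≤ 0.

```python
# Check condition (8) of Theorem 1 for an illustrative scalar system
# xdot = x - x^3 + u with stage cost F = x^2 + u^2. For the linearization
# xdot = x + u, the terminal penalty E(x) = p x^2 with p solving the scalar
# Riccati equation 2p - p^2 + 1 = 0 and the local law u_E(x) = -p x give
# dE/dx * f + F = x^2 (2p - p^2 + 1) - 2 p x^4 = -2 p x^4 <= 0.

import math

p = 1.0 + math.sqrt(2.0)           # positive root of p^2 - 2p - 1 = 0
K = p                              # local law u_E(x) = -K x

def f(x, u):
    return x - x * x * x + u       # illustrative nonlinear dynamics

def F(x, u):                       # stage cost
    return x * x + u * u

def lhs_of_8(x):
    """dE/dx * f(x, u_E(x)) + F(x, u_E(x)) from condition (8)."""
    u = -K * x
    return 2.0 * p * x * f(x, u) + F(x, u)

# sample the (illustrative) terminal region E = [-0.5, 0.5];
# note u_E points inward at the boundary, so E is invariant under u_E
samples = [i / 100.0 for i in range(-50, 51)]
worst = max(lhs_of_8(x) for x in samples)
print(worst)
```

The nonlinear term −x³ only helps here (it adds the negative contribution −2px⁴), which is why the quadratic penalty of the linearization still certifies the decrease on the whole sampled region.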

Remark 5. (Suboptimality) Note that we need the rather strict Assumption 1 on the set R to ensure the existence of a new optimal solution at t_{i+1} based on the existence of an optimal solution at t_i. The existence of an admissible input ũ at t_{i+1} is already guaranteed by the existence of the local controller, i.e. condition (b). In principle the existence of an optimal solution at the next time instant is not really required for the convergence result. The admissible input, which is a concatenation of the remaining old input and the local control, already leads to a decrease in the cost function and thus to convergence. To increase performance from time instant to time instant, one could require that the cost decreases by more than the decrease resulting from an application of the old admissible control, i.e. feasibility implies convergence [12, 79].

Remark 6. (Stabilization of systems that require discontinuous inputs) In principle Theorem 1 allows one to consider the stabilization of systems that can only be stabilized by feedback that is discontinuous in the state [36], e.g. nonholonomic mechanical systems. However, for such systems it is in general rather difficult to determine a suitable terminal region and a terminal penalty term. To weaken the assumptions in this case, it is possible to drop the continuous differentiability requirement on E, requiring merely that E is Lipschitz continuous in E. From Rademacher's theorem [16] it then follows that E is continuously differentiable almost everywhere and that (8) holds for almost all τ, and the proof remains nearly unchanged. More details can be found in [37].

Remark 7. (Special input signals) Basically it is also possible to consider only special classes of input signals, e.g.
one could require that the input is piecewise constant in between sampling instants, or that the input is parameterized as a polynomial in time or as a spline. Modifying Assumption 1 such that the optimal control problem possesses a solution for the considered input class, and assuming that condition (8) holds for the considered inputs, the proof of Theorem 1 remains unchanged. The consideration of such inputs can for example be of interest if only piecewise constant inputs can be implemented on the real system, or if the numerical on-line solution of the optimal control problem allows only the consideration of such inputs. One example of such an expansion is the consideration of piecewise constant inputs as in [61, 62]. So far only conditions for the convergence of the states to the origin were outlined. In many control applications the question of asymptotic stability in the sense of Lyapunov is also of interest. Even though this is possible for the sampled-data setup considered here, we do not go into further details, see e.g. [26, 37]. Concluding, the nominal stability question of NMPC is by now well understood, and a series of NMPC schemes exist that guarantee closed-loop stability.

4 Robustness of Sampled-Data NMPC

The results reviewed so far are based on the assumption that the real system coincides with the model used for prediction, i.e. no model/plant mismatch or external disturbances are present. Clearly, this is very unrealistic, and the development of an NMPC framework that addresses robustness issues is of paramount importance. In general one distinguishes between the inherent robustness properties of NMPC and the design of NMPC controllers that take the uncertainty/disturbances directly into account. Typically, NMPC schemes that directly take uncertainty acting on the system into account are based on game-theoretic considerations. Practically they often require the on-line solution of a min-max problem. A series of different approaches can be distinguished. We do not go into details here and instead refer to [8, 13, 38, 53, 54, 57, 59, 60]. Instead we are interested in the so-called inherent robustness properties of sampled-data NMPC. By inherent robustness we mean the robustness of NMPC to uncertainties/disturbances without taking them directly into account. As has been shown, sampled-data NMPC possesses under certain conditions inherent robustness properties. This property stems from the close relation of NMPC to optimal control. Results on the inherent robustness of instantaneous NMPC can for example be found in [9, 63, 68]. Discrete time results are given in [42, 80], and results for sampled-data NMPC are given in [33, 71]. Typically these results consider additive disturbances of the following form:

ẋ = f(x, u) + p(x, u, w), (9)

where p : R^n × R^m × R^l → R^n describes the model uncertainty/disturbance, and where w ∈ W ⊆ R^l might be an exogenous disturbance acting on the system. However, assuming that f is locally Lipschitz in u, these results can easily be extended to the case of input disturbances.
Such input disturbances are of special interest, since they allow one to capture the influence of the numerical solution of the open-loop optimal control problem. Further examples of input disturbances are neglected fast actuator dynamics, computational delays, or numerical errors in the solution of the underlying optimal control problem. For example, inherent robustness was used in [20, 21] to establish stability of an NMPC scheme that employs approximated solutions of the optimal control problem. Summarizing, some preliminary results on the inherent robustness and on the robust design of NMPC controllers exist. However, these results are either not implementable, since they require a high computational load, or they are not directly applicable due to their restrictive assumptions.
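The min-max formulations mentioned above can be made concrete with a toy sketch; all dynamics, costs, grids, and disturbance levels below are illustrative assumptions, not from the paper. Each candidate input sequence is scored by its worst-case cost over a small set of constant disturbance levels, and the sequence with the best worst case is selected (full min-max NMPC optimizes over far richer disturbance classes):

```python
import numpy as np
from itertools import product

def f(x, u, w):
    # toy disturbed dynamics (placeholder)
    return np.array([x[1], -x[0] - x[1] + u + w])

def cost(u_seq, x0, w, dt=0.1):
    # Euler rollout with quadratic stage and terminal cost
    x, J = np.array(x0, float), 0.0
    for u in u_seq:
        J += dt * (x @ x + u**2)
        x = x + dt * f(x, u, w)
    return J + x @ x

def worst_case_cost(u_seq, x0, w_levels=(-0.1, 0.0, 0.1)):
    # inner maximization over a finite disturbance set
    return max(cost(u_seq, x0, w) for w in w_levels)

# outer minimization: crude enumeration over a coarse input grid
best = min(product([-1.0, 0.0, 1.0], repeat=3),
           key=lambda u_seq: worst_case_cost(u_seq, [1.0, 0.0]))
print("min-max input sequence:", best)
```

Even this crude enumeration shows why min-max schemes are computationally heavy: the cost of every candidate input sequence must be evaluated once per disturbance scenario.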

5 Output Feedback Sampled-Data NMPC

One of the key obstacles for the application of NMPC is that at every sampling instant t_i the system state is required for prediction. However, often not all system states are directly accessible, i.e. only an output

y = h(x, u)  (10)

is directly available for feedback, where y ∈ R^p are the measured outputs and where h : R^n × R^m → R^p maps the state and input to the output. To overcome this problem one typically employs a state observer for the reconstruction of the states. In principle, instead of the optimal feedback (5), the disturbed feedback

u(t; x̂(t_i)) = ū*(t; x̂(t_i)), t ∈ [t_i, t_{i+1})  (11)

is applied. Yet, due to the lack of a general nonlinear separation principle, stability is not guaranteed, even if the state observer and the NMPC controller are both stable. Several researchers have addressed this problem (see [32] for a review). The approach in [19] derives local uniform asymptotic stability of contractive NMPC in combination with a sampled state estimator. In [58], see also [80], asymptotic stability results for observer-based discrete-time NMPC for weakly detectable systems are given. These results allow, in principle, to estimate a (local) region of attraction of the output feedback controller from Lipschitz constants. In [72] an optimization-based moving horizon observer combined with a certain NMPC scheme is shown to lead to (semi-global) closed-loop stability. In [30, 31, 47] semi-global stability results for output feedback NMPC using high-gain observers are derived. Furthermore, in [32], based on the inherent robustness properties of NMPC as outlined in Section 4, conditions on the observer are derived that guarantee, for a broad class of state feedback nonlinear model predictive controllers, that the closed loop is semi-globally practically stable.
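The certainty-equivalence scheme (11), in which the state estimate at each sampling instant replaces the true state in the open-loop optimal control problem, can be sketched as a generic sampled-data loop. The observer and planner below are trivial placeholders, assumed for illustration only; in practice the planner solves (3) and the observer is, e.g., a moving horizon or high-gain observer:

```python
import numpy as np

def sampled_data_output_feedback_loop(x0, f, h, observe, plan,
                                      delta, n_samples, dt=0.01):
    """Certainty-equivalence loop: at each sampling instant t_i the observer
    supplies x_hat(t_i), an open-loop input is planned for x_hat(t_i) as in
    (11), and that input is applied open-loop on [t_i, t_i + delta)."""
    x = np.array(x0, float)
    x_hat = np.zeros_like(x)
    traj = [x.copy()]
    for _ in range(n_samples):
        y = h(x)                          # measurement at t_i
        x_hat = observe(x_hat, y)         # state estimate x_hat(t_i)
        u_ol = plan(x_hat)                # open-loop input signal
        for j in range(int(round(delta / dt))):
            x = x + dt * f(x, u_ol(j * dt))   # apply open-loop until t_{i+1}
        traj.append(x.copy())
    return np.array(traj)

# toy instantiation: scalar integrator, full-information "observer",
# and a constant proportional open-loop plan
f = lambda x, u: np.array([u])
h = lambda x: x[0]
observe = lambda x_hat, y: np.array([y])
plan = lambda x_hat: (lambda tau: -x_hat[0])
traj = sampled_data_output_feedback_loop([1.0], f, h, observe, plan,
                                         delta=0.1, n_samples=50)
print("state at final sampling instant:", traj[-1][0])
```

With the toy scalar integrator the loop drives the state to the origin; swapping in a real observer and an optimizer preserves the structure, which is exactly where the separation-principle difficulties discussed above arise.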
Even though a series of output feedback results for NMPC using observers for state recovery exist, most of these approaches are far away from being implementable. Thus, further research has to address this important question to allow for a practical application of NMPC.

6 A Simple Nonlinear Example

The following example is intended to show some of the inherent properties of sampled-data NMPC and to show how Theorem 1 can be used to design a stabilizing NMPC controller that takes constraints into account. We consider the following second order system [39]:

ẋ_1(t) = x_2(t),  (12a)
ẋ_2(t) = -x_1(t) + x_2(t) sinh(x_1²(t) + x_2²(t)) + u(t),  (12b)

which should be stabilized with the bounded control u(t) ∈ U := {u ∈ R | |u| ≤ 1} for all t ≥ 0, where the stage cost is given by

F(x, u) = x_2² + u².  (13)

According to Theorem 1 we achieve stability if we can find a terminal region E and a C¹ terminal penalty E(x) such that (8) is satisfied. For this we consider the unconstrained infinite horizon optimal control problem for (12) with cost

J(x, u(·)) = ∫₀^∞ (x_2²(τ) + u²(τ)) dτ.  (15)

One can verify that this cost is minimized by the control law

u*(x) = -x_2 e^{x_1² + x_2²},  (14)

and that the associated value function, which will be used as terminal penalty term, is given by

E(x) := V(x) = e^{x_1² + x_2²} - 1.  (16)

It remains to find a suitable terminal region. According to Theorem 1 (b), for all x ∈ E there must exist an open-loop input which satisfies the constraints such that (8) is satisfied. If we define E as

E := {x ∈ R² | E(x) ≤ α},  (17)

we know that along solution trajectories of the closed-loop system controlled by u*(x), i.e. ẋ = f(x, u*), the following holds:

(∂E/∂x) f(x, u*(x)) + F(x, u*(x)) = 0;  (18)

however, α must be chosen such that u*(x) ∈ U. It can be verified that u*(x) ∈ U for all x ∈ E if α = 1/β - 1, where β satisfies 1 - β e^{β²} = 0. The derived terminal penalty term E(x) and the terminal region E satisfy the conditions of Theorem 1; thus the resulting NMPC controller should be able to stabilize the closed loop. The resulting NMPC controller, with the prediction horizon set to T_p = 2, is compared to a feedback linearizing controller and to the optimal controller (14) (where the input of both is limited to the set U by saturation). The feedback linearizing controller used is given by

u_Fl(x) := -x_2 (1 + sinh(x_1² + x_2²)),  (19)

which stabilizes the system globally if the input is unconstrained. The actually implemented input for the feedback linearizing controller (and the unconstrained optimal controller (14)) is given by

u(x) = sign(u_Fl(x)) min{1, |u_Fl(x)|},  (20)

where the sign operator is defined as usual, i.e. sign(x) := -1 for x < 0 and 1 for x ≥ 0. For the NMPC controller the sampling instants are given by an equidistant partition of the time axis, i.e. π = (t_i) with t_{i+1} = t_i + δ and t_0 = 0, where the sampling time is δ = 0.1. The open-loop optimal control problem (3) is solved by a direct solution method. Specifically, the input signal is parameterized as piecewise constant with a time discretization of 0.05 over the prediction horizon, i.e. at every sampling instant an optimization problem with 40 free variables is solved. Figure 2 shows the simulation results in the phase plane for the initial condition x_1(0) = -1.115, x_2(0) = 0.2 for all three controllers.

Fig. 2. Phase plot x_1 over x_2 starting from the initial condition x(0) = [-1.115, 0.2] for the NMPC controller (black solid), the saturated feedback linearizing controller (dark gray solid) and the saturated optimal controller (gray solid). The inner ellipsoid (gray dashed) is the border of the terminal region E of the NMPC controller, while the outer curve (black dashed) marks the points for which the optimal controller u*(x) just satisfies the input constraint (saturation not active).

Note that the initial condition is such that for all controllers the input constraints are not active at the beginning. However, after some time the maximum applicable input is reached, i.e. the saturation in (20) becomes active. As can be seen, neither the optimal controller nor the feedback linearizing controller is able to stabilize the system from the considered initial condition. In comparison, the NMPC controller is able to stabilize the system while meeting the input constraints (see Figure 3).

Fig. 3. Simulation results starting from the initial condition x(0) = [-1.115, 0.2] for the NMPC controller (black solid), the saturated feedback linearizing controller (dark gray solid) and the saturated optimal controller (gray solid).

Note that inside the terminal region the NMPC controller and the optimal control law u*(x) coincide, since the constraints are not active and since (18) is satisfied with equality. Thus, the terminal penalty term E(x) can be seen as an approximation of the cost that accrues up to infinity. As this example shows, if a value function/Lyapunov function and a local controller, as well as the corresponding region of attraction, are known, NMPC can be utilized to increase the overall region of attraction of the closed loop while satisfying the input and state constraints.

7 Inverted Pendulum Benchmark Example

As a second example, underlining the achievable performance in the case of input and state constraints, we consider the benchmark inverted pendulum on a cart system (Figure 4)

ẋ(t) = A x(t) + B u(t) + G z(t)  (21)

around its upright position. The variable x_1 denotes the horizontal speed of the pendulum, x_2 the horizontal displacement of the pendulum, and x_3

the horizontal speed of the cart. The load z represents a horizontal force on the pendulum which is persistent with unknown but bounded magnitude. Furthermore, u is the force applied by the actuator on the cart, which is constrained in magnitude by |u| ≤ 1.25 and in slew rate by |du(t)/dt| ≤ 2 s⁻¹. In order to take the slew rate constraint on the control input into account, the system (21) is augmented by an integrator at the control input. Thus, the input constraint |u| ≤ 1.25 is transformed into a state constraint on the new state, i.e. |ξ_4| ≤ 1.25. With the state ξ = [ξ_1, ξ_2, ξ_3, ξ_4]^T = [x_1, x_2, x_3, u]^T and the new control input v(t) = du(t)/dt one obtains the augmented system

ξ̇(t) = A_a ξ(t) + B_a v(t) + G_a z(t).  (22)

Therefore, the constraints of the system (21) become the state constraint |ξ_4| ≤ 1.25 and the input constraint |v(t)| ≤ 2 s⁻¹ for the system (22). Note that the constraints of the system (22) can be cast in the optimal control problem (3).

Fig. 4. Inverted pendulum on a cart.

In the following, two control problems are considered. The control objective of the first problem is to track a reference signal r, while the control objective of the second problem is to stabilize the system under the influence of a disturbance z. For both control problems the stage cost is chosen as F(ξ, v) = (ξ - ξ_s)^T Q (ξ - ξ_s) + εv², where ε is a small positive parameter and ξ_s is the set point, which depends on the reference signal r and the disturbance z. The parameter

ε in the stage cost is chosen small in order to recover the classical quadratic stage cost on the state x and the input u. To guarantee closed-loop stability, the terminal cost E and the terminal region E are calculated off-line by a procedure as in the quasi-infinite horizon model predictive control scheme described in [12, 15]. The resulting terminal cost is given by E(ξ) = (ξ - ξ_s)^T P (ξ - ξ_s), and the terminal region is given by

E = {ξ ∈ R⁴ | E(ξ) ≤ 3.2}.  (23)

Note that the design of the terminal penalty term and the terminal region constraint is rather easy, since the system itself is linear. Furthermore, the control and prediction horizon is chosen as T_P = 6, and a fixed sampling time δ is used.

7.1 Tracking

In the following the tracking problem is studied. The control objective is that the state variable x_1 asymptotically tracks the reference signal r. However, the tracking problem cannot directly be solved via the NMPC controller with the optimal control problem (3). Therefore, the tracking problem is considered as a sequence of set point changes. The set points of the system (22) depend on the reference signal r, i.e. ξ_s = [r 0 r 0]^T. Figure 5 shows the closed-loop system states x, the control input u and the reference signal r: the reference signal r is asymptotically tracked while satisfying the constraints.

7.2 Disturbance Attenuation

In the following the task is to stabilize the state x_1 under a persistent disturbance z with unknown but bounded magnitude. It is assumed that the full state ξ can be measured, but not the disturbance z. Also in this control problem the NMPC controller with the optimal control problem (3) cannot directly be applied to stabilize the state x_1 under the disturbance z.
A typical approach to solving such disturbance attenuation problems in model predictive control is to estimate the disturbance z via an observer and to use the estimated disturbance ẑ in the prediction of the model predictive controller. The disturbance z can be estimated via the observer

Fig. 5. Simulation results of r (gray solid), x_1 (black solid), x_2 (gray solid), x_3 (black solid), and u (black solid) for the tracking problem.

d/dt φ̂(t) = A_o φ̂(t) + B_o v(t) + L(y(t) - ŷ(t)),
ŷ(t) = C_o φ̂(t),  (24)

where φ̂ = [ξ̂ ẑ]^T is the augmented state. The observer gain L is chosen such that the disturbance z is estimated sufficiently fast in order to obtain a good performance. Figure 6 shows the closed-loop system states x, the control input u and the disturbance z. As can be seen, the state x_1 is asymptotically stabilized under the disturbance z while satisfying the constraints. In summary, in all considered cases NMPC shows good performance while satisfying the constraints.
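The structure of the disturbance observer (24), a Luenberger observer on the state augmented with a constant-disturbance model, can be sketched on a scalar toy plant; the matrices and gain below are illustrative placeholders, not the pendulum's observer data:

```python
import numpy as np

def disturbance_observer_step(phi_hat, y, v, Aa, Ba, C, L, dt):
    """One Euler step of a Luenberger observer on the state augmented with
    a constant-disturbance model (dz/dt = 0), mirroring (24)."""
    y_hat = C @ phi_hat
    phi_dot = Aa @ phi_hat + Ba @ v + L @ (y - y_hat)
    return phi_hat + dt * phi_dot

# Illustrative scalar plant x' = -x + v + z with measurement y = x.
# Augmented state phi = [x, z]; matrices and gain are placeholder choices.
Aa = np.array([[-1.0, 1.0], [0.0, 0.0]])
Ba = np.array([[1.0], [0.0]])
C  = np.array([[1.0, 0.0]])
L  = np.array([[3.0], [2.0]])   # places the error dynamics' poles in the LHP

dt, T = 0.01, 8.0
x, z = 0.0, 0.5                 # true state and (unknown) constant disturbance
phi_hat = np.zeros(2)
for _ in range(int(round(T / dt))):
    y = np.array([x])
    v = np.array([0.0])         # no control input in this illustration
    phi_hat = disturbance_observer_step(phi_hat, y, v, Aa, Ba, C, L, dt)
    x += dt * (-x + v[0] + z)   # plant step
print("disturbance estimate:", phi_hat[1])   # converges toward z = 0.5
```

In the paper's setting φ̂ collects the augmented pendulum state and the disturbance estimate; here a first-order plant plays that role, and the estimate ẑ would then be fed into the NMPC prediction model.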

Fig. 6. Simulation results of z (gray solid), x_1 (black solid), x_2 (gray solid), x_3 (black solid), and u (black solid) for the disturbance attenuation problem.

8 Conclusions

Model predictive control, especially linear model predictive control, is by now widely applied in practice. However, increasing productivity demands, tighter environmental regulations, higher quality specifications and demanding economic considerations require operating processes over a wide range of operating conditions, for which linear models are often not adequate. This inadequacy has led in recent years to an increased theoretical and practical interest in NMPC. In this paper we reviewed the main principles and the existing results of sampled-data NMPC for continuous time systems subject to constraints. As outlined, in NMPC an open-loop optimal control problem is solved repeatedly at fixed sampling instants considering the current system state, and the resulting control is applied open-loop for a short time. Since NMPC is based on an open-loop optimal control problem, it allows the direct consideration of a nonlinear system model and the inclusion of constraints on states and inputs. As outlined, a series of questions for NMPC, such as the stability of the closed loop, are by now well understood. Nevertheless, many open questions remain before NMPC can be applied successfully in practice.

References

1. F. Allgöwer, T.A. Badgwell, J.S. Qin, J.B. Rawlings, and S.J. Wright. Nonlinear predictive control and moving horizon estimation – an introductory overview. In P.M. Frank, editor, Advances in Control, Highlights of ECC 99. Springer, London.
2. R.A. Bartlett, A. Wächter, and L.T. Biegler. Active set vs. interior point strategies for model predictive control. In Proc. Amer. Contr. Conf., Chicago, Il.
3. R. Bellman. Dynamic Programming. Princeton University Press, Princeton, New Jersey.
4. L.D. Berkovitz. Optimal Control Theory. Springer-Verlag, New York.
5. L. Biegler. Efficient solution of dynamic optimization and NMPC problems. In F. Allgöwer and A. Zheng, editors, Nonlinear Predictive Control. Birkhäuser, Basel.
6. L.T. Biegler and J.B. Rawlings. Optimization approaches to nonlinear model predictive control. In W.H. Ray and Y. Arkun, editors, Proc. 4th International Conference on Chemical Process Control – CPC IV. AIChE, CACHE.
7. T. Binder, L. Blank, H.G. Bock, R. Burlisch, W. Dahmen, M. Diehl, T. Kronseder, W. Marquardt, J.P. Schlöder, and O. von Stryk. Introduction to model based optimization of chemical processes on moving horizons. In M. Groetschel, S.O. Krumke, and J. Rambau, editors, Online Optimization of Large Scale Systems: State of the Art. Springer, Berlin.
8. R. Blauwkamp and T. Basar. A receding-horizon approach to robust output feedback control for nonlinear systems. In Proc. 38th IEEE Conf. Decision Contr., San Diego.
9. C.C. Chen and L. Shaw. On receding horizon feedback control. Automatica, 18(3).
10. H. Chen. Stability and Robustness Considerations in Nonlinear Model Predictive Control. Fortschr.-Ber. VDI Reihe 8. VDI Verlag, Düsseldorf.
11. H. Chen and F. Allgöwer. Nonlinear model predictive control schemes with guaranteed stability. In R. Berber and C. Kravaris, editors, Nonlinear Model Based Process Control. Kluwer Academic Publishers, Dordrecht.
12. H. Chen and F. Allgöwer. A quasi-infinite horizon nonlinear model predictive control scheme with guaranteed stability. Automatica, 34(10).
13. H. Chen, C.W. Scherer, and F. Allgöwer. A game theoretic approach to nonlinear robust receding horizon control of constrained systems. In Proc. Amer. Contr. Conf., Albuquerque.
14. W. Chen, D.J. Ballance, and J. O'Reilly. Model predictive control of nonlinear systems: Computational burden and stability. IEE Proceedings, Part D, 147(4).
15. W. Chen, D.J. Ballance, and J. O'Reilly. Optimisation of attraction domains of nonlinear MPC via LMI methods. In Proc. Amer. Contr. Conf., Arlington.
16. F.H. Clarke, Y.S. Ledyaev, R.J. Stern, and P.R. Wolenski. Nonsmooth Analysis and Control Theory. Number 178 in Graduate Texts in Mathematics. Springer Verlag, New York, 1998.

17. G. De Nicolao, L. Magni, and R. Scattolini. Stability and robustness of nonlinear receding horizon control. In F. Allgöwer and A. Zheng, editors, Nonlinear Predictive Control. Birkhäuser, Basel.
18. N.M.C. de Oliveira and L.T. Biegler. An extension of Newton-type algorithms for nonlinear process control. Automatica, 31(2).
19. S. de Oliveira Kothare and M. Morari. Contractive model predictive control for constrained nonlinear systems. IEEE Trans. Aut. Control, 45(6).
20. M. Diehl, R. Findeisen, F. Allgöwer, J.P. Schlöder, and H.G. Bock. Stability of nonlinear model predictive control in the presence of errors due to numerical online optimization. In Proc. 43th IEEE Conf. Decision Contr., Maui.
21. M. Diehl, R. Findeisen, H.G. Bock, J.P. Schlöder, and F. Allgöwer. Nominal stability of the real-time iteration scheme for nonlinear model predictive control. IEE Control Theory Appl., 152(3).
22. M. Diehl, R. Findeisen, Z. Nagy, H.G. Bock, J.P. Schlöder, and F. Allgöwer. Real-time optimization and nonlinear model predictive control of processes governed by differential-algebraic equations. J. Proc. Contr., 4(12).
23. M. Diehl, R. Findeisen, S. Schwarzkopf, I. Uslu, F. Allgöwer, H.G. Bock, and J.P. Schlöder. An efficient approach for nonlinear model predictive control of large-scale systems. Part I: Description of the methodology. Automatisierungstechnik, 12.
24. M. Diehl, R. Findeisen, S. Schwarzkopf, I. Uslu, F. Allgöwer, H.G. Bock, and J.P. Schlöder. An efficient approach for nonlinear model predictive control of large-scale systems. Part II: Experimental evaluation considering the control of a distillation column. Automatisierungstechnik, 1:22-29.
25. A.M. Elaiw and É. Gyurkovics. Multirate sampling and delays in receding horizon stabilization of nonlinear systems. In Proc. 16th IFAC World Congress, Prague, Czech Republic.
26. R. Findeisen. Nonlinear Model Predictive Control: A Sampled-Data Feedback Perspective. Fortschr.-Ber. VDI Reihe 8 Nr. 1087. VDI Verlag, Düsseldorf.
27. R. Findeisen and F. Allgöwer. Stabilization using sampled-data open-loop feedback – a nonlinear model predictive control perspective. In Proc. Symposium on Nonlinear Control Systems, NOLCOS 2004, Stuttgart, Germany.
28. R. Findeisen, H. Chen, and F. Allgöwer. Nonlinear predictive control for setpoint families. In Proc. Amer. Contr. Conf., Chicago.
29. R. Findeisen, M. Diehl, I. Uslu, S. Schwarzkopf, F. Allgöwer, H.G. Bock, J.P. Schlöder, and E.D. Gilles. Computation and performance assessment of nonlinear model predictive control. In Proc. 42th IEEE Conf. Decision Contr., Las Vegas.
30. R. Findeisen, L. Imsland, F. Allgöwer, and B.A. Foss. Output feedback nonlinear predictive control – a separation principle approach. In Proc. of 15th IFAC World Congress, Barcelona, Spain. Paper ID 2204 on CD-ROM.
31. R. Findeisen, L. Imsland, F. Allgöwer, and B.A. Foss. Output feedback stabilization for constrained systems with nonlinear model predictive control. Int. J. of Robust and Nonlinear Control, 13(3-4).
32. R. Findeisen, L. Imsland, F. Allgöwer, and B.A. Foss. State and output feedback nonlinear model predictive control: An overview. Europ. J. Contr., 9(2-3), 2003.


Research Article Stability Analysis for Higher-Order Adjacent Derivative in Parametrized Vector Optimization Hindawi Publishing Corporation Journal of Inequalities and Applications Volume 2010, Article ID 510838, 15 pages doi:10.1155/2010/510838 Research Article Stability Analysis for Higher-Order Adjacent Derivative

More information

Gaussian Process Model Based Predictive Control

Gaussian Process Model Based Predictive Control Gaussian Process Model Based Predictive Control Juš Kocijan, Roderick Murray-Smith, Carl Edward Rasmussen, Agathe Girard Abstract Gaussian process models provide a probabilistic non-parametric modelling

More information

Load Balancing and Switch Scheduling

Load Balancing and Switch Scheduling EE384Y Project Final Report Load Balancing and Switch Scheduling Xiangheng Liu Department of Electrical Engineering Stanford University, Stanford CA 94305 Email: liuxh@systems.stanford.edu Abstract Load

More information

Lectures 5-6: Taylor Series

Lectures 5-6: Taylor Series Math 1d Instructor: Padraic Bartlett Lectures 5-: Taylor Series Weeks 5- Caltech 213 1 Taylor Polynomials and Series As we saw in week 4, power series are remarkably nice objects to work with. In particular,

More information

Infinitely Repeated Games with Discounting Ù

Infinitely Repeated Games with Discounting Ù Infinitely Repeated Games with Discounting Page 1 Infinitely Repeated Games with Discounting Ù Introduction 1 Discounting the future 2 Interpreting the discount factor 3 The average discounted payoff 4

More information

Adaptive Control Using Combined Online and Background Learning Neural Network

Adaptive Control Using Combined Online and Background Learning Neural Network Adaptive Control Using Combined Online and Background Learning Neural Network Eric N. Johnson and Seung-Min Oh Abstract A new adaptive neural network (NN control concept is proposed with proof of stability

More information

24. The Branch and Bound Method

24. The Branch and Bound Method 24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no

More information

CHAPTER 1 Splines and B-splines an Introduction

CHAPTER 1 Splines and B-splines an Introduction CHAPTER 1 Splines and B-splines an Introduction In this first chapter, we consider the following fundamental problem: Given a set of points in the plane, determine a smooth curve that approximates the

More information

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation

CONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in

More information

15 Limit sets. Lyapunov functions

15 Limit sets. Lyapunov functions 15 Limit sets. Lyapunov functions At this point, considering the solutions to ẋ = f(x), x U R 2, (1) we were most interested in the behavior of solutions when t (sometimes, this is called asymptotic behavior

More information

Stochastic Inventory Control

Stochastic Inventory Control Chapter 3 Stochastic Inventory Control 1 In this chapter, we consider in much greater details certain dynamic inventory control problems of the type already encountered in section 1.3. In addition to the

More information

t := maxγ ν subject to ν {0,1,2,...} and f(x c +γ ν d) f(x c )+cγ ν f (x c ;d).

t := maxγ ν subject to ν {0,1,2,...} and f(x c +γ ν d) f(x c )+cγ ν f (x c ;d). 1. Line Search Methods Let f : R n R be given and suppose that x c is our current best estimate of a solution to P min x R nf(x). A standard method for improving the estimate x c is to choose a direction

More information

Functional Optimization Models for Active Queue Management

Functional Optimization Models for Active Queue Management Functional Optimization Models for Active Queue Management Yixin Chen Department of Computer Science and Engineering Washington University in St Louis 1 Brookings Drive St Louis, MO 63130, USA chen@cse.wustl.edu

More information

Figure 2.1: Center of mass of four points.

Figure 2.1: Center of mass of four points. Chapter 2 Bézier curves are named after their inventor, Dr. Pierre Bézier. Bézier was an engineer with the Renault car company and set out in the early 196 s to develop a curve formulation which would

More information

Separation Properties for Locally Convex Cones

Separation Properties for Locally Convex Cones Journal of Convex Analysis Volume 9 (2002), No. 1, 301 307 Separation Properties for Locally Convex Cones Walter Roth Department of Mathematics, Universiti Brunei Darussalam, Gadong BE1410, Brunei Darussalam

More information

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION - Vol. V - Relations Between Time Domain and Frequency Domain Prediction Error Methods - Tomas McKelvey

CONTROL SYSTEMS, ROBOTICS, AND AUTOMATION - Vol. V - Relations Between Time Domain and Frequency Domain Prediction Error Methods - Tomas McKelvey COTROL SYSTEMS, ROBOTICS, AD AUTOMATIO - Vol. V - Relations Between Time Domain and Frequency Domain RELATIOS BETWEE TIME DOMAI AD FREQUECY DOMAI PREDICTIO ERROR METHODS Tomas McKelvey Signal Processing,

More information

Reliability Guarantees in Automata Based Scheduling for Embedded Control Software

Reliability Guarantees in Automata Based Scheduling for Embedded Control Software 1 Reliability Guarantees in Automata Based Scheduling for Embedded Control Software Santhosh Prabhu, Aritra Hazra, Pallab Dasgupta Department of CSE, IIT Kharagpur West Bengal, India - 721302. Email: {santhosh.prabhu,

More information

Solution of Linear Systems

Solution of Linear Systems Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start

More information

Clustering and scheduling maintenance tasks over time

Clustering and scheduling maintenance tasks over time Clustering and scheduling maintenance tasks over time Per Kreuger 2008-04-29 SICS Technical Report T2008:09 Abstract We report results on a maintenance scheduling problem. The problem consists of allocating

More information

Increasing for all. Convex for all. ( ) Increasing for all (remember that the log function is only defined for ). ( ) Concave for all.

Increasing for all. Convex for all. ( ) Increasing for all (remember that the log function is only defined for ). ( ) Concave for all. 1. Differentiation The first derivative of a function measures by how much changes in reaction to an infinitesimal shift in its argument. The largest the derivative (in absolute value), the faster is evolving.

More information

Identification algorithms for hybrid systems

Identification algorithms for hybrid systems Identification algorithms for hybrid systems Giancarlo Ferrari-Trecate Modeling paradigms Chemistry White box Thermodynamics System Mechanics... Drawbacks: Parameter values of components must be known

More information

Dynamic Real-time Optimization with Direct Transcription and NLP Sensitivity

Dynamic Real-time Optimization with Direct Transcription and NLP Sensitivity Dynamic Real-time Optimization with Direct Transcription and NLP Sensitivity L. T. Biegler, R. Huang, R. Lopez Negrete, V. Zavala Chemical Engineering Department Carnegie Mellon University Pittsburgh,

More information

An optimal transportation problem with import/export taxes on the boundary

An optimal transportation problem with import/export taxes on the boundary An optimal transportation problem with import/export taxes on the boundary Julián Toledo Workshop International sur les Mathématiques et l Environnement Essaouira, November 2012..................... Joint

More information

The QOOL Algorithm for fast Online Optimization of Multiple Degree of Freedom Robot Locomotion

The QOOL Algorithm for fast Online Optimization of Multiple Degree of Freedom Robot Locomotion The QOOL Algorithm for fast Online Optimization of Multiple Degree of Freedom Robot Locomotion Daniel Marbach January 31th, 2005 Swiss Federal Institute of Technology at Lausanne Daniel.Marbach@epfl.ch

More information

10. Proximal point method

10. Proximal point method L. Vandenberghe EE236C Spring 2013-14) 10. Proximal point method proximal point method augmented Lagrangian method Moreau-Yosida smoothing 10-1 Proximal point method a conceptual algorithm for minimizing

More information

Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1

Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1 Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1 J. Zhang Institute of Applied Mathematics, Chongqing University of Posts and Telecommunications, Chongqing

More information

DIEF, Department of Engineering Enzo Ferrari University of Modena e Reggio Emilia Italy Online Trajectory Planning for robotic systems

DIEF, Department of Engineering Enzo Ferrari University of Modena e Reggio Emilia Italy Online Trajectory Planning for robotic systems DIEF, Department of Engineering Enzo Ferrari University of Modena e Reggio Emilia Italy Online Trajectory Planning for robotic systems Luigi Biagiotti Luigi Biagiotti luigi.biagiotti@unimore.it Introduction

More information

Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear

More information

Mapping an Application to a Control Architecture: Specification of the Problem

Mapping an Application to a Control Architecture: Specification of the Problem Mapping an Application to a Control Architecture: Specification of the Problem Mieczyslaw M. Kokar 1, Kevin M. Passino 2, Kenneth Baclawski 1, and Jeffrey E. Smith 3 1 Northeastern University, Boston,

More information

Duality of linear conic problems

Duality of linear conic problems Duality of linear conic problems Alexander Shapiro and Arkadi Nemirovski Abstract It is well known that the optimal values of a linear programming problem and its dual are equal to each other if at least

More information

THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS

THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS KEITH CONRAD 1. Introduction The Fundamental Theorem of Algebra says every nonconstant polynomial with complex coefficients can be factored into linear

More information

Moral Hazard. Itay Goldstein. Wharton School, University of Pennsylvania

Moral Hazard. Itay Goldstein. Wharton School, University of Pennsylvania Moral Hazard Itay Goldstein Wharton School, University of Pennsylvania 1 Principal-Agent Problem Basic problem in corporate finance: separation of ownership and control: o The owners of the firm are typically

More information

IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY 1

IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY 1 This article has been accepted for inclusion in a future issue of this journal Content is final as presented, with the exception of pagination IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY 1 An Improved

More information

GenOpt (R) Generic Optimization Program User Manual Version 3.0.0β1

GenOpt (R) Generic Optimization Program User Manual Version 3.0.0β1 (R) User Manual Environmental Energy Technologies Division Berkeley, CA 94720 http://simulationresearch.lbl.gov Michael Wetter MWetter@lbl.gov February 20, 2009 Notice: This work was supported by the U.S.

More information

1 if 1 x 0 1 if 0 x 1

1 if 1 x 0 1 if 0 x 1 Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or

More information

Quasi-static evolution and congested transport

Quasi-static evolution and congested transport Quasi-static evolution and congested transport Inwon Kim Joint with Damon Alexander, Katy Craig and Yao Yao UCLA, UW Madison Hard congestion in crowd motion The following crowd motion model is proposed

More information

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay

Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Information Theory and Coding Prof. S. N. Merchant Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture - 17 Shannon-Fano-Elias Coding and Introduction to Arithmetic Coding

More information

Passive control. Carles Batlle. II EURON/GEOPLEX Summer School on Modeling and Control of Complex Dynamical Systems Bertinoro, Italy, July 18-22 2005

Passive control. Carles Batlle. II EURON/GEOPLEX Summer School on Modeling and Control of Complex Dynamical Systems Bertinoro, Italy, July 18-22 2005 Passive control theory I Carles Batlle II EURON/GEOPLEX Summer School on Modeling and Control of Complex Dynamical Systems Bertinoro, Italy, July 18-22 25 Contents of this lecture Change of paradigm in

More information

OPRE 6201 : 2. Simplex Method

OPRE 6201 : 2. Simplex Method OPRE 6201 : 2. Simplex Method 1 The Graphical Method: An Example Consider the following linear program: Max 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2

More information

Quasi Contraction and Fixed Points

Quasi Contraction and Fixed Points Available online at www.ispacs.com/jnaa Volume 2012, Year 2012 Article ID jnaa-00168, 6 Pages doi:10.5899/2012/jnaa-00168 Research Article Quasi Contraction and Fixed Points Mehdi Roohi 1, Mohsen Alimohammady

More information

Fast Model Predictive Control Using Online Optimization Yang Wang and Stephen Boyd, Fellow, IEEE

Fast Model Predictive Control Using Online Optimization Yang Wang and Stephen Boyd, Fellow, IEEE IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, VOL 18, NO 2, MARCH 2010 267 Fast Model Predictive Control Using Online Optimization Yang Wang and Stephen Boyd, Fellow, IEEE Abstract A widely recognized

More information

Introduction to Engineering System Dynamics

Introduction to Engineering System Dynamics CHAPTER 0 Introduction to Engineering System Dynamics 0.1 INTRODUCTION The objective of an engineering analysis of a dynamic system is prediction of its behaviour or performance. Real dynamic systems are

More information

Cloud Storage and Online Bin Packing

Cloud Storage and Online Bin Packing Cloud Storage and Online Bin Packing Doina Bein, Wolfgang Bein, and Swathi Venigella Abstract We study the problem of allocating memory of servers in a data center based on online requests for storage.

More information

LINEAR ALGEBRA W W L CHEN

LINEAR ALGEBRA W W L CHEN LINEAR ALGEBRA W W L CHEN c W W L Chen, 1997, 2008 This chapter is available free to all individuals, on understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied,

More information

SINGLE-STAGE MULTI-PRODUCT PRODUCTION AND INVENTORY SYSTEMS: AN ITERATIVE ALGORITHM BASED ON DYNAMIC SCHEDULING AND FIXED PITCH PRODUCTION

SINGLE-STAGE MULTI-PRODUCT PRODUCTION AND INVENTORY SYSTEMS: AN ITERATIVE ALGORITHM BASED ON DYNAMIC SCHEDULING AND FIXED PITCH PRODUCTION SIGLE-STAGE MULTI-PRODUCT PRODUCTIO AD IVETORY SYSTEMS: A ITERATIVE ALGORITHM BASED O DYAMIC SCHEDULIG AD FIXED PITCH PRODUCTIO Euclydes da Cunha eto ational Institute of Technology Rio de Janeiro, RJ

More information

Least Squares Estimation

Least Squares Estimation Least Squares Estimation SARA A VAN DE GEER Volume 2, pp 1041 1045 in Encyclopedia of Statistics in Behavioral Science ISBN-13: 978-0-470-86080-9 ISBN-10: 0-470-86080-4 Editors Brian S Everitt & David

More information

Copyright 2014 IEEE. Reprinted from Proceedings of the 2014 European Control Conference

Copyright 2014 IEEE. Reprinted from Proceedings of the 2014 European Control Conference Copyright 2014 IEEE Reprinted from Proceedings of the 2014 European Control Conference Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current

More information

On the D-Stability of Linear and Nonlinear Positive Switched Systems

On the D-Stability of Linear and Nonlinear Positive Switched Systems On the D-Stability of Linear and Nonlinear Positive Switched Systems V. S. Bokharaie, O. Mason and F. Wirth Abstract We present a number of results on D-stability of positive switched systems. Different

More information

Optimal proportional reinsurance and dividend pay-out for insurance companies with switching reserves

Optimal proportional reinsurance and dividend pay-out for insurance companies with switching reserves Optimal proportional reinsurance and dividend pay-out for insurance companies with switching reserves Abstract: This paper presents a model for an insurance company that controls its risk and dividend

More information

By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

By choosing to view this document, you agree to all provisions of the copyright laws protecting it. This material is posted here with permission of the IEEE Such permission of the IEEE does not in any way imply IEEE endorsement of any of Helsinki University of Technology's products or services Internal

More information

Student Project Allocation Using Integer Programming

Student Project Allocation Using Integer Programming IEEE TRANSACTIONS ON EDUCATION, VOL. 46, NO. 3, AUGUST 2003 359 Student Project Allocation Using Integer Programming A. A. Anwar and A. S. Bahaj, Member, IEEE Abstract The allocation of projects to students

More information

FIRST YEAR CALCULUS. Chapter 7 CONTINUITY. It is a parabola, and we can draw this parabola without lifting our pencil from the paper.

FIRST YEAR CALCULUS. Chapter 7 CONTINUITY. It is a parabola, and we can draw this parabola without lifting our pencil from the paper. FIRST YEAR CALCULUS WWLCHENW L c WWWL W L Chen, 1982, 2008. 2006. This chapter originates from material used by the author at Imperial College, University of London, between 1981 and 1990. It It is is

More information

Support Vector Machines with Clustering for Training with Very Large Datasets

Support Vector Machines with Clustering for Training with Very Large Datasets Support Vector Machines with Clustering for Training with Very Large Datasets Theodoros Evgeniou Technology Management INSEAD Bd de Constance, Fontainebleau 77300, France theodoros.evgeniou@insead.fr Massimiliano

More information

The Basics of FEA Procedure

The Basics of FEA Procedure CHAPTER 2 The Basics of FEA Procedure 2.1 Introduction This chapter discusses the spring element, especially for the purpose of introducing various concepts involved in use of the FEA technique. A spring

More information

Using the Theory of Reals in. Analyzing Continuous and Hybrid Systems

Using the Theory of Reals in. Analyzing Continuous and Hybrid Systems Using the Theory of Reals in Analyzing Continuous and Hybrid Systems Ashish Tiwari Computer Science Laboratory (CSL) SRI International (SRI) Menlo Park, CA 94025 Email: ashish.tiwari@sri.com Ashish Tiwari

More information

ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE

ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE ECON20310 LECTURE SYNOPSIS REAL BUSINESS CYCLE YUAN TIAN This synopsis is designed merely for keep a record of the materials covered in lectures. Please refer to your own lecture notes for all proofs.

More information

Microcontroller-based experiments for a control systems course in electrical engineering technology

Microcontroller-based experiments for a control systems course in electrical engineering technology Microcontroller-based experiments for a control systems course in electrical engineering technology Albert Lozano-Nieto Penn State University, Wilkes-Barre Campus, Lehman, PA, USA E-mail: AXL17@psu.edu

More information

Speech at IFAC2014 BACKGROUND

Speech at IFAC2014 BACKGROUND Speech at IFAC2014 Thank you Professor Craig for the introduction. IFAC President, distinguished guests, conference organizers, sponsors, colleagues, friends; Good evening It is indeed fitting to start

More information

ON COMPLETELY CONTINUOUS INTEGRATION OPERATORS OF A VECTOR MEASURE. 1. Introduction

ON COMPLETELY CONTINUOUS INTEGRATION OPERATORS OF A VECTOR MEASURE. 1. Introduction ON COMPLETELY CONTINUOUS INTEGRATION OPERATORS OF A VECTOR MEASURE J.M. CALABUIG, J. RODRÍGUEZ, AND E.A. SÁNCHEZ-PÉREZ Abstract. Let m be a vector measure taking values in a Banach space X. We prove that

More information

MOBILE ROBOT TRACKING OF PRE-PLANNED PATHS. Department of Computer Science, York University, Heslington, York, Y010 5DD, UK (email:nep@cs.york.ac.

MOBILE ROBOT TRACKING OF PRE-PLANNED PATHS. Department of Computer Science, York University, Heslington, York, Y010 5DD, UK (email:nep@cs.york.ac. MOBILE ROBOT TRACKING OF PRE-PLANNED PATHS N. E. Pears Department of Computer Science, York University, Heslington, York, Y010 5DD, UK (email:nep@cs.york.ac.uk) 1 Abstract A method of mobile robot steering

More information

Optimization of Supply Chain Networks

Optimization of Supply Chain Networks Optimization of Supply Chain Networks M. Herty TU Kaiserslautern September 2006 (2006) 1 / 41 Contents 1 Supply Chain Modeling 2 Networks 3 Optimization Continuous optimal control problem Discrete optimal

More information