Predictive Control Algorithms for Nonlinear Systems




Predictive Control Algorithms for Nonlinear Systems

DOCTORAL THESIS for receiving the doctoral degree from the Gh. Asachi Technical University of Iaşi, România

The defense will take place on 15 September 2009

by Mircea Lazăr, born in Iaşi, România

Promoter: Prof. Dr. Mihail Voicu, Corresponding Member of the Romanian Academy

Defense Committee:
Prof. Dr. Vasile-Ion Manta, Chair
Prof. Dr. Mihail Voicu, Promoter, Corresponding Member of the Romanian Academy
Prof. Dr. Ioan Dumitrache, Corresponding Member of the Romanian Academy
Prof. Dr. Vladimir Răsvan
Prof. Dr. Octavian Păstrăvanu

Soției mele (to my wife)

Contents

Acknowledgements 4
Summary 7
1 Introduction 11
  1.1 Model predictive control 11
  1.2 Open problems in stability and robustness of MPC 16
    1.2.1 Stability of MPC 16
    1.2.2 Robust MPC schemes 18
  1.3 Summary of publications 21
  1.4 Basic mathematical notation and definitions 22
2 Lyapunov Functions Subtleties for Discrete-time Systems 25
  2.1 Introduction 25
  2.2 Preliminaries 27
    2.2.1 Stability and input-to-state stability 27
    2.2.2 Lyapunov functions 29
  2.3 Illuminating examples 30
  2.4 ISS tests based on discontinuous USL functions 36
  2.5 Conclusions 41
3 Predictive control of hybrid systems: Input-to-state stability results for suboptimal solutions 43
  3.1 Introduction 43
  3.2 Preliminaries 45
  3.3 MPC scheme set-up 47
  3.4 Input-to-state stability results 49
  3.5 Asymptotic stability results 54
  3.6 Conclusion 55
4 On Input-to-State Stability of Min-max Nonlinear Model Predictive Control 57
  4.1 Introduction 57
    4.1.1 Preliminaries 59
  4.2 Input-to-state stability 59
  4.3 Min-max nonlinear MPC: Problem set-up 64
  4.4 ISpS results for min-max nonlinear MPC 66
  4.5 Main result: ISS dual-mode min-max MPC 68
  4.6 Illustrative example: A nonlinear double integrator 73
  4.7 Conclusions 76
5 Design of the terminal cost: H∞ and min-max MPC 77
  5.1 Introduction 77
  5.2 Preliminaries 78
    5.2.1 Input-to-state stability 79
    5.2.2 Input-to-state stability conditions for min-max robust MPC 79
  5.3 Problem formulation 81
    5.3.1 Existing solutions 82
  5.4 Main results 83
    5.4.1 LMI-based solution 83
    5.4.2 Relation to LMI-based H∞ control design 84
  5.5 Conclusions 87
6 Self-optimizing robust nonlinear MPC 89
  6.1 Introduction 89
  6.2 Preliminary definitions and results 90
    6.2.1 ISS definitions and results 91
    6.2.2 Inherent ISS through continuous and convex control Lyapunov functions 92
  6.3 Problem definition 93
  6.4 Main results 94
    6.4.1 Optimized ISS through convex CLFs 94
    6.4.2 Self-optimizing robust nonlinear MPC 96
    6.4.3 Decentralized formulation 98
    6.4.4 Implementation issues 101
  6.5 Illustrative examples 102
    6.5.1 Example 1: control of a nonlinear system 102
    6.5.2 Example 2: control of a DC-DC converter 103
    6.5.3 Example 3: control of networked nonlinear systems 110
  6.6 Conclusions 112
7 Conclusions 113
  7.1 Contributions 113
    7.1.1 Stability theory for discrete-time systems 113
    7.1.2 Input-to-state stability theory for discrete-time discontinuous systems 114
    7.1.3 Stabilizing nonlinear model predictive control 115
    7.1.4 Robust nonlinear model predictive control 115
    7.1.5 Low complexity nonlinear MPC 116
  7.2 Future research 117
Bibliography 119


Acknowledgements

This thesis presents the results of the research carried out during the period September 2001 - September 2002 and September 2006 - September 2009, under the supervision of Prof. Mihail Voicu, thesis promoter, and in close collaboration with Prof. Octavian Păstrăvanu. The completion of the research that led to the results published in this thesis would not have been possible without the constant support, patience and advice received from Prof. Voicu and Prof. Păstrăvanu, and as such, my gratitude goes to them. I am very grateful to Prof. Ioan Dumitrache and Prof. Vladimir Răsvan for kindly agreeing to participate in the committee of this thesis and in the defense ceremony. Also, I am very grateful to Prof. Vasile-Ion Manta for agreeing to chair the defense committee of this thesis. I would like to thank Prof. Paul van den Bosch for his helpful advice and encouragement. He has always been there for me when I needed his opinion and he has supported me throughout my career as a researcher. This thesis is largely based on a collection of articles published in international peer-reviewed conferences and journals. As most of the articles are joint work with several collaborators, I would like to express my gratitude to all the co-authors. First and foremost I am very grateful to Prof. Maurice Heemels, without whom the research gathered in this thesis would not have been possible. His constant dedication, supervision and professionalism will always be a source of inspiration for me. I would like to thank Prof. Andrew (Andy) R. Teel for his contributions to the research presented in Chapter 2 and for sharing his knowledge. I am also grateful to Dr. David Muñoz de la Peña, Dr. Teodoro Alamo, Dr. Davide M. Raimondo, Prof. Lalo Magni, Dr. Daniel Limon, Prof. Eduardo F. Camacho, Dr. Bas J.P. Roset and Prof. Henk Nijmeijer for their contributions to our joint works. A special thanks goes to my colleague and friend, Dr. Andrej Jokić, who has provided me with constant support and has made important contributions to several research matters. Working together with him has always been a pleasant experience. Special thanks also go to Prof. Alberto Bemporad, my mentor and guide in the MPC world, and to Dr. Stefano Di Cairano, with whom I enjoyed very

much working together and having fun at the conferences. I am very grateful to Prof. Ilya V. Kolmanovsky for his constant support and encouragement and for sharing his knowledge. I am eternally indebted to my wife Raluca, my parents Roxana and Corneliu, my parents-in-law Paulina and Traian, my grandparents on my father's side, Eleonora and Ilie, and my grandparents on my mother's side, Magdalena and Florin, for all their support and love. This thesis is dedicated to my wife.

Mircea Lazăr
Eindhoven, June 2009

Summary

This thesis considers the stabilization and the robust stabilization of discrete-time systems using model predictive control. Model predictive control (MPC) (also referred to as receding horizon control) is a control strategy that offers attractive solutions, already successfully implemented in industry, for the regulation of constrained linear or nonlinear systems. In this thesis, the MPC controller design methodology will be employed for the regulation of constrained discrete-time systems. One of the reasons for the success of MPC algorithms is their ability to handle hard constraints on states/outputs and inputs. Stability and robustness are probably the most studied properties of MPC controllers, as they are indispensable to practical implementation. A complete theory on (robust) stability of MPC has been developed for linear and continuous nonlinear systems. However, these results do not carry over easily to discrete-time discontinuous systems. These challenges will be taken up in this thesis with the purpose of highlighting certain subtleties that arise in stabilization and robust stabilization via model predictive control. As a starting point, in Chapter 2 of this thesis we consider stability analysis of discrete-time discontinuous systems using Lyapunov functions. We demonstrate via simple examples that the classical second method of Lyapunov is precarious for discrete-time discontinuous system dynamics. Also, we indicate that a particular type of Lyapunov condition, slightly stronger than the classical one, is required to establish stability of discrete-time discontinuous systems. Furthermore, we examine the robustness of the stability property when it is attained via a discontinuous Lyapunov function. This is often the case for discrete-time systems in closed-loop with model predictive controllers.
In contrast to existing results based on smooth Lyapunov functions, we develop several robust stability tests, in terms of the input-to-state stability (ISS) property, that explicitly employ an available discontinuous Lyapunov function. The subtleties exposed in Chapter 2 are employed in Chapter 3 to develop a novel model predictive control scheme that achieves input-to-state stabilization of constrained discontinuous nonlinear and hybrid systems. Input-to-state stability is guaranteed when an optimal solution of the MPC optimization problem is attained. Special attention is paid to the effect that sub-optimal solutions have on ISS of the closed-loop system. This issue is

of interest as, firstly, the infimum of MPC optimization problems does not have to be attained and, secondly, numerical solvers usually provide only sub-optimal solutions. An explicit relation is established between the deviation of the predictive control law from the optimum (called the optimality margin) and the resulting deterioration of the ISS property of the closed-loop system. By imposing stronger conditions on the sub-optimal solutions, ISS can even be attained in this case. Revealing this explicit relation is an important result, as it provides an a priori bound on the evolution of the closed-loop system state and leads to conditions that guarantee ISS even in the presence of unaccounted-for sub-optimal solutions. Discrete-time nonlinear systems that are affected, possibly simultaneously, by parametric uncertainties and other disturbance inputs are considered in Chapter 4. The min-max model predictive control methodology is employed to obtain a controller that robustly steers the state of the system towards a desired equilibrium. The aim is to provide a priori sufficient conditions for robust stability of the resulting closed-loop system using the input-to-state stability framework. First, we show that only input-to-state practical stability can be ensured in general for closed-loop min-max MPC systems, and we provide explicit bounds on the evolution of the closed-loop system state. Then, we derive new conditions for guaranteeing ISS of min-max MPC closed-loop systems, using a dual-mode approach. The results developed in Chapter 4 hinge on the fact that a suitable terminal cost that satisfies the developed sufficient conditions for ISS must be a priori available. This problem is addressed in Chapter 5, which presents a novel method for designing the terminal cost and the auxiliary control law (ACL) for robust MPC of uncertain linear systems, such that ISS is a priori guaranteed for the closed-loop system.
The method is based on the solution of a set of linear matrix inequalities (LMIs). An explicit relation is established between the proposed method and H∞ control design. This relation shows that the LMI-based optimal solution of the H∞ synthesis problem solves the terminal cost and ACL problem in min-max MPC, for a particular choice of the stage cost. This result, which was somehow missing in the MPC literature, is of general interest as it connects well-known linear control problems to robust MPC design. In Chapter 6 we start from the observation that the goal of existing design methods for synthesizing control laws that achieve ISS is to a priori guarantee a predetermined closed-loop ISS gain. Consequently, the ISS property, with a predetermined, constant ISS gain, is in this way enforced for all state space trajectories of the closed-loop system and at all time instances. As the existing approaches, which are also employed in the design of MPC schemes

that achieve ISS, can lead to overly conservative solutions along particular trajectories, it is of high interest to develop a control (MPC) design method with the explicit goal of adapting the closed-loop ISS gain depending on the evolution of the state trajectory. Motivated by this, in Chapter 6 we propose a novel method for synthesizing robust MPC schemes with this feature. The method employs convex control Lyapunov functions (CLFs) and disturbance bounds to embed standard ISS conditions using a finite number of inequalities. This leads to a finite dimensional optimization problem that has to be solved on-line, in a receding horizon fashion. The proposed inequalities govern the evolution of the closed-loop state trajectory through the sublevel sets of the CLF. The unique feature of the proposed robust MPC scheme is to allow for the simultaneous on-line (i) computation of a control action that achieves ISS and (ii) minimization of the closed-loop ISS gain depending on the actual state trajectory. As a result, the developed nonlinear MPC scheme is self-optimizing in terms of disturbance attenuation. From the computational point of view, following a particular design recipe, the self-optimizing robust MPC algorithm can be implemented as a single linear program for discrete-time nonlinear systems that are affine in the control variable and the disturbance input. This renders the developed MPC schemes applicable to fast nonlinear systems, which is demonstrated by controlling a Buck-Boost DC-DC converter that requires sampling times of less than a millisecond. Furthermore, we demonstrate that the freedom to optimize the closed-loop ISS gain on-line makes self-optimizing robust MPC suitable for decentralized control of networks of nonlinear systems.
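The single-linear-program implementation mentioned above can be given a stripped-down numerical sketch. Everything below (the linear model, the infinity-norm CLF V(x) = ||x||_inf, the contraction rate rho, and the numbers) is invented for illustration and omits the disturbance terms and ISS-gain optimization of the actual scheme developed in Chapter 6; it only shows the core mechanism by which a convex CLF turns a decrease condition into linear constraints on the input:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative only: a toy input-affine model x+ = A x + B u with
# CLF V(x) = ||x||_inf. Matrices, rho and the state are invented,
# not taken from the thesis.
A = np.array([[1.2, 0.5],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
rho = 0.9                      # required contraction of the CLF
x = np.array([1.0, -0.5])
V = np.max(np.abs(x))          # V(x) = ||x||_inf

# The decrease condition V(A x + B u) <= rho * V(x) is equivalent to
# +-(A x + B u)_i <= rho * V(x) for each component i -- linear in u.
# Decision variables z = (u, t), with t >= |u| minimized.
Ax = A @ x
A_ub, b_ub = [], []
for i in range(A.shape[0]):
    A_ub.append([B[i, 0], 0.0]);  b_ub.append(rho * V - Ax[i])  #  (Ax+Bu)_i <= rho V
    A_ub.append([-B[i, 0], 0.0]); b_ub.append(rho * V + Ax[i])  # -(Ax+Bu)_i <= rho V
A_ub.append([1.0, -1.0]);  b_ub.append(0.0)                     #  u <= t
A_ub.append([-1.0, -1.0]); b_ub.append(0.0)                     # -u <= t

res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (0.0, None)])
u = res.x[0]
print("control:", u, "V(x+):", np.max(np.abs(Ax + B[:, 0] * u)))
```

The LP returns the smallest-magnitude input that still contracts the CLF by the factor rho; in the real scheme the contraction rate and the ISS gain are themselves optimized on-line.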
In conclusion, this thesis contains a series of significant advances in the synthesis of model predictive controllers for discrete-time, possibly discontinuous systems that guarantee stable and robust closed-loop systems. The latter properties are indispensable for any application of these control algorithms in practice. In the set-ups of the MPC algorithms, a clear focus was also on keeping the on-line computational burden low via simpler stabilizing constraints. The example on the control of DC-DC converters showed that application to (very) fast systems comes within reach. This opens up a completely new range of applications, next to the traditional process control of typically slow systems. Therefore, the developed theory represents fertile ground for future practical applications and opens many roads for future research in model predictive control and stability of discrete-time systems as well.

Motto: Imagination is more important than knowledge. - Albert Einstein

1 Introduction

1.1 Model predictive control
1.2 Open problems in stability and robustness of MPC
1.3 Summary of publications
1.4 Basic mathematical notation and definitions

This thesis deals with the synthesis of stabilizing and robust controllers for constrained discrete-time discontinuous nonlinear systems. An appealing solution to the control of these systems is provided by the model predictive control methodology, due to its capability to take constraints into account a priori when computing the control action. Also, since the principles of model predictive control do not depend on the type of model applied for prediction, this methodology can be employed to formulate controller design set-ups for general dynamical systems. However, the properties of such control schemes and the feasibility of their implementation have to be reconsidered in the discontinuous context. In this thesis we focus in particular on stability and robustness. As such, in this chapter we will present a general introduction to the principles of MPC and then focus on open problems related to stability and robustness that will be tackled in the remainder of the thesis.

1.1 Model predictive control

Model predictive control (MPC) (also referred to as receding horizon control) is a control strategy that offers attractive solutions for the regulation of constrained linear or nonlinear systems and, more recently, also for the regulation of discontinuous and hybrid systems. Within a relatively short time, MPC has reached a certain maturity due to the continuously increasing interest shown for this distinctive part of control theory. This is illustrated by its successful implementation in industry and by many excellent articles and books as well. See, for example, (Garcia et al., 1989; Mayne et al., 2000; Qin and Badgwell, 2003; Findeisen et al., 2003; Camacho and Bordons, 2004) and the references therein.

The initial MPC algorithms utilized only linear input/output models. In this framework, several solutions have been proposed both in the industrial world and in the academic world: IDCOM - Identification and command (later MAC - Model algorithmic control) at ADERSA (Richalet et al., 1978) and DMC - Dynamic matrix control at Shell (Cutler and Ramaker, 1980), which use step and impulse response models; (from the adaptive control branch) MUSMAR - Multistep multivariable adaptive regulator (Mosca et al., 1984), the first MPC formulation based on state-space linear models; and EPSAC - Extended predictive self-adaptive control (De Keyser and van Cauwenberghe, 1985). Generalized frameworks for setting up MPC algorithms based on input/output models were also developed later on, of which the most significant ones are GPC - Generalized predictive control (Clarke et al., 1987) and UPC - Unified predictive control (Soeterboek, 1992). The next step of the academic community was to extend the MPC algorithms based on state-space models to continuous (smooth) nonlinear systems, which includes the following approaches: nonlinear MPC with zero state terminal equality constraint (Keerthi and Gilbert, 1988), dual-mode nonlinear MPC (Michalska and Mayne, 1993) and quasi-infinite horizon nonlinear MPC (Chen and Allgöwer, 1996). More recent general set-ups for synthesizing stabilizing MPC algorithms for smooth nonlinear systems can be found in (Magni et al., 2001; Grimm et al., 2005). The first MPC approach for the control of discontinuous and hybrid systems was reported in the seminal work (Bemporad and Morari, 1999), which was followed by many other researchers, see, for example, (Kerrigan and Mayne, 2002; Grieder et al., 2005; Lazar et al., 2006; Baotic et al., 2006) and the references therein. One of the reasons for the fruitful achievements of MPC algorithms consists in the intuitive way of addressing the control problem.
In comparison with conventional control, which often uses a pre-computed state or output feedback control law, predictive control uses a discrete-time model of the system (although continuous-time models can also be employed in the theory of MPC, see (Mayne et al., 2000), most MPC algorithms and theory consider discrete-time models, as this yields a tractable optimization problem) to obtain an estimate (prediction) of its future behavior. This is done by applying a set of input sequences to a model, with the measured state/output as initial condition, while taking into account constraints. An optimization problem built around a performance-oriented cost function is then solved to choose an optimal sequence of controls from all feasible sequences. The feedback control law is then obtained in a receding horizon manner by applying to the system only the first element of the computed

sequence of optimal controls, and repeating the whole procedure at the next discrete-time step. Summarizing the above discussion, one can conclude that MPC is built around the following key principles:

- The explicit use of a process model for calculating predictions of the future plant behavior over a finite horizon in time;
- The optimization of an objective function subject to constraints, which yields a finite optimal sequence of controls;
- The receding horizon strategy, according to which only the first element of the optimal sequence of controls is applied on-line and the optimization problem is solved again at the next time instant with the measured state as initial condition.

The MPC methodology involves solving on-line an open-loop finite horizon optimal control problem subject to input, state and/or output constraints. A graphical illustration of this concept is depicted in Figure 1.1. At each discrete-time instant k, the measured variables and the process model (linear, nonlinear or hybrid) are used to predict (calculate) the future behavior of the controlled plant over a specified time horizon, which is usually called the prediction horizon and is denoted by N. This is achieved by considering a future control scenario as the input sequence applied to the process model, which must be calculated such that certain desired constraints and objectives are fulfilled. To do that, a cost function is minimized subject to constraints, yielding an optimal sequence of controls over a specified time horizon, which is usually called the control horizon and is denoted by N_u. According to the receding horizon control strategy, only the first element of the computed optimal sequence of controls is then applied to the plant and this sequence of steps is repeated at the next discrete-time instant, for the updated state.
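The prediction step described above amounts to rolling the model forward from the measured state under a candidate input sequence. A minimal sketch of that rollout follows; the particular model g below is invented for illustration and is not an example from the thesis:

```python
import numpy as np

# Invented discrete-time prediction model x(k+1) = g(x(k), u(k));
# illustrative only, not taken from the thesis.
def g(x, u):
    return np.array([x[0] + 0.1 * x[1], 0.9 * x[1] + 0.1 * u])

def predict(x0, u_seq):
    """Apply a candidate input sequence to the model, starting from the
    measured state x0, and return the predicted states x(1|k)..x(N|k)."""
    x = np.asarray(x0, dtype=float)
    traj = []
    for u in u_seq:
        x = g(x, u)
        traj.append(x.copy())
    return traj

# One predicted trajectory over a horizon of N = 3 for a constant input.
states = predict([1.0, 0.0], [0.5, 0.5, 0.5])
print([np.round(s, 3) for s in states])
```

The optimization problem formalized next searches over such input sequences for the one minimizing a cost evaluated along the predicted trajectory.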
The MPC methodology can be summarized formally as the following constrained optimization problem:

Problem 1.1.1 Let N ≥ 1 be given and let X ⊆ R^n and U ⊆ R^m be sets that implement state and input constraints, respectively, and contain the origin in their interior. The prediction model is x(k+1) = g(x(k), u(k)), k ≥ 0, with g : R^n × R^m → R^n a nonlinear, possibly discontinuous function with g(0, 0) = 0. Let F : R^n → R_+ with F(0) = 0 and L : R^n × R^m → R_+ with L(0, 0) = 0 be known mappings. At every discrete-time instant k ≥ 0

[Figure 1.1: A graphical illustration of Model Predictive Control. The original figure shows, over past and future/predictions: the predicted state x_k, the closed-loop state x_k, the open-loop and closed-loop inputs u_k, the state and input constraints, the initial state x_0, the desired equilibrium point x_r, the control horizon k to k+N_u and the prediction horizon k to k+N.]

let x(k) ∈ X be the measured state, let x(0|k) := x(k) and minimize the cost function

J(x(k), u(k)) := F(x(N|k)) + Σ_{i=0}^{N-1} L(x(i|k), u(i|k)),

over all input sequences u(k) := (u(0|k), ..., u(N-1|k)) subject to the constraints:

x(i+1|k) = g(x(i|k), u(i|k)), i = 0, ..., N-1,
x(i|k) ∈ X, for all i = 1, ..., N,
u(i|k) ∈ U, for all i = 0, ..., N-1.

In Problem 1.1.1, F(·), L(·, ·) and N denote the terminal cost, the stage cost and the prediction horizon, respectively. The term x(i|k) denotes the predicted state at future discrete-time instant i ∈ [0, N], obtained at discrete-time

instant k ≥ 0 by applying the input sequence {u(i|k)}_{i=0,...,N-1} to a model of the system, i.e. x(k+1) = g(x(k), u(k)), with the measured state x(k) as initial condition, i.e. x(0|k) = x(k). The control actions in the sequence {u(i|k)}_{i=0,...,N-1} constitute the optimization variables. Suppose that the above MPC optimization problem is solvable and let {u*(i|k)}_{i=0,...,N-1} denote an optimal solution. The MPC control action is obtained as follows:

u_MPC(x(k)) := u*(0|k), k ≥ 0.

Although the key principles of MPC are independent of the type of system, e.g. linear, nonlinear or hybrid, the computational complexity of the MPC constrained optimization problem, as well as the stability issues, strongly depend on the type of model used for prediction. For instance, assuming that the MPC cost is defined using quadratic forms (Hahn, 1967) and the constraint sets are polyhedra:

- Problem 1.1.1 is a quadratic programming problem if the model is linear;
- Problem 1.1.1 is a nonlinear optimization problem if the model is nonlinear;
- Problem 1.1.1 is a mixed integer quadratic programming problem (Bemporad and Morari, 1999) if the model is piecewise affine.

Therefore, depending on the utilized prediction model and MPC cost function, different tools are required for solving the MPC optimization problem. One of the most studied research problems regarding MPC, which is also addressed in this thesis, consists in how to guarantee stability of a system in closed-loop with an MPC controller, e.g. obtained by solving Problem 1.1.1, as this is not automatically guaranteed and is the primary condition that any controller should satisfy. For linear and continuous nonlinear systems, many solutions to this problem have been developed; see the survey (Mayne et al., 2000) for a comprehensive and well documented overview. The most popular approach is the so-called terminal cost and constraint set method, which requires that the terminal predicted state, i.e.
x(N|k), is constrained inside a terminal set that contains the origin (the equilibrium) in its interior. Then, under the assumption that the system dynamics and the MPC value function corresponding to Problem 1.1.1 are continuous, sufficient stabilization conditions, in terms of properties that a terminal cost F(·) and a terminal constraint set (usually denoted by X_T) must satisfy, can be found in (Mayne et al., 2000).
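A naive numerical sketch of this set-up may help fix ideas. The model, costs, horizon and input bounds below are invented for illustration; state and terminal constraints are omitted, and a generic local solver stands in for the dedicated tools mentioned above, so the caveats about sub-optimal solutions apply directly:

```python
import numpy as np
from scipy.optimize import minimize

# Invented nonlinear prediction model and quadratic costs; illustrative
# only, not an example from the thesis.
def g(x, u):
    return np.array([x[1], 0.5 * np.sin(x[0]) + u])

def L(x, u):           # stage cost
    return x @ x + 0.1 * u * u

def F(x):              # terminal cost
    return 10.0 * (x @ x)

N = 5                  # prediction horizon

def J(u_seq, x0):
    """Cost of Problem 1.1.1 along the predicted trajectory from x0."""
    x, cost = np.asarray(x0, dtype=float), 0.0
    for u in u_seq:
        cost += L(x, u)
        x = g(x, u)
    return cost + F(x)

def mpc_control(x0, u_min=-2.0, u_max=2.0):
    """Solve the MPC problem numerically and return only the first input
    (receding horizon). A local solver, so solutions may be suboptimal."""
    res = minimize(J, np.zeros(N), args=(x0,),
                   bounds=[(u_min, u_max)] * N)
    return res.x[0]

# Closed loop: measure the state, re-solve, apply the first input, repeat.
x = np.array([1.0, 0.0])
for k in range(20):
    x = g(x, mpc_control(x))
print("final state norm:", np.linalg.norm(x))
```

For this smooth toy model the closed loop settles near the origin; with discontinuous g the same recipe loses its guarantees, which is precisely the setting studied in the remainder of the thesis.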

This concludes the general introduction to MPC; the chapter continues with a discussion of several relevant open problems in the theory of model predictive control.

1.2 Open problems in stability and robustness of MPC

Typically, stability and robustness results for discrete-time systems are obtained by mutatis mutandis reproducing the classical results available for continuous-time systems, see, for example, (Kalman and Bertram, 1960a,b; Freeman, 1965; Willems, 1970; LaSalle, 1976; Vidyasagar, 1993; Khalil, 2002; Jiang and Wang, 2001; Kellett and Teel, 2004). In general, less attention is paid to relaxations of the sufficient conditions for Lyapunov stability that might be allowed by the discrete-time setting and to their implications in terms of robustness, e.g., in the form of input-to-state stability (ISS) (Jiang and Wang, 2001). One particularly relevant point is whether global or local (i.e. on a neighborhood of the equilibrium) continuity of the system dynamics and/or of the candidate Lyapunov function is still required to establish asymptotic stability in the Lyapunov sense. This issue is of paramount importance for MPC closed-loop systems, as it is well known, especially since the seminal work on hybrid systems (Bemporad and Morari, 1999), see also (Borrelli, 2003), that MPC candidate Lyapunov functions and closed-loop systems are discontinuous in general. This is due to the fact that MPC usually generates a discontinuous control law, even for continuous system dynamics, which was shown for the first time in (Meadows et al., 1995).

1.2.1 Stability of MPC

The stability results within the MPC framework follow closely the above mentioned general stability results for discrete-time systems, but with a sharp focus on removing continuity assumptions, as summarized next. The usual approach to ensure stability in MPC is to consider the value function of the MPC cost as a candidate Lyapunov function.
Then, if the system dynamics is continuous, the classical Lyapunov stability theory (Kalman and Bertram, 1960b) can be used to prove that the MPC control law is stabilizing, which was done in (Keerthi and Gilbert, 1988). The requirement that the system dynamics must be continuous is (partially) removed in (Alamir and Bornard, 1994; Meadows et al., 1995), where terminal equality constraint MPC is considered. In (Alamir and Bornard, 1994), continuity of the

system dynamics on a neighborhood of the origin is still used to prove Lyapunov stability, but not for attractivity. Although continuity of the system is still assumed in (Meadows et al., 1995), where it is shown that MPC can generate discontinuous state-feedbacks, the Lyapunov stability proof (Theorem 2 in (Meadows et al., 1995)) does not use the continuity property. Later on, an exponential stability result is given in (Scokaert et al., 1997) and an asymptotic stability theorem is presented in (Scokaert et al., 1999), where sub-optimal MPC is considered. The theorems of (Scokaert et al., 1997, 1999) explicitly point out that both the system dynamics and the candidate Lyapunov function only need to be continuous at the equilibrium. Stability of sub-optimal MPC is proven in (Scokaert et al., 1999) under the usual assumptions (existence of class K bounds on the candidate Lyapunov function V and on its forward difference) plus the extra requirement that the MPC optimal sequence of controls is upper bounded in norm by a class K function of the norm of the state. A recent overview on stability of receding horizon control in discrete time can be found in (Goodwin et al., 2005). Although continuity of the system dynamics and local continuity of V are assumed in (Goodwin et al., 2005), the stability proof (Theorem 4.3.2 in (Goodwin et al., 2005)) only uses continuity of V at the equilibrium, as done in (Meadows et al., 1995). The interested reader can find a general stability theorem for discrete-time MPC that unifies most of the above results in (Lazar et al., 2007a). Apart from removing the continuity assumption on the system dynamics and the MPC cost function, all these results employ the additional assumption that the (global) optimum of the MPC optimization problem is always attained, which is usually referred to as the optimality assumption in MPC.
Recently, in (Spjøtvold et al., 2007) it was shown that, similarly to the continuity assumption, the optimality assumption is not a realistic one either. This is because in the presence of a discontinuous value function corresponding to the cost of the optimization problem, which is usually the case with MPC cost functions, the global optimum may exist yet not be attainable. This raises the following open problem in stability of MPC: (i) what can be said about stability of classical terminal cost and constraint set MPC schemes (Mayne et al., 2000) in the presence of discontinuous dynamics, value functions and/or sub-optimal solutions? Notice that although in (Scokaert et al., 1999) stability results are obtained for sub-optimal MPC schemes, this is attained via additional modifications to the classical MPC set-up (Mayne et al., 2000). More precisely, an explicit nonlinear and nonconvex constraint that involves the MPC cost

function is added to the MPC set-up, which significantly hampers implementation. In contrast to (Scokaert et al., 1999), our aim is to obtain stability results for sub-optimal MPC solutions without bringing any modifications to the original terminal cost and constraint set MPC set-up. A solution to this open problem is provided in Chapter 4 of this thesis by making use of the general stability results presented in Chapter 3.

An equally relevant and disturbing issue was raised in (Grimm et al., 2004), where it was shown for the first time that MPC closed-loop systems that are asymptotically stable can have zero robustness. That is, in the presence of arbitrarily small perturbations, the asymptotic stability property is lost. The fragility of the stability of MPC closed-loop systems is in fact related to the absence of a continuous Lyapunov function. As the usual candidate for a Lyapunov function in MPC is the value function corresponding to the cost J(·, ·), normally a discontinuous function, the following open problem arises: (ii) what can be said about inherent robustness of asymptotically stable discrete-time systems when either the system dynamics or the Lyapunov function employed to establish stability, or both, are discontinuous? Notice that while it is well known that smooth Lyapunov functions imply inherent robustness, even in the sense of ISS, to the best of the author's knowledge there are no robustness tests that rely exclusively on a discontinuous Lyapunov function. As such tests are crucial for MPC closed-loop systems, several possible solutions are presented in Chapter 3 of this thesis.

1.2.2 Robust MPC schemes

Next, we continue the discussion on stability and robustness of MPC by presenting a short summary of methods for designing MPC schemes with an a priori guarantee of robustness in the sense of input-to-state stability (Sontag, 1989, 1990; Jiang and Wang, 2001).
There are several ways of designing robust MPC controllers for perturbed nonlinear systems. One way is to rely on the inherent robustness properties of nominally stabilizing nonlinear MPC algorithms, e.g. as was done in (Scokaert et al., 1997; Magni et al., 1998; Limon et al., 2002b; Grimm et al., 2003). Another approach is to incorporate knowledge about the disturbances in the MPC problem formulation via open-loop worst case scenarios. This includes MPC algorithms based on tightened constraints, such as the one of (Limon et al., 2002a), and MPC algorithms based on open-loop min-max optimization problems, see, for example, the survey (Mayne et al., 2000). As was the case with the nominal stability results discussed in this chapter, ISS results for tightened constraints terminal cost and constraint set

MPC rely on the same basic assumptions: continuity of the system dynamics (Grimm et al., 2003), or even Lipschitz continuity (Limon et al., 2002a), and optimality of the MPC solution. This gives rise to an open problem similar to the one raised for nominal stability, i.e.: (iii) what can be said about input-to-state stability of tightened constraints robust MPC schemes in the presence of discontinuous dynamics, value functions and/or sub-optimal solutions? A possible solution to this problem is presented in Chapter 4 of this thesis. To incorporate feedback to disturbances, the closed-loop or feedback min-max MPC (or shortly, min-max MPC) problem set-up was introduced in (Lee and Yu, 1997) and further developed in (Mayne, 2001; Magni et al., 2003; Limon et al., 2006; Magni et al., 2006). The open-loop approach is computationally somewhat easier than the feedback approach, but the set of feasible states corresponding to the feedback min-max MPC optimization problem is usually much larger and the disturbance rejection is improved. Sufficient conditions for robust asymptotic stability of closed-loop (feedback) min-max MPC systems were presented in (Mayne, 2001) under the assumption that the (additive) disturbance input converges to zero as the state converges to the origin. Recently, input-to-state stability (ISS) (Sontag, 1989, 1990; Jiang and Wang, 2001) results for min-max nonlinear MPC were presented in (Limon et al., 2006) and (Magni et al., 2006). In (Limon et al., 2006) it was shown that, in general, only input-to-state practical stability (ISpS) (Jiang, 1993; Jiang et al., 1994, 1996) can be a priori ensured for min-max nonlinear MPC. ISpS is a weaker property than ISS, as ISpS does not imply asymptotic stability for zero disturbance inputs.
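The open-loop min-max idea discussed above can be sketched with a tiny brute-force example. Everything below is a hand-picked illustration, not from the thesis: the scalar model x+ = x + u + w, the input and disturbance grids, the horizon and the weights are all hypothetical; practical min-max MPC uses dedicated optimization machinery.

```python
# Open-loop min-max MPC sketch for the hypothetical scalar model
# x+ = x + u + w with w ∈ W: the controller minimizes, over gridded input
# sequences, the worst-case finite-horizon cost over gridded disturbance
# sequences, and applies only the first input of the min-max optimal sequence.
import itertools

def worst_case_cost(x, u_seq, w_grid):
    """max over gridded disturbance sequences of the finite-horizon cost."""
    worst = -float("inf")
    for w_seq in itertools.product(w_grid, repeat=len(u_seq)):
        xi, cost = x, 0.0
        for u, w in zip(u_seq, w_seq):
            cost += xi**2 + 0.1 * u**2      # stage cost
            xi = xi + u + w                 # perturbed model
        worst = max(worst, cost + 10.0 * xi**2)   # add terminal cost
    return worst

def minmax_action(x, horizon=2,
                  u_grid=(-1.0, -0.5, 0.0, 0.5, 1.0), w_grid=(-0.1, 0.1)):
    """Return the first input of the min-max optimal gridded sequence."""
    best = min(itertools.product(u_grid, repeat=horizon),
               key=lambda u_seq: worst_case_cost(x, u_seq, w_grid))
    return best[0]

u0 = minmax_action(1.0)
```

Because the inner maximization ranges over all disturbance sequences, the cost of a non-zero disturbance is paid even when the disturbance never materializes, which is exactly the conservatism behind the ISpS-versus-ISS discussion in the text.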
The reason for the absence of ISS in general is that the effect of a non-zero disturbance input is taken into account by the min-max MPC controller even if the disturbance input vanishes in reality. Still, in the case when the disturbance input converges to zero, it is desirable that asymptotic stability is recovered for the controlled system. The first open problem related to min-max MPC is: (iv) under what conditions/modifications can ISS, rather than ISpS, be a priori guaranteed for min-max MPC closed-loop systems? In (Magni et al., 2006), an H∞ (Chen and Scherer, 2006a) strategy was used to modify the classical min-max MPC cost function (Mayne et al., 2000) such that ISS is guaranteed for the closed-loop min-max MPC system. Furthermore, in (Magni et al., 2006) it was proven that a local upper bound on the min-max MPC value function, rather than a global one, is sufficient for ISS. However, this method requires the modification of the stage cost by introducing a negative term which consists of a disturbance norm. In this way, the corresponding min-max optimization problem becomes non-convex

in the disturbance, which is a significant drawback regarding implementation. As such, our goal is to provide a solution to this problem without incorporating additional terms in the standard min-max MPC cost, which is still possible by employing a dual-mode approach, as presented in Chapter 5 of this thesis. The second open problem in min-max MPC is (v) how to compute a terminal cost and auxiliary control law that satisfy the sufficient conditions for input-to-state stability? While a solution to the computation of the terminal cost exists in the nominal case, i.e. it amounts to taking the terminal cost equal to a local control Lyapunov function, in the robust case it would amount to the computation of ISS control Lyapunov functions, which is still an open problem. In Chapter 6 of this thesis we present a possible solution to this problem for quadratic candidate ISS CLFs. Furthermore, we demonstrate that the solution of the H∞ synthesis problem solves the corresponding terminal cost min-max MPC problem for a particular choice of the terminal cost. The problems raised so far with respect to existing techniques for designing robust MPC schemes still do not offer a solution to the ultimate open problem in robust MPC: (vi-a) how to provide feedback to the disturbances actively, on-line, as a function of the closed-loop trajectory, rather than in a worst case manner, i.e. imposing a fixed ISS gain for all possible trajectories; and (vi-b) how to render the corresponding robust MPC optimization problems computationally efficient? A novel solution to this problem is presented in Chapter 7 of this thesis, which introduces the concept of self-optimizing robust MPC, in the sense that this MPC scheme provides the means to optimize the closed-loop ISS gain on-line, as a function of the state trajectory.
Furthermore, in terms of computational complexity, for a fairly wide class of nonlinear systems it is shown that the corresponding self-optimizing robust MPC optimization problem can be formulated as a single linear program, which is a major step in complexity reduction compared with standard min-max MPC. A case study on the control of DC-DC converters that includes preliminary real-time computational results is included to illustrate the potential of the developed theory for practical applications. As the sampling period of the considered DC-DC converter is well below one millisecond, this indicates that the proposed self-optimizing robust MPC scheme is implementable for (very) fast systems, which opens up a whole new range of industrial applications in electrical, mechatronic and automotive systems. The following summarizing formal statement concludes the section on open problems. This thesis focuses mainly on novel ways to design MPC

controllers with a robust stability guarantee. Special attention is paid to discontinuous nonlinear system dynamics, sub-optimal solutions, low computational complexity and improved disturbance rejection.

1.3 Summary of publications

This thesis is mostly based on published or submitted articles. A complete list of the publications that support this thesis is presented in this section, as follows.

Chapter 2 contains results presented in:

(Lazar et al., 2007b): M. Lazar, W.P.M.H. Heemels, A.R. Teel. Subtleties in robust stability of discrete-time PWA systems. In Proceedings of the 26th American Control Conference, New York, USA, 2007.

(Lazar et al., 2009c): M. Lazar, W.P.M.H. Heemels, A.R. Teel. Lyapunov functions, stability and input-to-state stability subtleties for discrete-time discontinuous systems. IEEE Transactions on Automatic Control, accepted, scheduled to appear in the September 2009 issue.

The results presented in Chapter 3 are published in:

(Lazar and Heemels, 2008c): M. Lazar, W.P.M.H. Heemels. Predictive control of hybrid systems: Stability results for sub-optimal solutions. 17th IFAC World Congress, Seoul, Korea, 2008.

(Lazar and Heemels, 2009): M. Lazar, W.P.M.H. Heemels. Predictive control of hybrid systems: Input-to-state stability results for sub-optimal solutions. Automatica, Vol. 45, No. 1, pp. 180-185, 2009.

Chapter 4 is based on:

(Lazar et al., 2008a): M. Lazar, D. Muñoz de la Peña, W.P.M.H. Heemels and T. Alamo. On input-to-state stability of min-max nonlinear model predictive control. Systems & Control Letters, Vol. 57, pp. 39-48, 2008.

(Raimondo et al., 2009): D.M. Raimondo, D. Limon, M. Lazar, L. Magni, E.F. Camacho. Min-max model predictive control of nonlinear systems: A unifying overview on stability. Survey paper (discussants: J. Maciejowski and J.A. Rossiter). European Journal of Control, Vol. 15, No. 1, pp. 1-17, 2009.

The results of Chapter 5 are presented in:

(Lazar et al., 2009b): M. Lazar, W.P.M.H. Heemels, D. Muñoz de la Peña and T. Alamo. Further results on robust MPC using linear matrix inequalities. L. Magni et al., Eds., Assessment and Future Directions of Nonlinear Model Predictive Control, Lecture Notes in Control and Information Sciences, Vol. 384, pp. 89-98, Springer-Verlag.

Chapter 6 contains results presented in:

(Lazar and Heemels, 2008b): M. Lazar, W.P.M.H. Heemels. Optimized input-to-state stabilization of discrete-time nonlinear systems with bounded inputs. In Proceedings of the 27th American Control Conference, Seattle, USA, 2008.

(Lazar et al., 2008b): M. Lazar, B.J.P. Roset, W.P.M.H. Heemels, H. Nijmeijer and P.P.J. van den Bosch. Input-to-state stabilizing sub-optimal nonlinear MPC algorithms with an application to DC-DC converters. International Journal of Robust and Nonlinear Control, Invited paper for the Special Issue on Nonlinear MPC of Fast Systems, Vol. 18, Issue 8, pp. 890-904, 2008.

(Lazar et al., 2009a): M. Lazar, W.P.M.H. Heemels, A. Jokic. Self-optimizing Robust Nonlinear Model Predictive Control. L. Magni et al., Eds., Assessment and Future Directions of Nonlinear Model Predictive Control, Lecture Notes in Control and Information Sciences, Vol. 384, pp. 27-40, Springer-Verlag.

1.4 Basic mathematical notation and definitions

In this section, some basic mathematical notation and standard definitions are recalled to make the manuscript self-contained.

Sets and operations with sets:

R, R+, Z and Z+ denote the field of real numbers, the set of non-negative reals, the set of integers and the set of non-negative integers, respectively;

Z_{≥c1} and Z_{(c1,c2]} denote the sets {k ∈ Z+ | k ≥ c1} and {k ∈ Z+ | c1 < k ≤ c2}, respectively, for some c1, c2 ∈ Z+;

For a set S ⊆ R^n, S^N denotes the N-times Cartesian product S × ... × S, for some N ∈ Z_{≥1};

For a set P ⊆ R^n, ∂P denotes the boundary of P, int(P) the interior of P, cl(P) the closure of P, card(P) the number of elements of P and Co(P) the convex hull of P;

For any real λ ≥ 0 and set P ⊆ R^n, the set λP is defined as λP := {x ∈ R^n | x = λy for some y ∈ P};

For two arbitrary sets P1 ⊆ R^n and P2 ⊆ R^n, P1 ∪ P2 denotes their union, P1 ∩ P2 denotes their intersection, P1 \ P2 denotes their set difference, P1 ⊂ P2 denotes that P1 is a subset of, but not equal to, P2, and P1 ⊆ P2 denotes that P1 is a subset of, or equal to, P2;

For two arbitrary sets P1 ⊆ R^n and P2 ⊆ R^n, P1 ⊖ P2 := {x ∈ R^n | x + P2 ⊆ P1} denotes their Pontryagin difference and P1 ⊕ P2 := {x + y | x ∈ P1, y ∈ P2} denotes their Minkowski sum;

A convex and compact set in R^n that contains the origin in its interior is called a C-set;

A polyhedron (or a polyhedral set) in R^n is a set obtained as the intersection of a finite number of open and/or closed half-spaces;

A piecewise polyhedral set is a set obtained as the union of a finite number of polyhedral sets.

Vectors, matrices and norms:

For a real number a ∈ R, |a| denotes its absolute value and ⌈a⌉ denotes the smallest integer larger than a;

For a sequence {z_j}_{j∈Z+} with z_j ∈ R^l, z_[k] denotes the truncation of {z_j}_{j∈Z+} at time k ∈ Z+, i.e. z_[k] = {z_j}_{j∈Z_[0,k]}, and z_[k1,k2] denotes the truncation of {z_j}_{j∈Z+} at times k1 ∈ Z_{≥1} and k2 ∈ Z_{≥k1}, i.e. z_[k1,k2] = {z_j}_{j∈Z_[k1,k2]};

The Hölder p-norm of a vector x ∈ R^n is defined as:

‖x‖_p := (|x_1|^p + ... + |x_n|^p)^{1/p} for p ∈ Z_[1,∞),   ‖x‖_∞ := max_{i=1,...,n} |x_i|,

where x_i, i = 1,..., n, is the i-th component of x; ‖x‖_2 is also called the Euclidean norm and ‖x‖_∞ is also called the infinity (or the maximum) norm;

Let ‖·‖ denote an arbitrary Hölder p-norm. For a sequence {z_j}_{j∈Z+} with z_j ∈ R^n, ‖{z_j}_{j∈Z+}‖ := sup{‖z_j‖ | j ∈ Z+};

I_n denotes the identity matrix of dimension n × n;

For some matrices L_1,..., L_n, diag([L_1,..., L_n]) denotes a diagonal matrix of appropriate dimensions with the matrices L_1,..., L_n on the main diagonal;

For a matrix Z ∈ R^{m×n} and p ∈ Z_{≥1} or p = ∞,

‖Z‖_p := sup_{x≠0} ‖Zx‖_p / ‖x‖_p

denotes its induced matrix norm. It is well known, see, for example, (Golub and Van Loan, 1989), that ‖Z‖_∞ = max_{1≤i≤m} Σ_{j=1}^{n} |Z_{ij}|, where Z_{ij} is the ij-th entry of Z;

For a matrix Z ∈ R^{m×n}, Z' denotes its transpose and Z^{-1} denotes its inverse (if it exists);

For a matrix Z ∈ R^{n×n}, Z > 0 denotes that Z is positive definite, i.e. x'Zx > 0 for all x ∈ R^n \ {0}, and Z = Z';

For a matrix Z ∈ R^{m×n} with full column rank, Z^L := (Z'Z)^{-1}Z' denotes the Moore-Penrose inverse of Z, which satisfies Z^L Z = I_n;

For a positive definite and symmetric matrix Z, Z^{1/2} denotes its Cholesky factor, which satisfies (Z^{1/2})' Z^{1/2} = Z^{1/2} (Z^{1/2})' = Z;

For a positive definite matrix Z, λ_min(Z) and λ_max(Z) denote the smallest and the largest eigenvalue of Z, respectively.
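Two of the definitions above are easy to check numerically. The following pure-Python sketch (the vector and matrix values are arbitrary test data, not from the text) implements the Hölder p-norm and the maximum-absolute-row-sum formula for the induced ∞-norm.

```python
# Pure-Python check of two definitions above: the Hölder p-norm of a vector
# and the row-sum formula for the induced ∞-norm of a matrix.
def holder_norm(x, p):
    """‖x‖_p for p ∈ [1, ∞); p = float('inf') gives the maximum norm."""
    if p == float("inf"):
        return max(abs(xi) for xi in x)
    return sum(abs(xi)**p for xi in x) ** (1.0 / p)

def induced_inf_norm(Z):
    """‖Z‖_∞ = max_i Σ_j |Z_ij| (maximum absolute row sum)."""
    return max(sum(abs(zij) for zij in row) for row in Z)

x = [3.0, -4.0]                 # arbitrary test vector
Z = [[1.0, -2.0],
     [0.5, 0.5]]                # arbitrary test matrix
# ‖x‖_1 = 7, ‖x‖_2 = 5 (Euclidean), ‖x‖_∞ = 4 (maximum norm), ‖Z‖_∞ = 3
```

The row-sum identity means the induced ∞-norm can be evaluated without any optimization over x, which is why it appears so often in robustness estimates.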

2 Lyapunov Functions Subtleties for Discrete-time Systems

2.1 Introduction
2.2 Preliminaries
2.3 Illuminating examples
2.4 ISS tests based on discontinuous USL functions
2.5 Conclusions

In this chapter we consider stability analysis of discrete-time discontinuous systems using Lyapunov functions. We demonstrate via simple examples that the classical second method of Lyapunov is precarious for discontinuous system dynamics. Also, we indicate that a particular type of Lyapunov condition, slightly stronger than the classical one, is required to establish stability of discrete-time discontinuous systems. Furthermore, we examine the robustness of the stability property when it is attained via a discontinuous Lyapunov function, which is often the case for discrete-time systems in closed loop with MPC controllers. In contrast to existing results based on smooth Lyapunov functions, we develop several input-to-state stability tests that explicitly employ an available discontinuous Lyapunov function.

2.1 Introduction

Discrete-time discontinuous systems, such as piecewise affine (PWA) systems, are a powerful modeling class for the approximation of hybrid and non-smooth nonlinear dynamics (Sontag, 1981; Heemels et al., 2001). The modeling capability of discrete-time PWA systems has already been shown in several applications, including switched power converters (Leenaerts, 1996), direct torque control of three-phase induction motors (Geyer et al., 2005) and applications in automotive systems (Bemporad et al., 2003). Many numerically efficient tools for stability analysis and stabilizing controller synthesis for discrete-time PWA systems have already been developed, see, for example, (Johansson, 1999; Mignone et al., 2000; Ferrari-Trecate et al., 2002;

Feng, 2002; Daafouz et al., 2002) for static feedback methods and (Lazar et al., 2005; Grieder et al., 2005; Lazar et al., 2006; Baotic et al., 2006) for model predictive control (MPC) techniques. Most of these methods make use of classical Lyapunov methods (Kalman and Bertram, 1960b). The first contribution of this chapter is to illustrate the precariousness of the second method of Lyapunov, as presented in (Kalman and Bertram, 1960b), for discontinuous system dynamics. We illustrate via a simple example that existence of a Lyapunov function in the sense of Corollary 1.2 of (Kalman and Bertram, 1960b) (and hence, a continuous function) does not even guarantee global asymptotic stability (GAS) for discrete-time discontinuous systems. In the presence of discontinuity of the dynamics one needs to impose a class K upper bound on the one-step rate of decrease of the Lyapunov function in order to attain GAS. The second contribution of this chapter concerns robustness of stability in terms of input-to-state stability (ISS) (Jiang and Wang, 2001). Firstly, we present a simple example inspired by (Kellett and Teel, 2004) (see also (Grimm et al., 2004) for a similar example in MPC) to illustrate that even the global exponential stability (GES) property is precarious for discrete-time discontinuous systems affected by arbitrarily small perturbations. The severe lack of inherent robustness is related to the absence of a continuous Lyapunov function. This example establishes that there exist GES discrete-time systems that admit a discontinuous Lyapunov function, but not a continuous one.
Notice that previous results on stability of discrete-time PWA systems (Johansson, 1999; Mignone et al., 2000; Ferrari-Trecate et al., 2002; Feng, 2002; Daafouz et al., 2002) only indicated that continuous Lyapunov functions may be more difficult to find than discontinuous ones, while in fact a continuous Lyapunov function might not even exist. As such, a valid warning regarding nominally stabilizing state-feedback synthesis methods for discrete-time discontinuous systems, including both static feedback approaches (Johansson, 1999; Mignone et al., 2000; Ferrari-Trecate et al., 2002; Feng, 2002; Daafouz et al., 2002) and MPC techniques (Lazar et al., 2005; Grieder et al., 2005; Lazar et al., 2006; Baotic et al., 2006) arises. These synthesis methods lead to a stable, possibly discontinuous closed-loop system and often rely on discontinuous Lyapunov functions. For example, in MPC the most natural candidate Lyapunov function is the value function corresponding to the MPC cost, which is generally discontinuous when PWA systems are used as prediction models (Lazar et al., 2006). Hence, these controllers may result in closed-loop systems that are GAS, but only admit a discontinuous Lyapunov function. This means that such closed-loop systems may not be ISS to arbitrarily small perturbations, which are always present

in practice. This brings us to the second contribution of this chapter: for discrete-time systems for which only a discontinuous Lyapunov function is known, we propose several robustness tests that can establish ISS solely based on the available discontinuous Lyapunov function.

2.2 Preliminaries

In this section we introduce some preliminary notions, definitions and results. Let R, R+, Z and Z+ denote the field of real numbers, the set of non-negative reals, the set of integer numbers and the set of non-negative integers, respectively. For every subset Π of R+ we define Z_Π := {k ∈ Z+ | k ∈ Π}. Let ‖·‖ denote an arbitrary norm on R^n and let |·| denote the absolute value of a real number. For a sequence z := {z(l)}_{l∈Z+} with z(l) ∈ R^n, l ∈ Z+, let ‖z‖ := sup{‖z(l)‖ | l ∈ Z+} and let z_[k] := {z(l)}_{l∈Z_[0,k]}. For a set S ⊆ R^n, we denote by int(S) the interior, by ∂S the boundary and by cl(S) the closure of S. For two arbitrary sets S ⊆ R^n and P ⊆ R^n, let S ⊕ P := {x + y | x ∈ S, y ∈ P} denote their Minkowski sum. The distance of a point x ∈ R^n from a set P is denoted by d(x, P) := inf_{y∈P} ‖x − y‖. For any µ ∈ R_(0,∞) we define B_µ := {x ∈ R^n | ‖x‖ ≤ µ}. A polyhedron (or a polyhedral set) is a set obtained as the intersection of a finite number of open and/or closed half-spaces. The p-norm of a vector x ∈ R^n is defined as ‖x‖_p := (|x_1|^p + ... + |x_n|^p)^{1/p} for p ∈ Z_[1,∞) and ‖x‖_∞ := max_{i=1,...,n} |x_i|, where x_i, i = 1,..., n, is the i-th component of x. For a matrix Z ∈ R^{m×n} let ‖Z‖_p := sup_{x≠0} ‖Zx‖_p / ‖x‖_p, p ∈ Z_[1,∞) or p = ∞, denote its induced matrix norm. A function ϕ : R+ → R+ belongs to class K (ϕ ∈ K) if it is continuous, strictly increasing and ϕ(0) = 0. A function ϕ : R+ → R+ belongs to class K∞ (ϕ ∈ K∞) if ϕ ∈ K and lim_{s→∞} ϕ(s) = ∞. A function β : R+ × R+ → R+ belongs to class KL (β ∈ KL) if for each fixed k ∈ R+, β(·, k) ∈ K and for each fixed s ∈ R+, β(s, ·) is decreasing and lim_{k→∞} β(s, k) = 0.
2.2.1 Stability and input-to-state stability

To study robustness, we will employ the ISS framework (Sontag, 1990; Jiang and Wang, 2001). Consider the discrete-time perturbed nonlinear system:

ξ(k + 1) = g(ξ(k), z(k)), k ∈ Z+,   (2.1)

where ξ : Z+ → R^n is the state trajectory, z : Z+ → R^{dv} is an unknown disturbance input trajectory and g : R^n × R^{dv} → R^n is a nonlinear, possibly discontinuous function. For simplicity, we assume that the origin is an equilibrium for (2.1), i.e. g(0, 0) = 0.

Definition 2.2.1 A set P ⊆ R^n with 0 ∈ int(P) is called a robustly positively invariant (RPI) set with respect to V ⊆ R^{dv} for system (2.1) if for all x ∈ P it holds that g(x, v) ∈ P for all v ∈ V. A set P ⊆ R^n with 0 ∈ int(P) is called a positively invariant (PI) set for system (2.1) with zero input if for all x ∈ P it holds that g(x, 0) ∈ P.

Definition 2.2.2 Let X with 0 ∈ int(X) be a subset of R^n. We call system (2.1) with zero input (i.e. z(k) = 0 for all k ∈ Z+) asymptotically stable in X, or shortly AS(X), if there exists a KL-function β(·, ·) such that, for each ξ(0) ∈ X, it holds that ‖ξ(k)‖ ≤ β(‖ξ(0)‖, k) for all k ∈ Z+. If the property holds with β(s, k) := θρ^k s for some θ ∈ R_(0,∞) and ρ ∈ R_[0,1), we call system (2.1) with zero input exponentially stable in X (ES(X)). We call system (2.1) with zero input globally asymptotically (exponentially) stable if it is AS(R^n) (ES(R^n)).

Definition 2.2.3 Let X and V be subsets of R^n and R^{dv}, respectively, with 0 ∈ int(X). We call system (2.1) input-to-state stable in X for inputs in V, or shortly ISS(X,V), if there exist a KL-function β(·, ·) and a K-function γ(·) such that, for each initial condition ξ(0) ∈ X and all z = {z(l)}_{l∈Z+} with z(l) ∈ V for all l ∈ Z+, the corresponding state trajectory of (2.1) with initial state ξ(0) and input trajectory z satisfies ‖ξ(k)‖ ≤ β(‖ξ(0)‖, k) + γ(‖z_[k−1]‖) for all k ∈ Z_[1,∞). The system (2.1) is globally ISS if it is ISS(R^n, R^{dv}).

Throughout this chapter we will employ the following sufficient conditions for analyzing ISS.

Theorem 2.2.4 (Jiang and Wang, 2001; Lazar et al., 2008a) Let α1, α2, α3 ∈ K∞, σ ∈ K and let V be a subset of R^{dv}. Let X with 0 ∈ int(X) be an RPI set with respect to V for system (2.1) and let V : X → R+ be a function with V(0) = 0.
Consider the following inequalities:

α1(‖x‖) ≤ V(x) ≤ α2(‖x‖),   (2.2a)
V(g(x, v)) − V(x) ≤ −α3(‖x‖) + σ(‖v‖).   (2.2b)

If inequalities (2.2) hold for all x ∈ X and all v ∈ V, then system (2.1) is ISS(X,V). If inequalities (2.2) hold for all x ∈ R^n and all v ∈ R^{dv}, then system (2.1) is globally ISS. If X with 0 ∈ int(X) is a PI set for system

(2.1) with zero input and inequalities (2.2) hold for all x ∈ X (x ∈ R^n) and v ∈ V = {0}, then system (2.1) with zero input is AS(X) (GAS).

A function V(·) that satisfies the hypothesis of Theorem 2.2.4 is called an ISS Lyapunov function. Note the following aspects regarding Theorem 2.2.4. (i) The hypothesis of Theorem 2.2.4 allows both g(·, ·) and V(·) to be discontinuous. It only requires continuity at the point x = 0, and not necessarily on a neighborhood of x = 0. (ii) If the inequalities (2.2) are satisfied with α1(s) = as^λ, α2(s) = bs^λ, α3(s) = cs^λ, for some a, b, c, λ > 0, then the hypothesis of Theorem 2.2.4 implies exponential stability of system (2.1) with zero input.

2.2.2 Lyapunov functions

As an extension of classical Lyapunov functions (see Corollary 1.2 and Corollary 1.3 of (Kalman and Bertram, 1960b)), which are assumed to be continuous and only required to have a negative one-step forward difference, we will introduce the following known types of Lyapunov functions for the zero-input system corresponding to (2.1), i.e. ξ(k + 1) = g(ξ(k), 0), k ∈ Z+. Let X ⊆ R^n be a positively invariant set for ξ(k + 1) = g(ξ(k), 0) with 0 ∈ int(X), let α1, α2, α3 ∈ K∞, let V : R^n → R+ denote a possibly discontinuous function with V(0) = 0, and consider the inequalities:

α1(‖x‖) ≤ V(x) ≤ α2(‖x‖),   ∀x ∈ X,   (2.3a)
V(g(x, 0)) − V(x) ≤ 0,   ∀x ∈ X,   (2.3b)
V(g(x, 0)) − V(x) < 0,   ∀x ∈ X \ {0},   (2.3c)
V(g(x, 0)) − V(x) ≤ −α3(‖x‖),   ∀x ∈ X.   (2.3d)

Definition 2.2.5 A function V(·) that satisfies (2.3a) and (2.3b) is called a Lyapunov function. A function V(·) that satisfies (2.3a) and (2.3c) is called a strict Lyapunov (SL) function. A function V(·) that satisfies (2.3a) and (2.3d) is called a uniformly strict Lyapunov (USL) function.
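The inequalities (2.2) of Theorem 2.2.4 can be spot-checked numerically on a toy system. The example below is hand-picked, not from the text: g(x, v) = 0.5x + v with V(x) = |x| satisfies (2.2) with α1(s) = α2(s) = s, α3(s) = 0.5s and σ(s) = s, since |0.5x + v| − |x| ≤ −0.5|x| + |v| by the triangle inequality.

```python
# Numerical spot-check of the ISS inequalities (2.2) on a grid, for the
# hand-picked system g(x, v) = 0.5 x + v with ISS Lyapunov candidate V = |·|.
g = lambda x, v: 0.5 * x + v
V = abs

def iss_inequalities_hold(xs, vs):
    for x in xs:
        if not (abs(x) <= V(x) <= abs(x)):       # (2.2a) with α1 = α2 = identity
            return False
        for v in vs:
            # (2.2b) with α3(s) = 0.5 s, σ(s) = s (small float tolerance)
            if V(g(x, v)) - V(x) > -0.5 * abs(x) + abs(v) + 1e-12:
                return False
    return True

grid = [i / 10.0 for i in range(-50, 51)]        # test points in [-5, 5]
ok = iss_inequalities_hold(grid, grid)
```

Of course a finite grid proves nothing by itself; here the bound also holds analytically, and the sketch only illustrates how conditions (2.2) are used pointwise in x and v.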
For continuous V(·) and continuous discrete-time system dynamics it is known that SL functions and USL functions are equivalent and both imply asymptotic stability and inherent robustness (ISS, under certain conditions); see, for example, (Kellett and Teel, 2004). In the following section we will investigate whether these properties still hold when either the system dynamics or the Lyapunov function is discontinuous, or both.

Lyapunov Functions Subtleties for Discrete-time Systems

Notice that a USL function can also be defined by replacing (2.3d) with the intermediate property

V(g(x, 0)) − V(x) ≤ −δ(x), ∀x ∈ X, (2.4)

where δ : R^n → R₊ is a continuous and positive definite function. However, it can be shown that, given such a USL function, one can always find a new USL function that satisfies (2.3d), using ideas from (Nesic and Teel, 2001). Also, in the case when g(·, 0) and V(·) are continuous it can be proven that SL functions and USL functions that satisfy (2.4) are equivalent.

2.3 Illuminating examples

Consider the following generic discrete-time PWA systems, which form one of the simplest classes of discontinuous systems and will serve as a support for setting up the examples:

ξ(k+1) = G(ξ(k)) := A_j ξ(k) + f_j if ξ(k) ∈ Ω_j, (2.5a)
ξ̃(k+1) = g(ξ̃(k), z(k)) := A_j ξ̃(k) + f_j + z(k) if ξ̃(k) ∈ Ω_j, (2.5b)

with z(k) ∈ B_µ for some small µ ∈ R_(0,∞), k ∈ Z₊, and where A_j ∈ R^{n×n}, f_j ∈ R^n for all j ∈ S (a finite set of indices) and {Ω_j ⊆ R^n | j ∈ S} defines a partition of X, meaning that ∪_{j∈S} Ω_j = X and Ω_i ∩ Ω_j = ∅ for i ≠ j, with the sets Ω_j not necessarily closed.

Firstly, we present a simple one-dimensional example of a discontinuous system that admits a continuous SL function but is not GAS.

Example 1: Consider the discrete-time system (2.5a) with j ∈ S := {1, 2}, A₁ = f₁ = 0, A₂ = 0.5, f₂ = 0.5 and the partition given by Ω₁ = {x ∈ R | x ≤ 1}, Ω₂ = {x ∈ R | x > 1}. One can easily check that lim_{k→∞} ξ(k) = 1 for any ξ(0) = x ∈ (1, ∞) = Ω₂ and thus, this system is not GAS. Consider the function V(x) := |x|. Clearly, for x ∈ Ω₁ \ {0} we have V(G(x)) − V(x) = −V(x) < 0 and, for x ∈ Ω₂, we have V(G(x)) − V(x) = |0.5(x+1)| − |x| < |x| − |x| = 0. Hence, V(x) is a continuous SL function. However, V(x) is not a USL function, as for any α₃ ∈ K it holds that lim_{x↓1}(V(G(x)) − V(x)) = lim_{x↓1}(|0.5(x+1)| − |x|) = 0 > −α₃(1).
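The behavior described in Example 1 is easy to reproduce numerically; a minimal sketch (Python, scalar case, with the data of the example):

```python
def G(x):
    # PWA map of Example 1: A1 = f1 = 0 on Omega_1 = {x <= 1},
    # A2 = f2 = 0.5 on Omega_2 = {x > 1}
    return 0.0 if x <= 1 else 0.5 * x + 0.5

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(G(xs[-1]))
    return xs

xs = trajectory(2.0, 50)
# V(x) = |x| strictly decreases along this trajectory (SL property) ...
assert all(abs(xs[k + 1]) < abs(xs[k]) for k in range(50))
# ... yet the state converges to 1 rather than to the origin: the system is not GAS
assert abs(xs[-1] - 1.0) < 1e-9
```

Starting from ξ(0) = 2 the iterates are 1 + 2^(−k), so |ξ(k)| decreases at every step while ξ(k) → 1, which illustrates why the SL property alone is too weak for discontinuous dynamics.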
As illustrated above, the system of Example 1 admits a continuous SL function but the trajectories do not converge to the origin globally. This indicates that SL functions (even continuous ones) which are not USL functions do not guarantee GAS for discrete-time discontinuous systems. Hence,

one must strive for a USL function to guarantee GAS of a discrete-time discontinuous system. For a proof that (discontinuous) USL functions imply GAS see, for example, Chapter 4 in this thesis. The interested reader is also referred to (Nesic et al., 1999) for a proof that a GAS discrete-time system always admits a possibly discontinuous USL function.

Figure 2.1: The function G(·) for the system of Example 2.

Example 2: Consider now the discrete-time system (2.5a) with j ∈ S := {1, 2}, A₁ = A₂ = 0, f₁ = 0, f₂ = 1 and the partition given by Ω₁ = {x ∈ R | x ≤ 1}, Ω₂ = {x ∈ R | x > 1}. Figure 2.1 shows the values of the function G(x). One can easily observe that any trajectory ξ(k) of system (2.5a) starting from an initial condition ξ(0) = x ∈ R satisfies |ξ(k)| ≤ |ξ(0)| for all k ∈ Z₊ (with strict inequality for k ∈ Z_{≥1} when ξ(0) = x ≠ 0) and converges exponentially to the origin. Actually, any trajectory ξ(k) reaches the origin in 2 discrete-time steps or less. Furthermore, it can be proven that V(x) := Σ_{i=0}^{∞} ξ(i)² is a USL function, where ξ(i) denotes the trajectory of system (2.5a) at time i ∈ Z₊, obtained from initial condition ξ(0) = x ∈ R. Indeed, since V(x) = Σ_{i=0}^{∞} ξ(i)² = ξ(0)² + ξ(1)² for any ξ(0) = x ∈ R, it holds that V(G(x)) − V(x) ≤ −α₃(|x|) for all x ∈ R, where α₃(s) := s². An explicit expression for V(·) is:

V(x) = Σ_{i=0}^{∞} ξ(i)² = ξ(0)² + ξ(1)² = x² + 1 if x > 1, and x² if x ≤ 1,

which shows that V(·) is discontinuous at x = 1. Next consider the case when z(k) = µ ∈ R_(0,∞) for all k ∈ Z₊ in (2.5b). Then, the origin of the perturbed system (2.5b) corresponding to the nominal system (2.5a) is not ISS, as x = 1 + µ is an equilibrium of (2.5b) to which all trajectories with initial conditions ξ(0) = x ∈ (1, ∞) = Ω₂ converge. Hence, no matter how small µ ∈ R_(0,∞) is taken, the system (2.5b) is

not ISS(R, B_µ). The following conclusions can be drawn from Example 2: (i) GES discrete-time discontinuous systems are not necessarily ISS, even to arbitrarily small inputs; (ii) existence of a discontinuous USL function does not guarantee ISS, even to arbitrarily small inputs. This indicates that additional conditions must be imposed on USL functions to attain ISS. For example, continuity of the USL function is known to guarantee inherent ISS (Lazar et al., 2009a), but this condition is too restrictive for discrete-time discontinuous systems such as PWA systems. Thus, in the next section we will propose ISS tests that can deal with discontinuous USL functions.

Remark 2.3.1 The GES discrete-time system of Example 2 also admits a continuous SL function, i.e. V(x) := |x|, which satisfies V(G(x)) − V(x) < 0 for all x ≠ 0. However, as was the case in Example 1, V(x) = |x| is not a USL function, as for any α₃ ∈ K it holds that lim_{x↓1}(V(G(x)) − V(x)) = lim_{x↓1}(1 − x) = 0 > −α₃(1). Hence, the existence of a continuous SL function does not necessarily guarantee any robustness for discontinuous systems.

The next example shows a constrained 2D PWA system that is exponentially stable but has no robustness. Such constrained PWA systems arise inherently in explicit model predictive control of linear or PWA systems (Grieder et al., 2005; Lazar et al., 2006; Baotic et al., 2006), as the dynamics that describe the closed-loop system. Therefore, this makes the following example especially relevant for MPC closed-loop systems.

Example 3: Consider the discontinuous nominal and perturbed PWA systems (2.5) with v(k) ∈ B_µ = {v ∈ R² | ‖v‖ ≤ µ} for some µ ∈ R_(0,∞), j ∈ S := {1, ..., 9}, k ∈ Z₊, and where

A_j = [1 0; 0 1] for j ≠ 7;  A₇ = [0.35 0.6062; 0.0048 0.0072];
f₁ = f₂ = [−0.5; 0];  f₃ = f₄ = f₅ = f₆ = [0; −1];  f₇ = [0; 0];  f₈ = [0.4; −0.1];  f₉ = [−0.4; 0.1].

The system state takes values in the set X := ∪_{j∈S} Ω_j, where the regions Ω_j are polyhedra (the exact representations are omitted due to space limitations), as shown in Figure 2.2. The state trajectories¹ of system (2.5a) obtained for x(0) = [0.2 3.6]ᵀ ∈ Ω₂ (square-dotted line) and x(0) = [0.2 3.601]ᵀ ∈ Ω₁ (circle-dotted line) are plotted in Figure 2.2.

¹ Note that the regions Ω₁ and Ω₂ are such that for all x ∈ Ω̄₁ ∩ Ω̄₂ the dynamics x(k+1) = A₂x(k) + f₂ is active, i.e. Ω̄₁ ∩ Ω̄₂ ⊆ Ω₂.

Figure 2.2: A constrained 2D PWA system with no robustness: nominal (square- and circle-dotted lines) and perturbed (star-solid line) trajectories.

Theorem 2.3.2 The following statements hold: (i) The function V(x) := ‖x(10)‖ + Σ_{i=0}^{9} ‖Qx(i)‖, where Q = 0.04·I₂ and x(i) is the solution of system (2.5a) obtained at time i ∈ Z_[0,10] from initial condition x(0) := x ∈ X, is a discontinuous USL function for system (2.5a); (ii) The PWA system (2.5a) is exponentially stable in X; (iii) For any µ ∈ R_(0,∞) the PWA system (2.5b) is not ISS in X for inputs in B_µ.

Proof: (i) The following properties hold for the PWA system (2.5a) of Example 3, as can be seen by inspection of the dynamics: (P1) ‖x(k+1)‖ ≤ ‖x(k)‖ for all x(k) ∈ X, k ∈ Z₊; (P2) For any initial state x(0) ∈ X the state trajectory satisfies x(k) ∈ Ω₇ for all k ∈ Z_{≥10}; (P3) ‖A₇‖ < 1; (P4) Ω₇ is a Positively Invariant (PI) set for the dynamics x(k+1) =

A₇x(k) + f₇; (P5) X is a PI set for the PWA system (2.5a).

First, we prove that V(x) = ‖x(10)‖ + Σ_{i=0}^{9} ‖Qx(i)‖ satisfies inequality (2.2a). For any τ ∈ (0, 0.04) it holds that ‖Qx‖ ≥ τ‖x‖. Therefore, α₁(‖x‖) ≤ V(x) is satisfied for all x ∈ X with α₁(‖x‖) := τ‖x‖. For any state trajectory {x(i)}_{i∈Z_[0,10]} there exists a set of indices j_i ∈ S, i ∈ Z_[0,10], such that x(i) ∈ Ω_{j_i} (note that by property (P2), j₉ = j₁₀ = 7 for any x(0) ∈ X). Then, using the triangle inequality, for any x ∈ X (note that x(0) := x) we obtain that

V(x) ≤ ‖Qx(0)‖ + ‖QA_{j₀}x(0)‖ + ‖Qf_{j₀}‖ + ‖QA_{j₁}A_{j₀}x(0)‖ + ‖QA_{j₁}f_{j₀}‖ + ‖Qf_{j₁}‖ + ... + ‖A_{j₉}···A_{j₀}x(0)‖ + ‖A_{j₉}···A_{j₁}f_{j₀}‖ + ... + ‖f_{j₉}‖.

Note that, by property (P4), for all x(0) = x ∈ Ω₇ we have that x(i) ∈ Ω₇ for all i ∈ Z₊ and hence, x(i+1) = A₇x(i) for all i ∈ Z₊, as f₇ = [0 0]ᵀ. Otherwise, if x(0) = x ∈ X \ Ω₇, since 0 ∈ int(Ω₇) and Ω₇ is bounded, there exists a positive number ζ > 0 such that

‖Qf_{j₀}‖ + (‖QA_{j₁}f_{j₀}‖ + ‖Qf_{j₁}‖) + ... + (‖A_{j₉}···A_{j₁}f_{j₀}‖ + ... + ‖f_{j₉}‖) ≤ ζ‖x(0)‖.

Then, using x(0) = x and the inequality ‖Qx‖ ≤ ‖Q‖‖x‖, it follows that V(x) ≤ α₂(‖x‖) for all x ∈ X with α₂(‖x‖) := θ‖x‖, where

θ := ‖Q‖ + ‖Q‖ Σ_{i=1}^{9} Π_{p=0}^{i−1} ‖A_{j_p}‖ + Π_{p=0}^{9} ‖A_{j_p}‖ + ζ.

Finally, for any x ∈ X ∩ Ω_j and any j ∈ S, by properties (P2), (P3) it holds that

V(A_jx + f_j) − V(x) = −‖Qx‖ + (‖A₇x(10)‖ − ‖x(10)‖ + ‖Qx(10)‖) ≤ −‖Qx‖ ≤ −τ‖x‖ =: −α₃(‖x‖).

In the above inequality we used the fact that

‖A₇x‖ − ‖x‖ ≤ (‖A₇‖ − 1)‖x‖ = −0.0438‖x‖ ≤ −0.04‖x‖ = −‖Q‖‖x‖ ≤ −‖Qx‖,

for all x ∈ R^n. Therefore, the function V(x) = ‖x(10)‖ + Σ_{i=0}^{9} ‖Qx(i)‖ is a USL function for system (2.5a) of Example 3. One can easily check that V(x) is discontinuous, for example, at x = [0.2 3.6]ᵀ ∈ Ω₂.

(ii) By property (P5), X is a PI set for the PWA system (2.5a) of Example 3 and hence, a valid domain of attraction. Therefore, exponential stability of the origin follows directly from the result of part (i), due to the special form of the K-functions α₁(·), α₂(·) and α₃(·) established in the proof of part (i).

(iii) To illustrate the non-robustness phenomenon for the perturbed PWA system (2.5b) of Example 3, we constructed an additive disturbance v(k) which at times k = 0, 2, 4, ... is equal to [0 ε]ᵀ and at times k = 1, 3, 5, ... is equal to [0 −ε]ᵀ, where ε > 0 can be taken arbitrarily small. The system trajectory (see Figure 2.2 for a plot: star-solid line) with initial state x(0) = [0.2 3.6]ᵀ ∈ Ω̄₂ ∩ Ω̄₁ is given by x(k) = [0.2 3.6]ᵀ for k = 0, 2, 4, ... and x(k) = [−0.3 3.6+ε]ᵀ for k = 1, 3, 5, .... This is a limit cycle with period 2 and ‖x(k)‖ ≥ 3.6 for all k ∈ Z₊. Then, for any β ∈ KL and γ ∈ K, we can take ε > 0 arbitrarily small and k̄ ∈ Z₊ large enough such that β(‖x(0)‖, k) + γ(‖w_[k−1]‖) < 3.6 ≤ ‖x(k)‖, ∀k ≥ k̄. Therefore, for any ε > 0, the PWA system (2.5b) of Example 3 is not ISS for initial conditions in X and inputs in B_ε.

Notice that, by taking any finite polyhedral partition of R² \ X, defining the dynamics in each polyhedral region of this partition to be x(k+1) = [0 0; 0 0] x(k) + [0.1; 0.1], k ∈ Z₊, and adding these affine subsystems to the PWA system (2.5a), one obtains a 2D PWA system that is GES, but has no robustness to arbitrarily small disturbances.

Remark 2.3.3 While the disturbance signal used in Example 2 does not have a particular structure, a specific disturbance signal was employed in Example 3 to destroy ISS.
However, in practice there is often still some structure in the disturbances (for example, time delays in embedded systems or cyclic sensor/encoder errors), which makes such a situation not highly unlikely to happen.

Remark 2.3.4 By Theorem 14 of (Kellett and Teel, 2004), Example 2 implies that there exist GES discrete-time systems that do not admit a continuous USL function. However, as shown above, the PWA system of Example 2 does admit a discontinuous USL function, which is in conformity with the converse stability result for discrete-time discontinuous systems presented in (Nesic et al., 1999).

2.4 ISS tests based on discontinuous USL functions

In this section we consider piecewise continuous (PWC) nonlinear systems of the form

ξ(k+1) = G(ξ(k)) := G_j(ξ(k)) if ξ(k) ∈ Ω_j, k ∈ Z₊, (2.6)

where each G_j : R^n → R^n, j ∈ S, is assumed to be a continuous function. PWA systems are obtained as a particular case by setting G_j(x) = A_jx + f_j. Consider also a perturbed version of the above system, obtained by including additive disturbances, i.e.

ξ̃(k+1) = g(ξ̃(k), z(k)) := G_j(ξ̃(k)) + z(k) if ξ̃(k) ∈ Ω_j, k ∈ Z₊. (2.7)

Furthermore, we consider discontinuous USL functions V : R^n → R₊, with V(0) = 0,

V(x) := V_i(x) if x ∈ Γ_i, i ∈ J, (2.8)

where for each i ∈ J, V_i : R^n → R₊ is a continuous function that satisfies

|V_i(x) − V_i(y)| ≤ σ_i(‖x − y‖), ∀x, y ∈ cl(Γ_i), (2.9)

for some σ_i ∈ K. Examples of functions that satisfy this property include uniformly continuous functions on compact sets and Lipschitz continuous functions. This captures a wide range of frequently used Lyapunov functions for PWA systems, such as piecewise quadratic (PWQ), PWA or piecewise polyhedral functions (i.e. functions defined using the infinity norm or the 1-norm), including the value functions that arise in model predictive control of PWA systems.

In (2.6) and (2.8), {Ω_j | j ∈ S} and {Γ_i | i ∈ J}, with S := {1, ..., s} and J := {1, ..., M} finite sets of indices, denote partitions of R^n. More precisely, we assume that ∪_{j∈S} Ω_j = R^n, Ω_i ∩ Ω_j = ∅ for i ≠ j, (i, j) ∈ S × S, and int(Ω_i) ≠ ∅ for all i ∈ S, and likewise for the regions Γ_i, i ∈ J.

We assume that a discontinuous USL function of the form (2.8) is available for system (2.6). We have seen from Example 2 in the previous section that this does not necessarily guarantee anything in terms of ISS. However, the goal is now to develop tests for ISS of system (2.7) based on the discontinuous USL function (2.8).
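To make the objects in (2.8) and (2.9) concrete, the discontinuous USL function of Example 2 can be written in this piecewise form with J = {1, 2}; a minimal sketch (Python, using the data of Example 2):

```python
def G(x):
    # Example 2: A1 = A2 = 0, f1 = 0, f2 = 1, Omega_1 = {x <= 1}, Omega_2 = {x > 1}
    return 0.0 if x <= 1 else 1.0

def V(x):
    # the USL function in the piecewise form (2.8):
    # V_1(x) = x^2 on Gamma_1 = {x <= 1},  V_2(x) = x^2 + 1 on Gamma_2 = {x > 1}
    return x * x if x <= 1 else x * x + 1.0

# each piece V_i satisfies (2.9) on bounded sets, but V jumps by 1 across
# the boundary, so the discontinuity set is X_D = {1}
assert V(1.0) == 1.0
assert abs(V(1.0 + 1e-9) - 2.0) < 1e-6

# with a constant disturbance z(k) = mu, the state is trapped at 1 + mu: no ISS
mu = 1e-6
x = 5.0
for _ in range(100):
    x = G(x) + mu
assert x == 1.0 + mu
```

The perturbed successor G(x) + z lands arbitrarily close to the jump point of V, which is precisely the situation the tests of this section are designed to rule out or handle.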

The first result is based on examining the trajectory of the PWC system (2.6) with respect to the set of states at which V(·) may be discontinuous. Let µ ∈ R_(0,∞) and let P ⊆ R^n with 0 ∈ int(P) be a RPI set for system (2.7) with respect to B_µ, i.e. R₁(P) ⊕ B_µ ⊆ P, where R₁(P) := {G(x) | x ∈ P} is the one-step reachable set for system (2.6) from states in P. Let X_D ⊆ P denote the set of all states in P at which V(·) is not continuous. If one can verify that any state trajectory {ξ(k)}_{k∈Z₊} of (2.6) is a distance µ ∈ R_(0,∞) away from the set X_D for all ξ(0) = x ∈ P and all k ∈ Z_[1,∞), then it can be proven that ISS(P, B_µ) is achieved, as formulated in the following result.

Theorem 2.4.1 Suppose that the PWC system (2.6) admits a (discontinuous²) USL function of the form (2.8) and consequently, (2.6) is GAS. Furthermore, suppose that there exist a µ ∈ R_(0,∞) and a set P ⊆ R^n with 0 ∈ int(P) such that

d(x, X_D) > µ for all x ∈ R₁(P) (2.10)

and P is a RPI set³ for system (2.7) with respect to B_µ. Then, the PWC system (2.7) is ISS(P, B_µ).

Proof: First, we will prove that there exists a K-function σ(·) (independent of x) such that for all x ∈ P and for any two points y, ȳ ∈ {G(x)} ⊕ B_µ it holds that |V(y) − V(ȳ)| ≤ σ(‖y − ȳ‖). By (2.9), for each i ∈ J and any two points y, ȳ ∈ cl(Γ_i) there exists a K-function σ_i(·) such that |V_i(y) − V_i(ȳ)| ≤ σ_i(‖y − ȳ‖). The inequality (2.10) implies that V(·) is continuous on the set {G(x)} ⊕ B_µ for any x ∈ P. For any two points y, ȳ ∈ {G(x)} ⊕ B_µ consider the line segment L(y, ȳ) := {y + α(ȳ − y) | 0 ≤ α ≤ 1} between y and ȳ. We will construct a set of points {z₀, ..., z_M̄} ⊂ L(y, ȳ), with M̄ ≤ M, on this line segment such that: (i) z₀ = y, z_M̄ = ȳ, and (ii) the segment between z_{p−1} and z_p lies in cl(Γ_{i_{p−1}}) for some i_{p−1} ∈ J, for all p = 1, ..., M̄. To construct this set, take i₀ ∈ J such that z₀ = y ∈ cl(Γ_{i₀}), α₀ = 0 and α₁ := max{α ∈ [0, 1] | y + α(ȳ − y) ∈ cl(Γ_{i₀})}.
Note that due to closedness of cl(Γ_{i₀}) the maximum is attained and z₁ := y + α₁(ȳ − y) ∈ cl(Γ_{i₀}). In addition, for all α ∈ (α₁, 1] it holds that y + α(ȳ − y) ∉ cl(Γ_{i₀}). If α₁ = 1 (and thus ȳ ∈ cl(Γ_{i₀})) the construction is complete. If α₁ ≠ 1, then there is an i₁ ∈ J \ {i₀} with z₁ ∈ cl(Γ_{i₁}). Take α₂ := max{α ∈ [α₁, 1] | y + α(ȳ − y) ∈ cl(Γ_{i₁})} and observe that z₂ := y + α₂(ȳ − y) ∈ cl(Γ_{i₁}) and for all α ∈ (α₂, 1] we have that y + α(ȳ − y) ∉ cl(Γ_{i₀}) ∪ cl(Γ_{i₁}). If α₂ = 1 the construction

² Note that the result also holds for continuous USL functions, as then X_D = ∅.
³ Observe that P = R^n is a possible choice of a RPI set with respect to B_µ for any µ ∈ R_(0,∞).

is complete. Otherwise, continue the construction. This construction will terminate in at most M steps, as the number of regions cl(Γ_i), i = 1, ..., M, is finite and ȳ lies in at least one of them. At termination, we have arrived at the set of points {z₀, ..., z_M̄} with the mentioned properties. Due to continuity of V(·) in the region {G(x)} ⊕ B_µ, for z_p ∈ Γ_{i_{p−1}} ∩ Γ_{i_p}, p = 1, ..., M̄, we have that V(z_p) = V_{i_{p−1}}(z_p) = V_{i_p}(z_p). Then, for any y, ȳ ∈ {G(x)} ⊕ B_µ, it follows that

|V(y) − V(ȳ)| = |Σ_{p=1}^{M̄} (V(z_{p−1}) − V(z_p))| ≤ Σ_{p=1}^{M̄} |V(z_{p−1}) − V(z_p)| = Σ_{p=1}^{M̄} |V_{i_{p−1}}(z_{p−1}) − V_{i_{p−1}}(z_p)| ≤ Σ_{p=1}^{M̄} σ_{i_{p−1}}(‖z_{p−1} − z_p‖) ≤ Σ_{p=1}^{M̄} σ_{i_{p−1}}(‖y − ȳ‖).

Letting σ(s) := M max_{i∈J} σ_i(s) ∈ K, one obtains |V(y) − V(ȳ)| ≤ σ(‖y − ȳ‖) for any y, ȳ ∈ {G(x)} ⊕ B_µ. Since for any v ∈ B_µ it holds that g(x, v) = G(x) + v ∈ {G(x)} ⊕ B_µ, it follows that:

V(g(x, v)) − V(G(x)) ≤ σ(‖v‖), ∀x ∈ P, ∀v ∈ B_µ. (2.11)

As by the hypothesis V(·) is a USL function for the PWC system (2.6), we have that α₁(‖x‖) ≤ V(x) ≤ α₂(‖x‖) and

V(G(x)) − V(x) ≤ −α₃(‖x‖), ∀x ∈ P, (2.12)

for some α₁, α₂, α₃ ∈ K. Adding (2.11) and (2.12) yields:

V(g(x, v)) − V(x) ≤ −α₃(‖x‖) + σ(‖v‖), ∀x ∈ P, ∀v ∈ B_µ.

Hence, V(·) is an ISS Lyapunov function for the PWC system (2.7). The statement then follows from Theorem 2.2.4.

The constant µ can be chosen as follows:

0 < µ ≤ µ* := min_{j∈S} inf{‖G_j(y) − ȳ‖ | y ∈ Ω_j ∩ P, ȳ ∈ X_D}. (2.13)

If the set X_D is the union of a finite number of polyhedra, the sets Ω_j, j ∈ S, and P are polyhedra, each G_j, j ∈ S, is an affine function and the infinity

norm (or the 1-norm) is used in (2.13), a solution to the optimization problem in (2.13) can be obtained by solving a finite number of linear programming problems (quadratic programming problems if the 2-norm is used). If the optimization problem in (2.13) yields a strictly positive µ*, then µ* ∈ R_(0,∞) can be regarded as a measure of the (worst case) inherent robustness of system (2.6).

The sufficient condition (2.10) can be relaxed, as shown by the next result, in the sense that the trajectory {ξ(k)}_{k∈Z₊} of system (2.6) is now allowed to intersect the set X_D.

Proposition 2.4.2 Let P ⊆ R^n with 0 ∈ int(P) be a RPI set for system (2.7) with respect to B_µ for some µ ∈ R_(0,∞). Suppose that the PWC system (2.6) admits a function of the form (2.8) that satisfies (2.3a) for all x ∈ P. Furthermore, suppose that there exists α₃ ∈ K such that

max_{i∈J} V_i(G(x)) − V(x) ≤ −α₃(‖x‖), ∀x ∈ P. (2.14)

Then, the PWC system (2.7) is ISS(P, B_µ).

The above result is based on a stronger, more conservative extension of the stabilization conditions from (Johansson, 1999; Mignone et al., 2000; Ferrari-Trecate et al., 2002; Feng, 2002; Daafouz et al., 2002), as it requires that the Lyapunov function is decreasing irrespective of which dynamics might be active at the next step. The proof of Proposition 2.4.2 follows from the proof of the less conservative result formulated next in Theorem 2.4.3. The sufficient condition (2.14) can be significantly relaxed, as follows. Consider the set Z := {x ∈ P | ({G(x)} ⊕ B_µ) ∩ X_D ≠ ∅} and define, for x ∈ Z,

M(x) := {i ∈ J | ({G(x)} ⊕ B_µ) ∩ Γ_i ≠ ∅}.

Theorem 2.4.3 Suppose that the PWC system (2.6) admits a (discontinuous) USL function of the form (2.8). Furthermore, suppose that there exist a µ ∈ R_(0,∞), a K-function ᾱ₃(·) and a set P ⊆ R^n with 0 ∈ int(P) such that

max_{i∈M(x)} V_i(G(x)) − V(x) ≤ −ᾱ₃(‖x‖), ∀x ∈ Z (2.15)

and P is a RPI set for system (2.7) with respect to B_µ. Then, the PWC system (2.7) is ISS(P, B_µ).
Proof: As in the proof of Theorem 2.4.1, we will show that V(·) satisfies the ISS inequalities (2.2). For any x ∈ P only the following

situations can occur: (A) ({G(x)} ⊕ B_µ) ∩ X_D = ∅ or (B) x ∈ Z. In case (A), as shown in the proof of Theorem 2.4.1, by continuity of V(·) on {G(x)} ⊕ B_µ, there exists a σ ∈ K (independent of x), as constructed in the proof of Theorem 2.4.1, such that

V(g(x, v)) − V(x) ≤ −α₃(‖x‖) + σ(‖v‖), ∀v ∈ B_µ. (2.16)

In case (B), suppose that v ∈ B_µ is such that G(x) ∈ Γ_p and G(x) + v ∈ Γ_p for some p ∈ J. In this case p ∈ M(x). Then, since V(G(x)) = V_p(G(x)) and V(G(x) + v) = V_p(G(x) + v), by continuity of V_p(·), inequality (2.16) holds with the same K-function σ(·) constructed in the proof of Theorem 2.4.1. Otherwise, if v ∈ B_µ is such that G(x) ∈ Γ_p and G(x) + v ∈ Γ_i for some p, i ∈ J, p ≠ i, we have that V(G(x)) = V_p(G(x)), V(G(x) + v) = V_i(G(x) + v) and i ∈ M(x). Then, by continuity of V_i(·) and inequality (2.15) we obtain:

V(G(x) + v) − V(x) = V_i(G(x) + v) − V(x) = V_i(G(x)) − V(x) + V_i(G(x) + v) − V_i(G(x)) ≤ max_{i∈M(x)} V_i(G(x)) − V(x) + σ_i(‖v‖) ≤ −ᾱ₃(‖x‖) + σ(‖v‖),

with σ_i(·) and σ(·) as defined in the proof of Theorem 2.4.1. Letting α̂₃(s) := min(α₃(s), ᾱ₃(s)) ∈ K, it follows that

V(g(x, v)) − V(x) = V(G(x) + v) − V(x) ≤ −α̂₃(‖x‖) + σ(‖v‖), ∀x ∈ P, ∀v ∈ B_µ.

Therefore, V(·) is an ISS Lyapunov function for system (2.7). The statement then follows from Theorem 2.2.4.

Observe that (2.13) amounts to an a posteriori check that must be performed on a given USL function of the form (2.8). In contrast, condition (2.14) can be specified a priori when computing a USL function of the form (2.8), and it can be cast as a semidefinite programming problem for piecewise quadratic (PWQ) functions and PWA systems. On the same issue, condition (2.15) involves the set X_D and hence, amounts to an a posteriori check that must be performed on a given USL function of the form (2.8).
Under certain reasonable assumptions (e.g., X_D is the union of a finite number of polyhedra, the regions Ω_j, j ∈ S, and Γ_i, i ∈ J, are polyhedra, the system is PWA, the USL function is convex), checking (2.15) amounts to solving a finite number of convex optimization problems.
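As an illustration of this reduction to convex programs, a single term of the optimization in (2.13) can be posed as one linear program when G_j(y) = Ay + f is affine, Ω_j ∩ P is a polyhedron {y | H_y y ≤ h_y}, a polyhedral component of X_D is {z | H_z z ≤ h_z}, and the infinity norm is used. A sketch (Python with scipy; the box-shaped sets in the test below are illustrative assumptions, not data from the chapter):

```python
import numpy as np
from scipy.optimize import linprog

def min_inf_norm_gap(A, f, Hy, hy, Hz, hz):
    # Solve  min_{y,z,t} t  s.t.  -t <= (A y + f - z)_i <= t for all i,
    #        Hy y <= hy  (y in Omega_j ∩ P),  Hz z <= hz  (z in a piece of X_D),
    # so that the optimal t equals min ||G_j(y) - z||_inf over the two polytopes.
    n = A.shape[1]
    c = np.zeros(2 * n + 1)
    c[-1] = 1.0                                # minimize t (last variable)
    rows, rhs = [], []
    for i in range(A.shape[0]):
        ei = np.eye(n)[i]
        rows.append(np.concatenate([A[i], -ei, [-1.0]]))   #  (Ay+f-z)_i <= t
        rhs.append(-f[i])
        rows.append(np.concatenate([-A[i], ei, [-1.0]]))   # -(Ay+f-z)_i <= t
        rhs.append(f[i])
    for Hr, h in zip(Hy, hy):                  # polytope constraints on y
        rows.append(np.concatenate([Hr, np.zeros(n), [0.0]])); rhs.append(h)
    for Hr, h in zip(Hz, hz):                  # polytope constraints on z
        rows.append(np.concatenate([np.zeros(n), Hr, [0.0]])); rhs.append(h)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(None, None)] * (2 * n) + [(0, None)])
    return res.fun
```

Taking the minimum of such LP values over all pairs (j, polyhedral piece of X_D) yields µ*; working with closed polytopes replaces the infimum by a minimum, which is conservative only on open facets.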

Remark 2.4.4 The result of Theorem 2.4.3 also holds when condition (2.15) is replaced by

max_{i∈M(x)} V_i(G(x)) − V(G(x)) ≤ cα₃(‖x‖), ∀x ∈ Z, (2.17)

for some c ∈ R_[0,1), which might be easier to check than (2.15).

Remark 2.4.5 The tests developed in this section require that for each i ∈ J, V_i is a continuous function that satisfies (2.9) and is defined on: (i) Γ_i for Theorem 2.4.1, (ii) P ⊆ R^n for Proposition 2.4.2 and (iii) cl(Γ_i) ⊕ B_µ for some µ ∈ R_(0,∞) for Theorem 2.4.3. Some of these are additional requirements with respect to USL functions, which, in principle, only require that each V_i is defined on Γ_i. An alternative to the tests presented in this section is to directly check condition (2.2b), which for PWA dynamics and PWQ candidate ISS Lyapunov functions can lead to tractable optimization problems, as shown recently in (Lazar and Heemels, 2008a).

2.5 Conclusions

In this chapter we analyzed two types of Lyapunov functions in terms of their suitability for establishing stability and input-to-state stability of discrete-time discontinuous systems. Via examples we exposed certain subtleties that arise in the classical Lyapunov methods when they are applied to discrete-time discontinuous systems, as follows:
- The existence of a continuous SL function does not necessarily imply GAS (Example 1);
- The existence of a continuous SL function or a discontinuous USL function does not necessarily imply ISS, even to arbitrarily small inputs (Example 2);
- GES does not necessarily imply the existence of a continuous USL function (Example 2; see also (Kellett and Teel, 2004)).
These results, together with the fact that existence of a possibly discontinuous USL function is equivalent to GAS (Nesic et al., 1999) (see also Chapter 4 in this thesis), issue a strong warning regarding existing nominally stabilizing state-feedback synthesis methods for discrete-time discontinuous systems, including both static feedback approaches (Johansson, 1999; Mignone et al., 2000; Ferrari-Trecate et al., 2002; Feng, 2002; Daafouz et al.,

2002) and MPC techniques (Lazar et al., 2005; Grieder et al., 2005; Lazar et al., 2006; Baotic et al., 2006). This warning motivates the results on input-to-state stabilizing (sub-optimal) MPC of discontinuous systems presented in the next chapter. To render the many available procedures for obtaining Lyapunov functions, which typically yield discontinuous Lyapunov functions (e.g., value functions in MPC or PWQ Lyapunov functions), applicable to discontinuous systems, we presented several ISS tests based on discontinuous Lyapunov functions. These tests can be employed to establish ISS of nominally asymptotically stable discrete-time PWC systems in the case when a discontinuous USL function is available.

3 Predictive control of hybrid systems: Input-to-state stability results for suboptimal solutions

3.1 Introduction
3.2 Preliminaries
3.3 MPC scheme set-up
3.4 Input-to-state stability results
3.5 Asymptotic stability results
3.6 Conclusion

This chapter presents a novel model predictive control (MPC) scheme that achieves input-to-state stabilization of constrained discontinuous nonlinear and hybrid systems. Input-to-state stability (ISS) is guaranteed when an optimal solution of the MPC optimization problem is attained. Special attention is paid to the effect that sub-optimal solutions have on ISS of the closed-loop system. This issue is of interest as, firstly, the infimum of MPC optimization problems need not be attained and, secondly, numerical solvers usually provide only sub-optimal solutions. An explicit relation is established between the deviation of the predictive control law from the optimum and the resulting deterioration of the ISS property of the closed-loop system. By imposing stronger conditions on the sub-optimal solutions, ISS can even be attained in this case.

3.1 Introduction

Discrete-time discontinuous systems form a powerful and general modeling class for the approximation of hybrid and nonlinear phenomena, which also includes the class of piecewise affine (PWA) systems (Heemels et al., 2001). The modeling capability of the latter class of systems has already been shown in several applications, including switched power converters, automotive systems and systems biology. As a consequence, there is an increasing interest in developing synthesis techniques for robust control of discrete-time hybrid systems. The model predictive control (MPC) methodology (Mayne et al.,

2000) has proven to be one of the most successful frameworks for this task; see, for example, (Bemporad and Morari, 1999; Kerrigan and Mayne, 2002; Lazar et al., 2006) and the references therein. In this chapter we are interested in input-to-state stability (ISS) (Jiang and Wang, 2001) as a property to characterize robust stability of hybrid systems in closed loop with MPC. More precisely, we consider systems that are piecewise continuous and affected by additive disturbances. It is known that for such discontinuous systems most of the results obtained for smooth nonlinear MPC (Mayne et al., 2000; Limon et al., 2002a; Grimm et al., 2007) do not necessarily apply. The min-max MPC methodology (see, e.g., (Lazar et al., 2008a) and the references therein) might be applicable, but its prohibitive computational complexity prevents implementation even for linear systems. As such, computationally feasible input-to-state stabilizing predictive controllers are widely unavailable. In what follows we propose a tightened-constraints MPC scheme for discontinuous systems, along with conditions for ISS of the resulting closed-loop system, assuming that optimal MPC control sequences are implemented. These results advance the existing works on tightened-constraints MPC (Limon et al., 2002a; Grimm et al., 2007), where continuity of the system dynamics is assumed, towards discontinuous and hybrid systems. Guaranteeing robust stability and feasibility in the presence of discontinuities is difficult and requires an innovative usage of tightened constraints, which is conceptually different from the approaches in (Limon et al., 2002a; Grimm et al., 2007). Therein tightened constraints are employed for robust feasibility only.
However, by carefully matching the new tightening approach with the discontinuities in the system dynamics, we achieve both robust feasibility and ISS in the optimal case. Another issue that is neglected in MPC of hybrid systems is the effect of sub-optimal implementations. In particular, an important result was presented in (Spjøtvold et al., 2007), where it was shown that in the case of optimal control of discontinuous PWA systems it is not uncommon that there does not exist a control law that attains the infimum. Moreover, numerical solvers usually provide only sub-optimal solutions. As a consequence, for hybrid systems it is necessary to study if and how stability results for optimal predictive control change in the case of sub-optimal implementations, which forms one of the main topics of this chapter. To cope with MPC control sequences (obtained by solving MPC optimization problems) that are not optimal, but within a margin δ ≥ 0 from the optimum, we introduce the notion of ε-ISS as a particular case of the input-to-state practical stability (ISpS) property (Jiang et al., 1996). Next,

we establish an analytic relation between the optimality margin δ of the solution of the MPC optimization problem and the ISS margin ε(δ). While the ISS results presented in this chapter require the use of a specific robust MPC problem formulation (i.e. based on tightened constraints), we also show that nominal asymptotic stability can be guaranteed for sub-optimal MPC of hybrid systems without any modification to the standard MPC set-up presented in (Mayne et al., 2000). Compared to classical sub-optimal MPC (Scokaert et al., 1999), where an explicit constraint on the MPC cost function is employed, this provides a fundamentally different approach.

3.2 Preliminaries

First, we recall some basic definitions that will be employed in this chapter. Let R, R₊, Z and Z₊ denote the field of real numbers, the set of non-negative reals, the set of integer numbers and the set of non-negative integers, respectively. We use the notation Z_{≥c₁} and Z_{(c₁,c₂]} to denote the sets {k ∈ Z | k ≥ c₁} and {k ∈ Z | c₁ < k ≤ c₂}, respectively, for some c₁, c₂ ∈ Z. For x ∈ R^n let ‖x‖ denote an arbitrary norm and for Z ∈ R^{m×n}, let ‖Z‖ denote the corresponding induced matrix norm. We will use both (z(0), z(1), ...) and {z(l)}_{l∈Z₊}, with z(l) ∈ R^n, l ∈ Z₊, to denote a sequence of real vectors. For a sequence z := {z(l)}_{l∈Z₊} let ‖z‖ := sup{‖z(l)‖ | l ∈ Z₊} and let z_[k] := {z(l)}_{l∈Z_[0,k]}. For a set S ⊆ R^n, we denote by int(S) the interior of S. For two arbitrary sets S ⊆ R^n and P ⊆ R^n, let S ⊖ P := {x ∈ R^n | x + P ⊆ S} denote their Pontryagin difference. For any µ > 0 we define B_µ as {x ∈ R^n | ‖x‖ ≤ µ}. A polyhedron (or a polyhedral set) is a set obtained as the intersection of a finite number of open and/or closed half-spaces. A real-valued function ϕ : R₊ → R₊ belongs to class K if it is continuous, strictly increasing and ϕ(0) = 0. A function β : R₊ × R₊ → R₊ belongs to class KL if for each fixed k ∈ R₊, β(·, k) ∈ K, and for each fixed s ∈ R₊, β(s, ·) is decreasing and lim_{k→∞} β(s, k) = 0.
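The Pontryagin difference S ⊖ P := {x | x + P ⊆ S} defined above has a simple closed form for axis-aligned boxes; a minimal sketch (Python; the interval representation is an illustrative assumption):

```python
def pontryagin_diff_box(S, P):
    # S, P: lists of per-coordinate intervals (lo, hi).
    # x + [plo, phi] lies in [slo, shi] iff slo - plo <= x <= shi - phi,
    # so S ⊖ P shrinks S by the extent of P on each side, coordinate-wise.
    return [(slo - plo, shi - phi) for (slo, shi), (plo, phi) in zip(S, P)]

# shrinking the box [-2, 2]^2 by the disturbance box [-0.5, 0.5]^2:
shrunk = pontryagin_diff_box([(-2, 2), (-2, 2)], [(-0.5, 0.5), (-0.5, 0.5)])
assert shrunk == [(-1.5, 1.5), (-1.5, 1.5)]
```

For general polytopes the same operation is computed facet-wise, but the box case already conveys how disturbance sets erode constraint sets in the tightened formulations used later in the chapter.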
Next, consider a discrete-time system of the form

x(k+1) ∈ G(x(k), w(k)), k ∈ Z₊, (3.1)

where x(k) ∈ R^n is the state, w(k) ∈ W ⊆ R^l is an unknown input at discrete-time instant k ∈ Z₊ and G : R^n × R^l → 2^(R^n) is an arbitrary nonlinear, possibly discontinuous, set-valued function. For simplicity of notation, we assume that the origin is an equilibrium of (3.1) for zero input, i.e. G(0, 0) = {0}.

Definition 3.2.1 (RPI) We call a set P ⊆ R^n robustly positively invariant (RPI) for system (3.1) with respect to W if for all x ∈ P and all w ∈ W it holds that G(x, w) ⊆ P.

Definition 3.2.2 (ε-ISS) Let X with 0 ∈ int(X) and W be subsets of R^n and R^l, respectively. For a given ε ∈ R₊, the perturbed system (3.1) is called ε-ISS in X for inputs in W if there exist a KL-function β and a K-function γ such that, for each x(0) ∈ X and all w = {w(l)}_{l∈Z₊} with w(l) ∈ W for all l ∈ Z₊, all state trajectories of (3.1) with initial state x(0) and input sequence w satisfy

‖x(k)‖ ≤ β(‖x(0)‖, k) + γ(‖w_[k−1]‖) + ε, ∀k ∈ Z_{≥1}.

We call system (3.1) ISS in X for inputs in W if (3.1) is 0-ISS in X for inputs in W.

Definition 3.2.3 (ε-AS) For a given ε ∈ R₊, the 0-input system (3.1), i.e. x(k+1) ∈ G(x(k), 0), k ∈ Z₊, is called ε-asymptotically stable (ε-AS) in X if there exists a KL-function β such that, for each x(0) ∈ X, all state trajectories with initial state x(0) satisfy ‖x(k)‖ ≤ β(‖x(0)‖, k) + ε, ∀k ∈ Z_{≥1}. We call the 0-input system (3.1) AS in X if it is 0-AS in X. We refer to ε by the term ISS (AS) margin.

Theorem 3.2.4 Let d₁, d₂ be non-negative reals, let a, b, c, λ be positive reals with c ≤ b, let α₁(s) := as^λ, α₂(s) := bs^λ, α₃(s) := cs^λ and let σ ∈ K. Furthermore, let X be a RPI set for system (3.1) with respect to W and let V : R^n → R₊ be a function such that

α₁(‖x‖) ≤ V(x) ≤ α₂(‖x‖) + d₁, (3.2a)
V(x⁺) − V(x) ≤ −α₃(‖x‖) + σ(‖w‖) + d₂ (3.2b)

for all x ∈ X, w ∈ W and all x⁺ ∈ G(x, w). Then the system (3.1) is ε-ISS in X for inputs in W with

β(s, k) := α₁⁻¹(3ρ^k α₂(s)),  γ(s) := α₁⁻¹(3σ(s)/(1 − ρ)),
ε := α₁⁻¹(3(d₁ + d₂/(1 − ρ))),  ρ := 1 − c/b ∈ [0, 1). (3.3)

If the inequalities (3.2) hold for d₁ = d₂ = 0, the system (3.1) is ISS in X for inputs in W.
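The explicit bounds (3.3) are straightforward to evaluate once a, b, c, λ, σ and the offsets d₁, d₂ are known; a small helper (Python; the quadratic test data are an illustrative assumption):

```python
def iss_bounds(a, b, c, lam, sigma, d1, d2):
    # alpha_1(s) = a s^lam, alpha_2(s) = b s^lam, alpha_3(s) = c s^lam, c <= b,
    # returns the comparison functions beta, gamma and the margin eps of (3.3)
    assert 0 < c <= b and a > 0 and lam > 0
    rho = 1.0 - c / b                                  # rho in [0, 1)
    a1_inv = lambda r: (r / a) ** (1.0 / lam)          # alpha_1^{-1}
    beta = lambda s, k: a1_inv(3.0 * rho**k * b * s**lam)
    gamma = lambda s: a1_inv(3.0 * sigma(s) / (1.0 - rho))
    eps = a1_inv(3.0 * (d1 + d2 / (1.0 - rho)))
    return beta, gamma, eps

beta, gamma, eps = iss_bounds(a=1.0, b=2.0, c=1.0, lam=2.0,
                              sigma=lambda s: s, d1=0.0, d2=0.0)
assert eps == 0.0                    # d1 = d2 = 0 recovers plain ISS
assert beta(1.0, 10) < beta(1.0, 0)  # beta decays in k since rho = 0.5
```

Note how the margin ε inherits its size from d₁ and d₂: this is exactly the mechanism by which the optimality margin δ of a sub-optimal MPC solution will later translate into an ISS margin ε(δ).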

The proof of Theorem 3.2.4 is similar in nature to the proof of Theorem 4.2.5 given in Chapter 4 of this thesis, obtained by replacing the difference equation with a difference inclusion as in (3.1), and is therefore omitted here. We call a function V(·) that satisfies the hypothesis of Theorem 3.2.4 an ε-ISS function.

3.3 MPC scheme set-up

Consider the piecewise continuous (PWC) system

x(k+1) = g(x(k), u(k), w(k)) := g_j(x(k), u(k)) + w(k) if x(k) ∈ Ω_j, k ∈ Z₊, (3.4)

where each g_j : Ω_j × U → R^n, j ∈ S, is a continuous, possibly nonlinear function of x and S := {1, 2, ..., s} is a finite set of indices. We assume that x and u are constrained to sets X ⊆ R^n and U ⊆ R^m that contain the origin in their interior. The collection {Ω_j ⊆ R^n | j ∈ S} defines a partition of X, meaning that ∪_{j∈S} Ω_j = X and Ω_i ∩ Ω_j = ∅ for i ≠ j, with the sets Ω_j not necessarily closed. We also assume that w takes values in the set W := B_μ, with μ ∈ R_{>0} sufficiently small, as made precise later.

Assumption 3.3.1 For each fixed j ∈ S, g_j(·, ·) satisfies a continuity condition in the first argument, in the sense that there exists a K-function η_j(·) such that ‖g_j(x, u) − g_j(y, u)‖ ≤ η_j(‖x − y‖) for all x, y ∈ Ω_j and all u ∈ U. Moreover, there exists j₀ ∈ S such that 0 ∈ int(Ω_{j₀}) and g_{j₀}(0, 0) = 0.

As we allow g(·, ·, ·) to be discontinuous in x over the switching boundaries, discontinuous PWA systems are a sub-class of the PWC systems given in (3.4). For a fixed N ∈ Z_{≥1}, let (φ(1), ..., φ(N)) denote a state sequence generated by the unperturbed system (3.4), i.e.

φ(i+1) := g_j(φ(i), u(i)) if φ(i) ∈ Ω_j, (3.5)

for i = 0, ..., N−1, from initial condition φ(0) := x(k) and by applying an input sequence u_[N−1] = (u(0), ..., u(N−1)) ∈ U^N := U × ... × U. Let X_T ⊆ X denote a set with 0 ∈ int(X_T). Define η(s) := max_{j∈S} η_j(s). As the maximum of a finite number of K-functions is a K-function, η ∈ K.
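A minimal sketch of simulating the unperturbed PWC prediction model (3.5): at every step the active mode j is the index of the region Ω_j containing the current state. The scalar two-mode system and its partition below are invented for illustration only.

```python
# Sketch of one step of the unperturbed PWC dynamics (3.5): pick the unique
# region Ω_j containing x, then apply g_j. System and partition are invented.

# partition of X = [-1, 1): Ω_1 = [-1, 0), Ω_2 = [0, 1), as membership tests
regions = {
    1: lambda x: -1.0 <= x < 0.0,
    2: lambda x: 0.0 <= x < 1.0,
}
dynamics = {
    1: lambda x, u: 0.5 * x + u,    # g_1, continuous on Ω_1
    2: lambda x, u: -0.3 * x + u,   # g_2, continuous on Ω_2
}

def step(x, u):
    """One step of (3.5): returns the successor state and the active mode."""
    j = next(j for j, inside in regions.items() if inside(x))
    return dynamics[j](x, u), j

x1, mode = step(0.4, 0.0)   # x = 0.4 ∈ Ω_2, so g_2 is applied
```

Because the regions partition X, exactly one membership test succeeds per step; g itself may still jump across the switching boundary at x = 0.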

Let η^[p](s) denote the p-fold function composition of η, with η^[0](s) := s and η^[p](s) := η(η^[p−1](s)) for p ∈ Z_{≥1}. For any μ > 0 and i ∈ Z_{≥1}, define

L^i_μ := {ζ ∈ R^n | ‖ζ‖ ≤ Σ_{p=0}^{i−1} η^[p](μ)}.

Define the set of admissible input sequences for x ∈ X as

U_N(x) := {u_[N−1] ∈ U^N | φ(i) ∈ X_i, i = 1, ..., N−1, φ(0) = x, φ(N) ∈ X_T}, (3.6)

where X_i := ∪_{j∈S} (Ω_j ⊖ L^i_μ) ⊆ X, i = 1, ..., N−1, and ⊖ denotes the Pontryagin difference, i.e. Ω ⊖ L := {x | x + ζ ∈ Ω for all ζ ∈ L}. The purpose of the above set of input sequences will be made clear in Lemma 3.4.3. For a given N ∈ Z_{≥1}, notice that μ > 0 has to be sufficiently small so that 0 ∈ int(Ω_{j₀} ⊖ L^{N−1}_μ). Let F : R^n → R₊ and L : R^n × R^m → R₊ with F(0) = L(0, 0) = 0 be arbitrary nonlinear mappings.

Problem 3.3.2 (MPC optimization problem) Let X_T ⊆ X and N ∈ Z_{≥1} be given. At time k ∈ Z₊, let x(k) ∈ X be given and infimize the cost

J(x(k), u_[N−1]) := F(φ(N)) + Σ_{i=0}^{N−1} L(φ(i), u(i))

over all sequences u_[N−1] in U_N(x(k)).

We call a state x ∈ X feasible if U_N(x) ≠ ∅; equivalently, Problem 3.3.2 is said to be feasible for x ∈ X if U_N(x) ≠ ∅. Let X_f(N) ⊆ X denote the set of feasible states for Problem 3.3.2 and let V(x) := inf_{u_[N−1] ∈ U_N(x)} J(x, u_[N−1]). Since J(·, ·) is bounded from below by 0, the infimum exists and V(x) is well defined for all x ∈ X_f(N). However, as shown in (Spjøtvold et al., 2007), the infimum is not necessarily attained. Therefore, we consider the following set of sub-optimal control sequences. For any x ∈ X_f(N) and δ ≥ 0, we define

Π_δ(x) := {u_[N−1] ∈ U_N(x) | J(x, u_[N−1]) ≤ V(x) + δ} and π_δ(x) := {u(0) ∈ R^m | u_[N−1] ∈ Π_δ(x)}.

We refer to δ as the optimality margin. Note that δ = 0 and Π_δ(x) ≠ ∅ correspond to the situation when the global optimum is attained in Problem 3.3.2.
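The tightened sets X_i in (3.6) are driven by the radii of the balls L^i_μ. For interval regions the Pontryagin difference is just interval shrinking, so the construction can be sketched in a few lines; η(s) = 2s and all numbers below are illustrative choices, not from the text.

```python
# Sketch of the constraint tightening in (3.6): L^i_μ is a ball of radius
# Σ_{p=0}^{i-1} η^[p](μ), so for an interval region Ω_j = [l, r] the
# Pontryagin difference Ω_j ⊖ L^i_μ is the shrunk interval [l + R, r - R].

eta = lambda s: 2.0 * s   # illustrative K-function η = max_j η_j

def radius(i, mu):
    """Radius of L^i_μ: sum of the first i iterated compositions η^[p](μ)."""
    r, s = 0.0, mu         # s tracks η^[p](μ), starting from η^[0](μ) = μ
    for _ in range(i):
        r += s
        s = eta(s)
    return r

def tighten(interval, i, mu):
    """Ω_j ⊖ L^i_μ for an interval region Ω_j = [l, r]."""
    l, r = interval
    R = radius(i, mu)
    return (l + R, r - R)

# with η(s) = 2s and μ = 0.1: radius(3, 0.1) = 0.1 + 0.2 + 0.4 = 0.7
```

The radii grow with the prediction index i, which is exactly why μ must be small enough that Ω_{j₀} ⊖ L^{N−1}_μ still contains a neighborhood of the origin.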
An optimality margin δ can be guaranteed a priori, for example, by using the sub-optimal mixed integer linear programming (MILP) solver proposed in (Spjøtvold et al., 2007), or by specifying a tolerance with respect to achieving the optimum, which is a standard feature of most solvers.

Next, consider the following MPC closed-loop system corresponding to (3.4):

x(k+1) ∈ Φ_δ(x(k), w(k)) := {g(x(k), u, w(k)) | u ∈ π_δ(x(k))}, k ∈ Z₊. (3.7)

To simplify the exposition we make use of the following assumptions, commonly adopted in tightened-constraints MPC (Limon et al., 2002a; Grimm et al., 2007). Let h : R^n → R^m denote a terminal control law and define X_U := {x ∈ X | h(x) ∈ U}.

Assumption 3.3.3 There exist K-functions α_L(·), α_F(s) := τ s^λ, α₁(s) := a s^λ and α₂(s) := b s^λ, with τ, a, b, λ ∈ R_{>0}, such that:
(i) L(x, u) ≥ α₁(‖x‖) for all x ∈ X and all u ∈ U;
(ii) L(x, u) − L(y, u) ≤ α_L(‖x − y‖) for all x, y ∈ X and all u ∈ U;
(iii) F(x) − F(y) ≤ α_F(‖x − y‖) for all x, y ∈ Ω_{j₀} ⊖ L^{N−1}_μ;
(iv) V(x) ≤ α₂(‖x‖) for all x ∈ X_f(N).

Assumption 3.3.4 There exist N ∈ Z_{≥1}, θ > θ₁ > 0, μ > 0 and a terminal control law h(·) such that:
(i) α_F(η^[N−1](μ)) ≤ θ − θ₁;
(ii) F_θ := {x ∈ R^n | F(x) ≤ θ} ⊆ (Ω_{j₀} ⊖ L^{N−1}_μ) ∩ X_U and g_{j₀}(x, h(x)) ∈ F_{θ₁} for all x ∈ F_θ;
(iii) F(g_{j₀}(x, h(x))) − F(x) + L(x, h(x)) ≤ 0 for all x ∈ F_θ.

Note that the hypotheses of Assumption 3.3.3-(i),(ii),(iii) usually hold for suitable choices of L(·, ·) and F(·). Also, it can be shown that the hypothesis of Assumption 3.3.3-(iv) may hold even for discontinuous value functions. For further details on how to satisfy Assumption 3.3.4-(i),(ii),(iii) we refer to (Lazar et al., 2006, 2007a).

3.4 Input-to-state stability results

The main result on ε-ISS of sub-optimal predictive control of hybrid systems is stated next.

Theorem 3.4.1 Let δ ∈ R_{>0} be given, suppose that Assumption 3.3.1, Assumption 3.3.3 and Assumption 3.3.4 hold for the nonlinear hybrid system (3.4) and Problem 3.3.2, and set X_T = F_{θ₁}. Then:
(i) If Problem 3.3.2 is feasible at time k ∈ Z₊ for state x(k) ∈ X, then Problem 3.3.2 is feasible at

time k+1 for any state x(k+1) ∈ Φ_δ(x(k), w(k)) and all w(k) ∈ B_μ. Moreover, X_T ⊆ X_f(N);
(ii) The closed-loop system x(k+1) ∈ Φ_δ(x(k), w(k)) is ε(δ)-ISS in X_f(N) for inputs in B_μ, with ISS margin ε(δ) := (3bδ/a²)^{1/λ}.

To prove Theorem 3.4.1 we will make use of the following technical lemmas (see the appendix for their proofs).

Lemma 3.4.2 Let x ∈ Ω_j ⊖ L^{i+1}_μ for some j ∈ S and i ∈ Z₊, and let y ∈ R^n. If ‖y − x‖ ≤ η^[i](μ), then y ∈ Ω_j ⊖ L^i_μ.

Proof: Consider y ∈ R^n with ‖y − x‖ ≤ η^[i](μ). Let ζ ∈ L^i_μ and define z := y − x + ζ. Then it holds that ‖z‖ ≤ ‖y − x‖ + ‖ζ‖ ≤ η^[i](μ) + Σ_{p=0}^{i−1} η^[p](μ) = Σ_{p=0}^{i} η^[p](μ) and thus z ∈ L^{i+1}_μ. Together with x ∈ Ω_j ⊖ L^{i+1}_μ this yields x + z ∈ Ω_j. Hence, y + ζ = x + z ∈ Ω_j. Since ζ ∈ L^i_μ was arbitrary, we have y ∈ Ω_j ⊖ L^i_μ.

Lemma 3.4.3 Let (φ(1), ..., φ(N)) be a state sequence of the unperturbed system (3.5), obtained from initial state φ(0) := x(k) ∈ X by applying an input sequence u_[N−1] = (u(0), ..., u(N−1)) ∈ U_N(x(k)). Let (j₁, ..., j_{N−1}) ∈ S^{N−1} be the corresponding mode sequence, in the sense that φ(i) ∈ Ω_{j_i} ⊖ L^i_μ ⊆ Ω_{j_i}, i = 1, ..., N−1. Let (φ̄(1), ..., φ̄(N)) also be a state sequence of the unperturbed system (3.5), obtained from the initial state φ̄(0) := x(k+1) = φ(1) + w(k) for some w(k) ∈ B_μ and by applying the shifted input sequence ū_[N−1] := (u(1), ..., u(N−1), h(φ̄(N−1))). Then,

(φ̄(i), φ(i+1)) ∈ Ω_{j_{i+1}} × Ω_{j_{i+1}}, i = 0, ..., N−2, (3.8a)
‖φ̄(i) − φ(i+1)‖ ≤ η^[i](‖w(k)‖), i = 0, ..., N−1. (3.8b)

Proof: Property (3.8a) obviously holds for i = 0, since φ̄(0) = φ(1) + w(k), w(k) ∈ B_μ = L¹_μ and φ(1) ∈ Ω_{j₁} ⊖ L¹_μ. Property (3.8b) holds for i = 0 as ‖φ̄(0) − φ(1)‖ = ‖w(k)‖ = η^[0](‖w(k)‖). We proceed by induction. Suppose that both (3.8a) and (3.8b) hold for some 0 ≤ i − 1 < N − 2.
Then, since φ̄(i−1) ∈ Ω_{j_i} and ‖φ̄(i−1) − φ(i)‖ ≤ η^[i−1](‖w(k)‖), it follows that:

‖φ̄(i) − φ(i+1)‖ = ‖g_{j_i}(φ̄(i−1), u(i)) − g_{j_i}(φ(i), u(i))‖ ≤ η_{j_i}(‖φ̄(i−1) − φ(i)‖) ≤ η(‖φ̄(i−1) − φ(i)‖) ≤ η(η^[i−1](‖w(k)‖)) = η^[i](‖w(k)‖), (3.9)

and thus (3.8b) holds for i. Next, as η^[i](‖w(k)‖) ≤ η^[i](μ) ≤ Σ_{p=0}^{i} η^[p](μ), it follows that φ̄(i) − φ(i+1) ∈ L^{i+1}_μ. Then, since φ(i+1) ∈ Ω_{j_{i+1}} ⊖ L^{i+1}_μ,

we have that φ(i+1) + (φ̄(i) − φ(i+1)) = φ̄(i) ∈ Ω_{j_{i+1}}. Hence, (3.8a) holds for i. Thus, we have proved that (3.8a) and (3.8b) hold for i = 0, ..., N−2. Finally, (3.8a) and (3.8b) for i = N−2 imply (3.8b) for i = N−1 via the reasoning used in (3.9).

Proof: (Proof of Theorem 3.4.1) (i) We show that ū_[N−1], as defined in Lemma 3.4.3, is feasible at time k+1. Let (j₁, ..., j_{N−1}) ∈ S^{N−1} be such that φ(i) ∈ Ω_{j_i} ⊖ L^i_μ ⊆ Ω_{j_i}, i = 1, ..., N−1. Then, due to property (3.8b) and φ(i+1) ∈ Ω_{j_{i+1}} ⊖ L^{i+1}_μ, it follows from Lemma 3.4.2 that φ̄(i) ∈ Ω_{j_{i+1}} ⊖ L^i_μ ⊆ X_i for i = 1, ..., N−2. From ‖φ̄(N−1) − φ(N)‖ ≤ η^[N−1](‖w(k)‖) ≤ η^[N−1](μ) and Assumption 3.3.3-(iii) it follows that F(φ̄(N−1)) − F(φ(N)) ≤ α_F(η^[N−1](μ)), which implies F(φ̄(N−1)) ≤ θ₁ + α_F(η^[N−1](μ)) ≤ θ due to φ(N) ∈ X_T = F_{θ₁} and α_F(η^[N−1](μ)) ≤ θ − θ₁. Hence φ̄(N−1) ∈ F_θ ⊆ (Ω_{j₀} ⊖ L^{N−1}_μ) ∩ X_U ⊆ X_{N−1} ∩ X_U, so that h(φ̄(N−1)) ∈ U and φ̄(N) ∈ F_{θ₁} = X_T. Thus, the sequence ū_[N−1] is feasible at time k+1, which proves the first part of (i). Moreover, since g_{j₀}(x, h(x)) ∈ F_{θ₁} for all x ∈ F_θ and F_{θ₁} ⊆ F_θ, it follows that F_{θ₁} is a positively invariant set for the system x(k+1) = g_{j₀}(x(k), h(x(k))), k ∈ Z₊. Then, as F_{θ₁} ⊆ F_θ ⊆ (Ω_{j₀} ⊖ L^{N−1}_μ) ∩ X_U ⊆ X_i ∩ X_U for all i = 1, ..., N−1 and X_T = F_{θ₁}, the sequence (h(φ(0)), ..., h(φ(N−1))) is feasible for Problem 3.3.2 for all φ(0) := x(k) ∈ F_{θ₁}, k ∈ Z₊. Therefore, X_T = F_{θ₁} ⊆ X_f(N), which concludes the proof of (i).

(ii) The result of part (i) implies that X_f(N) is an RPI set for the closed-loop system x(k+1) ∈ Φ_δ(x(k), w(k)), k ∈ Z₊. Moreover, 0 ∈ int(X_T) implies that 0 ∈ int(X_f(N)). We now prove that V(·) is an ε-ISS function for the closed-loop system (3.7). Since for any x ∈ X and u_[N−1] ∈ U_N(x) it holds that J(x, u_[N−1]) ≥ L(x, u(0)), from Assumption 3.3.3-(i) it follows that V(x) ≥ α₁(‖x‖) for all x ∈ X_f(N), with α₁(s) = a s^λ.
Furthermore, by Assumption 3.3.3-(iv), for all x ∈ X_f(N) we have that V(x) ≤ α₂(‖x‖), with α₂(s) = b s^λ. Hence, V(·) satisfies inequality (3.2a) with d₁ = 0 for all x ∈ X_f(N). Next, we prove that V(·) satisfies inequality (3.2b). Let x(k+1) ∈ Φ_δ(x(k), w(k)) for some arbitrary w(k) ∈ B_μ. Furthermore, for any u_[N−1] ∈ U_N(x(k)), let ū_[N−1] be defined as in Lemma 3.4.3. Using Assumption 3.3.4-(iii), i.e. F(g_{j₀}(x, h(x))) − F(x) + L(x, h(x)) ≤ 0 for all x ∈ F_θ, property (3.8a), Assumption 3.3.3-(ii),(iii) and φ̄(N−1) ∈ F_θ, it follows

that:

V(x(k+1)) − V(x(k))
≤ J(x(k+1), ū_[N−1]) − J(x(k), u_[N−1]) + δ
= −L(φ(0), u(0)) + F(φ̄(N)) + δ + [−F(φ̄(N−1)) + F(φ̄(N−1))] − F(φ(N)) + L(φ̄(N−1), h(φ̄(N−1))) + Σ_{i=0}^{N−2} [L(φ̄(i), u(i+1)) − L(φ(i+1), u(i+1))]
≤ −L(φ(0), u(0)) + F(φ̄(N)) − F(φ̄(N−1)) + L(φ̄(N−1), h(φ̄(N−1))) + α_F(η^[N−1](‖w(k)‖)) + Σ_{i=0}^{N−2} α_L(η^[i](‖w(k)‖)) + δ
≤ −α₃(‖x(k)‖) + σ(‖w(k)‖) + δ,

with σ(s) := α_F(η^[N−1](s)) + Σ_{i=0}^{N−2} α_L(η^[i](s)) and α₃(s) := α₁(s) = a s^λ. Notice that σ ∈ K due to α_F, α_L, η ∈ K. The statement then follows from Theorem 3.2.4. Moreover, from (3.3) it follows that the ε-ISS property of Definition 3.2.2 holds with ε(δ) = (3bδ/a²)^{1/λ}.

Theorem 3.4.1 enables the proper selection of an optimality margin δ in the numerical solver: one chooses a desirable ISS margin ε(δ) and computes the corresponding value of δ. Also, Theorem 3.4.1 recovers as a particular case the following result for the optimal case (Lazar et al., 2007a), where only PWA systems were considered.

Corollary 3.4.4 Suppose that the hypothesis of Theorem 3.4.1 holds and the global optimum is attained in Problem 3.3.2 for all k ∈ Z₊. Then, the closed-loop system x(k+1) ∈ Φ₀(x(k), w(k)) is ISS in X_f(N) for inputs in B_μ.

Remark 3.4.5 The result of Corollary 3.4.4 recovers the result in (Limon et al., 2002a) as the following particular case: X = Ω_{j₀}, S = {j₀} and g_{j₀}(·, ·) is Lipschitz continuous in X. In this case, the set of admissible input sequences U_N(x) only plays a role in guaranteeing recursive feasibility of Problem 3.3.2, while ISS can be established directly using the Lipschitz continuity of the dynamics; see (Limon et al., 2002a) for details. See also (Grimm et al., 2007), where Lipschitz continuity of the system dynamics

is relaxed to just continuity. Corollary 3.4.4 further relaxes the Lipschitz continuity requirement to discontinuous nonlinear dynamics, while the assumptions on the MPC cost, prediction horizon and disturbance bound μ > 0 are not stronger than the ones employed in (Limon et al., 2002a).

Next, we present a modification of the set of δ-sub-optimal MPC controllers that guarantees ISS of the closed-loop system a priori, even for non-zero optimality margins. For any x ∈ X_f(N) and δ ≥ 0, let

Π̃_δ(x) := {u_[N−1] ∈ U_N(x) | J(x, u_[N−1]) ≤ V(x) + δ‖x‖^λ} and π̃_δ(x) := {u(0) ∈ R^m | u_[N−1] ∈ Π̃_δ(x)}.

The MPC closed-loop system corresponding to (3.4) is now given by

x(k+1) ∈ Φ̃_δ(x(k), w(k)) := {g(x(k), u, w(k)) | u ∈ π̃_δ(x(k))}, k ∈ Z₊.

For the above set of δ-sub-optimal MPC control actions it holds that π̃_δ(0) = π₀(0) for all δ > 0. Hence, compared with the absolute δ-sub-optimal MPC control laws, δ is now a relative optimality margin that varies with the size of the state norm: the closer the state gets to the origin, the better the approximation of the optimal MPC control law has to be. This is a realistic requirement, as there exists a sufficiently small neighborhood of the origin where all constraints in Problem 3.3.2 become inactive and there is no more switching in the predicted trajectory.

Theorem 3.4.6 Suppose that the hypotheses of Theorem 3.4.1 are satisfied with the K-function α₁(s) := a s^λ, a, λ ∈ R_{>0}, as introduced in Assumption 3.3.3, and let δ ∈ R_{>0} be given such that 0 < δ < a. Then:
(i) If Problem 3.3.2 is feasible at time k ∈ Z₊ for state x(k) ∈ X, then Problem 3.3.2 is feasible at time k+1 for any state x(k+1) ∈ Φ̃_δ(x(k), w(k)) and all w(k) ∈ B_μ. Moreover, X_T ⊆ X_f(N);
(ii) The closed-loop system x(k+1) ∈ Φ̃_δ(x(k), w(k)) is ISS in X_f(N) for inputs in B_μ.

Proof: The proof of Theorem 3.4.6 readily follows by applying the reasoning used in the proof of Theorem 3.4.1.
The modified set of sub-optimal control laws π̃_δ(x) makes a difference only in the proof of statement (ii), where J(x(k), u_[N−1]) ≤ V(x(k)) + δ‖x(k)‖^λ implies −V(x(k)) ≤ −J(x(k), u_[N−1]) + δ‖x(k)‖^λ and thus

V(x(k+1)) − V(x(k)) ≤ ... ≤ −α₁(‖φ(0)‖) + σ(‖w(k)‖) + δ‖x(k)‖^λ = −α₃(‖x(k)‖) + σ(‖w(k)‖),

with σ(s) := α_F(η^[N−1](s)) + Σ_{i=0}^{N−2} α_L(η^[i](s)) and α₃(s) := (a − δ)s^λ. Note that α₃ ∈ K as a − δ > 0.
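The two sub-optimality tests and the tuning rule suggested by Theorem 3.4.1 can be sketched as follows; all numerical values are illustrative, not from the text.

```python
# Sketch of the absolute margin (set Π_δ), the relative margin (set Π̃_δ),
# and the inversion of the ISS margin ε(δ) = (3 b δ / a²)^(1/λ) used to pick
# δ for a desired ε. Parameter values are invented for illustration.

def accept_absolute(J, V, delta):
    return J <= V + delta                      # u_[N-1] ∈ Π_δ(x)

def accept_relative(J, V, delta, x_norm, lam):
    return J <= V + delta * x_norm**lam        # u_[N-1] ∈ Π̃_δ(x)

def delta_for_iss_margin(eps, a, b, lam):
    """Invert ε(δ) = (3 b δ / a²)^(1/λ): δ = a² ε^λ / (3 b)."""
    return a**2 * eps**lam / (3.0 * b)

# far from the origin the relative test is loose, near the origin it is tight
far = accept_relative(J=10.4, V=10.0, delta=0.5, x_norm=2.0, lam=2.0)
near = accept_relative(J=10.4, V=10.0, delta=0.5, x_norm=0.1, lam=2.0)
delta = delta_for_iss_margin(0.5, a=1.0, b=2.0, lam=2.0)
```

The far/near contrast mirrors the text: the same cost gap of 0.4 is acceptable when ‖x‖ = 2 but not when ‖x‖ = 0.1.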

Remark 3.4.7 In the particular case when system (3.4) is PWA, X, U and Ω_j, j ∈ S, are polyhedral sets and the MPC cost function is defined using 1, ∞-norms, Problem 3.3.2 can be formulated as an MILP problem, which is standard in hybrid MPC (Bemporad and Morari, 1999). For methods to compute a terminal cost and a control law h(·) that satisfy Assumption 3.3.3 and Assumption 3.3.4, and for illustrative examples, we refer to (Lazar et al., 2006, 2007a).

3.5 Asymptotic stability results

Sufficient conditions for asymptotic stability of discrete-time PWA systems in closed-loop with MPC were presented in (Lazar et al., 2006), under the standing assumption of global optimality of the MPC control law. As already mentioned, it is important to analyze if and how the stability results of (Lazar et al., 2006) change in the case of sub-optimal implementations. Consider the PWC nonlinear system

x(k+1) = ξ(x(k), u(k)) := g_j(x(k), u(k)) if x(k) ∈ Ω_j, (3.10)

where the notation is similar to that of Section 3.3. We still assume that each g_j(·, ·), j ∈ S, satisfies the continuity condition of Assumption 3.3.1. However, we no longer require that the origin lies in the interior of one of the regions Ω_j of the state-space partition. The MPC problem set-up remains the same as the one described by Problem 3.3.2, with the only difference that the set of admissible input sequences for an initial condition x ∈ X is now defined (without any tightening) as

U_N(x) := {u_[N−1] ∈ U^N | φ(i) ∈ X, i = 1, ..., N−1, φ(0) = x, φ(N) ∈ X_T}. (3.11)

All the definitions introduced in Section 3.3 and Section 3.4 remain the same (e.g., X_f(N), V(·), Π_δ(·), π_δ(·), Π̃_δ(·), π̃_δ(·), etc.), with the observation that the set of admissible input sequences defined in (3.6) is replaced everywhere by the set defined in (3.11).
We will use Ξ_δ(x(k)) := {ξ(x(k), u) | u ∈ π_δ(x(k))} and Ξ̃_δ(x(k)) := {ξ(x(k), u) | u ∈ π̃_δ(x(k))}.

Theorem 3.5.1 Let δ ∈ R_{>0} be given and suppose that Assumption 3.3.3 holds for system (3.10) and Problem 3.3.2. Take N ∈ Z_{≥1} and X_T with 0 ∈ int(X_T) a positively invariant set for system (3.10) in closed-loop with

u(k) = h(x(k)), k ∈ Z₊. Furthermore, suppose that F(ξ(x, h(x))) − F(x) + L(x, h(x)) ≤ 0 for all x ∈ X_T. Then:
(i) If Problem 3.3.2 is feasible at time k ∈ Z₊ for state x(k) ∈ X, then Problem 3.3.2 is feasible at time k+1 for any state x(k+1) ∈ Ξ_δ(x(k)). Moreover, X_T ⊆ X_f(N);
(ii) The closed-loop system x(k+1) ∈ Ξ_δ(x(k)) is ε-AS in X_f(N) with ε(δ) := (2bδ/a²)^{1/λ};
(iii) Suppose that δ ∈ R_{>0} satisfies 0 < δ < a, where a ∈ R_{>0} is the gain of the K-function α₁(s) := a s^λ introduced in Assumption 3.3.3. Then, the closed-loop system x(k+1) ∈ Ξ̃_δ(x(k)) is AS in X_f(N).

The proof of the above theorem can be obtained mutatis mutandis by combining the reasoning used in the proof of Theorem III.2 in (Lazar et al., 2006) with Theorem 3.2.4 for the case when σ(s) ≡ 0.

Remark 3.5.2 The result of Theorem 3.5.1, statement (ii), establishes that δ-sub-optimal nonsmooth MPC is ε(δ)-AS without requiring any assumptions beyond the ones needed for AS of optimal smooth MPC (Mayne et al., 2000). Furthermore, statement (iii) introduces a slightly stronger condition under which even AS can be guaranteed a priori for a specific class of sub-optimal predictive control laws. In contrast with the results in (Scokaert et al., 1999), this is achieved without introducing additional stabilization constraints in the original MPC problem set-up.

3.6 Conclusion

In this chapter we have considered discontinuous hybrid systems in closed-loop with predictive control laws. We presented conditions for ε-ISS and ε-AS of the resulting closed-loop systems. These conditions require neither continuity of the system dynamics nor optimality of the predictive control law. The latter is especially important because, firstly, the infimum in an MPC optimization problem need not be attained and, secondly, numerical solvers usually provide only sub-optimal solutions.
An explicit relation was established between the deviation of the MPC control action from the optimum and the resulting deterioration of the ISS (AS) property of the closed-loop system. This link between the optimality margin of the MPC control action and the ISS (AS) margin of the closed-loop system was further exploited to derive stronger conditions on the sub-optimal solutions that guarantee ISS (AS) a priori, without adding constraints to the MPC optimization problem.


4 On Input-to-State Stability of Min-max Nonlinear Model Predictive Control

4.1 Introduction
4.2 Input-to-state stability
4.3 Min-max nonlinear MPC: Problem set-up
4.4 ISpS results for min-max nonlinear MPC
4.5 Main result: ISS dual-mode min-max MPC
4.6 Illustrative example: A nonlinear double integrator
4.7 Conclusions

In this chapter we consider discrete-time nonlinear systems that are affected, possibly simultaneously, by parametric uncertainties and other disturbance inputs. The min-max model predictive control (MPC) methodology is employed to obtain a controller that robustly steers the state of the system towards a desired equilibrium. The aim is to provide a priori sufficient conditions for robust stability of the resulting closed-loop system using the input-to-state stability (ISS) framework. First, we show that only input-to-state practical stability can be ensured in general for closed-loop min-max MPC systems, and we provide explicit bounds on the evolution of the closed-loop system state. Then, we derive new conditions for guaranteeing ISS of min-max MPC closed-loop systems using a dual-mode approach. An example illustrates the presented theory.

4.1 Introduction

One of the practically relevant problems in control theory is the robust regulation towards a desired equilibrium of discrete-time systems affected, possibly simultaneously, by time-varying parametric uncertainties and other disturbance inputs. When hard constraints are imposed on state and input variables, the robust model predictive control (MPC) methodology provides a reliable solution to this control problem; see, for example, (Mayne et al., 2000) for an overview. The research related to robust MPC is focused on efficiently solving the corresponding optimization

problems, on the one hand, and on guaranteeing (robust) stability of the controlled system, on the other. In this chapter we are interested in stability issues and we therefore position our results only with respect to articles on (robust) stability of nonlinear MPC. There are several ways to design robust MPC controllers for perturbed nonlinear systems. One way is to rely on the inherent robustness properties of nominally stabilizing nonlinear MPC algorithms, as was done, e.g., in (Scokaert et al., 1997; Magni et al., 1998; Limon et al., 2002b; Grimm et al., 2003). Another approach is to incorporate knowledge about the disturbances into the MPC problem formulation via open-loop worst-case scenarios. This includes MPC algorithms based on tightened constraints, e.g., the one of (Limon et al., 2002a), and MPC algorithms based on open-loop min-max optimization problems; see, for example, the survey (Mayne et al., 2000). To incorporate feedback to disturbances, the closed-loop or feedback min-max MPC problem set-up was introduced in (Lee and Yu, 1997) and further developed in (Mayne, 2001; Magni et al., 2003; Limon et al., 2006; Magni et al., 2006). The open-loop approach is computationally somewhat easier than the feedback approach, but the set of feasible states corresponding to the feedback min-max MPC optimization problem is usually much larger. Sufficient conditions for robust asymptotic stability of closed-loop (feedback) min-max MPC systems were presented in (Mayne, 2001) under the assumption that the (additive) disturbance input converges to zero as the state converges to the origin. Recently, input-to-state stability (ISS) (Sontag, 1989, 1990; Jiang and Wang, 2001) results for min-max nonlinear MPC were presented in (Limon et al., 2006) and (Magni et al., 2006).
In (Limon et al., 2006) it was shown that, in general, only input-to-state practical stability (ISpS) (Jiang, 1993; Jiang et al., 1994, 1996) can be ensured a priori for min-max nonlinear MPC. ISpS is a weaker property than ISS, as ISpS does not imply asymptotic stability for zero disturbance inputs. The reason for the absence of ISS in general is that the effect of a non-zero disturbance input is taken into account by the min-max MPC controller even if the disturbance input vanishes in reality. Still, in the case when the disturbance input converges to zero, it is desirable that asymptotic stability is recovered for the controlled system. In (Magni et al., 2006), an H∞ (Chen and Scherer, 2006a) strategy was used to modify the classical min-max MPC cost function (Mayne et al., 2000) such that ISS is guaranteed for the closed-loop min-max MPC system. Furthermore, in (Magni et al., 2006) it was proven that a local upper bound on the min-max MPC value function, rather than a global one, is sufficient for ISS.

In this chapter we propose a new approach to designing min-max MPC schemes for nonlinear systems with guaranteed ISS. In contrast with (Magni et al., 2006), our results apply to the classical min-max MPC problem set-up, which is also employed in (Mayne, 2001; Limon et al., 2006). First, we develop novel ISpS conditions for min-max nonlinear MPC that allow us to derive explicit bounds on the evolution of the MPC closed-loop system state. Furthermore, we prove that these conditions actually imply that the state trajectory of the closed-loop system is ultimately bounded in a robustly positively invariant set. Then, we use a dual-mode approach in combination with a new technique based on KL-estimates of stability, e.g., see (Khalil, 2002), to derive a priori sufficient conditions for ISS of min-max nonlinear MPC. This result is important because it unifies the properties of (Limon et al., 2006) and (Mayne, 2001). More specifically, it can be used to design robustly asymptotically stable min-max MPC closed-loop systems without a priori assuming that the disturbance input converges to zero as the state of the closed-loop system converges to the origin. The ISpS results for min-max nonlinear MPC are presented in Section 4.4 and the sufficient conditions for ISS of dual-mode min-max nonlinear MPC are given in Section 4.5. An illustrative example is worked out in Section 4.6. Conclusions are summarized in Section 4.7.

4.1.1 Preliminaries

Before introducing the notion of input-to-state (practical) stability we briefly recall some basic definitions. Let R, R₊, Z and Z₊ denote the field of real numbers, the set of non-negative reals, the set of integers and the set of non-negative integers, respectively. We use the notation Z_{≥c1} and Z_{(c1,c2]} to denote the sets {k ∈ Z₊ | k ≥ c1} and {k ∈ Z₊ | c1 < k ≤ c2}, respectively, for some c1 ∈ Z₊ and c2 ∈ Z_{>c1}, and Z^N to denote the N-times Cartesian product Z × ... × Z, for some N ∈ Z_{≥1}. We use ‖·‖ to denote an arbitrary p-norm.
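The sequence notation used throughout (the sup-norm ‖z‖ and the truncation z_[k], defined in the next paragraph) can be mirrored in a few lines of code; the finite tuple below stands in for an infinite sequence.

```python
# Sketch of the sequence notation: sup-norm ‖z‖ := sup{‖z_l‖ : l} and the
# truncation z_[k] := (z_0, ..., z_k). Entries here are scalars, so the
# entry norm is just the absolute value; the sample sequence is invented.

def sup_norm(z):
    """‖z‖ for a (finite) sequence of scalar entries."""
    return max(abs(v) for v in z)

def truncate(z, k):
    """z_[k]: keep entries with index 0 through k."""
    return z[: k + 1]

z = (0.5, -2.0, 1.0, 0.25)
```

Note that ‖z_[k]‖ is non-decreasing in k, which is why bounds like (4.2) stated in terms of ‖v_[k−1]‖ only ever grow as more of the disturbance is revealed.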
With some abuse of notation we use both (z₀, z₁, ...) and {z_l}_{l∈Z₊}, with z_l ∈ R^n, l ∈ Z₊, to denote a sequence. For a sequence z := {z_l}_{l∈Z₊}, let ‖z‖ := sup{‖z_l‖ | l ∈ Z₊} and let z_[k] denote the truncation of z at time k ∈ Z₊, i.e. z_[k] := {z_l}_{l∈Z_[0,k]}. For a set S ⊆ R^n, we denote by int(S) its interior. For any r > 0, define the ball of radius r as B_r := {x ∈ R^n | ‖x‖ ≤ r}.

4.2 Input-to-state stability

In this section we present the ISS framework (Sontag, 1989, 1990; Jiang and Wang, 2001) for discrete-time autonomous nonlinear systems, which will

be employed in this chapter to study the behavior of perturbed nonlinear systems in closed-loop with min-max MPC controllers. Consider the discrete-time autonomous perturbed nonlinear system described by

x_{k+1} = G(x_k, w_k, v_k), k ∈ Z₊, (4.1)

where x_k ∈ R^n is the state, w_k ∈ W ⊆ R^{dw} are unknown time-varying parametric uncertainties, v_k ∈ V ⊆ R^{dv} are other (possibly additive) disturbance inputs, and G : R^n × R^{dw} × R^{dv} → R^n is an arbitrary nonlinear, possibly discontinuous, function. In what follows we assume that W and V are bounded sets. Throughout the chapter, let w := {w_l | l ∈ Z₊, w_l ∈ W} and v := {v_l | l ∈ Z₊, v_l ∈ V} denote arbitrary sequences of disturbances.

Definition 4.2.1 (RPI) A set P ⊆ R^n that contains the origin in its interior is called a robustly positively invariant (RPI) set for system (4.1) (with respect to W and V) if for all x ∈ P it holds that G(x, w, v) ∈ P for all w ∈ W and all v ∈ V.

Definition 4.2.2 (UB) System (4.1) is said to be ultimately bounded (UB) in a set P ⊆ R^n for initial conditions in X ⊆ R^n (with respect to W and V) if for all x₀ ∈ X there exists an i(x₀) ∈ Z₊ such that for all w and all v the corresponding state trajectory of (4.1) satisfies x_k ∈ P for all k ∈ Z_{≥i(x₀)}.

Definition 4.2.3 A real-valued scalar function ϕ : R₊ → R₊ belongs to class K if it is continuous, strictly increasing and ϕ(0) = 0. It belongs to class K∞ if ϕ ∈ K and it is radially unbounded (i.e. ϕ(s) → ∞ as s → ∞). A function β : R₊ × R₊ → R₊ belongs to class KL if for each fixed k ∈ R₊, β(·, k) ∈ K and for each fixed s ∈ R₊, β(s, ·) is non-increasing and lim_{k→∞} β(s, k) = 0.

Next, we introduce a regional version of global ISpS (Jiang, 1993; Jiang et al., 1994, 1996) and of global ISS (Sontag, 1989, 1990; Jiang and Wang, 2001), respectively, for the discrete-time nonlinear system (4.1).
This is useful when dealing with constrained nonlinear systems, such as NMPC closed-loop systems, as observed in (Magni et al., 2006).

Definition 4.2.4 (Regional ISpS (ISS)) The system (4.1) is said to be ISpS in X ⊆ R^n if there exist a KL-function β, a K-function γ and a number d ∈ R₊ such that, for each x₀ ∈ X, all w and all v, the corresponding state trajectory of (4.1) satisfies

‖x_k‖ ≤ β(‖x₀‖, k) + γ(‖v_[k−1]‖) + d, for all k ∈ Z_{≥1}. (4.2)

If 0 ∈ int(X) and (4.2) holds with d = 0, the system (4.1) is said to be ISS in X.

In what follows we state a discrete-time version of the continuous-time ISpS sufficient conditions of Proposition 2.1 in (Jiang et al., 1996). This result will be used throughout the chapter to prove ISpS and ISS for the particular case of min-max nonlinear MPC.

Theorem 4.2.5 Let d₁, d₂ ∈ R₊, let a, b, c, λ ∈ R_{>0} with c ≤ b, and let¹ α₁(s) := a s^λ, α₂(s) := b s^λ, α₃(s) := c s^λ and σ ∈ K. Furthermore, let X be an RPI set for system (4.1) and let V : X → R₊ be a function such that

α₁(‖x‖) ≤ V(x) ≤ α₂(‖x‖) + d₁, (4.3a)
V(G(x, w, v)) − V(x) ≤ −α₃(‖x‖) + σ(‖v‖) + d₂ (4.3b)

for all x ∈ X, all w ∈ W and all v ∈ V. Then it holds that:
(i) The system (4.1) is ISpS in X and the ISpS property of Definition 4.2.4 holds with

β(s, k) := α₁⁻¹(3ρ^k α₂(s)), γ(s) := α₁⁻¹(3σ(s)/(1 − ρ)), d := α₁⁻¹(3ξ), (4.4)

where ξ := d₁ + d₂/(1 − ρ) and ρ := 1 − c/b ∈ [0, 1).
(ii) If 0 ∈ int(X) and the inequalities (4.3) hold with d₁ = d₂ = 0, the system (4.1) is ISS in X and the ISS property of Definition 4.2.4 (i.e. with d = 0) holds with

β(s, k) := α₁⁻¹(2ρ^k α₂(s)), γ(s) := α₁⁻¹(2σ(s)/(1 − ρ)), (4.5)

where ρ := 1 − c/b ∈ [0, 1).

Proof: (i) From V(x) ≤ α₂(‖x‖) + d₁ for all x ∈ X, we have that for any x ∈ X \ {0} it holds:

V(x) − α₃(‖x‖) ≤ V(x) − (α₃(‖x‖)/α₂(‖x‖))(V(x) − d₁) = ρV(x) + (1 − ρ)d₁,

where ρ := 1 − c/b ∈ [0, 1). In fact, the above inequality holds for all x ∈ X, since V(0) − α₃(0) = V(0) = ρV(0) + (1 − ρ)V(0) ≤ ρV(0) + (1 − ρ)d₁. Then, inequality (4.3b) becomes

V(G(x, w, v)) ≤ ρV(x) + σ(‖v‖) + (1 − ρ)d₁ + d₂, (4.6)

¹Note that α₁, α₂, α₃ ∈ K∞.

for all x ∈ X, all w ∈ W and all v ∈ V. Due to the robust positive invariance of X, repeated application of inequality (4.6) yields

V(x_{k+1}) ≤ ρ^{k+1} V(x₀) + Σ_{i=0}^{k} ρ^i [σ(‖v_{k−i}‖) + (1 − ρ)d₁ + d₂]

for all x₀ ∈ X, w_[k] = (w₀, w₁, ..., w_k) ∈ W^{k+1}, v_[k] = (v₀, v₁, ..., v_k) ∈ V^{k+1}, k ∈ Z₊. Then, taking (4.3a) into account and using the property σ(‖v_i‖) ≤ σ(‖v_[k]‖) for all i ≤ k together with the identity Σ_{i=0}^{k} ρ^i = (1 − ρ^{k+1})/(1 − ρ), it holds that:

V(x_{k+1}) ≤ ρ^{k+1} α₂(‖x₀‖) + ρ^{k+1} d₁ + Σ_{i=0}^{k} ρ^i [σ(‖v_{k−i}‖) + (1 − ρ)d₁ + d₂]
≤ ρ^{k+1} α₂(‖x₀‖) + ρ^{k+1} d₁ + ((1 − ρ^{k+1})/(1 − ρ)) [σ(‖v_[k]‖) + (1 − ρ)d₁ + d₂]
= ρ^{k+1} α₂(‖x₀‖) + ((1 − ρ^{k+1})/(1 − ρ)) σ(‖v_[k]‖) + d₁ + ((1 − ρ^{k+1})/(1 − ρ)) d₂
≤ ρ^{k+1} α₂(‖x₀‖) + (1/(1 − ρ)) σ(‖v_[k]‖) + d₁ + (1/(1 − ρ)) d₂,

for all x₀ ∈ X, w_[k] ∈ W^{k+1}, v_[k] ∈ V^{k+1}, k ∈ Z₊. Let ξ := d₁ + d₂/(1 − ρ). Taking (4.3a) into account and letting α₁⁻¹ denote the inverse of α₁, we obtain:

‖x_{k+1}‖ ≤ α₁⁻¹(V(x_{k+1})) ≤ α₁⁻¹(ρ^{k+1} α₂(‖x₀‖) + ξ + σ(‖v_[k]‖)/(1 − ρ)). (4.7)

Applying the inequality

α₁⁻¹(z + y + s) ≤ α₁⁻¹(3 max(z, y, s)) ≤ α₁⁻¹(3z) + α₁⁻¹(3y) + α₁⁻¹(3s), (4.8)

we obtain from (4.7)

‖x_{k+1}‖ ≤ α₁⁻¹(3ρ^{k+1} α₂(‖x₀‖)) + α₁⁻¹(3σ(‖v_[k]‖)/(1 − ρ)) + α₁⁻¹(3ξ),

for all x₀ ∈ X, w_[k] ∈ W^{k+1}, v_[k] ∈ V^{k+1}, k ∈ Z₊. We distinguish two cases: ρ ≠ 0 and ρ = 0. First, suppose ρ ∈ (0, 1) and let β(s, k) := α₁⁻¹(3ρ^k α₂(s)). For a fixed k ∈ Z₊, we have

that β(·, k) ∈ K due to α₂ ∈ K∞, α₁⁻¹ ∈ K∞ and ρ ∈ (0, 1). For a fixed s, it follows that β(s, ·) is non-increasing and lim_{k→∞} β(s, k) = 0, due to ρ ∈ (0, 1) and α₁⁻¹ ∈ K∞. Thus, β ∈ KL. Now let γ(s) := α₁⁻¹(3σ(s)/(1 − ρ)). Since 1/(1 − ρ) > 0, it follows that γ ∈ K due to α₁⁻¹ ∈ K∞ and σ ∈ K. Finally, let d := α₁⁻¹(3ξ). Since ρ ∈ [0, 1) and d₁, d₂ ≥ 0, we have that ξ ≥ 0 and thus d ≥ 0. Otherwise, if ρ = 0, we have from (4.7) that

‖x_k‖ ≤ α₁⁻¹(3σ(‖v_[k−1]‖)) + α₁⁻¹(3ξ) ≤ β(‖x₀‖, k) + α₁⁻¹(3σ(‖v_[k−1]‖)) + α₁⁻¹(3ξ)

for any β ∈ KL and k ∈ Z_{≥1}. Hence, the perturbed system (4.1) is ISpS in X in the sense of Definition 4.2.4 and property (4.2) is satisfied with the functions given in (4.4).
(ii) Following the proof of statement (i), it is straightforward to observe that when the sufficient conditions (4.3) are satisfied with d₁ = d₂ = 0, ISS is achieved, since d = α₁⁻¹(3ξ) = α₁⁻¹(0) = 0. From (4.7) and α₁⁻¹(z + y) ≤ α₁⁻¹(2 max(z, y)) ≤ α₁⁻¹(2z) + α₁⁻¹(2y), it can easily be shown that the ISS property of Definition 4.2.4 actually holds with the functions given in (4.5).

Definition 4.2.6 A function V(·) that satisfies the hypothesis of Theorem 4.2.5 is called an ISpS (ISS) Lyapunov function.

Remark 4.2.7 The hypothesis of Theorem 4.2.5, part (i), does not require continuity of G(·, ·, ·) or V(·), nor that G(0, 0, 0) = 0 or V(0) = 0. The latter makes the ISpS framework suitable for analyzing stability of nonlinear systems in closed-loop with min-max MPC controllers, since in general the min-max MPC value function is not zero at zero (see Section 4.4 for details). The hypothesis of Theorem 4.2.5, part (ii), which deals with ISS, also does not require continuity of G(·, ·, ·) or V(·). However, it implies G(0, w, 0) = 0 for all w ∈ W and V(0) = 0, as well as continuity of G(·, w, ·) and of V(·) at the point x = 0 only, for all w ∈ W.
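As a sanity check of the explicit ISS bound of Theorem 4.2.5(ii), consider the scalar system x_{k+1} = ρ·x_k + v_k with V(x) = |x|, which satisfies (4.3) with a = b = 1, c = 1 − ρ, λ = 1, σ(s) = s and d₁ = d₂ = 0; by (4.5) the trajectory must satisfy |x_k| ≤ 2ρ^k|x₀| + 2‖v_[k−1]‖/(1 − ρ). The specific value of ρ and the input sequence below are invented for the check.

```python
# Numerical sketch: verify the ISS bound of Theorem 4.2.5(ii) along one
# trajectory of x_{k+1} = ρ x_k + v_k. With V(x) = |x| the hypotheses hold
# with a = b = 1, c = 1 - ρ, λ = 1, σ(s) = s, d1 = d2 = 0, so the bound is
# |x_k| <= 2 ρ^k |x_0| + 2 sup|v| / (1 - ρ).

rho = 0.6
x, x0 = 1.0, 1.0
v_seq = [0.1, -0.05, 0.08, 0.0, 0.1]

ok = True
v_sup = 0.0
for k, v in enumerate(v_seq):
    x = rho * x + v                       # one step of the perturbed system
    v_sup = max(v_sup, abs(v))            # sup-norm of the truncated inputs
    bound = 2 * rho ** (k + 1) * x0 + 2 * v_sup / (1 - rho)
    ok = ok and abs(x) <= bound
```

The bound is conservative by design (the factors of 2 come from the splitting inequality), so `ok` holds with a comfortable margin here.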
Note that, due to the use of K∞-functions α_1, α_2, α_3 of a special type (which is not restrictive for the commonly used cost functions in min-max MPC, as shown in Section 6.4), Theorem 4.2.5 provides explicit bounds on the evolution of the state.
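When the comparison functions have the power-law form used later in this chapter, α_i(s) = a_i s^λ, the explicit ISpS bound of Theorem 4.2.5 can be evaluated directly. Below is a minimal numeric sketch (Python is used purely for illustration; all constants a_1, a_2, d_1, d_2 and the identity disturbance gain σ(s) = s are made up, not taken from the chapter):

```python
# Evaluating the explicit ISpS bound when alpha_i(s) = a_i * s**lam,
# so that alpha_1^{-1}(s) = (s / a1)**(1 / lam). Illustrative constants.
a1, a2, lam = 0.8, 19.2, 1.0
rho = 1.0 - a1 / a2                  # contraction factor, here in (0, 1)
d1, d2 = 0.6, 0.6                    # ISpS offsets
xi = d1 + d2 / (1.0 - rho)

def beta(s, k):                      # KL-function alpha1^{-1}(3 rho^k alpha2(s))
    return (3.0 * rho**k * a2 * s**lam / a1) ** (1.0 / lam)

def gamma(s):                        # gain alpha1^{-1}(3 sigma(s)/(1-rho)), sigma(s) = s
    return (3.0 * s / ((1.0 - rho) * a1)) ** (1.0 / lam)

d = (3.0 * xi / a1) ** (1.0 / lam)   # constant ISpS offset

# total bound on |x_k| for |x_0| = 7 and sup-norm disturbance 0.03
print(beta(7.0, 0) + gamma(0.03) + d)
print(beta(7.0, 50) + gamma(0.03) + d)
```

The transient term β decays geometrically while the gain and offset terms stay constant, which is exactly the ISpS trade-off the theorem quantifies.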

4.3 Min-max nonlinear MPC: Problem set-up

The results presented in this chapter can be applied to both open-loop and feedback min-max MPC strategies. However, there seems to be a common agreement that open-loop min-max formulations are conservative and underestimate the set of feasible input trajectories. For this reason, although we present both problem formulations, the stability results are proven only for feedback min-max MPC set-ups. However, it is possible to prove, via a similar reasoning and using the same hypotheses, that all the results developed in this chapter also hold for open-loop min-max MPC schemes.

Consider the discrete-time non-autonomous perturbed nonlinear system

x_{k+1} = g(x_k, u_k, w_k, v_k), k ∈ Z_+, (4.9)

where x_k ∈ R^n, u_k ∈ R^m, w_k ∈ W ⊂ R^{d_w} and v_k ∈ V ⊂ R^{d_v} are the state, the control action, unknown time-varying parametric uncertainties and other disturbance inputs, respectively. The mapping g : R^n × R^m × R^{d_w} × R^{d_v} → R^n is an arbitrary nonlinear, possibly discontinuous, function. Let X ⊆ R^n and U ⊆ R^m denote sets that contain the origin in their interior and represent state and input constraints for system (5.4). Furthermore, let X_T ⊆ X with 0 ∈ int(X_T) denote a desired terminal set and let F : R^n → R_+ with F(0) = 0 and L : R^n × R^m → R_+ with L(0, 0) = 0 be arbitrary functions. The objective is to regulate the system towards the origin while minimizing a performance index defined by the functions F(·), L(·, ·) and with the set X_T as terminal constraint. For a fixed prediction horizon N ∈ Z_{≥1}, open-loop min-max MPC evaluates a single sequence of controls, i.e. u_k := (u_{0|k}, ..., u_{N−1|k}) ∈ U^N.
Let x_k(x_k, u_k, w_k, v_k) := (x_{1|k}, ..., x_{N|k}) denote the state sequence generated by system (5.4) from initial state x_{0|k} := x_k and by applying the input sequence u_k, where w_k := (w_{0|k}, ..., w_{N−1|k}) ∈ W^N and v_k := (v_{0|k}, ..., v_{N−1|k}) ∈ V^N are the corresponding disturbance sequences and x_{i|k} := g(x_{i−1|k}, u_{i−1|k}, w_{i−1|k}, v_{i−1|k}), i = 1, ..., N. The open-loop min-max MPC class of admissible input sequences defined for X_T and state x_k ∈ X is

U_N(x_k) := { u_k ∈ U^N | x_k(x_k, u_k, w_k, v_k) ∈ X^N, x_{N|k} ∈ X_T, ∀w_k ∈ W^N, ∀v_k ∈ V^N }.

Let the terminal set X_T ⊆ X and N ∈ Z_{≥1} be given. At time k ∈ Z_+ let x_k ∈ X be given. The open-loop min-max MPC approach minimizes the cost

J(x_k, u_k) := max_{w_k ∈ W^N, v_k ∈ V^N} [ F(x_{N|k}) + Σ_{i=0}^{N−1} L(x_{i|k}, u_{i|k}) ],

with prediction model (5.4), over all sequences u_k in U_N(x_k).

Feedback min-max MPC obtains a sequence of feedback control laws that minimizes a worst-case cost function, while assuring robust constraint handling. In this chapter we employ the dynamic programming approach to feedback min-max nonlinear MPC proposed in (Lee and Yu, 1997) for linear systems and in (Mayne, 2001) for nonlinear systems. In this approach, the feedback min-max optimal control input is obtained as follows:

V_i(x) := min_{u ∈ U} { max_{w ∈ W, v ∈ V} [ L(x, u) + V_{i−1}(g(x, u, w, v)) ] } (4.10)
such that g(x, u, w, v) ∈ X_f(i − 1) for all w ∈ W, v ∈ V,

where the set X_f(i) contains all the states x ∈ X for which (4.10) is feasible, i = 1, ..., N. The optimization problem is defined for i = 1, ..., N with the boundary conditions

V_0(x_0) := F(x_0), X_f(0) := X_T. (4.11)

Taking into account the definition of the min-max problem (4.10), X_f(i) is now the set of all states that can be robustly controlled into the set X_T in i ∈ Z_{≥1} steps. The control law is applied to system (5.4) in a receding horizon manner. At each sampling time the problem is solved for the current state x and the value function V_N(x) is obtained. The feedback min-max MPC control law is defined as

ū(x) := u*_N, (4.12)

where u*_N is the optimizer of problem (4.10) for i = N. For simplicity of exposition, in what follows we assume existence and uniqueness of u*_N, and that the minimum and the maximum are well-defined in (4.10), for all i = 1, ..., N. Notice that it is possible to show that the results developed in this chapter also apply when the global optimum is not unique. Furthermore, following the reasoning employed in (Scokaert et al., 1999), ISpS results can also be obtained for the sub-optimal case.
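For a fixed open-loop input sequence, the inner maximization in J can be approximated by enumerating sampled disturbance scenarios. The sketch below (Python for illustration) uses hypothetical scalar dynamics and costs, not the chapter's system; note that a finite disturbance sample only lower-bounds the exact worst case over W^N × V^N:

```python
import itertools
import numpy as np

# Scenario-based approximation of the open-loop worst-case cost
# max_{w,v} [ F(x_N) + sum_i L(x_i, u_i) ] for a FIXED input sequence.
N = 3
g = lambda x, u, w, v: 0.9 * x + u + 0.1 * w + v   # toy scalar dynamics
F = lambda x: 10.0 * abs(x)                        # toy terminal cost
L = lambda x, u: 0.8 * abs(x) + 0.1 * abs(u)       # toy stage cost

def worst_case_cost(x0, u_seq, W_grid, V_grid):
    worst = -np.inf
    for w_seq in itertools.product(W_grid, repeat=N):
        for v_seq in itertools.product(V_grid, repeat=N):
            x, J = x0, 0.0
            for i in range(N):          # roll the prediction model forward
                J += L(x, u_seq[i])
                x = g(x, u_seq[i], w_seq[i], v_seq[i])
            worst = max(worst, J + F(x))
    return worst

print(worst_case_cost(1.0, [0.0] * N, [-1.0, 1.0], [-0.05, 0.05]))
```

The outer minimization over u_seq (subject to the admissible set U_N(x_k)) would sit on top of this evaluation.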
In the following sections the min-max MPC value function V_N(x) will be used as a candidate ISpS Lyapunov function in order to establish ISpS of the nonlinear system (5.4) in closed-loop with the feedback min-max MPC control (4.12). To simplify the notation, for the remainder of the chapter we will use V(x) to denote V_N(x).
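The feedback min-max recursion (4.10) can be prototyped by value iteration on grids. The sketch below (Python for illustration) uses a hypothetical scalar system and coarse grids purely to show the min-max structure; it ignores the feasibility sets X_f(i) and interpolates the value function between grid points:

```python
import numpy as np

# Gridded value-iteration sketch of V_i(x) = min_u max_v [L(x,u) + V_{i-1}(g)],
# with V_0 := F; toy scalar dynamics, no parametric uncertainty w.
X = np.linspace(-2.0, 2.0, 81)        # state grid
U = np.linspace(-1.0, 1.0, 21)        # input grid
V_D = np.linspace(-0.05, 0.05, 5)     # additive disturbance grid (v)

g = lambda x, u, v: 0.9 * x + u + v   # toy dynamics
L = lambda x, u: 0.8 * abs(x) + 0.1 * abs(u)
Val = 10.0 * np.abs(X)                # V_0 := F on the grid

for i in range(3):                    # N = 3 dynamic-programming steps
    new = np.empty_like(Val)
    for j, x in enumerate(X):
        new[j] = min(                 # outer min over u ...
            max(L(x, u) + np.interp(g(x, u, v), X, Val) for v in V_D)
            for u in U)               # ... of the inner max over v
    Val = new

print(Val[len(X) // 2])               # approximate value at x = 0
```

Even on this toy problem the value at the origin is nonzero, which is the reason (stressed in Remark 4.2.7) that the ISpS rather than the ISS framework is the natural one for min-max MPC value functions.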

4.4 ISpS results for min-max nonlinear MPC

In this section we present sufficient conditions for ISpS of system (5.4) in closed-loop with the feedback min-max MPC control (4.12) and we derive explicit bounds on the evolution of the closed-loop system state. Let h : R^n → R^m denote an arbitrary nonlinear function with h(0) = 0 and let X_U := {x ∈ X | h(x) ∈ U}.

Assumption 4.4.1 There exist a_L, a_F, b_F, λ ∈ R_{>0} with a_L ≤ b_F, e_1, e_2 ∈ R_+, a function h : R^n → R^m with h(0) = 0 and a K-function σ such that:
(i) X_T ⊆ X_U and 0 ∈ int(X_T);
(ii) X_T is a RPI set for system (5.4) in closed-loop with u_k = h(x_k), k ∈ Z_+;
(iii) L(x, u) ≥ a_L ‖x‖^λ for all x ∈ X and all u ∈ U;
(iv) a_F ‖x‖^λ ≤ F(x) ≤ b_F ‖x‖^λ + e_1 for all x ∈ X_T;
(v) F(g(x, h(x), w, v)) − F(x) ≤ −L(x, h(x)) + σ(‖v‖) + e_2 for all x ∈ X_T, w ∈ W, and v ∈ V.

Note that Assumption 4.4.1 implies that F(·) is a local² ISpS Lyapunov function. Then, from Theorem 4.2.5 it follows that system (5.4) in closed-loop with u_k = h(x_k), k ∈ Z_+, is ISpS in X_T, as formally stated below.

Proposition 4.4.2 Suppose that Assumption 4.4.1 holds. Then, system (5.4) in closed-loop with u_k = h(x_k), k ∈ Z_+, is ISpS in X_T. Moreover, if Assumption 4.4.1 holds with e_1 = e_2 = 0, system (5.4) in closed-loop with u_k = h(x_k), k ∈ Z_+, is ISS in X_T.

Assumption 4.4.1 can be regarded as a generalization of the usual sufficient conditions for nominal stability of MPC, which imply that F(·) is a local Lyapunov function, see, for example, the survey (Mayne et al., 2000). Techniques for computing a terminal cost and a function h(·) such that Assumption 4.4.1 is satisfied for relevant subclasses of system (5.4) (i.e. perturbed linear and piecewise affine systems) will be presented in the next chapter. See also the illustrative nonlinear example in Section 6.5 of this chapter.

Theorem 4.4.3 Suppose that F(·), L(·, ·), X_T and h(·) are such that Assumption 4.4.1 holds for system (5.4).
Furthermore, suppose that there exists a number θ ∈ R_{≥b_F} such that V(x) ≤ θ ‖x‖^λ for all x ∈ X_f(N) \ X_T. Then,

² ISS Lyapunov function when e_1 = e_2 = 0.

the perturbed nonlinear system (5.4) in closed-loop with the feedback min-max MPC control (4.12) is ISpS in X_f(N). Moreover, the property (4.2) holds with the following functions:

β(s, k) := (3θ/a_L)^{1/λ} ρ̌^k s, γ(s) := ( 3δ/(a_L(1 − ρ)) )^{1/λ} s, d := (3ξ/a_L)^{1/λ}, (4.13)

where ρ̌ := ρ^{1/λ} ∈ (0, 1), ρ := 1 − a_L/θ ∈ (0, 1), δ > 0 can be taken arbitrarily small, ξ := d_1 + d_2/(1 − ρ), d_1 := e_1 + N[max_{v∈V} σ(‖v‖) + e_2] and d_2 := max_{v∈V} σ(‖v‖) + e_2.

Proof: The proof consists in showing that the min-max MPC value function V(·) is an ISpS Lyapunov function, i.e. it satisfies the hypothesis of Theorem 4.2.5. First, it is known (see (Mayne, 2001; Kerrigan and Maciejowski, 2001)) that under Assumption 4.4.1-(i),(ii) the set X_f(N) is a RPI set for system (5.4) in closed-loop with the feedback min-max MPC control (4.12).

Second, we will obtain lower and upper bounding functions on the min-max MPC value function that satisfy (4.3a). From Assumption 4.4.1-(iii) it follows that V(x) = V_N(x) ≥ L(x, ū(x)) ≥ a_L ‖x‖^λ for all x ∈ X_f(N), where ū(x) is the feedback min-max MPC control law defined in (4.12). Next, letting x_0 := x ∈ X_T, by Assumption 4.4.1-(ii) (i.e. due to robust positive invariance of X_T) one can apply Assumption 4.4.1-(v) repetitively for the sequence of predicted states. Summing up the resulting inequalities it follows that for any w_{[N−1]} ∈ W^N and any v_{[N−1]} ∈ V^N

F(x_N) + Σ_{i=0}^{N−1} L(x_i, h(x_i)) ≤ F(x_0) + Σ_{i=0}^{N−1} σ(‖v_i‖) + N e_2,

where x_i := g(x_{i−1}, h(x_{i−1}), w_{i−1}, v_{i−1}) for i = 1, ..., N. Then, by optimality and Assumption 4.4.1-(iv) we have that for all x ∈ X_T,

V(x) = V_N(x) ≤ max_{w∈W, v∈V} [ F(x_N) + Σ_{i=0}^{N−1} L(x_i, h(x_i)) ] ≤ F(x) + N[max_{v∈V} σ(‖v‖) + e_2] ≤ b_F ‖x‖^λ + d_1,

where d_1 := e_1 + N[max_{v∈V} σ(‖v‖) + e_2] > 0. As from the hypothesis of Theorem 4.4.3 we also have that V(x) ≤ θ ‖x‖^λ for all x ∈ X_f(N) \ X_T (with b_F ≤ θ), it follows that V(x) ≤ θ ‖x‖^λ + d_1 for all x ∈ X_f(N). Hence, V(·)

satisfies condition (4.3a) for all x ∈ X_f(N) with α_1(s) := a_L s^λ, α_2(s) := θ s^λ and d_1 = e_1 + N[max_{v∈V} σ(‖v‖) + e_2] > 0.

Next, we show that V(·) satisfies condition (4.3b). By Assumption 4.4.1-(v) and optimality, for all x ∈ X_T = X_f(0) we have that:

V_1(x) − V_0(x) ≤ max_{w∈W, v∈V} [L(x, h(x)) + F(g(x, h(x), w, v))] − F(x) ≤ max_{v∈V} σ(‖v‖) + e_2.

Then, it can be shown via induction that (see also (Limon et al., 2006)):

V_{i+1}(x) − V_i(x) ≤ max_{v∈V} σ(‖v‖) + e_2, ∀x ∈ X_f(i), i = 0, ..., N − 1. (4.14)

At time k ∈ Z_+, for a given state x_k ∈ X and a fixed prediction horizon N the min-max MPC control law ū(x_k) is calculated and then applied to system (5.4). The state evolves to x_{k+1} = g(x_k, ū(x_k), w_k, v_k) ∈ X_f(N). Then, by Assumption 4.4.1-(v) and applying recursively (4.14) it follows that

V_N(x_{k+1}) − V_N(x_k) (4.15)
= V_N(x_{k+1}) − max_{w∈W, v∈V} [L(x_k, ū(x_k)) + V_{N−1}(g(x_k, ū(x_k), w, v))]
≤ V_N(x_{k+1}) − L(x_k, ū(x_k)) − V_{N−1}(g(x_k, ū(x_k), w_k, v_k))
= V_N(x_{k+1}) − L(x_k, ū(x_k)) − V_{N−1}(x_{k+1})
≤ −L(x_k, ū(x_k)) + max_{v∈V} σ(‖v‖) + e_2
≤ −a_L ‖x_k‖^λ + max_{v∈V} σ(‖v‖) + e_2 = −a_L ‖x_k‖^λ + d_2, (4.16)

for all x_k ∈ X_f(N), w_k ∈ W, v_k ∈ V and all k ∈ Z_+, where d_2 := max_{v∈V} σ(‖v‖) + e_2 > 0. Hence, the feedback min-max nonlinear MPC value function V(·) satisfies (4.3b) with α_3(s) := a_L s^λ, any σ ∈ K and d_2 = max_{v∈V} σ(‖v‖) + e_2 > 0. The statements then follow from Theorem 4.2.5. The functions β(·, ·), γ(·) and the constant d defined in (4.13) are obtained by letting σ(s) := δ s^λ for some (any) δ > 0 and substituting the functions α_1(·), α_2(·), α_3(·), σ(·) and the constants d_1, d_2 obtained above in relation (4.4).

4.5 Main result: ISS dual-mode min-max MPC

As shown in the previous section, the hypothesis of Theorem 4.4.3 is sufficient for ISpS, but not necessarily for ISS of system (5.4) in closed-loop with ū(·),

even when e_1 = e_2 = 0. This is due to the min-max MPC value function V(·), which is only an ISpS Lyapunov function in general, and not an ISS Lyapunov function. Therefore, it is unclear whether the min-max MPC control law (4.12) results in an ISS closed-loop system. In the case of persistent disturbances this is not necessarily a drawback, since ultimate boundedness in a RPI subset of X_f(N) is the most one can aim at, anyhow. It will be shown next that ultimate boundedness is indeed guaranteed under the hypothesis of Theorem 4.4.3. However, in the case when the disturbance input vanishes after a certain time it is desirable to have an ISS closed-loop system. In this section we present sufficient conditions for ISS of system (5.4) in closed-loop with a dual-mode min-max MPC strategy.

The following technical result will be employed to prove the main result for dual-mode min-max nonlinear MPC. For any τ ∈ R_{(0,a_L)} define

M_τ := { x ∈ X_f(N) | ‖x‖^λ ≤ d_2/(a_L − τ) } and M̄_τ := X_f(N) \ M_τ, (4.17)

where a_L ∈ R_{>0} is from Assumption 4.4.1-(iii) and d_2 = max_{v∈V} σ(‖v‖) + e_2 > 0. Note that 0 ∈ int(M_τ), as d_2/(a_L − τ) > 0 and 0 ∈ int(X_T) ⊆ int(X_f(N)).

Lemma 4.5.1 Suppose that F(·), L(·, ·), X_T and h(·) are such that Assumption 4.4.1 holds for system (5.4). Let τ ∈ R_{(0,a_L)} be such that M̄_τ ≠ ∅ and consider the closed-loop system (5.4)-(4.12). Then, for each x_0 ∈ M̄_τ there exists an i(x_0) ∈ Z_{≥1} such that for all disturbance realizations w and v, it holds that x_{i(x_0)} ∈ M_τ. Moreover, there exists a KL-function β̄ such that for all x_0 ∈ M̄_τ and all disturbance realizations w and v, the corresponding trajectory of the closed-loop system (5.4)-(4.12) satisfies ‖x_k‖ ≤ β̄(‖x_0‖, k) as long as x_k ∈ M̄_τ for all k ∈ Z_[0,i), i ∈ Z_{≥1}.

Proof: We prove the second statement of the lemma first. As shown in the proof of Theorem 4.4.3, the hypothesis implies that a_L ‖x‖^λ ≤ V(x) ≤ θ ‖x‖^λ + d_1 for all x ∈ X_f(N). Let r > 0 be such that B_r ⊆ M_τ.
For all state trajectories {x_k}_{k∈Z_[0,i)} ⊂ M̄_τ (and thus, x_k ∉ M_τ for all k ∈ Z_[0,i)) we have that ‖x_k‖ ≥ r for all k ∈ Z_[0,i).

This yields:

V(x_k) ≤ θ ‖x_k‖^λ + d_1 ≤ θ ‖x_k‖^λ + d_1 (‖x_k‖/r)^λ = ( θ + d_1/r^λ ) ‖x_k‖^λ, ∀x_k ∈ M̄_τ, k ∈ Z_[0,i).

The hypothesis also implies (see (4.15)) that

V(x_{k+1}) − V(x_k) ≤ −a_L ‖x_k‖^λ + d_2, ∀x_k ∈ X_f(N), w_k ∈ W, v_k ∈ V, k ∈ Z_+.

By the definitions in (4.17), for x ∈ M̄_τ it holds that −a_L ‖x‖^λ + d_2 ≤ −τ ‖x‖^λ, which yields:

V(x_{k+1}) − V(x_k) ≤ −τ ‖x_k‖^λ, ∀x_k ∈ M̄_τ, w_k ∈ W, v_k ∈ V, k ∈ Z_[0,i). (4.18)

Then, following the steps of the proof of Theorem 4.2.5, it is straightforward to show that the state trajectory satisfies, for all k ∈ Z_[0,i),

‖x_k‖ ≤ β̄(‖x_0‖, k); β̄(s, k) := ᾱ_1^{−1}( ρ̄^k ᾱ_2(s) ) = (b̄/a_L)^{1/λ} s (ρ̄^{1/λ})^k, (4.19)

where ᾱ_2(s) := b̄ s^λ, b̄ := θ + d_1/r^λ, ᾱ_1(s) := a_L s^λ and ρ̄ := 1 − τ/b̄. Note that ρ̄ ∈ (0, 1) as 0 < τ < a_L ≤ b_F ≤ θ < θ + d_1/r^λ = b̄.

Next, we prove that there exists an i ∈ Z_{≥1} such that x_i ∈ M_τ. Assume that there does not exist an i ∈ Z_{≥1} such that x_i ∈ M_τ. Then, for all i ∈ Z_+ we have that

‖x_i‖ ≤ β̄(‖x_0‖, i) = (b̄/a_L)^{1/λ} ‖x_0‖ (ρ̄^{1/λ})^i.

Since ρ̄^{1/λ} ∈ (0, 1), we have that lim_{i→∞} (ρ̄^{1/λ})^i = 0. Hence, there exists an i ∈ Z_{≥1} such that x_i ∈ B_r ⊆ M_τ and we have reached a contradiction. Note that (4.19) is independent of w or v and thus, i can be taken to depend on x_0 only.

Before stating the main result, we make use of Lemma 4.5.1 to prove that the ISpS sufficient conditions of Assumption 4.4.1 ensure ultimate boundedness of the min-max MPC closed-loop system. This property is achieved with respect to a RPI sublevel set of the min-max MPC value function induced by the set M_τ.

Theorem 4.5.2 Suppose that the hypothesis of Lemma 4.5.1 holds and let Υ := max_{x∈M_τ} V(x) + d_2 and V_Υ := {x ∈ X_f(N) | V(x) ≤ Υ}. Then, the closed-loop system (5.4)-(4.12) is ultimately bounded in the set V_Υ for initial conditions in X_f(N).

Proof: By definition of Υ, x ∈ M_τ ⊆ X_f(N) implies that V(x) ≤ max_{x∈M_τ} V(x) ≤ max_{x∈M_τ} V(x) + d_2 = Υ. Therefore, M_τ ⊆ V_Υ. Suppose that x_0 ∈ X_f(N) \ V_Υ and thus, x_0 ∈ M̄_τ. Then, by Lemma 4.5.1 it follows that there exists an i(x_0) ∈ Z_{≥1} such that x_{i(x_0)} ∈ M_τ ⊆ V_Υ.

Next, we prove that V_Υ is a RPI set for the closed-loop system (5.4)-(4.12). As shown in the proof of Lemma 4.5.1 (see (4.18)), for any x ∈ V_Υ \ M_τ it holds that

V(g(x, ū(x), w, v)) ≤ V(x) − τ ‖x‖^λ ≤ V(x) ≤ Υ,

for all w ∈ W and all v ∈ V. Now let x ∈ M_τ. By inequality (4.15) it holds that

V(g(x, ū(x), w, v)) ≤ V(x) − a_L ‖x‖^λ + d_2 ≤ V(x) + d_2 ≤ Υ.

Therefore, for any x ∈ V_Υ, it holds that g(x, ū(x), w, v) ∈ V_Υ for all w ∈ W and all v ∈ V, which implies that V_Υ is a RPI set for the closed-loop system (5.4)-(4.12). Hence, the closed-loop system (5.4)-(4.12) is ultimately bounded in V_Υ.

In a worst-case situation, i.e. when the disturbance input v ∈ V is too large and V_Υ = X_f(N), the result of Theorem 4.5.2 reduces to ultimate boundedness in X_f(N) itself.

To state the main result, let the dual-mode feedback min-max MPC control law be defined as:

ū_DM(x) := ū(x) if x ∈ X_f(N) \ X_T; ū_DM(x) := h(x) if x ∈ X_T. (4.20)

Theorem 4.5.3 Suppose Assumption 4.4.1 holds with e_1 = e_2 = 0 for system (5.4) and there exists τ ∈ R_{(0,a_L)} such that M_τ ⊆ X_T. Then, the perturbed nonlinear system (5.4) in closed-loop with the dual-mode feedback min-max MPC control ū_DM(·) is ISS in X_f(N).

Proof: In order to prove ISS, we consider two situations: in Case 1 we assume that x_0 ∈ X_T and in Case 2 we assume that x_0 ∈ X_f(N) \ X_T. In Case 1, F(·) satisfies the hypothesis of Proposition 4.4.2 with e_1 = e_2 = 0 and hence, the closed-loop system (5.4)-(4.20) is ISS. Then, using the reasoning employed in the proof of Lemma 4.5.1, it can be shown that there exist a

KL-function β(s, k) := α_1^{−1}(2ρ^k α_2(s)), with α_1(s) := a_F s^λ, α_2(s) := b_F s^λ, ρ := 1 − a_L/b_F, and a K-function γ such that for all x_0 ∈ X_T the state trajectory satisfies

‖x_k‖ ≤ β(‖x_0‖, k) + γ(‖v_{[k−1]}‖), k ∈ Z_{≥1}. (4.21)

In Case 2, since M_τ ⊆ X_T, by Lemma 4.5.1, for any x_0 ∈ X_f(N), w and any v, there exists a p ∈ Z_{≥1} such that x_k ∉ X_T for k ∈ Z_[0,p) and x_p ∈ X_T. From Lemma 4.5.1 we also have that there exists a KL-function β̄(s, k) = ᾱ_1^{−1}(ρ̄^k ᾱ_2(s)), with ᾱ_1(s) = a_L s^λ, ᾱ_2(s) = b̄ s^λ, ρ̄ = 1 − τ/b̄, such that the state trajectory satisfies ‖x_k‖ ≤ β̄(‖x_0‖, k), k ∈ Z_{[0,p]}, and x_p ∈ X_T. Then, for all p ∈ Z_{≥1} and all k ∈ Z_{≥p+1} it holds that

‖x_k‖ ≤ β(‖x_p‖, k − p) + γ(‖v_{[k−p,k−1]}‖) ≤ β(β̄(‖x_0‖, p), k − p) + γ(‖v_{[k−p,k−1]}‖) ≤ β̂(‖x_0‖, k) + γ(‖v_{[k−1]}‖),

where v_{[k−p,k−1]} denotes the restriction of v to the interval [k − p, k − 1]. In the above inequalities we used

β(β̄(s, p), k − p) = α_1^{−1}( 2ρ^{k−p} α_2( (b̄/a_L)^{1/λ} s (ρ̄^{1/λ})^p ) ) ≤ ( 2 b_F b̄ / (a_F a_L) )^{1/λ} s (ρ̂^{1/λ})^k := β̂(s, k),

with ρ̂ := max(ρ̄, ρ) ∈ (0, 1). Hence, β̂ ∈ KL. Then, we have that

‖x_k‖ ≤ β̃(‖x_0‖, k) + γ(‖v_{[k−1]}‖), k ∈ Z_{≥1},

for all x_0 ∈ X_f(N), w and all v, where β̃(s, k) := max( β(s, k), β̄(s, k), β̂(s, k) ). Since β, β̄, β̂ ∈ KL implies that β̃ ∈ KL, and we have γ ∈ K, the statement then follows from Definition 4.2.4.

The interpretation of the condition M_τ ⊆ X_T is that the min-max MPC controller steers the state of the system inside the terminal set X_T for all w and all v. Then, ISS can be achieved by switching to the local feedback control law when the state enters the terminal set.
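The dual-mode law (4.20) itself is a simple dispatch between the min-max MPC action and the local law h(x) = Kx. A minimal sketch (Python for illustration): the box membership test for X_T and the clipped-gain stand-in for the min-max optimizer are hypothetical placeholders, and the gain signs are assumed to implement negative feedback:

```python
import numpy as np

# Dispatch implementing the dual-mode law (4.20): min-max MPC action
# outside the terminal set, local law h(x) = K x inside it.
K = np.array([-0.5885, -1.4169])   # signs assumed (stabilizing feedback)

def in_terminal_set(x):
    # placeholder membership test; the real X_T is a polyhedral RPI set
    return np.max(np.abs(x)) <= 0.5

def mpc_action(x):
    # placeholder for the min-max optimizer, respecting |u| <= 2
    return float(np.clip(K @ x, -2.0, 2.0))

def u_dual_mode(x):
    return float(K @ x) if in_terminal_set(x) else mpc_action(x)
```

Because M_τ ⊆ X_T guarantees the MPC mode drives the state into X_T, the switch happens in finite time and is never reversed by the disturbances considered here.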

4.6 Illustrative example: A nonlinear double integrator

The following example will illustrate how one can verify the conditions for ISS of min-max nonlinear MPC presented in this chapter. For examples that illustrate the benefits of using a min-max MPC scenario compared to using a nominally stabilizing or inherently robust MPC approach we refer the interested reader to (Lee and Yu, 1997; Scokaert and Mayne, 1998; Magni et al., 2003; Wang and Rawlings, 2004) and the references therein.

Consider a perturbed discrete-time nonlinear double integrator obtained from a continuous-time double integrator via a sample-and-hold device with a sampling period equal to one, as follows:

x_{k+1} = A x_k + B u_k + f(x_k) + v_k, k ∈ Z_+, (4.22)

where A = [1 1; 0 1], B = [0.5; 1], f : R² → R², f(x) := 0.025 [1; 1] xᵀx is a nonlinear additive term and v_k ∈ V := {v ∈ R² | ‖v‖ ≤ 0.03} for all k ∈ Z_+ is an additive disturbance input (we use ‖·‖ to denote the infinity norm). The state and the input are constrained at all times in the sets X := {x ∈ R² | ‖x‖ ≤ 10} and U := {u ∈ R | |u| ≤ 2}. The MPC cost function is defined using ∞-norms, i.e. F(x) := ‖P x‖, L(x, u) := ‖Q x‖ + ‖R u‖, where P is a full-column rank matrix (to be determined), Q = 0.8 I_2 and R = 0.1. The stage cost satisfies Assumption 4.4.1-(iii) for λ = 1 and any a_L ∈ (0, 0.8). We take the function h(·) as h(x) := K x, where K ∈ R^{1×2} is the gain matrix.

To compute the terminal cost matrix P and the gain matrix K such that Assumption 4.4.1-(v) holds, we first calculate P and K for the linearization of system (4.22), i.e.:

x_{k+1} = A x_k + B u_k + v_k, k ∈ Z_+. (4.23)

To accommodate for the nonlinear term f(·), we employ a larger stage cost weight matrix for the state, i.e. Q̄ = 2.4 I_2, instead of Q = 0.8 I_2, for which it holds that ‖Q̄ x‖ ≥ ‖Q x‖ for all x ∈ R². The terminal cost F(x) = ‖P x‖ and local control law h(x) = K x with the matrices

P = [12.1274 7.0267; 0.4769 11.6072], K = [−0.5885 −1.4169], (4.24)

were computed (using a technique recently developed in (Lazar et al., 2006)) such that the following inequality holds for the linear system (4.23), i.e.

‖P((A + BK)x + v)‖ − ‖P x‖ ≤ −‖Q̄ x‖ − ‖R K x‖ + σ(‖v‖), (4.25)

for all x ∈ R² and all v ∈ R², where σ(s) := ‖P‖ s. The terminal cost satisfies Assumption 4.4.1-(iv) for λ = 1, b_F = ‖P‖ = 19.1541, a_F = 0.1 and e_1 = 0.

To obtain a suitable bound on f(x) we employ the following tightened set of constraints for h(·) (see Figure 4.1 for a plot of X_U): X_U := {x ∈ X | ‖x‖ ≤ 1.72, |K x| ≤ 2}. The terminal set X_T, also plotted in Figure 4.1, is taken as the maximal RPI set contained in the set X_U (and which is non-empty) for the linear system (4.23), in closed-loop with u_k = h(x_k), k ∈ Z_+, and disturbances in the set {v ∈ R² | ‖v‖ ≤ 0.18}. One can easily check that max_{x∈X_U} ‖f(x)‖ < 0.15 and thus, it follows that the terminal set X_T chosen as specified above is a RPI set for the nonlinear system (4.22) in closed-loop with u_k = h(x_k), k ∈ Z_+, and all disturbances v in V = {v ∈ R² | ‖v‖ ≤ 0.03}.

Using the fact that (notice that below, in some cases, ‖·‖ denotes the induced infinity matrix norm)

‖Q̄ x‖ ≥ 2.3515 ‖x‖, ∀x ∈ R², and max_{x∈X_T} ( ‖P f(x)‖ / ‖x‖ ) = max_{x∈X_T} ( ‖P · 0.025 [1; 1] xᵀx‖ / ‖x‖ ) = 1.5515,

inequality (4.25) and the triangle inequality, for all x ∈ X_T and all v ∈ R² we obtain:

‖P((A + BK)x + v) + P f(x)‖ − ‖P x‖
≤ ‖P((A + BK)x + v)‖ − ‖P x‖ + ‖P f(x)‖
≤ −‖Q̄ x‖ − ‖R K x‖ + σ(‖v‖) + ‖P f(x)‖
≤ −2.3515 ‖x‖ − ‖R K x‖ + σ(‖v‖) + ( max_{x∈X_T} ‖P f(x)‖ / ‖x‖ ) ‖x‖
= −2.3515 ‖x‖ − ‖R K x‖ + σ(‖v‖) + 1.5515 ‖x‖
= −0.8 ‖x‖ − ‖R K x‖ + σ(‖v‖)
= −‖Q x‖ − ‖R K x‖ + σ(‖v‖) = −L(x, K x) + σ(‖v‖).
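The scalar constants quoted in this example can be reproduced from P alone, and the behaviour near the origin can be illustrated by a short rollout under the local law u = Kx only. In the sketch below (Python for illustration; ∞-norms throughout) the gain signs are assumed to implement negative feedback, and the full min-max optimizer is deliberately not reproduced:

```python
import numpy as np

# 1) Numeric check of the constants used in this example.
P = np.array([[12.1274, 7.0267],
              [0.4769, 11.6072]])
normP = np.linalg.norm(P, np.inf)        # induced infinity matrix norm
d2 = round(normP * 0.03, 4)              # max_v sigma(|v|) over |v| <= 0.03
aL, tau = 0.79, 0.0718
print(round(normP, 4))                   # -> 19.1541  (= b_F)
print(d2)                                # -> 0.5746   (= d_2)
print(round(d2 / (aL - tau), 4))         # -> 0.8001   (level defining M_tau)
print(round(d2 / aL, 4))                 # -> 0.7273   (limit as tau -> 0)

# 2) Closed-loop rollout of (4.22) under u = K x, started near the origin.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([0.5, 1.0])
K = np.array([-0.5885, -1.4169])         # signs assumed
f = lambda x: 0.025 * np.ones(2) * (x @ x)
rng = np.random.default_rng(0)
x = np.array([0.5, -0.3])
for k in range(30):
    u = float(np.clip(K @ x, -2.0, 2.0))  # input constraint |u| <= 2
    v = rng.uniform(-0.03, 0.03, size=2)  # disturbance |v|_inf <= 0.03
    x = A @ x + B * u + f(x) + v
    assert np.max(np.abs(x)) <= 10.0      # state constraint holds
```

The state settles into a small neighborhood of the origin whose size is dictated by the disturbance bound, consistent with the ISpS picture.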

Figure 4.1: State trajectory for the nonlinear system (4.22) in closed-loop with a dual-mode min-max MPC controller and an estimate of the set of feasible states X_f(4).

Hence, the terminal cost F(x) = ‖P x‖ and the control law h(x) = K x, with the matrices P and K given in (4.24), satisfy Assumption 4.4.1-(v) for the nonlinear system (4.22) with e_2 = 0 and with σ(s) = ‖P‖ s.

Consider now the set M_τ, which needs to be determined to establish ISS of the nonlinear system (4.22) in closed-loop with the dual-mode min-max MPC control law (4.20). We can choose a_L = 0.79 < 0.8, which ensures that ‖Q x‖ ≥ a_L ‖x‖ for all x ∈ R². Since d_2 = max_{v∈V} σ(‖v‖) = 0.5746, it follows that a necessary condition to be satisfied is τ ∈ (0, 0.79), with the smallest set M_τ obtained for lim_{τ→0} d_2/(a_L − τ) = 0.7273. For τ = 0.0718, which yields d_2/(a_L − τ) = 0.8001, it holds that M_τ ⊆ X_T, see Figure 4.1 for an illustrative plot. Therefore, the closed-loop system (4.22)-(4.20) is ISS in X_f(N), as guaranteed by Theorem 4.5.3.

As the feedback min-max MPC optimization problem was computationally intractable for the nonlinear model (4.22), we have used an open-loop min-max MPC problem set-up, as the one described in Section 6.3, to calculate the control input. The developed theory applies also to the open-loop min-max MPC scheme, as pointed out in Section 6.3. Although the resulting open-loop min-max optimization problem still has a very high computational

burden, we could obtain a solution using the fmincon Matlab solver. The closed-loop state trajectories for initial state x_0 = [−7 4]ᵀ and prediction horizon N = 4 are plotted in Figure 4.1. The dual-mode min-max MPC control input and (randomly generated) disturbance input histories are plotted in Figure 4.2 (v_1 solid line; v_2 dashed line). The min-max MPC controller manages to drive the state of the perturbed nonlinear system inside the terminal set, while satisfying constraints at all times.

Figure 4.2: Dual-mode min-max nonlinear MPC control input and disturbance input histories.

4.7 Conclusions

In this chapter we have revisited the robust stability problem in min-max nonlinear model predictive control. The input-to-state practical stability framework has been employed to study robust stability of perturbed nonlinear systems in closed-loop with min-max MPC controllers. New a priori conditions for ISpS were presented together with explicit bounds on the evolution of the closed-loop system state. Moreover, it was proven that these conditions also ensure ultimate boundedness. Novel conditions that guarantee ISS of min-max nonlinear MPC closed-loop systems were derived using a dual-mode approach. This result is useful as it provides a methodology for designing robustly asymptotically stable min-max MPC schemes without a priori assuming that the (additive) disturbance input converges to zero as the closed-loop system state converges to the origin.

5 Design of the terminal cost: H∞ and min-max MPC

5.1 Introduction
5.2 Preliminaries
5.3 Problem formulation
5.4 Main results
5.5 Conclusions

This chapter presents a novel method for designing the terminal cost and the auxiliary control law (ACL) for robust MPC of uncertain linear systems, such that ISS is a priori guaranteed for the closed-loop system. The method is based on the solution of a set of LMIs. An explicit relation is established between the proposed method and H∞ control design. This relation shows that the LMI-based optimal solution of the H∞ synthesis problem solves the terminal cost and ACL problem in min-max MPC, for a particular choice of the stage cost. This result, which was somehow missing in the MPC literature, is of general interest as it connects well known linear control problems to robust MPC design.

5.1 Introduction

Perhaps the most utilized method for designing stabilizing and robustly stabilizing model predictive controllers (MPC) is the terminal cost and constraint set approach (Mayne et al., 2000). This technique, which applies to both nominally stabilizing and min-max robust MPC schemes, relies on the off-line computation of a suitable terminal cost along with an auxiliary control law (ACL). For nominally stabilizing MPC with quadratic costs, the terminal cost can be calculated for linear dynamics by solving a discrete-time Riccati equation, with the optimal linear quadratic regulator (LQR) as the ACL (Scokaert and Rawlings, 1998). In (Kothare et al., 1996) it was shown that an alternative solution to the same problem, which also works for parametric uncertainties, can be obtained by solving a set of LMIs. The design of min-max MPC schemes that are robust to additive disturbances was treated in (Magni et al., 2003), where it was proven that the terminal

cost can be obtained as a solution of a discrete-time H∞ Riccati equation, for an ACL that solves the corresponding H∞ control problem.

In this chapter we present an LMI-based solution for obtaining a terminal cost and an ACL, such that min-max MPC schemes (Magni et al., 2006; Lazar et al., 2008a) achieve input-to-state stability (ISS) (Jiang and Wang, 2001) for linear systems affected by both parametric and additive disturbances. The proposed LMIs generalize the conditions in (Kothare et al., 1996) to allow for additive uncertainties as well. Moreover, we establish an explicit relation between the developed solution and the LMI-based¹ optimal solution of the discrete-time H∞ synthesis problem corresponding to a specific performance output, related to the MPC cost. This result, which was somehow missing in the MPC literature, adds to the results of (Magni et al., 2003) and to the well known connection between design of nominally stabilizing MPC schemes and the optimal solution of the LQR problem. Such results are of general interest as they connect well known linear control problems to MPC design.

5.2 Preliminaries

Let R, R_+, Z and Z_+ denote the field of real numbers, the set of non-negative reals, the set of integer numbers and the set of non-negative integers, respectively. We use the notation Z_{≥c_1} and Z_{(c_1,c_2]} to denote the sets {k ∈ Z_+ | k ≥ c_1} and {k ∈ Z_+ | c_1 < k ≤ c_2}, respectively, for some c_1, c_2 ∈ Z_+. For i ∈ Z_+, let i = 1, N denote i = 1, ..., N. For a set S ⊆ R^n, we denote by int(S) the interior of S. A polyhedron (or a polyhedral set) in R^n is a set obtained as the intersection of a finite number of open and/or closed half-spaces. The Hölder p-norm of a vector x ∈ R^n is defined as ‖x‖_p := ( |[x]_1|^p + ... + |[x]_n|^p )^{1/p} for p ∈ Z_{[1,∞)} and ‖x‖_∞ := max_{i=1,...,n} |[x]_i|, where [x]_i, i = 1, ..., n, is the i-th component of x and |·| is the absolute value.
For a positive definite and symmetric matrix M, denoted by M ≻ 0, M^{1/2} denotes its Cholesky factor, which satisfies (M^{1/2})ᵀ M^{1/2} = M^{1/2} (M^{1/2})ᵀ = M, and λ_min(M) and λ_max(M) denote the smallest and the largest eigenvalue of M, respectively. We will use 0 and I to denote a matrix with all elements zero and the identity matrix, respectively, of appropriate dimensions. Let z := {z(l)}_{l∈Z_+} with z(l) ∈ R^o for all l ∈ Z_+ denote an arbitrary sequence. Define ‖z‖ := sup{‖z(l)‖ | l ∈ Z_+}, where ‖·‖ denotes an arbitrary p-norm, and z_{[k]} := {z(l)}_{l∈Z_{[0,k]}}. A function ϕ : R_+ → R_+ belongs to class K if it is continuous, strictly increasing and ϕ(0) = 0. A function ϕ : R_+ → R_+ belongs to class K∞ if ϕ ∈ K and lim_{s→∞} ϕ(s) = ∞. A function β : R_+ × R_+ → R_+ belongs to class KL if for each fixed k ∈ R_+, β(·, k) ∈ K and for each fixed s ∈ R_+, β(s, ·) is decreasing and lim_{k→∞} β(s, k) = 0.

¹ A similar connection is established in (Magni et al., 2003), with the difference that the Riccati-based solution to the optimal H∞ synthesis problem is exploited, rather than the LMI-based solution; also, parametric uncertainties are not considered.

5.2.1 Input-to-state stability

Consider the discrete-time nonlinear system

x(k + 1) = Φ(x(k), w(k), v(k)), k ∈ Z_+, (5.1)

where x(k) ∈ R^n is the state and w(k) ∈ R^{d_w}, v(k) ∈ R^{d_v} are unknown disturbance inputs at the discrete-time instant k. The mapping Φ : R^n × R^{d_w} × R^{d_v} → R^n is an arbitrary nonlinear function. We assume that Φ(0, w, 0) = 0 for all w. Let W and V be subsets of R^{d_w} and R^{d_v}, respectively.

Definition 5.2.1 We call a set P ⊆ R^n robustly positively invariant (RPI) for system (6.1) with respect to (W, V) if for all x ∈ P it holds that Φ(x, w, v) ∈ P for all (w, v) ∈ W × V.

Definition 5.2.2 Let X with 0 ∈ int(X) be a subset of R^n. We call system (6.1) ISS(X, W, V) if there exist a KL-function β(·, ·) and a K-function γ(·) such that, for each x(0) ∈ X, all w = {w(l)}_{l∈Z_+} with w(l) ∈ W, l ∈ Z_+, and all v = {v(l)}_{l∈Z_+} with v(l) ∈ V, l ∈ Z_+, it holds that the corresponding state trajectory of (6.1) satisfies ‖x(k)‖ ≤ β(‖x(0)‖, k) + γ(‖v_{[k−1]}‖), k ∈ Z_{≥1}. We call the function γ(·) an ISS gain of system (6.1).

5.2.2 Input-to-state stability conditions for min-max robust MPC

In this subsection we briefly summarize some of the results presented in the previous chapter, to prepare the problem formulation. Consider the discrete-time constrained nonlinear system

x(k + 1) = φ(x(k), u(k), w(k), v(k)), k ∈ Z_+, (5.2)

where x(k) ∈ X ⊆ R^n is the state, u(k) ∈ U ⊆ R^m is the control action and w(k) ∈ W ⊂ R^{d_w}, v(k) ∈ V ⊂ R^{d_v} are unknown disturbance inputs at the discrete-time instant k. φ : R^n × R^m × R^{d_w} × R^{d_v} → R^n is an arbitrary nonlinear function with φ(0, 0, w, 0) = 0 for all w ∈ W. We assume that 0 ∈ int(X), 0 ∈ int(U) and W, V are bounded.
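The notation recalled above can be sanity-checked numerically. A small sketch (Python for illustration; note that numpy's Cholesky routine returns a lower-triangular factor R with R Rᵀ = M, while the symmetric square root S satisfies both orderings Sᵀ S = S Sᵀ = M):

```python
import numpy as np

# Hoelder p-norms, square-root factors of a positive definite M,
# and lambda_min / lambda_max.
x = np.array([3.0, -4.0])
assert np.linalg.norm(x, 1) == 7.0        # ||x||_1
assert np.linalg.norm(x, 2) == 5.0        # ||x||_2
assert np.linalg.norm(x, np.inf) == 4.0   # ||x||_inf = max_i |[x]_i|

M = np.array([[4.0, 1.0], [1.0, 3.0]])    # M = M' and M > 0
R = np.linalg.cholesky(M)                 # lower-triangular, R @ R.T = M
assert np.allclose(R @ R.T, M)

w, Q = np.linalg.eigh(M)                  # eigenvalues in ascending order
S = Q @ np.diag(np.sqrt(w)) @ Q.T         # symmetric square root of M
assert np.allclose(S @ S, M)
print(w[0], w[-1])                        # lambda_min(M), lambda_max(M)
```

Either factor can serve as the M^{1/2} appearing in the LMI manipulations of this chapter, since only the product identity is used.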
Next, let F : R^n → R_+ and

L : R^n × R^m → R_+ with F(0) = L(0, 0) = 0 be arbitrary nonlinear functions. For N ∈ Z_{≥1} let ū_{[N−1]}(k) := (ū(k), ū(k+1), ..., ū(k+N−1)) ∈ U^N = U × ... × U denote a sequence of future inputs and, similarly, let w̄_{[N−1]}(k) ∈ W^N, v̄_{[N−1]}(k) ∈ V^N denote some sequences of future disturbances. Consider the MPC cost

J(x(k), ū_{[N−1]}(k), w̄_{[N−1]}(k), v̄_{[N−1]}(k)) := F(x̄(k+N)) + Σ_{i=0}^{N−1} L(x̄(k+i), ū(k+i)),

where x̄(k+i+1) := φ(x̄(k+i), ū(k+i), w̄(k+i), v̄(k+i)) for i = 0, ..., N−1 and x̄(k) := x(k). Let X_T ⊆ X with 0 ∈ int(X_T) denote a target set and define the following set of feasible input sequences:

U_N(x(k)) := { ū_{[N−1]}(k) ∈ U^N | x̄(k+i) ∈ X, i = 1, ..., N−1, x̄(k+N) ∈ X_T, x̄(k) := x(k), ∀w̄_{[N−1]}(k) ∈ W^N, ∀v̄_{[N−1]}(k) ∈ V^N }.

Problem 5.2.3 Let X_T ⊆ X and N ∈ Z_{≥1} be given. At time k ∈ Z_+ let x(k) ∈ X be given and infimize

sup_{w̄_{[N−1]}(k) ∈ W^N, v̄_{[N−1]}(k) ∈ V^N} J(x(k), ū_{[N−1]}(k), w̄_{[N−1]}(k), v̄_{[N−1]}(k))

over all input sequences ū_{[N−1]}(k) ∈ U_N(x(k)).

Assuming the infimum in Problem 6.4.2 exists and can be attained, the MPC control law is obtained as u_MPC(x(k)) := ū*(k), where * denotes the optimum². Next, we summarize a priori sufficient conditions for guaranteeing robust stability of system (6.4) in closed-loop with u(k) = u_MPC(x(k)), k ∈ Z_+, that were presented in detail in the previous chapter. Let h : R^n → R^m denote an auxiliary control law (ACL) with h(0) = 0 and let X_U := {x ∈ X | h(x) ∈ U}.

Assumption 5.2.4 There exist functions α_1, α_2, α_3 ∈ K∞ and σ ∈ K such that:
(i) X_T ⊆ X_U;
(ii) X_T is a RPI set for system (6.4) in closed-loop with u(k) = h(x(k)), k ∈ Z_+;

² If the infimum does not exist, one has to resort to ISS results for sub-optimal solutions, see, e.g., the results presented in Chapter 3 of the thesis.

(iii) $L(x, u) \geq \alpha_1(\|x\|)$ for all $x \in \mathbb{X}$ and all $u \in \mathbb{U}$; (iv) $\alpha_2(\|x\|) \leq F(x) \leq \alpha_3(\|x\|)$ for all $x \in \mathbb{X}_T$; (v) $F(\phi(x, h(x), w, v)) - F(x) \leq -L(x, h(x)) + \sigma(\|v\|)$ for all $x \in \mathbb{X}_T$, $w \in \mathbb{W}$, $v \in \mathbb{V}$.

In (Magni et al., 2006) and Chapter 3 of this thesis it was shown that Assumption 5.2.4 is sufficient for guaranteeing ISS of the MPC closed-loop system corresponding to Problem 5.2.3. Notice that although in Problem 5.2.3 we have presented the open-loop formulation of min-max MPC for simplicity of exposition, Assumption 5.2.4 is also sufficient for guaranteeing ISS for feedback min-max variants of Problem 5.2.3; see (Magni et al., 2006) and Chapter 3 for the details.

Remark 5.2.5 The sufficient ISS conditions of Assumption 5.2.4 extend to robust MPC the well-known terminal cost and constraint set stabilization conditions for nominal MPC, see A1-A4 in (Mayne et al., 2000). While the stabilization conditions for MPC (Mayne et al., 2000) require that the terminal cost is a local Lyapunov function for the system in closed-loop with an ACL, Assumption 5.2.4 requires in a similar manner that the terminal cost is a local ISS Lyapunov function (Jiang and Wang, 2001) for the system in closed-loop with an ACL.

5.3 Problem formulation

For a given stage cost $L(\cdot, \cdot)$, to employ Assumption 5.2.4 for setting up robust MPC schemes with an a priori ISS guarantee (or to compute state feedback controllers that achieve local ISS), one needs systematic methods for computing a terminal cost $F(\cdot)$, a terminal set $\mathbb{X}_T$ and an ACL $h(\cdot)$ that satisfy Assumption 5.2.4. Once $F(\cdot)$ and $h(\cdot)$ are known, several methods are available for calculating the maximal RPI set contained in $\mathbb{X}_U$ for certain relevant subclasses of system (5.2) in closed-loop with $u(k) = h(x(k))$, $k \in \mathbb{Z}_+$; see, for example, (Kolmanovsky and Gilbert, 1998; Alessio et al., 2007) and the references therein. Therefore, we focus on solving the following problem.
Problem 5.3.1 Calculate $F(\cdot)$ and $h(\cdot)$ such that Assumption 5.2.4-(v) holds.

This problem comes down to computing an input-to-state stabilizing state feedback given by $h(\cdot)$ along with an ISS Lyapunov function (i.e., $F(\cdot)$) for

system (5.2) in closed-loop with the ACL. This is a non-trivial problem, which depends on the type of MPC cost, the system class and the type of candidate ISS Lyapunov function $F(\cdot)$. Furthermore, it would be desirable that the MPC cost function is continuous and convex.

5.3.1 Existing solutions

Several solutions have been presented for the considered problem for particular subclasses of system (5.2). Most methods consider quadratic cost functions, $F(x) := x^\top P x$, $P \succ 0$, $L(x, u) = x^\top Q x + u^\top R u$, $Q, R \succ 0$, and linear state feedback ACLs given by $h(x) := Kx$.

(i) The nominal linear case: $\phi(x, u, 0, 0) := Ax + Bu$, $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$. In (Scokaert and Rawlings, 1998) it was proven that the solutions of the unconstrained infinite horizon linear quadratic regulation problem with weights $Q$, $R$ satisfy Assumption 5.2.4-(v), i.e. $K = -(R + B^\top P B)^{-1} B^\top P A$ and
\[ P = (A + BK)^\top P (A + BK) + K^\top R K + Q. \quad (5.3) \]
Numerically, this method amounts to solving the discrete-time Riccati equation (5.3).

(ii) The linear case with parametric disturbances: $\phi(x, u, w, 0) := A(w)x + B(w)u$, where $A(w) \in \mathbb{R}^{n \times n}$, $B(w) \in \mathbb{R}^{n \times m}$ are affine functions of $w \in \mathbb{W}$ with $\mathbb{W}$ a compact polyhedron. In (Kothare et al., 1996) it was proven that $P = Z^{-1}$ and $K = YZ^{-1}$ satisfy Assumption 5.2.4-(v), where $Z \in \mathbb{R}^{n \times n}$ and $Y \in \mathbb{R}^{m \times n}$ are solutions of the linear matrix inequality
\[
\begin{pmatrix}
Z & (A(w^i)Z + B(w^i)Y)^\top & (R^{\frac{1}{2}}Y)^\top & (Q^{\frac{1}{2}}Z)^\top \\
A(w^i)Z + B(w^i)Y & Z & 0 & 0 \\
R^{\frac{1}{2}}Y & 0 & I & 0 \\
Q^{\frac{1}{2}}Z & 0 & 0 & I
\end{pmatrix} \succeq 0, \quad i = 1, \ldots, E,
\]
with $w^1, \ldots, w^E$ the vertices of the polytope $\mathbb{W}$. Numerically, this method amounts to solving a semidefinite programming problem. This solution trivially applies also to case (i) and, moreover, it was extended to piecewise affine discrete-time hybrid systems in (Lazar et al., 2006).
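Case (i) is straightforward to reproduce numerically. The sketch below (Python, with made-up system matrices that are not from the chapter, and with the gain written for the convention $u(k) = Kx(k)$) solves the Riccati equation (5.3) by fixed-point iteration and checks that the resulting closed loop is Schur stable:

```python
import numpy as np

# Illustrative double-integrator data (hypothetical, not from the chapter).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Fixed-point iteration on the discrete-time Riccati equation (5.3):
# P = (A+BK)' P (A+BK) + K'RK + Q with K = -(R + B'PB)^{-1} B'PA.
P = np.eye(2)
for _ in range(500):
    K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    Acl = A + B @ K
    P_next = Acl.T @ P @ Acl + K.T @ R @ K + Q
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

# With F(x) = x'Px and h(x) = Kx, the nominal counterpart of
# Assumption 5.2.4-(v) holds: F(Acl x) - F(x) = -x'(Q + K'RK)x.
rho = max(abs(np.linalg.eigvals(A + B @ K)))
print(rho < 1.0)  # the closed loop is Schur stable
```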

(iii) The nonlinear case with additive disturbances: $\phi(x, u, 0, v) = f(x) + g_1(x)u + g_2(x)v$ with suitably defined functions $f(\cdot)$, $g_1(\cdot)$ and $g_2(\cdot)$. A nonlinear ACL given by $h(x)$ was constructed in (Magni et al., 2003) using linearization of the system, so that Assumption 5.2.4-(v) holds for all states in a sufficiently small sublevel set of $V(x) = x^\top P x$, $P \succ 0$. Numerically, this method amounts to solving a discrete-time $H_\infty$ Riccati equation. For the linear case with additive disturbances (i.e. $f(x) = Ax$, $g_1(x) = B$ and $g_2(x) = B_1$), it is worth pointing out that an LMI-based design method to obtain the terminal cost, for a given ACL, was presented in (Alamo et al., 2005).

5.4 Main results

In this section we derive a novel LMI-based solution to the problem of finding a suitable terminal cost and ACL that applies to linear systems affected by both parametric and additive disturbances, i.e.
\[ x(k+1) = \phi(x(k), u(k), w(k), v(k)) := A(w(k))x(k) + B(w(k))u(k) + B_1(w(k))v(k), \quad (5.4) \]
where $A(w) \in \mathbb{R}^{n \times n}$, $B(w) \in \mathbb{R}^{n \times m}$, $B_1(w) \in \mathbb{R}^{n \times d_v}$ are affine functions of $w$. We will also consider quadratic cost functions, $F(x) := x^\top P x$, $P \succ 0$, $L(x, u) = x^\top Q x + u^\top R u$, $Q, R \succ 0$, and linear state feedback ACLs given by $h(x) := Kx$.

5.4.1 LMI-based solution

Consider the linear matrix inequalities
\[
\begin{pmatrix}
Z & 0 & (A(w^i)Z + B(w^i)Y)^\top & (R^{\frac{1}{2}}Y)^\top & (Q^{\frac{1}{2}}Z)^\top \\
0 & \tau I & B_1(w^i)^\top & 0 & 0 \\
A(w^i)Z + B(w^i)Y & B_1(w^i) & Z & 0 & 0 \\
R^{\frac{1}{2}}Y & 0 & 0 & I & 0 \\
Q^{\frac{1}{2}}Z & 0 & 0 & 0 & I
\end{pmatrix} \succeq 0, \quad i = 1, \ldots, E, \quad (5.5)
\]
where $w^1, \ldots, w^E$ are the vertices of the polytope $\mathbb{W}$, $Q \in \mathbb{R}^{n \times n}$ and $R \in \mathbb{R}^{m \times m}$ are known positive definite and symmetric matrices, and $Z \in \mathbb{R}^{n \times n}$, $Y \in \mathbb{R}^{m \times n}$ and $\tau \in \mathbb{R}_{>0}$ are the unknowns.

Theorem 5.4.1 Suppose that the LMIs (5.5) are feasible and let $Z$, $Y$ and $\tau$ be a solution with $Z \succ 0$, $\tau \in \mathbb{R}_{>0}$. Then, the terminal cost $F(x) = x^\top P x$, the stage cost $L(x, u) = x^\top Q x + u^\top R u$ and the ACL $h(x) = Kx$ with

$P := Z^{-1}$ and $K := YZ^{-1}$ satisfy Assumption 5.2.4-(v) with $\sigma(\|v\|) := \tau\|v\|_2^2 = \tau v^\top v$.

Proof: For brevity, let $\Delta(w^i)$ denote the matrix on the left-hand side of (5.5). Using $\mathbb{W} = \operatorname{Co}\{w^1, \ldots, w^E\}$ (where $\operatorname{Co}\{\cdot\}$ denotes the convex hull) and the fact that $A(w)$, $B(w)$ and $B_1(w)$ are affine functions of $w$, it is trivial to observe that if (5.5) holds for all vertices $w^1, \ldots, w^E$ of $\mathbb{W}$, then $\Delta(w) \succeq 0$ holds for all $w \in \mathbb{W}$. Applying the Schur complement to $\Delta(w) \succeq 0$ (pivoting after $\operatorname{diag}(Z, I, I)$) and letting $M(w) := A(w)Z + B(w)Y$ yields the equivalent matrix inequalities:
\[
\begin{pmatrix}
Z - M(w)^\top Z^{-1} M(w) - ZQZ - Y^\top RY & -M(w)^\top Z^{-1} B_1(w) \\
-B_1(w)^\top Z^{-1} M(w) & \tau I - B_1(w)^\top Z^{-1} B_1(w)
\end{pmatrix} \succeq 0
\]
and $Z \succ 0$. Letting $A_{cl}(w) := A(w) + B(w)K$, substituting $Z = P^{-1}$ and $Y = KP^{-1}$, and performing a congruence transformation on the above matrix inequality with $\operatorname{diag}(P, I)$ yields the equivalent matrix inequalities:
\[
\begin{pmatrix}
P - A_{cl}(w)^\top P A_{cl}(w) - Q - K^\top RK & -A_{cl}(w)^\top P B_1(w) \\
-B_1(w)^\top P A_{cl}(w) & \tau I - B_1(w)^\top P B_1(w)
\end{pmatrix} \succeq 0
\]
and $P \succ 0$. Pre-multiplying with $(x^\top \; v^\top)$ and post-multiplying with $(x^\top \; v^\top)^\top$ the above matrix inequality yields the equivalent inequality:
\[
(A_{cl}(w)x + B_1(w)v)^\top P (A_{cl}(w)x + B_1(w)v) - x^\top P x \leq -x^\top(Q + K^\top RK)x + \tau v^\top v,
\]
for all $x \in \mathbb{R}^n$ and all $v \in \mathbb{R}^{d_v}$. Hence, Assumption 5.2.4-(v) holds with $\sigma(\|v\|) = \tau\|v\|_2^2$.

Remark 5.4.2 In (Lazar et al., 2008a) the authors established an explicit relation between the gain $\tau \in \mathbb{R}_{>0}$ of the function $\sigma(\cdot)$ and the ISS gain of the corresponding closed-loop MPC system. Thus, since $\tau$ enters (5.5) linearly, one can minimize over $\tau$ subject to the LMIs (5.5), leading to a smaller ISS gain from $v$ to $x$.

5.4.2 Relation to LMI-based $H_\infty$ control design

In this section we formalize the relation between the considered robust MPC design problem and $H_\infty$ design for linear systems. But first, we briefly recall the $H_\infty$ design procedure for the discrete-time linear system (5.4).

For simplicity, we remove the parametric disturbance $w$ and consider only additive disturbances $v \in \mathbb{V}$. However, the results derived below that relate to the optimal $H_\infty$ gain also hold if parametric disturbances are considered, in the sense of an optimal $H_\infty$ gain for linear parameter-varying systems. Consider the system corresponding to (5.4) without parametric uncertainties, i.e.
\[ x(k+1) = Ax(k) + Bu(k) + B_1 v(k), \quad z(k) = Cx(k) + Du(k) + D_1 v(k), \quad (5.6) \]
where we added the performance output $z \in \mathbb{R}^{d_z}$. Using the results of (Kaminer et al., 1993), (Chen and Scherer, 2006b) it can be demonstrated that system (5.6) in closed-loop with $u(k) = h(x(k)) = Kx(k)$, $k \in \mathbb{Z}_+$, has an $H_\infty$ gain less than $\sqrt{\gamma}$ if and only if there exists a symmetric matrix $P$ such that:
\[
\begin{pmatrix}
P & 0 & (A + BK)^\top P & (C + DK)^\top \\
0 & \gamma I & B_1^\top P & D_1^\top \\
P(A + BK) & PB_1 & P & 0 \\
C + DK & D_1 & 0 & I
\end{pmatrix} \succ 0. \quad (5.7)
\]
Letting $Z = P^{-1}$, $Y = KP^{-1}$ and performing a congruence transformation using $\operatorname{diag}(Z, I, Z, I)$ one obtains the equivalent LMI:
\[
\begin{pmatrix}
Z & 0 & (AZ + BY)^\top & (CZ + DY)^\top \\
0 & \gamma I & B_1^\top & D_1^\top \\
AZ + BY & B_1 & Z & 0 \\
CZ + DY & D_1 & 0 & I
\end{pmatrix} \succ 0. \quad (5.8)
\]
Indeed, from the above inequalities, where $V(x) := x^\top P x$, one obtains the dissipation inequality:
\[ V(x(k+1)) - V(x(k)) \leq -\|z(k)\|_2^2 + \gamma\|v(k)\|_2^2. \quad (5.9) \]
Hence, for $x(0) = 0$ we can infer that $\sum_{i=0}^{\infty} \|z(i)\|_2^2 \leq \gamma \sum_{i=0}^{\infty} \|v(i)\|_2^2$ and conclude that the $H_\infty$ norm of the system is not greater than $\sqrt{\gamma}$. Minimizing $\gamma$ subject to the above LMI yields the optimal $H_\infty$ gain as the square root of the optimum.

Remark 5.4.3 In (Kaminer et al., 1993), (Chen and Scherer, 2006b) an equivalent formulation of the matrix inequality (5.7) is used, i.e. with $\gamma I$ in the south-east corner of (5.7)-(5.8) instead of $I$, which leads to the adapted

dissipation inequality $V(x(k+1)) - V(x(k)) \leq -\gamma^{-1}\|z(k)\|_2^2 + \gamma\|v(k)\|_2^2$. Then, by minimizing over $\gamma$ subject to the LMIs (5.8), one obtains the optimal $H_\infty$ gain directly as the optimal solution, without having to take the square root. However, regardless of which LMI set-up is employed, the resulting optimal $H_\infty$ gain and corresponding controller (defined by the gain $K$) are the same, with the storage functions $V(x) = x^\top P x$ differing only by a factor $\gamma$.

Theorem 5.4.4 Suppose that the LMIs (5.5) without parametric uncertainties and (5.8) with $C = \begin{pmatrix} Q^{\frac{1}{2}} \\ 0 \end{pmatrix}$, $D = \begin{pmatrix} 0 \\ R^{\frac{1}{2}} \end{pmatrix}$ and $D_1 = 0$ are feasible for system (5.6). Then the following statements are equivalent:

1. $Z$, $Y$ and $\tau$ are a solution of (5.5);

2. $Z$, $Y$ and $\gamma$ are a solution of (5.8) with $C = \begin{pmatrix} Q^{\frac{1}{2}} \\ 0 \end{pmatrix}$, $D = \begin{pmatrix} 0 \\ R^{\frac{1}{2}} \end{pmatrix}$ and $D_1 = 0$;

3. System (5.6) in closed-loop with $u(k) = Kx(k)$ and $K = YZ^{-1}$ satisfies the dissipation inequality (5.9) with storage function $V(x) = x^\top P x$ and $P = Z^{-1}$, and it has an $H_\infty$ norm less than $\sqrt{\gamma} = \sqrt{\tau}$;

4. Assumption 5.2.4-(v) holds for $F(x) = x^\top P x$, $L(x, u) = x^\top Q x + u^\top R u$ and $h(x) = Kx$, with $P = Z^{-1}$, $K = YZ^{-1}$ and $\sigma(\|v\|) = \tau\|v\|_2^2 = \gamma\|v\|_2^2$.

The proof of Theorem 5.4.4 is trivially obtained by replacing $C$, $D$ and $D_1$ in (5.8) and (5.9), respectively, and using Theorem 5.4.1 and the results of (Kaminer et al., 1993), (Chen and Scherer, 2006b). Theorem 5.4.4 establishes that the LMI-based solution for solving Problem 5.3.1 proposed in this chapter guarantees an $H_\infty$ gain equal to the square root of the gain $\tau = \gamma$ of the $\sigma(\cdot)$ function for the system in closed-loop with the ACL. It also shows that the optimal $H_\infty$ control law obtained by minimizing $\gamma = \tau$ subject to (5.8) (for a particular performance output related to the MPC cost) solves the terminal cost and ACL problem in min-max robust MPC. These results establish an intimate connection between $H_\infty$ design and min-max MPC, in a similar way as LQR design is connected to nominally stabilizing MPC.
This connection is instrumental in improving the ISS gain of min-max MPC closed-loop systems as follows: an optimal gain $\tau = \gamma$ of the $\sigma(\cdot)$ function results in a smaller gain of the function $\gamma(\cdot)$ of Definition 5.2.2 for the MPC closed-loop system, as demonstrated in the previous chapter.
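The dissipation-type inequality behind Theorem 5.4.1 and Theorem 5.4.4 can be checked numerically without an LMI solver. The sketch below (Python, with hypothetical system matrices; the chapter itself obtains $P$, $K$, $\tau$ from the LMIs (5.5)) builds a $P$, $K$ pair from a slightly inflated Riccati equation so that a deliberate slack remains, picks $\tau$ via the Schur-complement argument from the proof of Theorem 5.4.1, and then samples the inequality of Assumption 5.2.4-(v):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical data (not from the chapter).
A = np.array([[1.0, 0.2], [0.0, 0.9]])
B = np.array([[0.0], [0.2]])
B1 = np.array([[0.1], [0.05]])
Q, R = np.eye(2), np.eye(1)

# Solve P = Acl'P Acl + K'RK + (Q + I) by iteration, so that
# P - Acl'P Acl - Q - K'RK = I leaves slack for the disturbance terms.
P = np.eye(2)
for _ in range(2000):
    K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    Acl = A + B @ K
    P = Acl.T @ P @ Acl + K.T @ R @ K + Q + np.eye(2)

# Schur-complement choice of tau (cf. the proof of Theorem 5.4.1).
S12 = Acl.T @ P @ B1
tau = float(np.max(np.linalg.eigvalsh(B1.T @ P @ B1 + S12.T @ S12))) + 1e-9

# Sample Assumption 5.2.4-(v):
# F(Acl x + B1 v) - F(x) <= -x'(Q + K'RK)x + tau v'v.
ok = True
for _ in range(200):
    x = rng.normal(size=(2, 1)); v = rng.normal(size=(1, 1))
    xp = Acl @ x + B1 @ v
    lhs = float(xp.T @ P @ xp - x.T @ P @ x)
    rhs = float(-x.T @ (Q + K.T @ R @ K) @ x + tau * v.T @ v)
    ok &= lhs <= rhs + 1e-8
print(ok)
```

Minimizing $\tau$ subject to (5.5) with a semidefinite programming solver would of course give a smaller, certified gain; the sketch only verifies the inequality for one feasible triple.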

5.5 Conclusions

In this chapter we proposed a novel LMI-based solution to the terminal cost and auxiliary control law problem in min-max robust MPC. The developed conditions apply to a more general class of systems than previously considered, namely linear systems affected by both parametric and additive disturbances. Since LMIs can be solved efficiently, the proposed method is computationally attractive. Furthermore, we have established an intimate connection between the proposed LMIs and the optimal $H_\infty$ control law. This result, which was missing in the MPC literature, adds to the well-known connection between the design of nominally stabilizing MPC schemes and the optimal solution of the LQR problem. Such results are of general interest as they connect well-known linear control problems to MPC design.


6 Self-optimizing robust nonlinear MPC

6.1 Introduction
6.2 Preliminary definitions and results
6.3 Problem definition
6.4 Main results
6.5 Illustrative examples
6.6 Conclusions

This chapter presents a novel method for designing robust MPC schemes that are self-optimizing in terms of disturbance attenuation. The method employs convex control Lyapunov functions and disturbance bounds to optimize robustness of the closed-loop system on-line, at each sampling instant - a unique feature in MPC. Moreover, the proposed MPC algorithm is computationally efficient for nonlinear systems that are affine in the control input and it allows for a decentralized implementation.

6.1 Introduction

Robustness of nonlinear model predictive controllers has been one of the most relevant and challenging problems within MPC, see, e.g., (Mayne et al., 2000; Lazar et al., 2007a; Magni and Scattolini, 2007; Mayne and Kerrigan, 2007; Raković, 2008). From a conceptual point of view, three main categories of robust nonlinear MPC schemes can be identified, each with its pros and cons: inherently robust, tightened constraints and min-max MPC schemes, respectively. In all these approaches, the input-to-state stability (ISS) property (Sontag, 1989) has been employed as a theoretical tool for characterizing robustness, or robust stability.¹ The goal of the existing design methods for synthesizing control laws that achieve ISS (Sontag, 1999; Jiang and Wang, 2001; Kokotović and Arcak, 2001) is to a priori guarantee a predetermined closed-loop ISS gain. Consequently, the ISS property, with a predetermined, constant ISS gain, is

¹ Other characterizations of robustness used in MPC, such as ultimate boundedness or stability of a robustly positively invariant set, can be recovered as a particular case of ISS or shown to be related.

in this way enforced for all state-space trajectories of the closed-loop system and at all time instants. As the existing approaches, which are also employed in the design of MPC schemes that achieve ISS, can lead to overly conservative solutions along particular trajectories, it is of high interest to develop a control (MPC) design method with the explicit goal of adapting the closed-loop ISS gain depending on the evolution of the state trajectory. In this chapter we present a novel method for synthesizing robust MPC schemes with this feature. The method employs convex control Lyapunov functions (CLFs) and disturbance bounds to embed the standard ISS conditions of (Jiang and Wang, 2001) using a finite number of inequalities. This leads to a finite-dimensional optimization problem that has to be solved on-line, in a receding horizon fashion. The proposed inequalities govern the evolution of the closed-loop state trajectory through the sublevel sets of the CLF. The unique feature of the proposed robust MPC scheme is to allow for the simultaneous on-line (i) computation of a control action that achieves ISS and (ii) minimization of the closed-loop ISS gain depending on the actual state trajectory. As a result, the developed nonlinear MPC scheme is self-optimizing in terms of disturbance attenuation. From the computational point of view, following a particular design recipe, the self-optimizing robust MPC algorithm can be implemented as a single linear program for discrete-time nonlinear systems that are affine in the control variable and the disturbance input. Furthermore, we demonstrate that the freedom to optimize the closed-loop ISS gain on-line makes self-optimizing robust MPC suitable for decentralized control of networks of nonlinear systems.
6.2 Preliminary definitions and results

Let $\mathbb{R}$, $\mathbb{R}_+$, $\mathbb{Z}$ and $\mathbb{Z}_+$ denote the field of real numbers, the set of non-negative reals, the set of integer numbers and the set of non-negative integers, respectively. We use the notation $\mathbb{Z}_{\geq c_1}$ and $\mathbb{Z}_{(c_1, c_2]}$ to denote the sets $\{k \in \mathbb{Z}_+ \mid k \geq c_1\}$ and $\{k \in \mathbb{Z}_+ \mid c_1 < k \leq c_2\}$, respectively, for some $c_1, c_2 \in \mathbb{Z}_+$. For a set $\mathcal{S} \subseteq \mathbb{R}^n$, we denote by $\operatorname{int}(\mathcal{S})$ the interior of $\mathcal{S}$. For two arbitrary sets $\mathcal{S} \subseteq \mathbb{R}^n$ and $\mathcal{P} \subseteq \mathbb{R}^n$, let $\mathcal{S} \ominus \mathcal{P} := \{x \in \mathbb{R}^n \mid x + \mathcal{P} \subseteq \mathcal{S}\}$ denote their Pontryagin difference. A polyhedron (or a polyhedral set) in $\mathbb{R}^n$ is a set obtained as the intersection of a finite number of open and/or closed half-spaces. The Hölder $p$-norm of a vector $x \in \mathbb{R}^n$ is defined as $\|x\|_p := (|[x]_1|^p + \ldots + |[x]_n|^p)^{\frac{1}{p}}$ for $p \in \mathbb{Z}_{[1,\infty)}$ and $\|x\|_\infty := \max_{i=1,\ldots,n} |[x]_i|$, where $[x]_i$, $i = 1, \ldots, n$, is the $i$-th component of $x$ and $|\cdot|$ is the absolute value. For a matrix $M \in \mathbb{R}^{m \times n}$, let $\|M\|_p := \sup_{x \neq 0} \frac{\|Mx\|_p}{\|x\|_p}$ denote its corresponding

induced matrix norm. Then $\|M\|_\infty = \max_{1 \leq i \leq m} \sum_{j=1}^{n} |[M]_{ij}|$, where $[M]_{ij}$ is the $ij$-th entry of $M$. Let $z := \{z(l)\}_{l \in \mathbb{Z}_+}$ with $z(l) \in \mathbb{R}^o$ for all $l \in \mathbb{Z}_+$ denote an arbitrary sequence. Define $\|z\| := \sup\{\|z(l)\| \mid l \in \mathbb{Z}_+\}$, where $\|\cdot\|$ denotes an arbitrary $p$-norm, and $z_{[k]} := \{z(l)\}_{l \in \mathbb{Z}_{[0,k]}}$. A function $\varphi : \mathbb{R}_+ \to \mathbb{R}_+$ belongs to class $\mathcal{K}$ if it is continuous, strictly increasing and $\varphi(0) = 0$. A function $\varphi : \mathbb{R}_+ \to \mathbb{R}_+$ belongs to class $\mathcal{K}_\infty$ if $\varphi \in \mathcal{K}$ and $\lim_{s \to \infty} \varphi(s) = \infty$. A function $\beta : \mathbb{R}_+ \times \mathbb{R}_+ \to \mathbb{R}_+$ belongs to class $\mathcal{KL}$ if for each fixed $k \in \mathbb{R}_+$, $\beta(\cdot, k) \in \mathcal{K}$ and for each fixed $s \in \mathbb{R}_+$, $\beta(s, \cdot)$ is decreasing and $\lim_{k \to \infty} \beta(s, k) = 0$.

6.2.1 ISS definitions and results

Consider the discrete-time nonlinear system
\[ x(k+1) \in \Phi(x(k), w(k)), \quad k \in \mathbb{Z}_+, \quad (6.1) \]
where $x(k) \in \mathbb{R}^n$ is the state and $w(k) \in \mathbb{R}^l$ is an unknown disturbance input at the discrete-time instant $k$. The mapping $\Phi : \mathbb{R}^n \times \mathbb{R}^l \rightrightarrows \mathbb{R}^n$ is an arbitrary nonlinear set-valued function. We assume that $\Phi(0, 0) = \{0\}$. Let $\mathbb{W}$ be a subset of $\mathbb{R}^l$.

Definition 6.2.1 We call a set $\mathcal{P} \subseteq \mathbb{R}^n$ robustly positively invariant (RPI) for system (6.1) with respect to $\mathbb{W}$ if for all $x \in \mathcal{P}$ it holds that $\Phi(x, w) \subseteq \mathcal{P}$ for all $w \in \mathbb{W}$.

Definition 6.2.2 Let $\mathbb{X}$ with $0 \in \operatorname{int}(\mathbb{X})$ and $\mathbb{W}$ be subsets of $\mathbb{R}^n$ and $\mathbb{R}^l$, respectively. We call system (6.1) ISS($\mathbb{X}$, $\mathbb{W}$) if there exist a $\mathcal{KL}$-function $\beta(\cdot, \cdot)$ and a $\mathcal{K}$-function $\gamma(\cdot)$ such that, for each $x(0) \in \mathbb{X}$ and all $w = \{w(l)\}_{l \in \mathbb{Z}_+}$ with $w(l) \in \mathbb{W}$ for all $l \in \mathbb{Z}_+$, it holds that all corresponding state trajectories of (6.1) satisfy $\|x(k)\| \leq \beta(\|x(0)\|, k) + \gamma(\|w_{[k-1]}\|)$, $k \in \mathbb{Z}_{\geq 1}$. We call the function $\gamma(\cdot)$ an ISS gain of system (6.1).

Theorem 6.2.3 Let $\mathbb{W}$ be a subset of $\mathbb{R}^l$ and let $\mathbb{X} \subseteq \mathbb{R}^n$ be a RPI set for (6.1) with respect to $\mathbb{W}$, with $0 \in \operatorname{int}(\mathbb{X})$. Furthermore, let $\alpha_1(s) := as^\delta$, $\alpha_2(s) := bs^\delta$, $\alpha_3(s) := cs^\delta$ for some $a, b, c, \delta \in \mathbb{R}_{>0}$, $\sigma \in \mathcal{K}$ and let $V : \mathbb{R}^n \to \mathbb{R}_+$ be a function such that:
\[ \alpha_1(\|x\|) \leq V(x) \leq \alpha_2(\|x\|), \quad (6.2a) \]
\[ V(x^+) - V(x) \leq -\alpha_3(\|x\|) + \sigma(\|w\|) \quad (6.2b) \]

for all $x \in \mathbb{X}$, $w \in \mathbb{W}$ and all $x^+ \in \Phi(x, w)$. Then the system (6.1) is ISS($\mathbb{X}$, $\mathbb{W}$) with
\[ \beta(s, k) := \alpha_1^{-1}\big(2\rho^k \alpha_2(s)\big), \quad \gamma(s) := \alpha_1^{-1}\!\left(\frac{2\sigma(s)}{1 - \rho}\right), \quad \rho := 1 - \frac{c}{b} \in [0, 1). \quad (6.3) \]
If inequality (6.2b) holds for $w = 0$, then the 0-input system $x(k+1) \in \Phi(x(k), 0)$, $k \in \mathbb{Z}_+$, is asymptotically stable in $\mathbb{X}$.

The proof of Theorem 6.2.3 is similar in nature to the proof given in (Jiang and Wang, 2001; Lazar et al., 2008a), obtained by replacing the difference equation with the difference inclusion (6.1).

6.2.2 Inherent ISS through continuous and convex control Lyapunov functions

Consider the discrete-time constrained nonlinear system
\[ x(k+1) = \phi(x(k), u(k), w(k)) := f(x(k), u(k)) + g(x(k))w(k), \quad k \in \mathbb{Z}_+, \quad (6.4) \]
where $x(k) \in \mathbb{X} \subseteq \mathbb{R}^n$ is the state, $u(k) \in \mathbb{U} \subseteq \mathbb{R}^m$ is the control action and $w(k) \in \mathbb{W} \subseteq \mathbb{R}^l$ is an unknown disturbance input at the discrete-time instant $k$. $\phi : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^l \to \mathbb{R}^n$, $f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^n$ and $g : \mathbb{R}^n \to \mathbb{R}^{n \times l}$ are arbitrary nonlinear functions with $\phi(0, 0, 0) = 0$ and $f(0, 0) = 0$. Note that we allow $g(0) \neq 0$. We assume that $0 \in \operatorname{int}(\mathbb{X})$, $0 \in \operatorname{int}(\mathbb{U})$ and that $\mathbb{W}$ is bounded. We also assume that $\phi(\cdot, \cdot, \cdot)$ is bounded in $\mathbb{X}$. Next, let $\alpha_1, \alpha_2, \alpha_3 \in \mathcal{K}_\infty$ and let $\sigma \in \mathcal{K}$.

Definition 6.2.4 A function $V : \mathbb{R}^n \to \mathbb{R}_+$ that satisfies (6.2a) for all $x \in \mathbb{X}$ is called a control Lyapunov function (CLF) for system $x(k+1) = \phi(x(k), u(k), 0)$, $k \in \mathbb{Z}_+$, if for all $x \in \mathbb{X}$ there exists a $u \in \mathbb{U}$ such that $V(\phi(x, u, 0)) - V(x) \leq -\alpha_3(\|x\|)$.

Problem 6.2.5 Let a CLF $V(\cdot)$ be given. At time $k \in \mathbb{Z}_+$ measure the state $x(k)$ and calculate a control action $u(k)$ that satisfies:
\[ u(k) \in \mathbb{U}, \quad \phi(x(k), u(k), 0) \in \mathbb{X}, \quad (6.5a) \]
\[ V(\phi(x(k), u(k), 0)) - V(x(k)) + \alpha_3(\|x(k)\|) \leq 0. \quad (6.5b) \]

Let $\pi_0(x(k)) := \{u(k) \in \mathbb{R}^m \mid \text{(6.5) holds}\}$ and let
\[ x(k+1) \in \phi_0(x(k), \pi_0(x(k))) := \{f(x(k), u) \mid u \in \pi_0(x(k))\} \]

denote the difference inclusion corresponding to the 0-input system (6.4) in closed-loop with the set of feasible solutions obtained by solving Problem 6.2.5 at each instant $k \in \mathbb{Z}_+$.

Theorem 6.2.6 Let $\alpha_1, \alpha_2, \alpha_3 \in \mathcal{K}_\infty$ of the form specified in Theorem 6.2.3 and a corresponding CLF $V(\cdot)$ be given. Suppose that Problem 6.2.5 is feasible for all states $x$ in $\mathbb{X}$. Then: (i) The difference inclusion
\[ x(k+1) \in \phi_0(x(k), \pi_0(x(k))), \quad k \in \mathbb{Z}_+, \quad (6.6) \]
is asymptotically stable in $\mathbb{X}$; (ii) Consider a perturbed version of (6.6), i.e.
\[ x(k+1) \in \phi_0(\tilde{x}(k), \pi_0(\tilde{x}(k))) + g(\tilde{x}(k))w(k), \quad k \in \mathbb{Z}_+, \quad (6.7) \]
and let $\tilde{\mathbb{X}} \subseteq \mathbb{X}$ be a RPI set for (6.7) with respect to $\mathbb{W}$. If $\tilde{\mathbb{X}}$ is compact, the CLF $V(\cdot)$ is convex and continuous² on $\tilde{\mathbb{X}}$ and there exists an $M \in \mathbb{R}_{>0}$ such that $\|g(x)\| \leq M$ for all $x \in \tilde{\mathbb{X}}$, then system (6.7) is ISS($\tilde{\mathbb{X}}$, $\mathbb{W}$).

Proof: (i) Let $x(k) \in \mathbb{X}$ for some $k \in \mathbb{Z}_+$. Then, feasibility of Problem 6.2.5 ensures that $x(k+1) \in \phi_0(x(k), \pi_0(x(k))) \subseteq \mathbb{X}$ due to constraint (6.5a). Hence, Problem 6.2.5 remains feasible and thus, $\mathbb{X}$ is a PI set for system (6.6). The result then follows directly from Theorem 6.2.3. (ii) By convexity and continuity of $V(\cdot)$ and compactness of $\tilde{\mathbb{X}}$, $V(\cdot)$ is Lipschitz continuous on $\tilde{\mathbb{X}}$ (Wayne S.U., 1972). Hence, letting $L \in \mathbb{R}_{>0}$ denote a Lipschitz constant of $V(\cdot)$ in $\tilde{\mathbb{X}}$, one obtains $V(\phi(x, u, w)) - V(\phi(x, u, 0)) = V(f(x, u) + g(x)w) - V(f(x, u)) \leq LM\|w\|$ for all $x \in \tilde{\mathbb{X}}$ and all $w$. From this property, together with inequality (6.5b), we have that inequality (6.2b) holds with $\sigma(s) := LMs \in \mathcal{K}$. Since $\tilde{\mathbb{X}}$ is an RPI set for (6.7) by the hypothesis, ISS($\tilde{\mathbb{X}}$, $\mathbb{W}$) of the difference inclusion (6.7) follows from Theorem 6.2.3.

6.3 Problem definition

Theorem 6.2.6 establishes that all feasible solutions of Problem 6.2.5 are stabilizing feedback laws which, under additional assumptions, even achieve ISS. However, this inherent ISS property of a feedback law calculated by solving Problem 6.2.5 relies on a fixed, possibly large gain of $\sigma(\cdot)$, which depends on $V(\cdot)$.
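A small numeric sketch (with illustrative constants, not from the text) makes the dependence of the ISS bounds (6.3) on the gain of $\sigma(\cdot)$ concrete: for $\alpha_1(s) = as^\delta$, $\alpha_2(s) = bs^\delta$, $\alpha_3(s) = cs^\delta$ and $\sigma(s) = \eta s$, both $\beta$ and $\gamma$ are available in closed form, and shrinking $\eta$ directly shrinks the ISS gain $\gamma(\cdot)$:

```python
# Illustrative constants only (not from the text): alpha_1(s) = a*s**d, etc.
a, b, c, d = 1.0, 4.0, 1.0, 2.0
rho = 1.0 - c / b                        # rho = 1 - c/b, must lie in [0, 1)

def beta(s, k):
    # beta(s, k) = alpha_1^{-1}(2 rho**k alpha_2(s)), cf. (6.3)
    return (2.0 * rho ** k * b * s ** d / a) ** (1.0 / d)

def gamma(s, eta):
    # gamma(s) = alpha_1^{-1}(2 sigma(s) / (1 - rho)) with sigma(s) = eta * s
    return (2.0 * eta * s / ((1.0 - rho) * a)) ** (1.0 / d)

print(0.0 <= rho < 1.0)                   # valid contraction factor
print(beta(1.0, 10) < beta(1.0, 0))       # transient bound decays in k
print(gamma(1.0, 0.1) < gamma(1.0, 1.0))  # smaller eta => smaller ISS gain
```

This is exactly why optimizing the gain of $\sigma(\cdot)$ on-line, rather than fixing it a priori, pays off in terms of disturbance attenuation.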
This gain is explicitly related to the ISS gain of the closed-loop system via (6.3). To optimize disturbance attenuation for the closed-loop system, at each time instant $k \in \mathbb{Z}_+$ and for a given $x(k) \in \mathbb{X}$, it

² Continuity of $V(\cdot)$ alone is sufficient, but it requires a somewhat more complex proof.

would be desirable to simultaneously (i) compute a control action $u(k) \in \mathbb{U}$ that satisfies
\[ V(\phi(x(k), u(k), w(k))) - V(x(k)) + \alpha_3(\|x(k)\|) - \sigma(\|w(k)\|) \leq 0, \quad \forall w(k) \in \mathbb{W}, \quad (6.8) \]
for some function $\sigma(s) := \eta(k)s^\delta$, and (ii) minimize $\eta(k)$ ($\eta(k), \delta \in \mathbb{R}_{>0}$, $k \in \mathbb{Z}_+$).

Remark 6.3.1 It is not possible to directly include (6.8) in Problem 6.2.5, as it leads to an infinite-dimensional optimization problem. If $\mathbb{W}$ is a compact polyhedron, a possibility to resolve this issue would be to evaluate the inequality (6.8) only for $w(k)$ taking values in the set of vertices of $\mathbb{W}$. However, this does not guarantee that (6.8) holds for all $w(k) \in \mathbb{W}$, due to the fact that the left-hand term in (6.8) is not necessarily a convex function of $w(k)$, i.e. it contains the difference of two, possibly convex, functions of $w(k)$. This makes the considered problem challenging and interesting.

6.4 Main results

In what follows we present a solution to the problem stated in Section 6.3. More specifically, we demonstrate that by considering continuous and convex CLFs and compact polyhedral sets $\mathbb{X}$, $\mathbb{U}$, $\mathbb{W}$ (that contain the origin in their interior) a solution to inequality (6.8) can be obtained via a finite set of inequalities that only depend on the vertices of $\mathbb{W}$. The standing assumption throughout the remainder of the chapter is that the considered system, i.e. (6.4), is affine in the disturbance input $w$.

6.4.1 Optimized ISS through convex CLFs

Let $w^e$, $e = 1, \ldots, E$, be the vertices of $\mathbb{W}$. Next, consider a finite set of simplices $S_1, \ldots, S_M$ with each simplex $S_i$ equal to the convex hull of a subset of the vertices of $\mathbb{W}$ and the origin, and such that $\bigcup_{i=1}^{M} S_i = \mathbb{W}$. More precisely, $S_i = \operatorname{Co}\{0, w^{e_{i,1}}, \ldots, w^{e_{i,l}}\}$ and $\{w^{e_{i,1}}, \ldots, w^{e_{i,l}}\} \subseteq \{w^1, \ldots, w^E\}$ (i.e. $\{e_{i,1}, \ldots, e_{i,l}\} \subseteq \{1, \ldots, E\}$) with $w^{e_{i,1}}, \ldots, w^{e_{i,l}}$ linearly independent. For each simplex $S_i$ we define the matrix $W_i := [w^{e_{i,1}} \; \ldots \; w^{e_{i,l}}] \in \mathbb{R}^{l \times l}$, which is invertible.
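For a concrete picture (a hypothetical two-dimensional box $\mathbb{W}$, not an example from the chapter), the construction can be checked numerically: every $w \in \mathbb{W}$ falls in some fan simplex $S_i = \operatorname{Co}\{0, w^{e_{i,1}}, w^{e_{i,2}}\}$, and the coefficients $[\mu_1 \; \mu_2]^\top = W_i^{-1}w$ are non-negative with sum at most one.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical box W = [-1, 1]^2, vertices listed counter-clockwise.
verts = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
# Fan of simplices S_i = Co{0, w^{e_i,1}, w^{e_i,2}} over consecutive vertices.
simplices = [np.column_stack((verts[i], verts[(i + 1) % 4])) for i in range(4)]

def decompose(w):
    """Return (i, mu) with w = W_i @ mu, mu >= 0 and sum(mu) <= 1."""
    for i, Wi in enumerate(simplices):
        mu = np.linalg.solve(Wi, w)          # [mu_1 ... mu_l]' = W_i^{-1} w
        if np.all(mu >= -1e-12) and mu.sum() <= 1.0 + 1e-12:
            return i, mu
    raise ValueError("w not in any simplex; W not covered")

ok = True
for _ in range(500):
    w = rng.uniform(-1.0, 1.0, size=2)
    i, mu = decompose(w)
    ok &= np.allclose(simplices[i] @ mu, w)
print(ok)  # every sampled w is covered and exactly reconstructed
```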
Let $\lambda_e(k)$, $k \in \mathbb{Z}_+$, be optimization variables associated with each vertex $w^e$. Let $\alpha_3 \in \mathcal{K}_\infty$, suppose that $x(k)$ at time $k \in \mathbb{Z}_+$ is given and consider the following set of inequalities depending on $u(k)$ and

$\lambda_1(k), \ldots, \lambda_E(k)$:
\[ V(\phi(x(k), u(k), 0)) - V(x(k)) + \alpha_3(\|x(k)\|) \leq 0, \quad (6.9a) \]
\[ V(\phi(x(k), u(k), w^e)) - V(x(k)) + \alpha_3(\|x(k)\|) - \lambda_e(k) \leq 0, \quad e = 1, \ldots, E. \quad (6.9b) \]

Theorem 6.4.1 Let $V(\cdot)$ be a convex CLF. If for $\alpha_3 \in \mathcal{K}_\infty$ and $x(k)$ at time $k \in \mathbb{Z}_+$ there exist $u(k)$ and $\lambda_e(k)$, $e = 1, \ldots, E$, such that (6.9a) and (6.9b) hold, then (6.8) holds for the same $u(k)$, with $\sigma(s) := \eta(k)s$ and
\[ \eta(k) := \max_{i=1,\ldots,M} \|\lambda_i(k) W_i^{-1}\|, \quad (6.10) \]
where $\lambda_i(k) := [\lambda_{e_{i,1}}(k) \; \ldots \; \lambda_{e_{i,l}}(k)] \in \mathbb{R}^{1 \times l}$.

Proof: Let $\alpha_3 \in \mathcal{K}_\infty$ and $x(k)$ be given and suppose (6.9b) holds for some $\lambda_e(k)$, $e = 1, \ldots, E$. Let $w \in \mathbb{W} = \bigcup_{i=1}^{M} S_i$. Hence, there exists an $i$ such that $w \in S_i = \operatorname{Co}\{0, w^{e_{i,1}}, \ldots, w^{e_{i,l}}\}$, which means that there exist non-negative numbers $\mu_0, \mu_1, \ldots, \mu_l$ with $\sum_{j=0}^{l} \mu_j = 1$ such that $w = \sum_{j=1}^{l} \mu_j w^{e_{i,j}} + \mu_0 \cdot 0 = \sum_{j=1}^{l} \mu_j w^{e_{i,j}}$. In matrix notation we have that $w = W_i [\mu_1 \; \ldots \; \mu_l]^\top$ and thus $[\mu_1 \; \ldots \; \mu_l]^\top = W_i^{-1} w$. Multiplying each inequality in (6.9b) corresponding to the index $e_{i,j}$ and the inequality (6.9a) with $\mu_j \geq 0$, $j = 0, 1, \ldots, l$, summing up and using $\sum_{j=0}^{l} \mu_j = 1$ yield:
\[ \mu_0 V(\phi(x(k), u(k), 0)) + \sum_{j=1}^{l} \mu_j V(\phi(x(k), u(k), w^{e_{i,j}})) - V(x(k)) + \alpha_3(\|x(k)\|) - \sum_{j=1}^{l} \mu_j \lambda_{e_{i,j}}(k) \leq 0. \]
Furthermore, using $\phi(x(k), u(k), w^{e_{i,j}}) = f(x(k), u(k)) + g(x(k))w^{e_{i,j}}$, convexity of $V(\cdot)$ and $\sum_{j=0}^{l} \mu_j = 1$ yields
\[ V\Big(\phi\Big(x(k), u(k), \sum_{j=1}^{l} \mu_j w^{e_{i,j}}\Big)\Big) - V(x(k)) + \alpha_3(\|x(k)\|) - \sum_{j=1}^{l} \mu_j \lambda_{e_{i,j}}(k) \leq 0, \]
or equivalently
\[ V(\phi(x(k), u(k), w)) - V(x(k)) + \alpha_3(\|x(k)\|) - \lambda_i(k)[\mu_1 \; \ldots \; \mu_l]^\top \leq 0. \]
Using that $[\mu_1 \; \ldots \; \mu_l]^\top = W_i^{-1} w$, we obtain (6.8) with $w(k) = w$ for $\sigma(s) = \eta(k)s$ and $\eta(k) \geq 0$ as in (6.10).
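Continuing the hypothetical box example (illustrative multiplier values; the norm in (6.10) is taken here as the Euclidean norm purely for illustration), the gain $\eta(k)$ of (6.10) is a one-line computation, and the interpolation bound used in the proof of Theorem 6.4.1, $\lambda_i(k)W_i^{-1}w \leq \eta(k)\|w\|$, can be sampled directly:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical box W = [-1, 1]^2 with fan simplices over consecutive vertices.
verts = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
W_is = [np.column_stack((verts[i], verts[(i + 1) % 4])) for i in range(4)]
lam = np.array([0.3, 0.8, 0.5, 0.2])      # example multipliers lambda_e(k)
lam_rows = [lam[[i, (i + 1) % 4]] for i in range(4)]

# eta(k) = max_i || lambda_i(k) W_i^{-1} ||, cf. (6.10); Euclidean norm here.
eta = max(np.linalg.norm(lr @ np.linalg.inv(Wi))
          for lr, Wi in zip(lam_rows, W_is))

# Sample the bound lambda_i(k) W_i^{-1} w <= eta * ||w|| for w in W
# (it follows from the Cauchy-Schwarz inequality for this norm choice).
ok = True
for _ in range(500):
    w = rng.uniform(-1.0, 1.0, size=2)
    for lr, Wi in zip(lam_rows, W_is):
        ok &= lr @ np.linalg.solve(Wi, w) <= eta * np.linalg.norm(w) + 1e-12
print(ok)
```

In the self-optimizing scheme below, the $\lambda_e(k)$ are decision variables, so minimizing a cost of this form shrinks $\eta(k)$ on-line.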

6.4.2 Self-optimizing robust nonlinear MPC

For any $x \in \mathbb{X}$ let $\mathbb{W}_x := \{g(x)w \mid w \in \mathbb{W}\} \subset \mathbb{R}^n$ (note that $0 \in \mathbb{W}_x$) and assume that $\mathbb{X} \ominus \mathbb{W}_x \neq \emptyset$. Let $\bar{\lambda} := [\lambda_1, \ldots, \lambda_E]$ and let $J(\bar{\lambda}) : \mathbb{R}^E \to \mathbb{R}_+$ be a function that satisfies $\alpha_4(\|\bar{\lambda}\|) \leq J(\bar{\lambda}) \leq \alpha_5(\|\bar{\lambda}\|)$ for some $\alpha_4, \alpha_5 \in \mathcal{K}_\infty$; for example, $J(\bar{\lambda}) := \max_{i=1,\ldots,M} \|\lambda_i W_i^{-1}\|$.

Problem 6.4.2 Let $\alpha_3 \in \mathcal{K}_\infty$, $J(\cdot)$ and a CLF $V(\cdot)$ be given. At time $k \in \mathbb{Z}_+$ measure the state $x(k)$ and minimize the cost $J(\lambda_1(k), \ldots, \lambda_E(k))$ over $u(k), \lambda_1(k), \ldots, \lambda_E(k)$, subject to the constraints
\[ u(k) \in \mathbb{U}, \quad \lambda_e(k) \geq 0, \quad f(x(k), u(k)) \in \mathbb{X} \ominus \mathbb{W}_{x(k)}, \quad (6.11a) \]
\[ V(\phi(x(k), u(k), 0)) - V(x(k)) + \alpha_3(\|x(k)\|) \leq 0, \quad (6.11b) \]
\[ V(\phi(x(k), u(k), w^e)) - V(x(k)) + \alpha_3(\|x(k)\|) - \lambda_e(k) \leq 0, \quad e = 1, \ldots, E. \quad (6.11c) \]

Let $\pi(x(k)) := \{u(k) \in \mathbb{R}^m \mid \text{(6.11) holds}\}$ and let
\[ x(k+1) \in \phi_{cl}(x(k), \pi(x(k)), w(k)) := \{\phi(x(k), u, w(k)) \mid u \in \pi(x(k))\} \]
denote the difference inclusion corresponding to system (6.4) in closed-loop with the set of feasible solutions obtained by solving Problem 6.4.2 at each $k \in \mathbb{Z}_+$.

Theorem 6.4.3 Let $\alpha_1, \alpha_2, \alpha_3 \in \mathcal{K}_\infty$ of the form specified in Theorem 6.2.3, a continuous and convex CLF $V(\cdot)$ and a cost $J(\cdot)$ be given. Suppose that Problem 6.4.2 is feasible for all states $x$ in $\mathbb{X}$. Then the difference inclusion
\[ x(k+1) \in \phi_{cl}(x(k), \pi(x(k)), w(k)), \quad k \in \mathbb{Z}_+, \quad (6.12) \]
is ISS($\mathbb{X}$, $\mathbb{W}$).

Proof: Let $x(k) \in \mathbb{X}$ for some $k \in \mathbb{Z}_+$. Then, feasibility of Problem 6.4.2 ensures that $x(k+1) \in \phi_{cl}(x(k), \pi(x(k)), w(k)) \subseteq \mathbb{X}$ for all $w(k) \in \mathbb{W}$, due to $g(x(k))w(k) \in \mathbb{W}_{x(k)}$ and constraint (6.11a). Hence, Problem 6.4.2 remains feasible and thus, $\mathbb{X}$ is a RPI set with respect to $\mathbb{W}$ for system (6.12). From Theorem 6.4.1 we also have that $V(\cdot)$ satisfies (6.2b) with $\sigma(s) := \eta(k)s$ and $\eta(k)$ as in (6.10). Let
\[ \lambda^* := \sup_{x \in \mathbb{X},\, u \in \mathbb{U},\, e = 1, \ldots, E} \{V(\phi(x, u, w^e)) - V(x) + \alpha_3(\|x\|)\}. \]

As $V(\cdot)$ is upper and lower bounded by $\mathcal{K}_\infty$ functions, due to compactness of $\mathbb{X}$, $\mathbb{U}$ and boundedness of $\phi(\cdot, \cdot, \cdot)$, $\lambda^*$ exists and is finite (the sup above is a max if $\phi(\cdot, \cdot, \cdot)$ is continuous in $x$ and $u$). Hence, inequality (6.11c) is always satisfied for $\lambda_e(k) = \lambda^*$ for all $e = 1, \ldots, E$, $k \in \mathbb{Z}_+$, and for all $x \in \mathbb{X}$, $u \in \mathbb{U}$. This in turn, via (6.10), ensures the existence of an $\bar{\eta} \in \mathbb{R}_{>0}$ such that $\eta(k) \leq \bar{\eta}$ for all $k \in \mathbb{Z}_+$. Hence, we proved that inequality (6.8) holds for all $x \in \mathbb{X}$ and all $w \in \mathbb{W}$. Then, since $\mathbb{X}$ is RPI, ISS($\mathbb{X}$, $\mathbb{W}$) follows directly from Theorem 6.2.3.

Remark 6.4.4 An alternative proof of Theorem 6.4.3 can be obtained by simply applying the reasoning used in the proof of Theorem 6.2.6. Hence, inherent ISS can be established directly from constraint (6.11b). Also, notice that in the proof of Theorem 6.4.3 we used a worst-case evaluation of $\lambda_e(k)$ to prove ISS. However, it is important to observe that, compared to Problem 6.2.5, nothing is lost in terms of feasibility, while Problem 6.4.2, although it inherently guarantees a constant ISS gain, provides the freedom to optimize the ISS gain of the closed-loop system by minimizing the variables $\lambda_1(k), \ldots, \lambda_E(k)$ via the cost $J(\cdot)$. As such, in reality the gain $\eta(k)$ of the function $\sigma(\cdot)$ can be much smaller for $k \geq k_0$, for some $k_0 \in \mathbb{Z}_+$, depending on the state trajectory $x(k)$. In Theorem 6.4.3 we assumed for simplicity that Problem 6.4.2 is feasible for all $x \in \mathbb{X}$; in other words, feasibility implies ISS. Whenever Problem 6.4.2 can be solved explicitly (see the implementation paragraph below), it is possible to calculate the maximal RPI set for the closed-loop dynamics that is contained within the explicit set of feasible solutions. Alternatively, we establish next an easily verifiable sufficient condition under which any sublevel set of $V(\cdot)$ contained in $\mathbb{X}$ is a RPI subset of the set of feasible solutions of Problem 6.4.2.

Lemma 6.4.5 Given a CLF $V(\cdot)$ that satisfies the hypothesis of Theorem 6.4.3, let $\mathcal{V}_\theta := \{x \in \mathbb{R}^n \mid V(x) \leq \theta\}$.
Then, for any $\theta \in \mathbb{R}_{>0}$ such that $\mathcal{V}_\theta \subseteq \mathbb{X}$, if $\lambda^* \leq (1 - \rho)\theta$, with $\rho$ as defined in (6.3), Problem 6.4.2 is feasible for all $x \in \mathcal{V}_\theta$ and remains feasible for all resulting closed-loop trajectories that start in $\mathcal{V}_\theta$.

Proof: From the proof of Theorem 6.4.3 we know that the inequalities (6.11c) are feasible for all $x(k) \in \mathbb{X}$, $u(k) \in \mathbb{U}$ and $e = 1, \ldots, E$ by taking $\lambda_e(k) = \lambda^*$ for all $k \in \mathbb{Z}_+$. Thus, for any $x(k) \in \mathcal{V}_\theta \subseteq \mathbb{X}$, $\theta \in \mathbb{R}_{>0}$, we have

that:
\[ V(\phi(x(k), u(k), w(k))) \leq V(x(k)) - \alpha_3(\|x(k)\|) + \lambda^* \leq \rho V(x(k)) + \lambda^* \leq \rho\theta + \lambda^* \leq \rho\theta + (1 - \rho)\theta = \theta, \]
which yields $\phi(x(k), u(k), w(k)) \in \mathcal{V}_\theta \subseteq \mathbb{X}$. This in turn ensures feasibility of (6.11a), while (6.11b) is feasible by definition of the CLF $V(\cdot)$, which concludes the proof.

Remark 6.4.6 The result of Theorem 6.4.3 holds for all inputs $u(k)$ for which Problem 6.4.2 is feasible. To select on-line one particular control input from the set $\pi(x(k))$ and to improve closed-loop performance (in terms of settling time) it is useful to also penalize the state and the input. Let $F : \mathbb{R}^n \to \mathbb{R}_+$ and $L : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}_+$ with $F(0) = L(0, 0) = 0$ be arbitrary nonlinear functions. For $N \in \mathbb{Z}_{\geq 1}$ let $\bar{u}(k) := (\bar{u}(k), \bar{u}(k+1), \ldots, \bar{u}(k+N-1)) \in \mathbb{U}^N$ and $J_{RHC}(x(k), \bar{u}(k)) := F(\bar{x}(k+N)) + \sum_{i=0}^{N-1} L(\bar{x}(k+i), \bar{u}(k+i))$, where $\bar{x}(k+i+1) := f(\bar{x}(k+i), \bar{u}(k+i))$ for $i = 0, \ldots, N-1$ and $\bar{x}(k) := x(k)$. Then one can add this cost to Problem 6.4.2, i.e. at time $k \in \mathbb{Z}_+$ measure the state $x(k)$ and minimize $J_{RHC}(x(k), \bar{u}(k)) + J(\lambda_1(k), \ldots, \lambda_E(k))$ over $\bar{u}(k), \lambda_1(k), \ldots, \lambda_E(k)$, subject to the constraints (6.11) and $\bar{x}(k+i) \in \mathbb{X}$, $i = 2, \ldots, N$. Observe that the optimum need not be attained at each sampling instant to achieve ISS, which is appealing for practical reasons but also in the case of a possibly discontinuous value function.

Remark 6.4.7 Besides enhancing robustness, constraints (6.11b)-(6.11c) also ensure that Problem 6.4.2 recovers performance (in terms of settling time) when the state of the closed-loop system approaches the origin. Loosely speaking, when $x(k) \to 0$, solving Problem 6.4.2 will produce a control action $u(k) \to 0$ (because of constraint (6.11b) and the fact that the cost $J_{RHC}(\cdot) + J(\cdot)$ is minimized). This yields $V(\phi(0, 0, w^e)) - \lambda_e(k) \leq 0$, $e = 1, \ldots, E$, due to constraint (6.11c). Thus, solving Problem 6.4.2 with the above cost will not optimize each variable $\lambda_e(k)$ below the corresponding value $V(\phi(0, 0, w^e))$, $e = 1, \ldots, E$, when the state reaches the equilibrium.
This property is desirable, since it is known from min-max MPC (Lazar et al., 2008a) that considering a worst-case disturbance scenario leads to poor performance when the real disturbance is small or vanishes.

6.4.3 Decentralized formulation

In this paragraph we give a brief outline of how the proposed self-optimizing MPC algorithm can be implemented in a decentralized fashion. We consider

a connected directed graph G = (S, E) with a finite number of vertices S and a set of directed edges E ⊆ {(i, j) ∈ S × S | i ≠ j}. A dynamical system is assigned to each vertex i ∈ S, with the dynamics governed by the following equation:

x_i(k+1) = φ_i(x_i(k), u_i(k), v_i(x_{N_i}(k)), w_i(k)), k ∈ Z_+. (6.13)

In (6.13), x_i ∈ X_i ⊆ R^{n_i}, u_i ∈ U_i ⊆ R^{m_i} are the state and the control input of the i-th system, and w_i ∈ W_i ⊆ R^{l_i} is an exogenous disturbance input that directly affects only the i-th system. With each directed edge (j, i) ∈ E we associate a function v_{ij} : R^{n_j} → R^{n_i}, which defines the interconnection signal v_{ij}(x_j(k)), k ∈ Z_+, between system j and system i, i.e. v_{ij}(·) characterizes how the states of system j influence the dynamics of system i. The set N_i := {j | (j, i) ∈ E} denotes the set of direct neighbors of system i (observe that j ∈ N_i does not necessarily imply i ∈ N_j). For simplicity of notation we use x_{N_i}(k) and v_i(x_{N_i}(k)) to denote {x_j(k)}_{j∈N_i} and {v_{ij}(x_j(k))}_{j∈N_i}, respectively. Both φ_i(·, ·, ·, ·) and v_{ij}(·) are arbitrary nonlinear, possibly discontinuous functions that satisfy φ_i(0, 0, 0, 0) = 0 and v_{ij}(0) = 0 for all (i, j) ∈ S × N_i. For all i ∈ S we assume that X_i, U_i and W_i are compact sets that contain the origin in their interior.

Assumption 6.4.8 The value of all interconnection signals v_{ij}(x_j(k)) is known at all discrete-time instants k ∈ Z_+ for any system i ∈ S.

From a technical point of view, Assumption 6.4.8 is satisfied, e.g., if all interconnection signals v_{ij}(x_j(k)) are directly measurable at all k ∈ Z_+ or if all directly neighboring systems j ∈ N_i are able to communicate their locally measured state x_j(k) to system i ∈ S. Consider next the following decentralized version of Problem 6.4.2, where the notation and definitions employed so far carry over mutatis mutandis.

Problem 6.4.9 For system i ∈ S let α_3^i ∈ K, J_i(·) and a CLF V_i(·) be given.
At time k ∈ Z_+ measure the local state x_i(k) and the interconnection signals v_i(x_{N_i}(k)) and minimize the cost J_i(λ_1^i(k), …, λ_{E_i}^i(k)) over u_i(k), λ_1^i(k), …, λ_{E_i}^i(k), subject to the constraints

u_i(k) ∈ U_i, λ_e^i(k) ≥ 0, φ_i(x_i(k), u_i(k), v_i(x_{N_i}(k)), 0) ∈ X_i ⊖ W̄_{x_i(k)}, (6.14a)
V_i(φ_i(x_i(k), u_i(k), v_i(x_{N_i}(k)), 0)) − V_i(x_i(k)) + α_3^i(‖x_i(k)‖) ≤ 0, (6.14b)
V_i(φ_i(x_i(k), u_i(k), v_i(x_{N_i}(k)), w_i^e)) − V_i(x_i(k)) + α_3^i(‖x_i(k)‖) − λ_e^i(k) ≤ 0, e = 1, …, E_i. (6.14c)

Let π_i(x_i(k), v_i(x_{N_i}(k))) := {u_i(k) ∈ R^{m_i} | (6.14) holds} and let

x_i(k+1) ∈ φ_i^cl(x_i(k), π_i(x_i(k), v_i(x_{N_i}(k))), v_i(x_{N_i}(k)), w_i(k)) := {φ_i(x_i(k), u, v_i(x_{N_i}(k)), w_i(k)) | u ∈ π_i(x_i(k), v_i(x_{N_i}(k)))}

denote the difference inclusion corresponding to system (6.13) in closed-loop with the set of feasible solutions obtained by solving Problem 6.4.9 at each k ∈ Z_+.

Theorem 6.4.10 Let α_1^i, α_2^i, α_3^i ∈ K of the form specified in Theorem 6.2.3, continuous and convex CLFs V_i(·) and costs J_i(·) be given for all systems indexed by i ∈ S. Suppose Assumption 6.4.8 holds and Problem 6.4.9 is feasible for each system i ∈ S, for all states x_i in X_i and all corresponding v_i(x_{N_i}). Then the interconnected, dynamically coupled nonlinear system described by the collection of difference inclusions

x_i(k+1) ∈ φ_i^cl(x_i(k), π_i(x_i(k), v_i(x_{N_i}(k))), v_i(x_{N_i}(k)), w_i(k)), i ∈ S, k ∈ Z_+ (6.15)

is ISS(X_1 × … × X_S, W_1 × … × W_S).

The proof of the above theorem is obtained by a straightforward application of the centralized result presented in this chapter and properties of K functions. Its central argument is that each continuous and convex CLF V_i(x_i) is in fact Lipschitz continuous on X_i (Wayne S.U., 1972), which makes V({x_i}_{i∈S}) := Σ_{i∈S} V_i(x_i) a Lipschitz continuous CLF for the global interconnected system. The result then follows similarly to the proof of Theorem 6.2.6-(ii). Theorem 6.4.10 guarantees a constant ISS gain for the global closed-loop system, while the ISS gain of each closed-loop system i ∈ S can still be optimized on-line.

Remark 6.4.11 Problem 6.4.9 defines a set of decoupled optimization problems, implying that the computation of control actions can be performed in a completely decentralized fashion, i.e. with no communication among controllers (if each v_{ij}(·) is measurable at all k ∈ Z_+).
Inequality (6.14b) can be significantly relaxed by replacing the zero on the right-hand side with an optimization variable τ_i(k) and adding the coupling constraint Σ_{i∈S} τ_i(k) ≤ 0 for all k ∈ Z_+. Using the dual decomposition method, see e.g. (Bertsekas, 1999), it is then possible to devise a distributed control scheme, which yields an optimized ISS gain of the global interconnected system

in the sense that Σ_{i∈S} J_i(·) is minimized. Further relaxations can be obtained by requiring that the sum of τ_i(k) is non-positive over a finite horizon, rather than at each time step.

6.4.4 Implementation issues

In this section we briefly discuss the ingredients which make it possible to implement Problem 6.4.2 (or its corresponding decentralized version, Problem 6.4.9) as a single linear or quadratic program. Firstly, we consider nonlinear systems of the form (6.4) that are affine in control, i.e. there exist functions f_1 : R^n → R^n with f_1(0) = 0 and f_2 : R^n → R^{n×m} such that:

x(k+1) = φ(x(k), u(k), w(k)) := f_1(x(k)) + f_2(x(k))u(k) + g(x(k))w(k). (6.16)

Secondly, we restrict our attention to CLFs defined using the ∞-norm, i.e. V(x) := ‖Px‖_∞, where P ∈ R^{p×n} is a matrix (to be determined) with full-column rank. We refer to (Lazar et al., 2006) for techniques to compute CLFs based on norms. The first step is then to show that the ISS inequalities (6.11b)-(6.11c) can be specified, without introducing conservatism, via a finite number of linear inequalities. Since by definition ‖x‖_∞ = max_{i∈Z_[1,n]} |[x]_i|, for a constraint ‖x‖_∞ ≤ c with c > 0 to be satisfied, it is necessary and sufficient to require that ±[x]_i ≤ c for all i ∈ Z_[1,n]. Therefore, as x(k) in (6.11) is the measured state, which is known at every k ∈ Z_+, for (6.11b)-(6.11c) to be satisfied it is necessary and sufficient to require that:

±[P(f_1(x(k)) + f_2(x(k))u(k))]_i − V(x(k)) + α_3(‖x(k)‖) ≤ 0,
±[P(f_1(x(k)) + f_2(x(k))u(k) + g(x(k))w^e)]_i − V(x(k)) + α_3(‖x(k)‖) − λ_e(k) ≤ 0,
i ∈ Z_[1,p], e = 1, …, E,

which yields 2p(E+1) linear inequalities in u(k), λ_1(k), …, λ_E(k). If the sets X, U and W̄_{x(k)} are polyhedra, which is a reasonable assumption, then the inequalities in (6.11a) are also linear in u(k), λ_1(k), …, λ_E(k).
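To make this reduction concrete, the following sketch (pure Python; the matrix P, the values f_1(x(k)), f_2(x(k)) and the vertex terms g(x(k))w^e are placeholders supplied by the caller) stacks the 2p(E+1) inequalities as rows a·z ≤ b in the decision vector z = (u(k), λ_1(k), …, λ_E(k)):

```python
def iss_inequalities(P, f1x, f2x, gx_we, Vx, a3x):
    """Stack the ISS constraints (6.11b)-(6.11c) as linear inequalities
    rows @ z <= rhs in z = (u, lam_1, ..., lam_E).
    P      : p x n CLF weight, V(x) = ||P x||_inf
    f1x    : drift term f1(x(k)), length n
    f2x    : input map f2(x(k)), n x m
    gx_we  : list of E vectors g(x(k)) w^e, each of length n
    Vx, a3x: scalars V(x(k)) and alpha_3(||x(k)||)."""
    p, n = len(P), len(P[0])
    m, E = len(f2x[0]), len(gx_we)
    dot = lambda r, v: sum(r[j] * v[j] for j in range(len(v)))
    rows, rhs = [], []
    for i in range(p):
        Pf1 = dot(P[i], f1x)                                         # [P f1(x)]_i
        Pf2 = [dot(P[i], [f2x[r][c] for r in range(n)]) for c in range(m)]
        for s in (+1, -1):
            # (6.11b): +-[P(f1 + f2 u)]_i <= V(x) - alpha_3(||x||)
            rows.append([s * coef for coef in Pf2] + [0.0] * E)
            rhs.append(Vx - a3x - s * Pf1)
            for e, gw in enumerate(gx_we):
                # (6.11c): same with g(x)w^e added and -lambda_e on the left
                lam = [0.0] * E
                lam[e] = -1.0
                rows.append([s * coef for coef in Pf2] + lam)
                rhs.append(Vx - a3x - s * (Pf1 + dot(P[i], gw)))
    return rows, rhs

# Dimensions as in Example 1 below: p = 2, m = 1, E = 4 (state-dependent
# values here are illustrative only).
rows, rhs = iss_inequalities(
    P=[[2.7429, 0.7121], [0.1989, 4.0173]],
    f1x=[0.5, 0.2], f2x=[[0.245], [0.7]],
    gx_we=[[0.2, 0.0], [-0.2, 0.0], [0.0, 0.2], [0.0, -0.2]],
    Vx=1.0, a3x=0.01)
print(len(rows))  # 20
```

With p = 2 and E = 4 this yields 2·2·(4+1) = 20 rows, matching the 2p(E+1) count above; together with the polyhedral constraints (6.11a) and a linear cost these rows form the single LP mentioned in the text.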
Thus, a solution to Problem 6.4.2, including minimization of the cost J_RHC(·) + J(·) for any N ∈ Z_{≥1}, can be obtained by solving a nonlinear optimization problem subject to linear constraints. Following some straightforward manipulations, the optimization problem to be solved on-line can be further simplified as follows. If the model is (i) piecewise affine or (ii) affine, and the cost functions J_RHC(·) and J(·)

are defined using quadratic forms or infinity norms, then a solution to Problem 6.4.2 (with the cost J_RHC(·) + J(·)) can be obtained by solving (i) a single mixed-integer quadratic or linear program (MIQP or MILP), or (ii) a single QP or LP, respectively, for any N ∈ Z_{≥1}. Alternatively, for N = 1 and quadratic or ∞-norm based costs, Problem 6.4.2 can be formulated as a single QP or LP for any discrete-time nonlinear model that is affine in the control variable and the disturbance input.

6.5 Illustrative examples

6.5.1 Example 1: control of a nonlinear system

Consider the nonlinear system (6.16) where x(k) ∈ X = {ξ ∈ R² | ‖ξ‖_∞ ≤ 5}, u(k) ∈ U = {ξ ∈ R | |ξ| ≤ 1} and w(k) ∈ W = {ξ ∈ R² | ‖ξ‖_1 ≤ 0.2}, k ∈ Z_+. The dynamics are given by:

f_1(x) = [ [x]_1 + 0.7[x]_2 + ([x]_2)² ; [x]_2 ],  f_2(x) = [ 0.245 + sin([x]_2) ; 0.7 ],  g(x) = [ 1 0 ; 0 1 ].

The technique of (Lazar et al., 2006) was used to compute the weight P ∈ R^{2×2} of the CLF V(x) = ‖Px‖_∞ for α_3(s) := 0.01s and the linearization of (6.16) around the origin, in closed-loop with u(k) := Kx(k), K ∈ R^{1×2}, yielding

P = [ 2.7429 0.7121 ; 0.1989 4.0173 ],  K = [ −0.4379 −1.5508 ].

To optimize robustness, 4 optimization variables λ_1(k), …, λ_4(k) were introduced, each one assigned to a vertex of the set W. The RHC cost was chosen as J_RHC(x(k), u(k)) + J(λ_1(k), …, λ_4(k)) = ‖Q_1(f_1(x(k)) + f_2(x(k))u(k))‖_∞ + ‖Qx(k)‖_∞ + ‖Ru(k)‖_∞ + Σ_{i=1}^{4} λ_i(k), where Q_1 = 4I_2, Q = 0.1I_2 and R = 0.4. The resulting linear program has 11 optimization variables and 42 constraints. During the simulations, the worst-case computational time required by the CPU over 4000 runs was 0.02 seconds, which shows the potential for controlling fast nonlinear systems.
In the simulation scenario we tested the closed-loop system response for x(0) = [3, 1] and for the following disturbance scenarios: w(k) = [0, 0] for k Z [0,40] (nominal stabilization), w(k) takes random values in W for k Z [41,80] (robustness to random inputs), w(k) = [0, 0.1] for k Z [81,120] (robustness to constant inputs) and w(k) = [0, 0] for k Z [121,160] (to show that asymptotic stability is recovered for zero inputs).

Figure 6.1: Evolution of the closed-loop system state (top figure: red and blue lines) and of the control input (bottom figure: blue line).

In Figure 6.1 the time history of the states and control input is depicted. The dashed horizontal lines give an approximation of the bounded region in which the system's states remain despite disturbances, i.e. approximately within the interval [−0.2, 0.2]. The dashed vertical lines delimit the time intervals during which one of the four disturbance scenarios is active. One can observe that feedback to disturbances is provided actively, resulting in good robust performance, while state and input constraints are satisfied at all times. In Figure 6.2 the time history of the optimization variables λ_1(k), …, λ_4(k) is presented. One can see that whenever the disturbance is acting on the system, or when the state is far from the origin (in the first disturbance scenario), these variables act so as to optimize the decrease of V(·). Whenever the equilibrium is reached, the optimization variables satisfy the constraint V(φ(0, 0, w^e)) ≤ λ_e(k), e = 1, …, 4, as explained in Remark 6.4.7. In Figure 6.2 the values of V(φ(0, 0, w^e)) for each vertex (0.5486 for w¹ = [0.2, 0], w³ = [−0.2, 0] and 0.8432 for w² = [0, 0.2], w⁴ = [0, −0.2], respectively) are depicted with dashed horizontal lines.

6.5.2 Example 2: control of a DC-DC converter

In this section we illustrate the MPC scheme developed in this chapter by applying it to control a Buck-Boost DC-DC converter power circuit. To
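For instance, the dashed-line level for the first vertex can be reproduced directly (a quick sketch; recall that f_1(0) = 0 and g(x) = I_2, so φ(0, 0, w) = w):

```python
# Infinity-norm CLF from Example 1, with P as quoted in the text.
P = [[2.7429, 0.7121], [0.1989, 4.0173]]

def V(x):
    # V(x) = ||P x||_inf
    return max(abs(P[0][0] * x[0] + P[0][1] * x[1]),
               abs(P[1][0] * x[0] + P[1][1] * x[1]))

# Level attained at the origin under the vertex w^1 = [0.2, 0].
print(round(V([0.2, 0.0]), 4))  # 0.5486
```

This is exactly the lower bound that λ_1(k) settles at in Figure 6.2 once the state reaches the equilibrium.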

Figure 6.2: Evolution of the optimization variables λ_1(k), …, λ_4(k).

Figure 6.3: A schematic view of a Buck-Boost converter (source v_in, ON/OFF switch, inductor L with current i_L, capacitor C, load R, output voltage v_o).

assess the enhancement of disturbance rejection, a comparison will be made with the inherently robust MPC scheme obtained by removing the additional optimization variables denoted by λ. DC-DC converters are extensively used in power supplies for electronic equipment to control the energy flow between two DC systems. Buck-Boost DC-DC converters are currently used in a wide variety of relevant processes, including electric and hybrid vehicles, solar plants, DC motor drives, switched-mode DC power supplies, and many more. In Figure 6.3 a schematic representation of an ideal Buck-Boost circuit (i.e. neglecting the parasitic components) is drawn. The following discrete-time nonlinear averaged model of the converter, which was developed in (Lazar and De Keyser, 2004) by applying the theory of (Kassakian et al., 1992), is used to obtain a prediction model:

x_m(k+1) = [ [x_m(k)]_1 + (T/L)[x_m(k)]_2 − (T/L)([x_m(k)]_2 − V_in)u_m(k) ; −(T/C)[x_m(k)]_1 + (T/C)[x_m(k)]_1 u_m(k) + (1 − T/(RC))[x_m(k)]_2 ],  k ∈ Z_+, (6.17)

6.5. Illustrative examples 105 where x m (k) R 2 and u m (k) R are the state and the input, respectively. [x m ] 1 represents the current flowing through the inductor (i L ), [x m ] 2 the output voltage (v o ) and u m represents the duty cycle (i.e. the fraction of the sampling period during which the transistor is kept ON). The sampling period is T = 0.65 milliseconds. The parameters of the circuit are the inductance L = 4.2mH, the capacitance C = 2200µF, the load resistance R = 165Ω and the source input voltage v in, with nominal value V in = 15V. The control objective is twofold: at start-up, a desired value of the output voltage, i.e. x ss 2, should be reached as fast as possible and with minimum overshoot; after the output voltage reaches the desired value, it must kept close to the operating point, i.e. within a range of ±3% around x ss 2 (the industrial operating margin for DC-DC converters) despite changes in the load R (within a 50% range around the nominal value) and disturbances. Note that for a desired output voltage value x ss 2 one can obtain the steady state duty cycle and inductor current as follows: u ss = x ss 2 x ss 2 V, x ss 1 = in x ss 2 R(u ss 1). (6.18) Furthermore, the following physical constraints must be fulfilled at all times k Z + : [x m (k)] 1 [0.01, 5], [x m (k)] 2 [ 20, 0], u m (k) [0.1, 0.9]. (6.19) To implement the developed MPC scheme, we first perform the following coordinate transformation on (6.17): [x(k)] 1 = [x m (k)] 1 x ss 1, [x(k)] 2 = [x m (k)] 2 x ss 2, u(k) = u m (k) u ss. (6.20) We obtain the following system description [ [x(k)] x(k + 1) = 1 + α[x(k)] 2 + (β T L [x(k)] ] 2)u(k) ( T C [x(k)] 1 + γ)u(k) + (1 T RC )[x(k)], (6.21) 2 + δ[x(k)] 1 where the constants α, β, γ and δ depend on the fixed steady state value x ss 2 as follows α = T L (1 δ = T C x ss x ss 2 2 V ), β = T in L (V in x ss 2 ), γ = T x ss RCV in ). ( x ss 2 x ss 2 V 1 in 2 (x ss 2 V in ),

Using (6.20) and (6.18), the constraints given in (6.19) can be converted to:

[x(k)]_1 ∈ [b_{x1}, b̄_{x1}], [x(k)]_2 ∈ [b_{x2}, b̄_{x2}], u(k) ∈ [b_u, b̄_u], (6.22)

where

b_{x1} = 0.01 − (1/(RV_in)) x_2^ss (x_2^ss − V_in),  b̄_{x1} = 5 − (1/(RV_in)) x_2^ss (x_2^ss − V_in),
b_{x2} = −20 − x_2^ss,  b̄_{x2} = −x_2^ss,
b_u = 0.1 − x_2^ss/(x_2^ss − V_in),  b̄_u = 0.9 − x_2^ss/(x_2^ss − V_in).

The control objective can now be formulated as to robustly stabilize (6.21) around the equilibrium [0 0] while fulfilling the constraints given in (6.22). Next, to compute an ∞-norm based control Lyapunov function, we linearize system (6.21) around the equilibrium [0 0] (for zero input u(k) = 0 ∈ [b_u, b̄_u]). The linearized equations are:

x̃(k+1) = Ax̃(k) + Bũ(k), (6.23)

where x̃(k) and ũ(k) represent small deviations from the equilibrium [0 0] and the zero input, respectively. The matrices A and B are given by

A = ∂f/∂x |_{x=0, u=0} = [ 1 α ; δ 1 − T/(RC) ],  B = ∂f/∂u |_{x=0, u=0} = [ β ; γ ].

For the linear model corresponding to a steady-state output voltage x_2^ss = −4V (which yields u^ss = 0.2105 and x_1^ss = 0.0307A), by applying the method of (Lazar et al., 2006) to find the matrix P and the feedback gain K satisfying the CLF condition for α_3(s) = 0.001s, we have obtained the solution P = [ 0.9197 0.6895 ; 0.5815 1.8109 ] and K = [ 0.4648 0.4125 ]. The MPC cost matrices have been chosen as follows, to ensure good performance: Q_1 = [ 1 0 ; 0 4 ], Q = [ 1 0 ; 0 2 ] and R = 0.1 (notice that this is different from the load resistance, also denoted by R). To test robustness, during the simulation we perturb the system with an additive disturbance on the inductor current and we perform a load change. The disturbance is generated in the set W := {w ∈ R² | w = [w_1, 0], −0.1 ≤ w_1 ≤ 0}. Therefore, to implement the self-optimizing robust MPC scheme it is sufficient to associate a single feedback optimization variable λ(k), corresponding to the vertex w̄ = [−0.1, 0]. The corresponding weight for λ(k) was taken equal to one.
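Evaluating the constants above numerically (a sketch under the same assumptions, i.e. x_2^ss = −4V so that u^ss matches the quoted 0.2105) gives the linearization (6.23) explicitly:

```python
# Parameters (T in s, L in H, C in F, R in Ohm, V_in in V).
T, L, C, R, V_in = 0.65e-3, 4.2e-3, 2200e-6, 165.0, 15.0
x2_ss = -4.0
u_ss = x2_ss / (x2_ss - V_in)

# Constants of the transformed model (6.21).
alpha = (T / L) * (1.0 - u_ss)
beta = (T / L) * (V_in - x2_ss)
gamma = (T / (R * C * V_in)) * x2_ss * (x2_ss - V_in)
delta = (T / C) * (u_ss - 1.0)

# Linearization (6.23) around the origin.
A = [[1.0, alpha], [delta, 1.0 - T / (R * C)]]
B = [[beta], [gamma]]
print([[round(v, 4) for v in row] for row in A])
```

Note that γ equals (T/C)x_1^ss, i.e. the input gain on the voltage equation is proportional to the steady-state inductor current, which is why B depends on the chosen operating point.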
To assess the real-time applicability of the developed theory for this type of very fast system, with a sampling period well below one millisecond,

Figure 6.4: Start-up: state trajectories and MPC input histories (solid lines); desired steady-state values and constraints (dotted lines). (Panels: output voltage [V], inductor current [A] and duty cycle versus time, for the feedback ISS MPC scheme and the inherent ISS MPC scheme.)

we formulated the MPC optimization problems as linear programming (LP) problems. The LP problem corresponding to the inherently robust MPC scheme has 3 optimization variables and 14 constraints, while the LP problem corresponding to the self-optimizing robust MPC scheme has 5 optimization variables and 20 constraints. Here we excluded the lower and upper bounds on the optimization variables, which are given directly as arguments of the LP solver. In one simulation, we tested first the start-up behavior (see Figure 6.4) and then, after reaching the desired operating point, we tested the disturbance rejection (see Figure 6.5). Note that, although the simulations were performed for the transformed system (6.21), we chose to plot all variables in the original coordinates corresponding to system (6.17), which have more physical meaning. During start-up, when no disturbance acts on the system and the value of the load remains unchanged, the differences between the self-optimizing robust MPC scheme and the inherently robust MPC scheme are very small,

Figure 6.5: Disturbance rejection: state trajectories and MPC input histories (solid lines); desired steady-state values, constraints and industrial operating margins for DC-DC converters (±3% of the desired output voltage) (dotted lines).

as expected. Both schemes provide a very good start-up response. However, the difference in performance is significant in the second part of the simulation, when the dynamics were simultaneously affected by an asymptotically decreasing (in norm) additive disturbance of the form w = [w_1, 0] (see Figure 6.6 for a plot of w_1 versus time) and a 50% drop of the load (i.e. R = 82.5Ω) for k = 80, 81, …, 120. For k > 120 the disturbance was set equal to zero and the load was set to its nominal value (i.e. R = 165Ω) to show that the closed-loop system is ISS, i.e. that asymptotic stability is recovered when the disturbance input vanishes. While the inherently robust MPC scheme does not manage to keep the output voltage within the desired operating range, the self-optimizing robust MPC scheme achieves very good performance in spite of significant additive and parametric disturbances (changes in the load R).

Figure 6.6: Time history of w_1 (additive disturbance acting on the inductor current) and of λ(k) (optimization variable providing feedback to the disturbance) - solid lines.

The time history of λ(k) is shown in Figure 6.6. One can observe in Figure 6.6 that when the state reaches the desired operating point, λ(k) satisfies λ(k) ≥ V(φ(0, 0, [−0.1, 0])) = 0.091, which means that the enhanced robustness is automatically deactivated when the system remains at the origin. The LP problems equivalent to the MPC optimization problems were always solved³ within the allowed sampling interval, with a worst-case CPU time over 20 runs of 0.6314 milliseconds. In total, 4000 LPs were solved within the allowed sampling interval for both algorithms. The very good closed-loop performance obtained for N = 1, corroborated by the computational time estimate, is encouraging for further development of the real-time application of the presented theory to control DC-DC power converters, especially using faster platforms, such as Digital Signal Processors (DSP).

³The simulation platform was Matlab 7.0.4 (R14) (CDD Dual Simplex LP solver) running on a Linux Fedora Core 5 operating system powered by an Intel Pentium 4 with a 3.2 GHz CPU.

6.5.3 Example 3: control of networked nonlinear systems

Consider the nonlinear system (6.13) with S = {1, 2}, N_1 = {2}, N_2 = {1}, X_1 = X_2 = {ξ ∈ R² | ‖ξ‖_∞ ≤ 5}, U_1 = U_2 = {ξ ∈ R | |ξ| ≤ 2} and W_1 = W_2 = {ξ ∈ R² | ‖ξ‖_1 ≤ 0.2}. The dynamics are given by:

φ_1(x_1, u_1, v_1(x_{N_1}), w_1) := [ 1 0.7 ; 0 1 ] x_1 + [ sin([x_1]_2) ; 0 ] + [ 0.245 ; 0.7 ] u_1 + [ 0 ; ([x_2]_1)² ] + w_1, (6.24a)
φ_2(x_2, u_2, v_2(x_{N_2}), w_2) := [ 1 0.5 ; 0 1 ] x_2 + [ sin([x_2]_2) ; 0 ] + [ 0.125 ; 0.5 ] u_2 + [ 0 ; [x_1]_2 ] + w_2. (6.24b)

The technique of (Lazar et al., 2006) was used to compute the weights P_1, P_2 ∈ R^{2×2} of the CLFs V_1(x) = ‖P_1 x‖_∞ and V_2(x) = ‖P_2 x‖_∞ for α_3^1(s) = α_3^2(s) := 0.01s and the linearizations of (6.24a), (6.24b), respectively, around the origin, in closed-loop with u_1(k) := K_1 x_1(k), u_2(k) := K_2 x_2(k), K_1, K_2 ∈ R^{1×2}, yielding

P_1 = [ 1.3204 0.6294 ; 0.5629 2.0811 ],  K_1 = [ −0.2071 −1.2731 ],
P_2 = [ 1.1356 0.5658 ; 0.7675 2.1356 ],  K_2 = [ −0.3077 −1.4701 ].

Note that the control laws u_1(k) = K_1 x_1(k) and u_2(k) = K_2 x_2(k) are only employed off-line, to calculate the weight matrices P_1, P_2, and they are never used for controlling the system. To optimize robustness, 4 optimization variables λ_1^i(k), …, λ_4^i(k) were introduced for each system, each one assigned to a vertex of the set W_i, i = 1, 2, respectively. The following cost functions were employed in the optimization problem, as specified in Remark 6.4.6: J_RHC^i(x_i(k), u_i(k)) := ‖Q_1^i φ_i(x_i, u_i, v_i(x_{N_i}), 0)‖_∞ + ‖Q^i x_i(k)‖_∞ + ‖R^i u_i(k)‖_∞ and J_i(λ_1^i(k), …, λ_4^i(k)) := Γ^i Σ_{j=1}^{4} λ_j^i(k), where i = 1, 2, Q_1^1 = Q_1^2 = 4I_2, Q^1 = Q^2 = 0.1I_2, R^1 = R^2 = 0.4, Γ^1 = 1 and Γ^2 = 0.1. For each system, the resulting linear program has 7 optimization variables and 42 constraints.
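The interconnection structure of (6.24) can be sketched in a few lines (a reconstruction under the assumptions stated above; the first disturbance vertex is taken as [0.2, 0], which is consistent with the dashed-line levels 0.2641 and 0.2271 quoted at the end of this example):

```python
import math

def phi1(x1, u1, v1, w1):
    # v1 = v_12(x2) = [0, ([x2]_1)^2] is the interconnection from system 2.
    return [x1[0] + 0.7 * x1[1] + math.sin(x1[1]) + 0.245 * u1 + v1[0] + w1[0],
            x1[1] + 0.7 * u1 + v1[1] + w1[1]]

def phi2(x2, u2, v2, w2):
    # v2 = v_21(x1) = [0, [x1]_2] is the interconnection from system 1.
    return [x2[0] + 0.5 * x2[1] + math.sin(x2[1]) + 0.125 * u2 + v2[0] + w2[0],
            x2[1] + 0.5 * u2 + v2[1] + w2[1]]

def v12(x2): return [0.0, x2[0] ** 2]
def v21(x1): return [0.0, x1[1]]

def network_step(x1, x2, u1, u2, w1, w2):
    # One synchronous step: each subsystem only needs its neighbor's
    # state (Assumption 6.4.8), not the neighbor's input.
    return phi1(x1, u1, v12(x2), w1), phi2(x2, u2, v21(x1), w2)

# CLF levels at the origin under the first disturbance vertex w = [0.2, 0].
P1 = [[1.3204, 0.6294], [0.5629, 2.0811]]
P2 = [[1.1356, 0.5658], [0.7675, 2.1356]]
def V(P, x):
    return max(abs(P[0][0] * x[0] + P[0][1] * x[1]),
               abs(P[1][0] * x[0] + P[1][1] * x[1]))

x1n, x2n = network_step([0, 0], [0, 0], 0.0, 0.0, [0.2, 0.0], [0.2, 0.0])
print(round(V(P1, x1n), 4), round(V(P2, x2n), 4))  # 0.2641 0.2271
```

At the origin the interconnection signals vanish (v_12(0) = v_21(0) = 0), so φ_i(0, 0, 0, w_i) = w_i and the printed values reproduce the levels V_i(φ_i(0, 0, w_i^1)) quoted below.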
During the simulations, the worst case computational time required by the CPU (Pentium 4, 3.2GHz, 1GB RAM) over 400 runs was 5 milliseconds, which shows the potential for controlling networks of fast nonlinear systems. In the simulation scenario we tested the closed-loop system response for x 1 (0) = [3, 1], x 2 (0) = [1, 2] and for the following disturbance scenarios: w 1 (k) = w 2 (k) = [0, 0] for k Z [0,40] (nominal stabilization), w i (k) takes random values in W i, i = 1, 2, for k Z [41,80] (robustness to random inputs), w 1 (k) = w 2 (k) = [0, 0.1] for k Z [81,120] (robustness to constant inputs) and w 1 (k) = w 2 (k) = [0, 0] for

Figure 6.7: States, inputs and first optimization variable histories for each system.

k ∈ Z_[121,160] (to show that asymptotic stability is recovered for zero inputs). In Figure 6.7 the time history of the states, control inputs and the optimization variables λ_1^1(k) and λ_1^2(k), assigned to w_1^1 = w_2^1 = [0.2, 0], are depicted for each system. In the state trajectory plots, the dashed horizontal lines give an approximation of the bounded region in which the system's states remain despite disturbances, i.e. approximately within the interval [−0.2, 0.2]. In the input trajectory plots the dashed lines show the input constraints. In all plots, the dashed vertical lines delimit the time intervals during which one of the four disturbance scenarios is active. One can observe that feedback to disturbances is provided actively, resulting in good robust performance, while state and input constraints are satisfied at all times, despite the strong nonlinear coupling present. In the λ_1 plot, one

can see that whenever the disturbance is acting on the system, or when the state is far from the origin (in the first disturbance scenario), these variables act to optimize the decrease of each V_i(·) and to counteract the influence of the interconnection signal. Whenever the equilibrium is reached, the optimization variables satisfy the constraint V_i(φ_i(0, 0, w_i^e)) ≤ λ_e^i(k), e = 1, …, 4, as explained in Remark 6.4.7. In the λ_1 plot of Figure 6.7, the values V_1(φ_1(0, 0, w_1^1)) = 0.2641 and V_2(φ_2(0, 0, w_2^1)) = 0.2271 are depicted with dashed horizontal lines.

6.6 Conclusions

In this chapter we studied the design of robust MPC schemes with a focus on adapting the closed-loop ISS gain on-line, in a receding horizon fashion. Exploiting convex CLFs and disturbance bounds, we were able to construct a finite-dimensional optimization problem that allows for the simultaneous on-line (i) computation of a control action that achieves ISS, and (ii) minimization of the ISS gain of the resulting closed-loop system depending on the actual state trajectory. As a consequence, the proposed robust nonlinear MPC algorithm is self-optimizing in terms of disturbance attenuation. Solutions for establishing recursive feasibility and for decentralized implementation have also been briefly presented. Furthermore, we indicated a design recipe that can be used to implement the developed self-optimizing MPC scheme as a single linear program, for nonlinear systems that are affine in the control variable and the disturbance input. This brings the application to (networks of) fast nonlinear systems within reach.

7 Conclusions

7.1 Contributions
7.2 Future research

A summary of the main contributions and a collection of several possible directions for future research conclude this thesis.

7.1 Contributions

The major contributions are in the domains of:

Stability Theory for Discrete-time Discontinuous Systems;
Input-to-State Stability Theory for Discrete-time Discontinuous Systems;
Stabilizing Nonlinear Model Predictive Control;
Robust Nonlinear Model Predictive Control;
Low Complexity Nonlinear Model Predictive Control.

We discuss the obtained results in more detail below.

7.1.1 Stability theory for discrete-time systems

The contributions of this thesis regarding stability of discrete-time systems are presented in Chapter 2. The focus is on the assessment and generalization of the classical stability results (Kalman and Bertram, 1960b) in the case when the system dynamics and/or the candidate Lyapunov function is discontinuous. The most important observation is that, as opposed to the continuous case, a uniformly strict Lyapunov function, rather than a strict Lyapunov function (see Chapter 2 for exact definitions), is needed for establishing asymptotic stability in the Lyapunov sense. An example was given that shows that the equilibrium of a discrete-time system that admits

a strict Lyapunov function, but not a uniformly strict one, is not necessarily globally attractive. While the uniform strictness condition is an additional requirement compared to (Kalman and Bertram, 1960b), the fact that the system dynamics and the candidate Lyapunov function are allowed to be discontinuous (only continuity at the equilibrium point is required, and not on a neighborhood of the equilibrium) is a significant relaxation. Notice that globally asymptotically stable discrete-time systems always admit a possibly discontinuous Lyapunov function, as shown in (Nesic et al., 1999), but not necessarily a continuous one.

7.1.2 Input-to-state stability theory for discrete-time discontinuous systems

The subject of input-to-state stability theory for discrete-time systems is present throughout the thesis. Perhaps the most relevant contributions, which have an impact beyond the MPC context, can be found in Chapter 2, Chapter 4 and Chapter 6, as follows. In Chapter 6 we present a simple way of establishing inherent robustness in the sense of ISS for possibly discontinuous discrete-time systems that admit a continuous uniformly strict Lyapunov function. Furthermore, in Chapter 2 we illustrate via an example that inherent robustness is no longer necessarily attained in the case of a discontinuous Lyapunov function. Actually, it turns out that the severe phenomenon of zero robustness, i.e. loss of asymptotic stability in the presence of arbitrarily small perturbations, is related to the absence of a continuous Lyapunov function. As most of the ISS results present in the literature assume continuity of the candidate (ISS) Lyapunov function, it is not clear how to establish robustness from discontinuous (ISS) Lyapunov functions.
In Chapter 2 we presented several ISS tests based on discontinuous Lyapunov functions, which make the many available procedures for obtaining Lyapunov functions (procedures that typically yield discontinuous Lyapunov functions) useful for establishing robustness. These tests can be employed to establish ISS of nominally asymptotically stable discrete-time discontinuous systems in the case when a discontinuous USL function is available. Moreover, in Chapter 4 we have presented a general input-to-state (practical) stability theorem which allows for discontinuous system dynamics and candidate ISS Lyapunov functions. These results bring relevant relaxations with respect to the original ISS work in discrete time (Jiang and Wang, 2001) and prove to be very useful in the context of optimization-based control, such as MPC algorithms.

7.1.3 Stabilizing nonlinear model predictive control

While most of the existing results in the theory of MPC regarding closed-loop stability either assume optimality or employ non-trivial modifications of the original terminal cost and constraint set MPC set-up (Mayne et al., 2000), in Chapter 3 of this thesis we attained stability results for sub-optimal MPC solutions. To cope with MPC control sequences (obtained by solving MPC optimization problems) that are not optimal, but within a margin δ ≥ 0 from the optimum, we introduced the notion of ε-asymptotic stability (AS) as a particular case of regular AS. In this way we were able to show that nominal asymptotic stability can be guaranteed for sub-optimal MPC without any modification to the standard terminal cost and constraint set MPC set-up presented in (Mayne et al., 2000). Compared to classical sub-optimal MPC (Scokaert et al., 1999), where an explicit constraint on the MPC cost function is employed, this result provides a fundamentally different approach to establishing closed-loop stability for sub-optimal MPC.

7.1.4 Robust nonlinear model predictive control

The contributions to robust nonlinear MPC form the richest core of the thesis and are present in all chapters but Chapter 2. In particular, similarly to the above discussion on stability of sub-optimal MPC, in Chapter 3 we presented input-to-state stability results that allow for sub-optimal MPC implementations and discontinuous system dynamics. Within this context we introduced a novel approach in the framework of tightened-constraints robust MPC, which recovers existing set-ups as particular cases. Notice that allowing for sub-optimal solutions is of paramount importance as, firstly, the infimum in an MPC optimization problem does not have to be attained and, secondly, numerical solvers usually provide only sub-optimal solutions. The main contributions to the framework of min-max MPC are presented in Chapter 4 and Chapter 5.
One of the drawbacks of min-max MPC was, until now, the absence of an ISS guarantee for the closed-loop system. Due to the maximization over all possible realizations of the uncertain disturbance, only input-to-state practical stability can be guaranteed in general. The effect of this drawback is rather disturbing: even if in reality the disturbance vanishes, which should lead to recovering asymptotic stability (if ISS is established), the min-max MPC closed-loop system can only be guaranteed to be practically stable. A solution to this problem was presented recently in (Magni et al., 2006), but at the cost of a non-trivial modification to the classical set-up, which unfortunately leads to

a non-convex maximization problem. In Chapter 4, novel conditions that guarantee ISS of min-max nonlinear MPC closed-loop systems were derived using a dual-mode approach. This result is useful as it provides a methodology for designing robustly asymptotically stable min-max MPC schemes without a priori assuming that the (additive) disturbance input converges to zero as the closed-loop system state converges to the origin.

Another result that was missing in min-max MPC was a systematic procedure for computing a terminal cost and an auxiliary control law that satisfy the developed sufficient conditions for ISS. In Chapter 5 we presented a fairly general solution to this problem based on solving a set of linear matrix inequalities (LMIs). An explicit relation was established between the proposed method and H∞ control design. This relation shows that the LMI-based optimal solution of the H∞ synthesis problem solves the terminal cost and auxiliary control law problem in min-max MPC, for a particular choice of the stage cost. This result, which was hitherto missing in the MPC literature, is of general interest as it connects well-known linear control problems to robust MPC design.

One of the most important contributions of this thesis is the concept of self-optimizing robust nonlinear MPC, which is the subject of Chapter 6. The goal of the existing design methods for synthesizing control laws that achieve ISS (Jiang and Wang, 2001) is to guarantee a priori a predetermined closed-loop ISS gain. Consequently, the ISS property, with a predetermined, constant ISS gain, is enforced for all state-space trajectories of the closed-loop system and at all time instants. As such, the existing approaches, which are also employed in the design of MPC schemes that achieve ISS, can lead to overly conservative solutions along particular trajectories. 
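The kind of terminal-cost computation discussed above can be sketched for a linear(ized) nominal model. With a fixed auxiliary gain K, a standard LMI certificate for a quadratic terminal cost, (A+BK)ᵀP(A+BK) − P ⪯ −(Q + KᵀRK), holds with equality as a discrete-time Lyapunov equation, which off-the-shelf solvers handle directly. All matrices below are illustrative assumptions; the LMI machinery of Chapter 5 (and its H∞ connection) is more general than this nominal sketch.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative nominal model and a (manually chosen) stabilizing gain K.
A = np.array([[1.1, 0.5],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[-0.5, -1.0]])   # auxiliary control law u = K x
Ac = A + B @ K                 # closed-loop matrix (spectral radius < 1 here)

Q = np.eye(2)                  # state weight of the stage cost
R = 0.1 * np.eye(1)            # input weight of the stage cost
Qt = Q + K.T @ R @ K           # effective closed-loop stage weight

# Terminal cost P solves  Ac' P Ac - P + Qt = 0  (discrete Lyapunov equation),
# i.e. the LMI  Ac' P Ac - P <= -(Q + K' R K)  is satisfied with equality.
P = solve_discrete_lyapunov(Ac.T, Qt)
P = 0.5 * (P + P.T)            # symmetrize against round-off

# Sanity checks: P is positive definite and the cost decrease holds.
assert np.min(np.linalg.eigvalsh(P)) > 0
assert np.allclose(Ac.T @ P @ Ac - P + Qt, 0, atol=1e-8)
print(np.round(P, 3))
```

The terminal cost V_f(x) = xᵀPx obtained this way then decreases along the auxiliary closed loop by at least the stage cost, which is the usual sufficient condition for stability of the terminal cost and constraint set set-up.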
It is therefore of high interest to develop a control (MPC) design method with the explicit goal of adapting the closed-loop ISS gain depending on the evolution of the state trajectory. A novel method for synthesizing robust MPC schemes with this feature was presented in Chapter 6 of this thesis. Besides the performance benefits obtained from enhancing robustness, which were illustrated by thorough examples, optimized ISS turned out to offer a viable solution to decentralized robust MPC.

7.1.5 Low complexity nonlinear MPC

In developing the theoretical results presented in this thesis we have always kept an eye toward obtaining a solution whose implementation has a low computational complexity. Most of the existing methods in low complexity MPC start from a given, fixed optimization problem coming from classical

MPC design and provide solvers or explicit solutions for particular relevant classes of systems. So far, these methods are restricted to linear systems, certain types of uncertain systems (LPV systems) and hybrid systems (PWA and MLD systems). In contrast to these approaches, which are concerned with numerical optimization aspects, we focused on the design of the MPC algorithm itself, aiming at a low complexity optimization problem for a large class of systems. In Chapter 6 we showed that for discrete-time nonlinear systems that are affine in the control and disturbance inputs, respectively, the self-optimizing robust MPC algorithm can be implemented by solving a single linear program. This was attained by employing infinity norms as control Lyapunov functions. The potential of this technique for real-life application to fast systems was illustrated in Chapter 6 by applying it to control a DC-DC converter (with a sampling period well below one millisecond). This opens up a completely new application domain, next to traditional process control of typically slow systems.

7.2 Future research

There are several interesting research directions possible on the basis of the results presented in this thesis. In what follows we briefly present some lines of future research that can be pursued.

In most of the algorithms developed in this thesis, Lyapunov function candidates defined using infinity norms proved to be a fruitful alternative to the classical quadratic Lyapunov functions. Although necessary and sufficient conditions for the existence of infinity norm based Lyapunov functions exist for linear systems (Molchanov, 1987), these conditions do not in general lead to computationally tractable optimization problems. We have made use of alternative, merely sufficient conditions that lead to tractable optimization problems (although still nonlinear and non-convex; see, e.g., (Lazar et al., 2006)), at the price of some conservativeness. 
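The single-linear-program implementation mentioned above can be sketched for the nominal part of such a system. With V(x) = ‖Px‖∞ as a candidate control Lyapunov function, minimizing the successor value over the input is, via the standard epigraph reformulation of the infinity norm, a small LP. The model matrices, the choice P = I and the input bound below are illustrative assumptions, and the sketch omits the disturbance handling and ISS-gain optimization of Chapter 6.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative input-affine (here linear) model x+ = A x + B u, |u| <= 2,
# with candidate CLF V(x) = ||P x||_inf and P = I for simplicity.
A = np.array([[1.2, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0],
              [0.5]])
P = np.eye(2)

def one_step_clf_lp(x):
    """min_{u,t} t  s.t.  -t*1 <= P(Ax + Bu) <= t*1,  |u| <= 2."""
    n, m = A.shape[0], B.shape[1]
    PA_x = P @ A @ x
    PB = P @ B
    ones = np.ones((n, 1))
    # Decision vector z = [u; t]; only the epigraph variable t is penalized.
    c = np.concatenate([np.zeros(m), [1.0]])
    A_ub = np.block([[PB, -ones],    #  P(Ax + Bu) <= t*1
                     [-PB, -ones]])  # -P(Ax + Bu) <= t*1
    b_ub = np.concatenate([-PA_x, PA_x])
    bounds = [(-2.0, 2.0)] * m + [(0.0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:m], res.x[m]

x = np.array([1.0, 1.0])
u, t = one_step_clf_lp(x)
# V(x) = ||P x||_inf = 1.0, while the optimized successor value is t < 1,
# so the infinity-norm CLF decreases in one step.
print(u, t)
```

In a receding-horizon loop this LP would be re-solved at every sample, which is what makes the approach attractive for fast systems such as the DC-DC converter example.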
It would therefore be of great interest to search for new, necessary and sufficient conditions for the existence of infinity norm based Lyapunov functions that can be implemented in a systematic and tractable manner. Extensions to classes other than linear systems, such as linear polytopic difference inclusions and input-affine nonlinear systems, are also of interest. In terms of robust MPC, in this thesis we have relaxed several stringent assumptions, such as continuity of the system dynamics, continuity of the value function corresponding to the MPC cost and, most importantly, the assumption of optimality. What remains to be studied is an alternative to the

terminal cost and constraint set method for proving recursive feasibility. In the terminal set method, one needs to compute off-line an invariant set, which is usually difficult to obtain. Even then, this set is relatively small, in the sense that a long prediction horizon may be required for reaching the set in N steps. As such, one either has to solve a computationally highly complex optimization problem, due to a large N, or runs into feasibility problems. That is the reason why this method, although without rival in the theory of MPC, is in fact not applied in real-time MPC. An alternative way of proving recursive feasibility, which does not employ finite time reachability to a predefined neighborhood of the equilibrium, would greatly enhance the applicability of robust nonlinear MPC in real-time control.

Another relevant point for further research is related to providing feedback to disturbances, i.e. achieving optimal disturbance rejection, rather than just bounded trajectories for bounded disturbances. We have taken a first step in this direction in Chapter 6 by providing a way to explicitly optimize the ISS gain of the closed-loop system on-line. This result, however, only holds along the particular trajectory generated on-line in closed loop. It would be interesting to extend this approach to a set of trajectories originating from a set of initial conditions of interest. Also, the developed self-optimizing MPC algorithm uses the fact that the nominal system admits a continuous and convex control Lyapunov function. This condition is usually satisfied locally, i.e. if the nonlinear system is locally linearizable, but it may be conservative if imposed globally for general nonlinear systems. As such, it would be desirable to relax this condition at the global level. 
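For reference, the input-to-state stability property whose gain is optimized on-line above can be stated (following Jiang and Wang, 2001), in standard comparison-function notation, as:

```latex
% ISS for the closed-loop dynamics x(k+1) = f(x(k), w(k)):
% there exist \beta \in \mathcal{KL} and a gain \gamma \in \mathcal{K}
% such that, for all admissible x(0) and disturbance sequences w,
\| x(k) \| \leq \beta\big( \| x(0) \|, k \big)
             + \gamma\Big( \sup_{0 \leq i < k} \| w(i) \| \Big),
\qquad \forall k \geq 1.
% A classical design fixes the gain \gamma a priori for all trajectories;
% the self-optimizing scheme of Chapter 6 instead minimizes the achieved
% gain on-line, along the realized closed-loop trajectory.
```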
Decentralized control is another very active research direction, especially since the focus in the control systems community has shifted from complex hybrid and embedded systems to complex networked and embedded systems in general, and to control of large scale networks of systems in particular. In Chapter 6 we have pointed out a solution to decentralized input-to-state stabilization of networks of nonlinear systems with hard state and input constraints. It would be of interest to extend this solution to include some coordination between the different subsystems in the network, and even to accommodate a distributed implementation.

Bibliography

Alamir, M., Bornard, G., 1994. On the stability of receding horizon control of nonlinear discrete-time systems. Systems and Control Letters 23, 291–296.
Alamo, T., Muñoz de la Peña, D., Limon, D., Camacho, E. F., 2005. Constrained min-max predictive control: Modifications of the objective function leading to polynomial complexity. IEEE Transactions on Automatic Control 50 (5), 710–714.
Alessio, A., Lazar, M., Bemporad, A., Heemels, W. P. M. H., 2007. Squaring the circle: An algorithm for obtaining polyhedral invariant sets from ellipsoidal ones. Automatica 43 (12), 2096–2103.
Baotic, M., Christophersen, F. J., Morari, M., 2006. Constrained optimal control of hybrid systems with a linear performance index. IEEE Transactions on Automatic Control 51 (12), 1903–1919.
Bemporad, A., Borodani, P., Mannelli, M., 2003. Hybrid control of an automotive robotized gearbox for reduction of consumptions and emissions. In: Hybrid Systems: Computation and Control. Vol. 2623 of Lecture Notes in Computer Science. Springer Verlag, pp. 81–96.
Bemporad, A., Morari, M., 1999. Control of systems integrating logic, dynamics, and constraints. Automatica 35 (3), 407–427.
Bertsekas, D. P., 1999. Nonlinear programming, second edition. Athena Scientific.
Borrelli, F., 2003. Constrained optimal control of linear and hybrid systems. Vol. 290 of Lecture Notes in Control and Information Sciences. Springer.
Camacho, E. F., Bordons, C., 2004. Model Predictive Control. Springer-Verlag, London.
Chen, H., Allgöwer, F., 1996. A quasi-infinite horizon predictive control scheme for constrained nonlinear systems. In: 16th Chinese Control Conference. Qingdao, China, pp. 309–316.
Chen, H., Scherer, C. W., 2006a. Moving horizon H∞ control with performance adaptation for constrained linear systems. Automatica 42 (6), 1033–1040.

Chen, H., Scherer, C. W., 2006b. Moving horizon H∞ with performance adaptation for constrained linear systems. Automatica 42, 1033–1040.
Clarke, D. W., Mohtadi, C., Tuffs, P. S., 1987. Generalized predictive control: I - The basic algorithm and II - Extensions and interpretations. Automatica 23, 137–160.
Cutler, C. R., Ramaker, B. L., 1980. Dynamic matrix control - A computer control algorithm. In: Joint Automatic Control Conference. San Francisco, U.S.A.
Daafouz, J., Riedinger, P., Iung, C., 2002. Stability analysis and control synthesis for switched systems: A switched Lyapunov function approach. IEEE Transactions on Automatic Control 47 (11), 1883–1887.
De Keyser, R. M. C., van Cauwenberghe, A. R., 1985. Extended prediction self-adaptive control. In: IFAC Symposium on Identification and System Parameter Estimation. York, U.K., pp. 1255–1260.
Feng, G., 2002. Stability analysis of piecewise discrete-time linear systems. IEEE Transactions on Automatic Control 47 (7), 1108–1112.
Ferrari-Trecate, G., Cuzzola, F. A., Mignone, D., Morari, M., 2002. Analysis of discrete-time piecewise affine and hybrid systems. Automatica 38 (12), 2139–2146.
Findeisen, R., Imsland, L., Allgöwer, F., Foss, B. A., 2003. State and output feedback nonlinear model predictive control: An overview. European Journal of Control 9 (2-3), 190–206.
Freeman, H., 1965. Discrete-time systems. John Wiley & Sons, Inc.
Garcia, C. E., Prett, D. M., Morari, M., 1989. Model predictive control: theory and practice - a survey. Automatica 25 (3), 335–348.
Geyer, T., Papafotiou, G., Morari, M., 2005. Model Predictive Control in Power Electronics: A Hybrid Systems Approach. In: IEEE Conference on Decision and Control. Seville, Spain.
Golub, G. H., Van Loan, C. F., 1989. Matrix computations. The Johns Hopkins University Press.
Goodwin, G. C., Seron, M. M., De Dona, J. A., 2005. Constrained control and estimation - An optimization approach. Communications and Control Engineering. Springer.

Grieder, P., Kvasnica, M., Baotic, M., Morari, M., 2005. Stabilizing low complexity feedback control of constrained piecewise affine systems. Automatica 41 (10), 1683–1694.
Grimm, G., Messina, M. J., Tuna, S., Teel, A. R., 2005. Model predictive control: for want of a local control Lyapunov function, all is not lost. IEEE Transactions on Automatic Control 50 (5), 546–558.
Grimm, G., Messina, M. J., Tuna, S., Teel, A. R., 2007. Nominally robust model predictive control with state constraints. IEEE Transactions on Automatic Control 52 (10), 1856–1870.
Grimm, G., Messina, M. J., Tuna, S. E., Teel, A. R., 2003. Nominally robust model predictive control with state constraints. In: 42nd IEEE Conference on Decision and Control. Maui, Hawaii, pp. 1413–1418.
Grimm, G., Messina, M. J., Tuna, S. E., Teel, A. R., 2004. Examples when nonlinear model predictive control is nonrobust. Automatica 40 (10), 1729–1738.
Hahn, W., 1967. Stability of motion. Springer-Verlag.
Heemels, W. P. M. H., De Schutter, B., Bemporad, A., 2001. Equivalence of hybrid dynamical models. Automatica 37 (7), 1085–1091.
Jiang, Z.-P., 1993. Quelques résultats de stabilisation robuste. Application à la commande (in French). Ph.D. thesis, École des Mines de Paris, France.
Jiang, Z.-P., Mareels, I., Wang, Y., 1996. A Lyapunov formulation of the nonlinear small-gain theorem for interconnected ISS systems. Automatica 32 (8), 1211–1215.
Jiang, Z.-P., Teel, A. R., Praly, L., 1994. Small gain theorem for ISS systems and applications. Mathematics of Control, Signals and Systems 7, 95–120.
Jiang, Z.-P., Wang, Y., 2001. Input-to-state stability for discrete-time nonlinear systems. Automatica 37, 857–869.
Johansson, M., 1999. Piecewise linear control systems. Ph.D. thesis, Lund Institute of Technology, Sweden.
Kalman, R. E., Bertram, J. E., 1960a. Control system analysis and design via the second method of Lyapunov, I: Continuous-time systems. Transactions of the ASME, Journal of Basic Engineering 82, 371–393.

Kalman, R. E., Bertram, J. E., 1960b. Control system analysis and design via the second method of Lyapunov, II: Discrete-time systems. Transactions of the ASME, Journal of Basic Engineering 82, 394–400.
Kaminer, I., Khargonekar, P. P., Rotea, M. A., 1993. Mixed H2/H∞ control for discrete time systems via convex optimization. Automatica 29, 57–70.
Kassakian, J. G., Schlecht, M. F., Verghese, G. C., 1992. Principles of Power Electronics. Addison-Wesley Publishing Company, Inc.
Keerthi, S. S., Gilbert, E. G., 1988. Optimal, infinite horizon feedback laws for a general class of constrained discrete time systems: Stability and moving-horizon approximations. Journal of Optimization Theory and Applications 57 (2), 265–293.
Kellett, C. M., Teel, A. R., 2004. Smooth Lyapunov functions and robustness of stability for difference inclusions. Systems & Control Letters 52, 395–405.
Kerrigan, E. C., Maciejowski, J. M., 2001. Robust feasibility in model predictive control: Necessary and sufficient conditions. In: 40th IEEE Conference on Decision and Control. Orlando, Florida, pp. 728–733.
Kerrigan, E. C., Mayne, D. Q., 2002. Optimal control of constrained, piecewise affine systems with bounded disturbances. In: 41st IEEE Conference on Decision and Control. Las Vegas, Nevada, pp. 1552–1557.
Khalil, H., 2002. Nonlinear Systems, Third Edition. Prentice Hall.
Kokotović, P., Arcak, M., 2001. Constructive nonlinear control: a historical perspective. Automatica 37 (5), 637–662.
Kolmanovsky, I., Gilbert, E. G., 1998. Theory and computation of disturbance invariant sets for discrete-time linear systems. Mathematical Problems in Engineering 4, 317–367.
Kothare, M. V., Balakrishnan, V., Morari, M., 1996. Robust constrained model predictive control using linear matrix inequalities. Automatica 32 (10), 1361–1379.
LaSalle, J. P., 1976. The stability of dynamical systems. In: SIAM (Ed.), Regional Conference Series in Applied Mathematics. No. 25. Philadelphia.

Lazar, M., De Keyser, R., 2004. Nonlinear predictive control of a DC-to-DC converter. In: Symposium on Power Electronics, Electrical Drives, Automation & Motion - SPEEDAM. Capri, Italy.
Lazar, M., Heemels, W. P. M. H., 2008a. Global input-to-state stability and stabilization of discrete-time piecewise affine systems. Nonlinear Analysis: Hybrid Systems 2, 721–734.
Lazar, M., Heemels, W. P. M. H., 2008b. Optimized input-to-state stabilization of discrete-time nonlinear systems with bounded inputs. In: American Control Conference. Seattle, U.S.A.
Lazar, M., Heemels, W. P. M. H., 2008c. Predictive control of hybrid systems: Stability results for sub-optimal solutions. In: 17th IFAC World Congress. Seoul, Korea.
Lazar, M., Heemels, W. P. M. H., 2009. Predictive control of hybrid systems: Input-to-state stability results for sub-optimal solutions. Automatica 45 (1), 180–185.
Lazar, M., Heemels, W. P. M. H., Bemporad, A., Weiland, S., 2007a. Discrete-time non-smooth nonlinear MPC: Stability and robustness. In: Assessment and Future Directions of Nonlinear Model Predictive Control. Vol. 358 of Lecture Notes in Control and Information Sciences. Springer Berlin Heidelberg, pp. 93–103.
Lazar, M., Heemels, W. P. M. H., Jokic, A., 2009a. Self-optimizing robust nonlinear model predictive control. In: Assessment and Future Directions of Nonlinear Model Predictive Control. Vol. 384 of Lecture Notes in Control and Information Sciences. Springer Berlin Heidelberg, pp. 27–40.
Lazar, M., Heemels, W. P. M. H., Muñoz de la Peña, D., Alamo, T., 2009b. Further results on robust MPC using linear matrix inequalities. In: Assessment and Future Directions of Nonlinear Model Predictive Control. Vol. 384 of Lecture Notes in Control and Information Sciences. Springer Berlin Heidelberg, pp. 89–98.
Lazar, M., Heemels, W. P. M. H., Teel, A. R., 2007b. Subtleties in robust stability of discrete-time piecewise affine systems. In: American Control Conference. New York City, U.S.A., pp. 3464–3469.
Lazar, M., Heemels, W. P. M. H., Teel, A. R., 2009c. Lyapunov functions, stability and input-to-state stability subtleties for discrete-time discontinuous systems. Accepted for publication in IEEE Transactions on Automatic Control.
Lazar, M., Heemels, W. P. M. H., Weiland, S., Bemporad, A., 2006. Stabilizing model predictive control of hybrid systems. IEEE Transactions on Automatic Control 51 (11), 1813–1818.
Lazar, M., Heemels, W. P. M. H., Weiland, S., Bemporad, A., Pastravanu, O., 2005. Infinity norms as Lyapunov functions for model predictive control of constrained PWA systems. In: Hybrid Systems: Computation and Control. Vol. 3414 of Lecture Notes in Computer Science. Springer Verlag, Zürich, Switzerland, pp. 417–432.
Lazar, M., Muñoz de la Peña, D., Heemels, W. P. M. H., Alamo, T., 2008a. On input-to-state stability of min-max nonlinear model predictive control. Systems & Control Letters 57, 39–48.
Lazar, M., Roset, B. J. P., Heemels, W. P. M. H., Nijmeijer, H., van den Bosch, P. P. J., 2008b. Input-to-state stabilizing sub-optimal nonlinear MPC algorithms with an application to DC-DC converters. International Journal of Robust and Nonlinear Control 18 (8), 890–904.
Lee, J. H., Yu, Z., 1997. Worst-case formulations of model predictive control for systems with bounded parameters. Automatica 33 (5), 763–781.
Leenaerts, D. M. W., 1996. Further extensions to Chua's explicit piecewise linear function descriptions. International Journal of Circuit Theory and Applications 24, 621–633.
Limon, D., Alamo, T., Camacho, E. F., 2002a. Input-to-state stable MPC for constrained discrete-time nonlinear systems with bounded additive uncertainties. In: 41st IEEE Conference on Decision and Control. Las Vegas, Nevada, pp. 4619–4624.
Limon, D., Alamo, T., Camacho, E. F., 2002b. Stability analysis of systems with bounded additive uncertainties based on invariant sets: Stability and feasibility of MPC. In: American Control Conference. Anchorage, pp. 364–369.
Limon, D., Alamo, T., Salas, F., Camacho, E. F., 2006. Input-to-state stability of min-max MPC controllers for nonlinear systems with bounded uncertainties. Automatica 42, 797–803.

Magni, L., De Nicolao, G., Magnani, L., Scattolini, R., 2001. A stabilizing model-based predictive control algorithm for nonlinear systems. Automatica 37 (9), 1351–1362.
Magni, L., De Nicolao, G., Scattolini, R., 1998. Output feedback receding-horizon control of discrete-time nonlinear systems. In: 4th IFAC NOLCOS. Vol. 2. Oxford, UK, pp. 422–427.
Magni, L., De Nicolao, G., Scattolini, R., Allgöwer, F., 2003. Robust model predictive control for nonlinear discrete-time systems. International Journal of Robust and Nonlinear Control 13, 229–246.
Magni, L., Raimondo, D. M., Scattolini, R., 2006. Regional input-to-state stability for nonlinear model predictive control. IEEE Transactions on Automatic Control 51 (9), 1548–1553.
Magni, L., Scattolini, R., 2007. Robustness and robust design of MPC for nonlinear discrete-time systems. In: Assessment and Future Directions of Nonlinear MPC. Vol. 358 of LNCIS. Springer Verlag, pp. 239–254.
Mayne, D. Q., 2001. Control of constrained dynamic systems. European Journal of Control 7, 87–99.
Mayne, D. Q., Kerrigan, E. C., 2007. Tube-based robust nonlinear model predictive control. In: 7th IFAC Symposium on Nonlinear Control Systems. Pretoria, South Africa, pp. 110–115.
Mayne, D. Q., Rawlings, J. B., Rao, C. V., Scokaert, P. O. M., 2000. Constrained model predictive control: Stability and optimality. Automatica 36 (6), 789–814.
Meadows, E. S., Henson, M. A., Eaton, J. W., Rawlings, J. B., 1995. Receding horizon control and discontinuous state feedback stabilization. International Journal of Control 62 (5), 1217–1229.
Michalska, H., Mayne, D. Q., 1993. Robust receding horizon control of constrained nonlinear systems. IEEE Transactions on Automatic Control 38 (11), 1623–1633.
Mignone, D., Ferrari-Trecate, G., Morari, M., 2000. Stability and stabilization of piecewise affine and hybrid systems: An LMI approach. In: 39th IEEE Conference on Decision and Control. pp. 504–509.

Molchanov, A. P., 1987. Lyapunov functions for nonlinear discrete-time control systems. Avtomatika i Telemekhanika 6, 26–35.
Mosca, E., Zappa, G., Manfredi, C., 1984. Multistep horizon self-tuning controllers. In: 9th IFAC World Congress. Budapest, Hungary.
Nesic, D., Teel, A. R., 2001. Changing supply functions in input to state stable systems: the discrete-time case. IEEE Transactions on Automatic Control 46 (6), 960–962.
Nesic, D., Teel, A. R., Kokotovic, P. V., 1999. Sufficient conditions for the stabilization of sampled-data nonlinear systems via discrete-time approximations. Systems and Control Letters 38 (4-5), 259–270.
Qin, S. J., Badgwell, T. A., 2003. A survey of industrial model predictive control technology. Control Engineering Practice 11, 733–764.
Raimondo, D. M., Limon, D., Lazar, M., Magni, L., Camacho, E. F., 2009. Min-max model predictive control of nonlinear systems: A unifying overview on stability. European Journal of Control 15 (1), 1–17.
Raković, S. V., 2008. Set theoretic methods in model predictive control. In: 3rd Int. Workshop on Assessment and Future Directions of NMPC. Pavia, Italy.
Richalet, J. A., Rault, A., Testud, J. L., Papon, J., 1978. Model predictive heuristic control: applications to an industrial process. Automatica 14, 413–428.
Scokaert, P. O. M., Mayne, D. Q., 1998. Min-max feedback model predictive control for constrained linear systems. IEEE Transactions on Automatic Control 43 (8), 1136–1142.
Scokaert, P. O. M., Mayne, D. Q., Rawlings, J. B., 1999. Suboptimal model predictive control (feasibility implies stability). IEEE Transactions on Automatic Control 44 (3), 648–654.
Scokaert, P. O. M., Rawlings, J. B., 1998. Constrained linear quadratic regulation. IEEE Transactions on Automatic Control 43 (8), 1163–1169.
Scokaert, P. O. M., Rawlings, J. B., Meadows, E. B., 1997. Discrete-time stability with perturbations: Application to model predictive control. Automatica 33 (3), 463–470.

Soeterboek, R., 1992. Predictive Control - A Unified Approach. Englewood Cliffs, NJ: Prentice-Hall.
Sontag, E. D., 1981. Nonlinear regulation: the piecewise linear approach. IEEE Transactions on Automatic Control 26 (2), 346–357.
Sontag, E. D., 1989. Smooth stabilization implies coprime factorization. IEEE Transactions on Automatic Control AC-34, 435–443.
Sontag, E. D., 1990. Further facts about input to state stabilization. IEEE Transactions on Automatic Control AC-35, 473–476.
Sontag, E. D., 1999. Stability and stabilization: Discontinuities and the effect of disturbances. In: Nonlinear Analysis, Differential Equations, and Control. Clarke, F. H., Stern, R. J. (eds.), Kluwer, Dordrecht, pp. 551–598.
Spjøtvold, J., Kerrigan, E. C., Rakovic, S. V., Mayne, D. Q., Johansen, T. A., 2007. Inf-sup control of discontinuous piecewise affine systems. In: European Control Conference. Kos, Greece.
Vidyasagar, M., 1993. Nonlinear Systems Analysis (Second Edition). Prentice-Hall, Englewood Cliffs, NJ.
Wang, Y. J., Rawlings, J. B., 2004. A new robust model predictive control method I: theory and computation. Journal of Process Control 14, 231–247.
Wayne S.U. Mathematical Dept., C. R., 1972. Every convex function is locally Lipschitz. The American Mathematical Monthly 79 (10), 1121–1124.
Willems, J. L., 1970. Stability theory of dynamical systems. Thomas Nelson and Sons, London, England.