C21 Model Predictive Control


Mark Cannon, 4 lectures, Hilary Term 2016

Lecture 1: Introduction

Organisation

4 lectures:
- Week 3: Monday 10-11 am, LR5; Thursday 10-11 am, LR5
- Week 4: Monday 10-11 am, LR5; Thursday 4-5 pm, LR2 (place/time changed)

1 example sheet; 1 class (for 2 groups) by the end of week 5

Course outline
1. Introduction to MPC and constrained control
2. Prediction and optimization
3. Closed-loop properties
4. Disturbances and integral action
5. Robust tube MPC

Books
1. B. Kouvaritakis and M. Cannon, Model Predictive Control: Classical, Robust and Stochastic. Springer, 2015. Recommended reading: Chap. 1, 2 & 3
2. J.B. Rawlings and D.Q. Mayne, Model Predictive Control: Theory and Design. Nob Hill Publishing, 2009
3. J.M. Maciejowski, Predictive Control with Constraints. Prentice Hall, 2002. Recommended reading: Chap. 1-3, 6 & 8

Introduction

Classical controller design:
1. Determine plant model
2. Design controller (e.g. PID)
3. Apply controller (the model is then discarded)

Model predictive control (MPC):
1. Use the model x_{k+1} = f(x_k, u_k) to predict system behaviour
2. Choose the optimal trajectory (user-defined optimality criterion)
3. Repeat the procedure at each step (feedback)

Overview of MPC

Model predictive control strategy:
1. Prediction
2. Online optimization
3. Receding horizon implementation

1. Prediction
Plant model: x_{k+1} = f(x_k, u_k)
Simulate forward in time (over a prediction horizon of N steps):
the input sequence u_k = (u_{0|k}, u_{1|k}, ..., u_{N-1|k}) defines the state sequence x_k = (x_{1|k}, x_{2|k}, ..., x_{N|k})
Notation: (u_{i|k}, x_{i|k}) = prediction i steps ahead, evaluated at time k

2. Optimization
Predicted quality criterion/cost: J_k = sum_{i=0}^{N} l_i(x_{i|k}, u_{i|k}), where l_i(x, u) is the stage cost
Solve numerically to determine the optimal input sequence: u*_k = arg min_{u_k} J_k

3. Implementation
Use the first element of u*_k as the actual plant input: u_k = u*_{0|k}
Repeat the optimization at each sampling instant k = 0, 1, ...
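To make the prediction / optimization / implementation loop concrete, here is a minimal Python sketch of a receding-horizon controller. It is an illustration only, not the course's reference implementation: the model and cost values echo the numerical example later in this lecture, the terminal cost term is omitted for brevity, and a general-purpose optimizer stands in for the structured solvers discussed in Lecture 2.

```python
import numpy as np
from scipy.optimize import minimize

def f(x, u):
    # Prediction model x_{k+1} = A x_k + B u_k (placeholder values from the lecture example)
    A = np.array([[1.1, 2.0], [0.0, 0.95]])
    B = np.array([[0.0], [0.0787]])
    return A @ x + B @ u

def stage_cost(x, u):
    # Stage cost l(x, u) = y^2 + u^2 with y = C x (terminal term omitted in this sketch)
    C = np.array([[-1.0, 1.0]])
    y = C @ x
    return float(y @ y + u @ u)

def predicted_cost(u_seq, x0, N):
    # 1. Prediction: simulate the model over the horizon and accumulate stage costs
    x, J = x0, 0.0
    for i in range(N):
        u = u_seq[i:i + 1]
        J += stage_cost(x, u)
        x = f(x, u)
    return J

def mpc_step(x0, N, u_init):
    # 2. Optimization: minimise the predicted cost over the input sequence
    res = minimize(predicted_cost, u_init, args=(x0, N))
    u_opt = res.x
    # 3. Implementation: apply only the first element, keep the rest for warm-starting
    return u_opt[0], u_opt

# Closed loop: repeat the optimization at every sampling instant (receding horizon)
x, N = np.array([0.5, -0.5]), 3
u_seq = np.zeros(N)
for k in range(10):
    u0, u_seq = mpc_step(x, N, u_seq)
    x = f(x, np.array([u0]))
```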

Overview of MPC

[Figure: predicted input and state trajectories over the prediction horizons at times k and k+1]

Optimization is repeated online at each sampling instant k = 0, 1, ... (receding horizon):
at time k:   u_k = (u_{0|k}, ..., u_{N-1|k}),   x_k = (x_{1|k}, ..., x_{N|k})
at time k+1: u_{k+1} = (u_{0|k+1}, ..., u_{N-1|k+1}),   x_{k+1} = (x_{1|k+1}, ..., x_{N|k+1})
...

This provides feedback, which
- reduces the effects of model error and measurement noise, and
- compensates for the finite number of free variables in the predictions,
and so improves closed-loop performance.

Example

Plant model:
x_{k+1} = [1.1  2; 0  0.95] x_k + [0; 0.0787] u_k
y_k = [-1  1] x_k

Cost: J_k = sum_{i=0}^{N-1} ( y_{i|k}^2 + u_{i|k}^2 ) + y_{N|k}^2

Prediction horizon: N = 3
Free variables in predictions: u_k = (u_{0|k}, u_{1|k}, u_{2|k})

[Figure: predicted (at k = 0) and closed-loop input and output responses over 10 samples]
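As a worked counterpart to this example, the sketch below stacks the N = 3 predicted outputs into y = G x_k + H u and minimises the quadratic cost in closed form (no constraints yet). The matrices are those read from the slide; the initial state is a hypothetical value for illustration.

```python
import numpy as np

# Model and horizon from the example (values as read from the slide)
A = np.array([[1.1, 2.0], [0.0, 0.95]])
B = np.array([[0.0], [0.0787]])
C = np.array([[-1.0, 1.0]])
N = 3

# Stacked predictions: y_{i|k} = C A^i x_k + sum_j C A^(i-1-j) B u_{j|k},
# so that y = G x_k + H u with u = (u_{0|k}, ..., u_{N-1|k})
G = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(N + 1)])
H = np.zeros((N + 1, N))
for i in range(1, N + 1):
    for j in range(i):
        H[i, j] = (C @ np.linalg.matrix_power(A, i - 1 - j) @ B).item()

# Cost J_k = sum_{i=0}^{N-1} (y_i^2 + u_i^2) + y_N^2 = ||G x + H u||^2 + ||u||^2
# (the y_0 term depends only on x_k, so it does not affect the minimiser)
x0 = np.array([[0.5], [-0.5]])              # hypothetical initial state
Phi = H.T @ H + np.eye(N)
u_opt = -np.linalg.solve(Phi, H.T @ G @ x0) # unconstrained optimal input sequence
print("optimal predicted input sequence:", u_opt.ravel())
```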

Motivation for MPC

Advantages:
- Flexible plant model, e.g. multivariable, linear or nonlinear, deterministic, stochastic or fuzzy
- Handles constraints on control inputs and states, e.g. actuator capability, safety, environmental and economic constraints
- Approximately optimal control

Disadvantages:
- Requires online optimization, e.g. large computation for nonlinear and uncertain systems

Historical development: the control strategy has been reinvented several times:
- LQ(G) optimal control: 1950s-1980s
- industrial process control: 1980s
- constrained nonlinear control: 1990s-today

Development of commercial MPC algorithms:
[Figure: approximate genealogy of linear MPC algorithms (LQG, IDCOM, DMC, QDMC, IDCOM-M, HIECON, SMCA, SMOC, PFC, PCT, RMPC, RMPCT, Connoisseur, DMC+), spanning 1st- to 4th-generation MPC, 1960-2000; from Qin & Badgwell, 2003]

Applications: Process Control

Steel hot rolling mill
[Figure: hot rolling mill schematic showing the manipulated inputs and the measured outputs σ and t]
Characteristics: MIMO, constraints, delays
Objectives: control residual stress σ and thickness t

Applications: Chemical Process Control

Applications of predictive control to more than 4,500 different chemical processes (based on a 2006 survey)
MPC applications in the chemical industry [from Nagy, 2006]
Characteristics: area-wide application

Applications: Electromechanical systems

Variable-pitch wind turbines
Characteristics: stochastic uncertainty, fatigue constraints

Predictive swing-up and balancing controllers
Characteristics: reference tracking, short sampling intervals

Autonomous racing for remote-controlled cars

Prediction model

Linear plant model: x_{k+1} = A x_k + B u_k
The predicted x_k depends linearly on u_k [details in Lecture 2]. Therefore the cost is quadratic in u_k:
J_k = u_k^T H u_k + 2 f^T u_k + g(x_k)
and the constraints are linear:
A_c u_k ≤ b(x_k)
The online optimization
min_u  u^T H u + 2 f^T u   subject to   A_c u ≤ b
is a convex Quadratic Program (QP), which can be solved reliably and efficiently.

Nonlinear plant model: x_{k+1} = f(x_k, u_k)
The predicted x_k depends nonlinearly on u_k, so in general the cost J_k(x_k, u_k) is nonconvex in u_k and the constraints g_c(x_k, u_k) ≤ 0 are nonconvex.
The online optimization
min_u  J_k(x_k, u)   subject to   g_c(x_k, u) ≤ 0
is nonconvex: it may have local minima and may not be solvable efficiently or reliably.
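For the linear case, the online QP can be posed directly in an off-the-shelf convex solver. The sketch below uses CVXPY with made-up values for H, f, A_c and b, purely to illustrate the problem structure; in practice these quantities are built from the prediction matrices and the current state x_k (Lecture 2).

```python
import numpy as np
import cvxpy as cp

# Illustrative QP data (hypothetical values, not taken from the lecture)
N = 3
H = np.array([[2.0, 0.5, 0.0],
              [0.5, 2.0, 0.5],
              [0.0, 0.5, 2.0]])           # positive-definite Hessian
f = np.array([1.0, -0.5, 0.2])
A_c = np.vstack([np.eye(N), -np.eye(N)])  # box constraints |u_i| <= 1
b = np.ones(2 * N)

u = cp.Variable(N)
objective = cp.Minimize(cp.quad_form(u, H) + 2 * f @ u)
constraints = [A_c @ u <= b]
prob = cp.Problem(objective, constraints)
prob.solve()
print("optimal input sequence:", u.value)
```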

Prediction model

Discrete time prediction model:
- Predictions are optimized periodically at t = 0, T, 2T, ...
- Usually T = T_s = sampling interval of the model
- But T = n T_s for integer n > 1 is also possible, e.g. if T_s < the time required to perform the online optimization
- Choosing n to be an integer means that a time-shifted version of the optimal input sequence at time k can be implemented at time k+1 (this allows a guarantee of stability [Lecture 3]);
  e.g. if n = 1, then u_{k+1} = (u*_{1|k}, ..., u*_{N-1|k}, u_{N|k}) is possible, where u*_k = (u*_{0|k}, u*_{1|k}, ..., u*_{N-1|k})

Continuous time prediction model:
- The predicted u(t) need not be piecewise constant, e.g. a 1st-order hold gives a continuous, piecewise linear u(t), or u(t) can be a polynomial in t (piecewise quadratic, cubic, etc.)
- A continuous time prediction model can be integrated online, which is useful for nonlinear continuous time systems

This course: discrete-time model and T = T_s assumed.
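The time-shifted ("tail") sequence mentioned above is also what is typically used to warm-start the next optimization. A minimal sketch, assuming the appended final term simply repeats the last optimal input (one common choice; the slide leaves u_{N|k} unspecified, and Lecture 3 discusses principled choices):

```python
import numpy as np

def shift_tail(u_opt):
    """Candidate input sequence for time k+1, formed from the optimal
    sequence u*_k = (u*_{0|k}, ..., u*_{N-1|k}) computed at time k.
    The last element is repeated here as a simple placeholder for u_{N|k}."""
    return np.concatenate([u_opt[1:], u_opt[-1:]])

u_star_k = np.array([0.8, 0.3, -0.1])   # example optimal sequence at time k
u_init_k1 = shift_tail(u_star_k)        # warm start for the optimization at time k+1
print(u_init_k1)                        # [ 0.3 -0.1 -0.1]
```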

Constraints

Constraints are present in almost all control problems.

Input constraints, e.g. box constraints:
u_min ≤ u_k ≤ u_max                  (absolute)
Δu_min ≤ u_k - u_{k-1} ≤ Δu_max      (rate)
These are typically active during transients, e.g. valve saturation or d.c. motor saturation.

State constraints, e.g. box (linear) constraints:
x_min ≤ x_k ≤ x_max
These can be active during transients, e.g. aircraft stall speed, and in steady state, e.g. economic constraints.

Classify constraints as either hard or soft:
- Hard constraints must be satisfied at all times; if this is not possible, then the problem is infeasible.
- Soft constraints can be violated to avoid infeasibility.
Strategies for handling soft constraints: impose (hard) constraints on the probability of violating each soft constraint, or remove active constraints until the problem becomes feasible.

This course: only hard constraints are considered.
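In the linear-model QP these box and rate constraints become linear inequalities in the stacked input vector. A brief sketch of how they could be written in CVXPY, with hypothetical bounds and a trivial objective standing in for the MPC cost:

```python
import numpy as np
import cvxpy as cp

N = 5
u = cp.Variable(N)
u_prev = 0.0                      # input applied at time k-1 (assumed known)
u_max, du_max = 1.0, 0.3          # hypothetical absolute and rate limits

constraints = []
constraints += [u <= u_max, u >= -u_max]          # absolute (box) limits
constraints += [cp.abs(cp.diff(u)) <= du_max]     # rate limits within the horizon
constraints += [cp.abs(u[0] - u_prev) <= du_max]  # rate limit w.r.t. the last applied input

# These constraints would be combined with the quadratic MPC cost; a placeholder objective:
objective = cp.Minimize(cp.sum_squares(u))
cp.Problem(objective, constraints).solve()
print(u.value)
```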

Constraint handling

Suboptimal methods for handling input constraints:
(a) Saturate the unconstrained control law; constraints are then usually ignored in controller design
(b) De-tune the unconstrained control law; increase the penalty on u in the performance objective
(c) Use an anti-windup strategy to put limits on the state of a dynamic controller (typically the integral term of a PI or PID controller)

Effects of input saturation, u_min ≤ u_k ≤ u_max:
unconstrained control law: u = u*
saturated control law: u = max{ min{u*, u_max}, u_min }
Example: (A, B, C) as before, u_min = -1, u_max = 1, u* = K_LQ x
Input saturation causes poor performance and possible instability (since the open-loop system is unstable).
[Figure: input and output responses, saturated LQR vs unconstrained LQR, over 40 samples]
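A minimal sketch of method (a), saturating an LQ control law, for the example model. The weights Q, R, the initial state and the simulation length are assumptions for illustration; the gain is computed from the discrete-time Riccati equation rather than taken from the slide.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.1, 2.0], [0.0, 0.95]])
B = np.array([[0.0], [0.0787]])
C = np.array([[-1.0, 1.0]])
Q, R = C.T @ C, np.array([[1.0]])     # hypothetical LQ weights (output and input penalties)

# Discrete-time LQ gain: u = -K_lq x, with P solving the algebraic Riccati equation
P = solve_discrete_are(A, B, Q, R)
K_lq = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

u_min, u_max = -1.0, 1.0
x = np.array([[0.5], [-0.5]])
for k in range(40):
    u_unc = (-K_lq @ x).item()                 # unconstrained LQ input
    u_sat = min(max(u_unc, u_min), u_max)      # saturated input actually applied
    x = A @ x + B * u_sat
    # With saturation active, performance degrades and the open-loop unstable
    # plant may even be destabilised for large enough initial states.
```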

Constraint handling

De-tuning of the optimal control law:
K_LQ = optimal gain for the LQ cost J = sum_{k=0}^{inf} ( ||x_k||_Q^2 + ||u_k||_R^2 )
Increase R until u = K_LQ x satisfies the constraints for all initial conditions.
Example: (A, B, C) as before, with R increased from 10^{-2} to 10^{3}:
- settling time increases from 6 to 40 samples
- y(t) converges slowly, but stability can be ensured
[Figure: input and output responses for the LQ controller with small and large R, over 60 samples]

Constraint handling

Anti-windup prevents instability of the controller while the input is saturated. Many possible approaches, e.g. anti-windup PI controller:
u = max{ min{ K e + v, u_max }, u_min },   with   T_i dv/dt + v = u
- If u_min < u < u_max, this reduces to the PI law u = K ( e + (1/T_i) ∫_0^t e dτ )
- If u = u_max or u = u_min, then v(t) tends to u_max or u_min exponentially
The strategy is suboptimal and may not prevent instability.
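A minimal discrete-time sketch of the anti-windup PI idea above, using a forward-Euler discretisation of T_i dv/dt + v = u. The gains, limits, sampling time and error sequence are hypothetical values for illustration.

```python
import numpy as np

def antiwindup_pi(errors, K=1.0, Ti=2.0, Ts=0.1, u_min=-1.0, u_max=1.0):
    """Anti-windup PI controller: u = sat(K*e + v), with the filter
    Ti*dv/dt + v = u driven by the *saturated* input, so v tracks the
    saturation limit instead of winding up while the input is saturated."""
    v, u_hist = 0.0, []
    for e in errors:
        u_unsat = K * e + v
        u = min(max(u_unsat, u_min), u_max)   # saturated control input
        v = v + (Ts / Ti) * (u - v)           # forward-Euler update of Ti*v' + v = u
        u_hist.append(u)
    return np.array(u_hist)

# Example: a large sustained error saturates the input; v settles near u_max
# rather than accumulating, so recovery after the error changes sign is fast.
e_seq = np.concatenate([np.full(50, 2.0), np.full(50, -0.2)])
print(antiwindup_pi(e_seq)[:5])
```

When the input is inside its limits, the update of v reproduces the integral term of the PI law; when it saturates, v converges to the saturation limit instead of growing without bound, which is exactly the anti-windup behaviour described on the slide.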

Constraint handling

Anti-windup is based on the past behaviour of the system, whereas MPC optimizes future performance.
Example: (A, B, C) as before, MPC vs saturated LQ (both based on the cost J):
- settling time reduced to 20 samples by MPC
- stability is guaranteed with MPC
[Figure: input and output responses, MPC (N = 16) vs saturated LQR, over 40 samples]

Summary

- Predict performance using a plant model, e.g. linear or nonlinear, discrete or continuous time
- Optimize the future (open-loop) control sequence: computationally much easier than optimizing over feedback laws
- Implement the first sample, then repeat the optimization: this provides feedback to reduce the effect of uncertainty
- Comparison of common methods of handling constraints: saturation, de-tuning, anti-windup, MPC