Formulations of Model Predictive Control
Riccardo Scattolini
Dipartimento di Elettronica e Informazione
Impulse and step response models
At the beginning of the '80s, the early formulations of MPC were not based on state-space models, but on impulse response models and step response models. Most of the existing industrial packages for MPC are still based on these models. The main idea for the development of the algorithms is always the same: to describe the future evolution of the output in terms of quantities known at time k (past inputs and outputs, current state and output) and of current and future control moves, which represent the variables to be determined through optimization.
Impulse response models - 1
Consider the SISO state-space system x(k+1) = A x(k) + B u(k), y(k) = C x(k). Its equivalent impulse response representation is
y(k) = Σ_{i=1..∞} h_i u(k−i),  h_i = C A^(i−1) B
For asymptotically stable systems h_i → 0 as i → ∞, and an approximate representation in terms of a finite impulse response (FIR) model of length M is
y(k) ≈ Σ_{i=1..M} h_i u(k−i)
Impulse response models - 2
Therefore, for a prediction horizon N < M, the predicted outputs y(k+1), …, y(k+N) can be collected in matrix form as the sum of a forced response, depending on the current and future control moves, and a free response, depending on the past inputs only.
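The matrix form above can be sketched as follows (a minimal illustration: the function names, the Toeplitz layout of G and the free-response term f are illustrative assumptions, not notation from the slides):

```python
import numpy as np

def fir_prediction_matrix(h, N):
    """Forced-response matrix of an FIR model: entry (i, j) equals h_{i-j+1},
    the effect of the future input u(k+j) on the output y(k+i+1), so that
    Y_future = G @ U_future + free response."""
    h = np.asarray(h, dtype=float)
    G = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            G[i, j] = h[i - j]
    return G

def fir_free_response(h, u_past, N):
    """Free response of the FIR model y(k) = sum_{j=1..M} h_j u(k-j):
    contribution of the past inputs u_past = [u(k-1), ..., u(k-M+1)]
    to the predictions y(k+1), ..., y(k+N)."""
    h = np.asarray(h, dtype=float)
    M = len(h)
    f = np.zeros(N)
    for i in range(1, N + 1):           # prediction of y(k+i)
        for j in range(i + 1, M + 1):   # keep only terms involving past inputs
            f[i - 1] += h[j - 1] * u_past[j - i - 1]
    return f
```

For example, with h = (1, 0.5, 0.25) and N = 2, G is lower-triangular Toeplitz and f collects the tail of the convolution with the stored past inputs.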
Impulse response models - 3
From this expression it is easy to extend the previous developments to minimize the usual performance index, which leads, in the unconstrained case, to an explicit least-squares solution. The future control sequence does not depend on state and/or output variables: it is a (critical) open-loop solution.
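A sketch of the unconstrained solution, assuming a quadratic index with a forced-response matrix G and free response f as above (the function name and the weight lam are illustrative):

```python
import numpy as np

def open_loop_mpc(G, f, y_ref, lam=0.0):
    """Unconstrained minimizer of ||y_ref - (G u + f)||^2 + lam * ||u||^2.
    Note the 'open-loop' character: the solution depends only on the
    reference and on past inputs, with no feedback from measurements."""
    N = G.shape[1]
    return np.linalg.solve(G.T @ G + lam * np.eye(N), G.T @ (y_ref - f))
```

With lam = 0 and a square invertible G this simply inverts the forced-response map, which is exactly why the result is critical: any model mismatch is never corrected.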
Impulse response models - 4
To introduce a feedback term, assume that there exists a disturbance d acting on the output, constant over the prediction horizon, estimated as the difference between the measured output and the model output at time k. The predictions are then corrected with this estimate, so that the control sequence depends on the current measurement.
Step response models - 1
Consider again a SISO asymptotically stable system. Its impulse response coefficients h_i and step response coefficients s_i satisfy the relation h_i = s_i − s_{i−1} (equivalently s_i = Σ_{j=1..i} h_j). Then, letting Δu(k) = u(k) − u(k−1), one has y(k) = Σ_i s_i Δu(k−i). Considering the presence of a disturbance d acting on the output, the model is completed with an additive term d(k).
Step response models - 2
Assuming again that the disturbance is constant over the prediction horizon and estimated from the current measurement, and setting the free response as the term depending on past data, from the previous expressions one finally obtains the output predictions. As usual, the predicted output can be viewed as the sum of a term which depends on past values of the output and of the control increments, and a term which is a function of the variables to be selected through optimization. Starting from this expression it is possible to derive Dynamic Matrix Control (DMC), one of the most popular and widely applied MPC methods.
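The forced-response part of the DMC predictor is the so-called dynamic matrix, built directly from the step response coefficients (a sketch; the name and argument layout are illustrative):

```python
import numpy as np

def dynamic_matrix(s, N, Nu):
    """DMC dynamic matrix from step response coefficients s_1..s_N:
    entry (i, j) equals s_{i-j+1}, the effect of the control increment
    Du(k+j) on the predicted output y(k+i+1), over a control horizon Nu."""
    s = np.asarray(s, dtype=float)
    A = np.zeros((N, Nu))
    for i in range(N):
        for j in range(min(i + 1, Nu)):
            A[i, j] = s[i - j]
    return A
```

The predicted outputs are then A @ Du_future plus the free response, and the optimization is carried out over the Nu future control increments.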
Transfer function models - 1
Consider again a transfer function model (possibly identified), or its time-domain (difference equation) counterpart. A first approach consists of transforming it into state-space form (reachable or observable canonical form) and using the previous developments together with a Kalman predictor.
Transfer function models - 2
Another possibility is to define the non-minimal form whose state x collects past outputs and inputs and is therefore directly measurable. This approach can also be followed with (identified) NARX models.
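A sketch of this non-minimal realization for an ARX model (function name and state ordering are illustrative assumptions):

```python
import numpy as np

def nonminimal_model(a, b):
    """Non-minimal realization of the ARX model
        y(k+1) = -a1*y(k) - ... - a_na*y(k-na+1)
                 + b1*u(k) + ... + b_nb*u(k-nb+1)
    with measurable state x = [y(k), ..., y(k-na+1), u(k-1), ..., u(k-nb+1)].
    Returns the state-update map x(k+1) = step(x(k), u(k))."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    na, nb = len(a), len(b)

    def step(x, u):
        y_hist, u_hist = x[:na], x[na:]
        y_next = -np.dot(a, y_hist) + b[0] * u + np.dot(b[1:], u_hist)
        new_y = np.concatenate(([y_next], y_hist))[:na]   # shift output history
        new_u = np.concatenate(([u], u_hist))[:nb - 1]    # shift input history
        return np.concatenate((new_y, new_u))

    return step
```

No observer is needed: the state is made of past measurements, at the price of a larger (non-minimal) state dimension.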
Transfer function models - 3
In the past, the Generalized Predictive Control (GPC) approach has gained wide popularity. It is based on the stochastic (identified) CARIMA (ARIMAX) model
A(z) y(k) = B(z) u(k−1) + C(z) ξ(k)/Δ
where ξ is a white noise signal with zero mean value and Δ = 1 − z^(−1).
Transfer function models - 4
As usual, it is necessary to write the future output (or its expected value) as a function of the data known at k and of the future control increments. To derive the i-step-ahead predictor, define the polynomials E_i and F_i solving the Diophantine equation
C(z) = E_i(z) A(z) Δ + z^(−i) F_i(z)
(often with C = 1).
Transfer function models - 5
An example: the resulting system of linear equations is triangular, hence easy to solve in a recursive form, and the solution for a given i can also be used for i+1.
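The recursive solution amounts to polynomial long division. A minimal sketch, assuming C = 1 for simplicity (function name and return layout are illustrative):

```python
import numpy as np

def diophantine(a_tilde, i):
    """Solve 1 = E_i(z^-1) * a_tilde(z^-1) + z^-i * F_i(z^-1) by i steps of
    polynomial long division, with a_tilde = A(z^-1) * Delta (monic).
    Returns the coefficients of E_i (degree i-1) and F_i."""
    a = np.asarray(a_tilde, dtype=float)
    n = len(a)
    E = np.zeros(i)
    rem = np.zeros(n)
    rem[0] = 1.0                      # start by dividing the constant 1
    for j in range(i):
        e = rem[0]                    # next quotient coefficient
        E[j] = e
        rem = rem - e * a             # cancel the leading remainder term
        rem = np.roll(rem, -1)        # divide the remainder by z^-1
        rem[-1] = 0.0
    return E, rem[: n - 1]
```

Each loop iteration extends the solution from i to i+1, which is exactly the recursion exploited in GPC: for A = 1 − 0.8 z^(−1) one gets a_tilde = 1 − 1.8 z^(−1) + 0.8 z^(−2), E_1 = 1 and F_1 = 1.8 − 0.8 z^(−1).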
Transfer function models - 6
From the system and the Diophantine equation (taking C = 1 for simplicity) one obtains
y(k+i) = F_i(z) y(k) + E_i(z) B(z) Δu(k+i−1) + E_i(z) ξ(k+i)
Then, the prediction of y(k+i) is the sum of two terms: one depending on data available at k, and the noise term E_i ξ(k+i), which cannot be predicted at k.
Transfer function models - 7
Therefore, the minimization of the simple (single-stage) cost function is obtained by setting the predicted output equal to the reference, that is, a control law which depends on current and past values of y, on past control increments, and on the future control increments to be selected. For multistage cost functions the same arguments can be applied.
Extensions to the basic formulation
The main MPC algorithms (IDCOM, GPC, DMC, …) are characterized by a number of tricks which make them very different from a classical LQ algorithm:
- Control horizon
- Minimum prediction horizon
- Reference filtering
- Filtering of disturbances
- High level optimization
Control horizon - 1
If the prediction horizon N is sufficiently large, the number of optimization variables (the future control increments) can make the optimization problem difficult to solve. For this reason, and to obtain a smoother control action, it is often assumed that the control variables remain constant after Nu < N time instants, i.e. u(k+i) = u(k+Nu−1), or equivalently Δu(k+i) = 0, for i ≥ Nu. The cost function can be rewritten accordingly, with only Nu free control moves.
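The reduction of optimization variables can be sketched as a linear map from the Nu free moves to the full horizon (the name blocking_matrix is an illustrative assumption):

```python
import numpy as np

def blocking_matrix(N, Nu):
    """Map the Nu free control moves to the full horizon under
    u(k+i) = u(k+Nu-1) for i >= Nu:  u_full = T @ u_free.
    Substituting G -> G @ T in the predictor leaves only Nu variables."""
    T = np.zeros((N, Nu))
    for i in range(N):
        T[i, min(i, Nu - 1)] = 1.0
    return T
```

With Nu = 1 every future input equals the single optimized value, which corresponds to the mean level control of the next slide.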
Control horizon - 2
Mean level control: Nu = 1. [Figure: past/future timeline showing the predicted state and the future control moves over k, k+1, …, k+N, k+N+1]
Control horizon - 3
Nu = 2. [Figure: past/future timeline showing the predicted state and the future control moves over k, k+1, …, k+N, k+N+1]
Control horizon - 4
Some industrial algorithms assume that there is a limited number Nu of control variations and that the control variable remains constant for v time steps. [Figure: timeline of the future control moves for Nu = 2, v = 2] No proofs of stability have been obtained for this kind of algorithm.
Minimum prediction horizon - 1
For systems with time delay d, the control variables u(k), u(k+1), …, do not affect the outputs y(k), …, y(k+d), and it is better not to penalize the corresponding errors in the cost function to be minimized. [Figure: open-loop step response of a system with time delay]
Minimum prediction horizon - 2
Also for nonminimum phase systems it is convenient not to penalize the future time instants corresponding to the inverse response. [Figure: open-loop step response exhibiting an initial inverse response]
Minimum prediction horizon - 3
The performance index to be minimized can be modified so that the output error is summed only from a minimum prediction horizon N_1 onwards, where N_1 must be chosen to include the delay or the inverse response.
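A typical way of writing the modified index (a GPC-style sketch; the weight λ and the control horizon N_u are as introduced earlier, and the exact weighting is a design choice):

```latex
J(k) \;=\; \sum_{i=N_1}^{N} \left[\, y(k+i) - y^{\circ}(k+i) \,\right]^2
      \;+\; \lambda \sum_{i=1}^{N_u} \left[\, \Delta u(k+i-1) \,\right]^2
```

Choosing N_1 larger than the delay (or than the duration of the inverse response) removes the samples that the current input cannot usefully influence.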
Filtering of the reference signal - 1
To reduce the control effort, step variations of the reference signal are avoided, and a filtered reference is computed at any time starting from the current output. [Figure: desired reference, output, and filtered reference over the prediction horizon] A possible signal generator is a first-order filter initialized at the current output. Note that this introduces an additional (external) loop which should be considered in the stability analysis.
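A common first-order signal generator can be sketched as follows (the parameter alpha is an illustrative tuning knob, not notation from the slides):

```python
import numpy as np

def filtered_reference(y_now, y_des, N, alpha):
    """First-order reference shaping, restarted from the current output:
        r(k) = y(k),  r(k+i+1) = alpha * r(k+i) + (1 - alpha) * y_des.
    alpha = 0 reproduces the raw step; alpha -> 1 gives a very slow approach."""
    r = np.empty(N + 1)
    r[0] = y_now
    for i in range(N):
        r[i + 1] = alpha * r[i] + (1.0 - alpha) * y_des
    return r
```

Because the filter is re-initialized at the measured output at every k, it effectively acts as an outer feedback loop around the MPC regulator.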
Filtering of the reference signal - 2
[Figure: output kept inside a funnel-shaped band vs. a fixed band] In many cases it is not mandatory to reach a steady-state value, but to remain within prescribed limits. Then, instead of penalizing the error, it is possible to include in the optimization problem some hard (or soft) constraints. Recall that hard constraints on the output are difficult to satisfy in view of the possible presence of disturbances.
Filtering of disturbances - 1
In order to force the behavior of the MPC algorithm in specified frequency bands, it is assumed that the disturbances acting on the plant are filtered through integrators and an asymptotically stable system (a design parameter). [Figure: plant with disturbance model]
Filtering of disturbances - 2
[Figure: plant block diagram with the disturbance filter]
Filtering of disturbances - 3
[Figure: plant with the filtered disturbance added to the outputs] From the system equations, the plant and disturbance dynamics can be combined into an augmented model.
Filtering of disturbances - 4
[Figure: plant with disturbance filter] Moreover, the disturbance dynamics themselves can be written in state-space form.
Filtering of disturbances - 5
The overall system can be written in the standard (KP) form to estimate both δx and δd. The choice of A_d influences the estimate (its eigenvalues are eigenvalues of the KP). The output equation of the augmented system is
y(k) = [0 0 I] x(k) + v_2(k),  with C = [0 0 I]
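A minimal numerical sketch of this construction, assuming the simplest disturbance model (a constant output disturbance, i.e. A_d = I; all numbers, and the choice of Q and R, are illustrative):

```python
import numpy as np

# Plant x(k+1) = A x(k) + B u(k), y(k) = C x(k) + d(k) + v2(k), augmented
# with d(k+1) = d(k); a generic stable A_d is the design parameter above.
A = np.array([[0.9]]); B = np.array([[1.0]]); C = np.array([[1.0]])
Aa = np.block([[A, np.zeros((1, 1))],
               [np.zeros((1, 1)), np.eye(1)]])   # augmented dynamics
Ca = np.hstack([C, np.eye(1)])                   # y = [C  I] [x; d] + v2

# Steady-state Kalman predictor gain via Riccati iteration (Q, R assumed).
Q = 0.1 * np.eye(2); R = np.array([[1.0]])
P = np.eye(2)
for _ in range(2000):
    S = Ca @ P @ Ca.T + R
    K = Aa @ P @ Ca.T @ np.linalg.inv(S)
    P = Aa @ P @ Aa.T - K @ Ca @ P @ Aa.T + Q

# The predictor recovers the unmeasured constant disturbance asymptotically.
xhat = np.zeros(2)
x = np.array([1.0]); d = 0.5
for _ in range(1000):
    y = C @ x + d                      # noise-free measurement for the demo
    xhat = Aa @ xhat + K @ (y - Ca @ xhat)
    x = A @ x                          # unforced plant
```

Since the disturbance state is observable through the output, the estimation error converges to zero and the estimate of d can be used to remove steady-state offset.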
Filtering of disturbances - 6
For this system it is possible to consider the usual cost function. The output prediction is computed from the estimated augmented state, which, together with the KP, can be used to find the solution.
High level optimization
The MPC regulator is usually at an intermediate level: its desired input, state and output targets are computed off-line, at a slower sampling rate (hours), starting from economic considerations. MPC works at a faster sampling rate (minutes). Its outputs are the reference signals for inner loops controlling the actuators (cascade control structures) and working at the fastest rate (seconds). Given the desired control, state and output variables, how can the real references be computed, coping with input and output constraints, to be used in the optimization problem?
Target calculation - 1
Given a desired output signal, for unconstrained square systems (m = p) the computation of the desired control signal is trivial (invert the static gain). In other cases (m ≠ p, integrators, state-input-output constraints) the target calculation is formulated as a mathematical programming problem which determines the steady-state targets.
Target calculation - 2
Note however that, for y = G(z) u:
- Fat systems (m > p): there exist many solutions; a least-squares problem can be formulated.
- Thin systems (m < p): in general there is no exact solution; the achievable steady-state output must be as close as possible to the desired one.
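Both unconstrained cases can be sketched with the pseudoinverse of the steady-state (DC) gain (a minimal illustration; the function name and the use of the DC gain Gdc = G(1) are assumptions for the sketch):

```python
import numpy as np

def steady_state_targets(Gdc, y_des):
    """Unconstrained least-squares target calculation using the DC gain Gdc
    (p x m).  Fat systems (m > p): minimum-norm u_s among the exact
    solutions.  Thin systems (m < p): u_s making the achievable output
    y_s = Gdc @ u_s closest to y_des.  The pseudoinverse covers both."""
    u_s = np.linalg.pinv(Gdc) @ np.asarray(y_des, dtype=float)
    y_s = Gdc @ u_s
    return u_s, y_s
```

In the constrained case this least-squares step is replaced by the mathematical programming problem of the next slide.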
Target calculation - 3
The following mathematical programming problem achieves the output target if possible, and relaxes the problem in a least-squares sense if the target is infeasible: η is a slack variable, Q_s is a positive definite matrix, and all the elements of q_s are nonnegative. Since there is no penalty on x_s, the solution could be non-unique for non-detectable systems (e.g. tank level control without a measurement of the level).
References
Books
- E.F. Camacho, C. Bordons: Model Predictive Control, Springer, 2004.
- J. Maciejowski: Predictive Control with Constraints, Prentice Hall, 2002.
Papers
- C.R. Cutler, B.L. Ramaker: Dynamic matrix control - a computer control algorithm, Proc. of the Joint Automatic Control Conf., 1980.
- J.A. Richalet, A. Rault, J.D. Testud, J. Papon: Model predictive heuristic control: applications to industrial processes, Automatica, Vol. 14, pp. 413-428, 1978.
- D.W. Clarke, C. Mohtadi, P.S. Tuffs: Generalized predictive control - part I. The basic algorithm, Automatica, Vol. 23, n. 2, pp. 137-148, 1987.
- D.W. Clarke, C. Mohtadi, P.S. Tuffs: Generalized predictive control - part II. Extensions and interpretations, Automatica, Vol. 23, n. 2, pp. 149-160, 1987.
- J.B. Rawlings: Tutorial overview of model predictive control, IEEE Control Systems Magazine, Vol. 20, n. 3, pp. 38-52, 2000.