Introduction to Constrained Control


Graham C. Goodwin
September 2004

1.1 Background

Most of the literature on control theory deals with linear unconstrained systems. However, to get the most out of a system, we usually need to deal with nonlinearities. The most common nonlinearities met in practice are actuator limits.

To get the most out of a system you need to push up against limits.

Other examples?
- Playing sport at international level
- Excelling in business or academia
- Aerospace, chemical process control, ...

Control is a key enabling technology in many (all?) areas. Getting the most out of control means pushing against boundaries.

1.2 Approaches to Constrained Control

- Cautious: back off performance demands so constraints are not met
- Serendipitous: allow occasional constraint violation
- Evolutionary: begin with a linear design and add embellishments, for example, antiwindup
- Tactical: include constraints from the beginning, for example, MPC

1.3 Example: Rudder Roll Stabilisation of Ships (see Lecture 3.5)

It has been observed that, unless appropriate actions are taken to deal with constraints, the performance of rudder roll stabilisation systems can be worse than if no stabilisation is attempted at all, due to the effect of actuator amplitude and slew-rate constraints.

1.4 Model Predictive Control

Model Predictive Control (MPC) is a prime example of a tactical method:
- long history in the petrochemical industry
- many thousands of applications
- several commercial products
- industrial credibility

Background

A survey by Mayne et al. (2000) divides the literature on MPC into three categories:
- Theoretical foundations: the optimal control literature, including Dynamic Programming (Bellman 1957) and the Maximum Principle (for example, Lee & Markus 1967).
- The process control literature, responsible for MPC's adoption by industry through evolving generations of MPC technology; an example of practice leading theory.
- The modern literature, dealing with theoretical advances such as stability and robustness.

General Description

MPC is a control strategy which, based on a model of the system, optimises performance (measured through a cost function) subject to constraints on the inputs, outputs and/or internal states. Due to the presence of constraints it is difficult, in general, to obtain closed formulae that solve the above control problem. Hence, MPC has traditionally solved the optimisation problem on line, over a finite horizon, using the receding horizon technique. This has also restricted the applicability of MPC to processes with time constants slow enough for the optimisation to be solved on line, although recent results allow faster systems to be handled.

An Illustrative Example

We will base our design on linear quadratic regulator (LQR) theory. Thus, consider an objective function of the form:

V_N(\{x_k\}, \{u_k\}) \triangleq \frac{1}{2} x_N^T P x_N + \frac{1}{2} \sum_{k=0}^{N-1} \left( x_k^T Q x_k + u_k^T R u_k \right), \qquad (1)

where \{u_k\} denotes the control sequence \{u_0, u_1, \ldots, u_{N-1}\}, and \{x_k\} denotes the corresponding state sequence \{x_0, x_1, \ldots, x_N\}. In (1), \{u_k\} and \{x_k\} are related by the linear state equation:

x_{k+1} = A x_k + B u_k, \qquad k = 0, 1, \ldots, N-1,

where x_0, the initial state, is assumed to be known.
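To make the bookkeeping in (1) concrete, here is a minimal numpy sketch (the function name and the scalar-input assumption are illustrative, not from the lecture) that rolls the state forward through the state equation while accumulating the stage costs, then adds the terminal cost:

```python
import numpy as np

def objective(x0, u_seq, A, B, Q, R, P):
    """Evaluate V_N in (1) for a control sequence {u_0, ..., u_{N-1}},
    assuming a scalar input u_k and a scalar weight R."""
    x, V = np.asarray(x0, dtype=float), 0.0
    for u in u_seq:                          # k = 0, ..., N-1
        V += 0.5 * (x @ Q @ x + R * u**2)    # stage cost x'Qx + u'Ru
        x = A @ x + B.flatten() * u          # state equation x_{k+1} = Ax_k + Bu_k
    return V + 0.5 * x @ P @ x               # terminal cost on x_N
```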

The following parameters allow one to influence performance:
- the optimisation horizon N
- the state weighting matrix Q
- the control weighting matrix R
- the terminal state weighting matrix P

For example, reducing R places less weight on control effort and hence gives a faster response; the limit R → 0 is called cheap control.

Details of Example

Consider the specific linear system:

x_{k+1} = A x_k + B u_k, \qquad (2)
y_k = C x_k,

with

A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 0.5 \\ 1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 \end{bmatrix},

which is the zero-order hold discretisation, with sampling period 1, of the double integrator \frac{d^2 y(t)}{dt^2} = u(t).
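As a quick check on these matrices, the zero-order hold discretisation of the double integrator can be computed exactly: the continuous-time system matrix is nilpotent, so the matrix-exponential series truncates after two terms. A small sketch using the standard ZOH formulas:

```python
import numpy as np

T = 1.0                                  # sampling period
Ac = np.array([[0.0, 1.0], [0.0, 0.0]])  # continuous-time double integrator
Bc = np.array([[0.0], [1.0]])

# Ac^2 = 0, so expm(Ac*T) = I + Ac*T exactly, and the ZOH input matrix is
# Bd = (integral of expm(Ac*s) ds over [0, T]) @ Bc = (I*T + Ac*T^2/2) @ Bc.
Ad = np.eye(2) + Ac * T
Bd = (np.eye(2) * T + Ac * T**2 / 2) @ Bc

print(Ad)   # [[1. 1.] [0. 1.]]  -- the A of (2)
print(Bd)   # [[0.5] [1. ]]      -- the B of (2)
```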

Example

Figure: Feedback control loop for the example: controller → sat(·) → linear system, with input u_k and state x_k fed back.

The saturation function is defined by

sat(u) = \begin{cases} 1 & \text{if } u > 1, \\ u & \text{if } |u| \le 1, \\ -1 & \text{if } u < -1. \end{cases} \qquad (3)

(i) Cautious Design (N = \infty, P = 0)

Weighting matrices

Q = C^T C = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \quad \text{and} \quad R = 20

give the linear state feedback law:

u_k = K x_k = -\begin{bmatrix} 0.1603 & 0.5662 \end{bmatrix} x_k.
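This gain can be reproduced numerically (a sketch, not the lecture's code; a dedicated DARE solver would normally be used) by iterating the Riccati recursion to a fixed point:

```python
import numpy as np

def lqr_gain(A, B, Q, R, iters=500):
    """Infinite-horizon discrete-time LQR via Riccati fixed-point iteration.
    Returns K such that u_k = -K x_k, together with the cost matrix P."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = A.T @ P @ A + Q - A.T @ P @ B @ K
    return K, P

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.array([[1.0, 0.0], [0.0, 0.0]])   # Q = C'C
R = np.array([[20.0]])

K, P = lqr_gain(A, B, Q, R)
print(K)   # expected close to [[0.1603, 0.5662]]
```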

Figure: u_k and y_k for the cautious design u_k = Kx_k, with weights Q = C^T C and R = 20.

(ii) Serendipitous Design

Using the same Q = C^T C in the infinite horizon objective function, we try to obtain a faster response by reducing the control weight to R = 2. We expect this to lead to a control law with higher gain.

Figure: u_k and y_k for the unconstrained LQR design u_k = Kx_k (dashed line), and for the serendipitous strategy u_k = sat(Kx_k) (circle-solid line), with weights Q = C^T C and R = 2.

Encouraged by the above result, we might be tempted to push our luck and aim for an even faster response by reducing the weighting on the input signal still further. Accordingly, we decrease the control weighting in the LQR design to, for example, R = 0.1.

Figure: u_k and y_k for the unconstrained LQR design u_k = Kx_k (dashed line), and for the serendipitous strategy u_k = sat(Kx_k) (circle-solid line), with weights Q = C^T C and R = 0.1.

The control law u = sat(Kx) partitions the state space into three regions in accordance with the definition of the saturation function (3). Hence, the serendipitous strategy can be characterised as a switched control strategy in the following way:

u = \mathcal{K}(x) = \begin{cases} Kx & \text{if } x \in R_0, \\ 1 & \text{if } x \in R_1, \\ -1 & \text{if } x \in R_2. \end{cases} \qquad (4)

Notice that this is simply an alternative way of describing the serendipitous strategy, since for x \in R_0 the input actually lies between the saturation limits. The partition is shown in the following figure.

Figure 5: State-space trajectory and state-space partition (regions R_0, R_1, R_2) for the serendipitous strategy u_k = sat(Kx_k), with weights Q = C^T C and R = 0.1 (axes: x_{1,k}, x_{2,k}).
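The switched form (4) is straightforward to express in code. Below is a minimal sketch (function names and the placeholder gain are illustrative, not from the lecture) that evaluates the serendipitous law and reports which region of the partition the current state falls in:

```python
import numpy as np

def sat(u, limit=1.0):
    """Saturation nonlinearity (3)."""
    return max(-limit, min(limit, u))

def serendipitous_control(x, K):
    """Serendipitous law u = sat(Kx), reported in the switched form (4):
    R0 where the linear law applies, R1 where u = 1, R2 where u = -1."""
    v = float(K @ x)
    u = sat(v)
    region = "R0" if abs(v) <= 1.0 else ("R1" if v > 1.0 else "R2")
    return u, region

# Usage with a purely illustrative placeholder gain; the actual K for
# Q = C'C, R = 0.1 would come from an LQR computation as sketched above.
K = np.array([-0.5, -1.0])
u, region = serendipitous_control(np.array([-6.0, 0.0]), K)
print(u, region)
```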

Examination of Figure 5 suggests a heuristic argument as to why the serendipitous control law may not be performing well in this case. We can think, in this example, of x_2 as velocity and x_1 as position. Now, in our attempt to change the position rapidly (from -6 to 0), the velocity has been allowed to grow to a relatively high level (+3). This would be fine if the braking action were unconstrained. However, our input (including braking) is limited to the range [-1, 1]. Hence, the available braking is inadequate to pull the system up, and overshoot occurs.

(iii) Tactical Design

Perhaps the above heuristic argument gives us some insight into how we could remedy the problem. A sensible idea would seem to be to try to look ahead and take account of future input constraints (that is, the limited braking authority available). To test this idea, we take the objective function (1) as a starting point.

We use a prediction horizon N = 2 and minimise, at each sampling instant i and for the current state x_i, the two-step objective function:

V_2(\{x_k\}, \{u_k\}) = \frac{1}{2} x_{i+2}^T P x_{i+2} + \frac{1}{2} \sum_{k=i}^{i+1} \left( x_k^T Q x_k + u_k^T R u_k \right), \qquad (5)

subject to the equality and inequality constraints:

x_{k+1} = A x_k + B u_k, \quad |u_k| \le 1, \qquad (6)

for k = i and k = i + 1.

In the objective function (5) we set, as before, Q = C^T C and R = 0.1. The terminal state weighting matrix P is taken to be the solution of the Riccati equation

P = A^T P A + Q - K^T (R + B^T P B) K,

where K = -(R + B^T P B)^{-1} B^T P A is the corresponding gain.

As a result of minimising (5) subject to (6), we obtain an optimal fixed-horizon control sequence {u_i, u_{i+1}}. We then apply the first element u_i to the system, and the state evolves to x_{i+1}. We now shift the time instant from i to i + 1 and repeat this procedure. This is called receding horizon control (RHC) or model predictive control.
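The whole receding horizon loop can be sketched end-to-end as follows. This is a rough illustration rather than the lecture's implementation: it computes the terminal weight P by Riccati iteration, approximately minimises (5) subject to (6) by enumerating a coarse grid over [-1, 1]^2 (a practical implementation would instead solve this small QP exactly), applies the first input, and repeats. The initial state (-6, 0) is an assumption read off the trajectory plots.

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.diag([1.0, 0.0])   # Q = C'C
R = 0.1

# Terminal weight P: fixed-point iteration of the Riccati equation above.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(np.array([[R]]) + B.T @ P @ B, B.T @ P @ A)
    P = A.T @ P @ A + Q - A.T @ P @ B @ K

def two_step_cost(x, u0, u1):
    """Objective (5) for the candidate input pair (u_i, u_{i+1})."""
    V, xk = 0.0, x
    for uk in (u0, u1):
        V += 0.5 * (xk @ Q @ xk + R * uk**2)
        xk = A @ xk + B.flatten() * uk
    return V + 0.5 * xk @ P @ xk

def rhc_step(x, grid=np.linspace(-1.0, 1.0, 101)):
    """Approximately minimise (5) s.t. |u_k| <= 1 by grid enumeration,
    then return only the first input (the receding horizon principle)."""
    costs = [(two_step_cost(x, u0, u1), u0) for u0 in grid for u1 in grid]
    return min(costs)[1]

# Closed-loop simulation (assumed initial state from the trajectory plots).
x = np.array([-6.0, 0.0])
for k in range(25):
    u = rhc_step(x)
    x = A @ x + B.flatten() * u   # apply the first input only, then re-solve
    print(k, round(u, 3), x.round(3))
```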

Receding Horizon Technique

(1) At time i and for the current state x_i, solve an open-loop (OL) optimal control problem over the prediction horizon, using a model of the system to predict future states and taking into account the present and future constraints.
(2) Apply the first step of the resulting optimal OL control sequence.
(3) Move the horizon: repeat the procedure at time i + 1 for the current state x_{i+1}.

Figure: u_k and y_k for the unconstrained LQR design u_k = Kx_k (dashed line), and for the receding horizon design (circle-solid line), with weights Q = C^T C and R = 0.1.

We will see later that the receding horizon strategy described above also leads to a partition of the state space into different regions in which affine control laws hold. The result is shown (for interest) in Figure 7. The region R_2 corresponds to the region R_2 in Figure 5 and represents the area of the state space where u = -1 is applied. Comparing Figure 5 and Figure 7, we see that the region R_2 has been bent over in Figure 7, so that u = -1 occurs at lower values of x_2 (velocity) than was the case in Figure 5. This is in accordance with our heuristic argument about needing to brake earlier.

Figure 7: State-space plot for the receding horizon (tactical) design, showing the partition into regions R_0, R_1, R_2, R_3, R_4 (axes: x_{1,k}, x_{2,k}).

Figure: Figures 5 and 7 shown side by side for comparison: the state-space partition for the serendipitous strategy u_k = sat(Kx_k) (left) and for the receding horizon tactical design (right), both with weights Q = C^T C and R = 0.1.

Summary

- We can often avoid constraints by lowering performance demands; however, this comes at a cost.
- If we increase demands, constraints are met. Small violations are not too significant, but we soon get poor performance.
- Rethinking the problem and adding the constraints into the design leads to the idea of Receding Horizon Control.