MODEL PREDICTIVE CONTROL


Institut für Automatik, D-ITET, ETH Zürich
SS 2012, Prof. Dr. M. Morari
22.03.2012

MODEL PREDICTIVE CONTROL: Exam

Stud.-Nr.: ______________  Name: ______________

Do not use pencils or red color. Make sure that your name and student number are on every sheet you hand in. Use separate sheets for the four parts.

Part        | 1  | 2  | 3  | 4  | Total
Points max. | 40 | 40 | 25 | 20 | 125

1. Part: Optimal Control of Linear Systems

Question        | a) | b) | c) | Total
Max. Points     | 12 | 12 | 16 | 40
Achieved Points |    |    |    |

a) Consider the following discrete-time system with parameter b ∈ R:

    x_{k+1} = [ 0    1/2 ] x_k + [ 1 ] u_k
              [ 3/2   2  ]       [ b ]                                  (1)

    y_k = [ 0  2 ] x_k,

where A denotes the state matrix, B the input matrix, and C the output matrix above.

i) For which b is the system open-loop stable?

ii) For which b is the system controllable?

In the following, let b = 0. The goal is to design a linear state-feedback controller u_k = K x_k with K = [ k_1  k_2 ] such that from any initial state x_0 the closed-loop system reaches the origin in finite time. This is achieved if K is chosen such that all eigenvalues of the closed-loop system are zero.

iii) Give a sufficient condition on a general pair A and B for the existence of such a K. Is it fulfilled for A and B given in (1) with b = 0? After how many steps, at most, does the system arrive at the origin with such a controller?

b) We want to design a state observer for system (1).

i) Derive the update equation for the state estimate x̂ such that the error dynamics are given by

    e_{k+1} = (A - LC) e_k,                                             (2)

where e_k is the estimation error at time step k, defined as

    e_k := x_k - x̂_k.                                                  (3)

ii) How many states does the closed-loop system with observer and the controller u_k = K x̂_k have in total?

iii) Let L = [ 1/4  1 ]^T and V(x) := x^T P x with

    P = [ 13/4  0 ]
        [  0    1 ].

Show that V is a Lyapunov function for the error dynamics (2). What does this imply?
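Not part of the exam sheet: the checks asked for in a) and b-iii) can be sketched numerically (all data taken from (1) with b = 0; numpy assumed):

```python
import numpy as np

# System matrices from (1) with b = 0.
A = np.array([[0.0, 0.5],
              [1.5, 2.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[0.0, 2.0]])

# a-i) open-loop stability: all eigenvalues strictly inside the unit circle?
eigs = np.linalg.eigvals(A)
open_loop_stable = bool(np.all(np.abs(eigs) < 1))

# a-ii) controllability for b = 0: rank of the controllability matrix [B, AB].
ctrb = np.hstack([B, A @ B])
controllable = np.linalg.matrix_rank(ctrb) == 2

# b-iii) Lyapunov check for the error dynamics (2) with the given L and P:
# V decreases along trajectories iff (A - LC)^T P (A - LC) - P is negative definite.
L = np.array([[0.25],
              [1.0]])
P = np.array([[13.0 / 4.0, 0.0],
              [0.0, 1.0]])
M = A - L @ C
decrease = M.T @ P @ M - P
lyapunov_ok = bool(np.all(np.linalg.eigvals(decrease) < 0))

print(open_loop_stable, controllable, lyapunov_ok)
```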

c) Dynamic Programming. Consider the finite-horizon discounted LQR problem

    min_{X,U}  sum_{k=0}^{N-1} alpha^k ( x_k^T Q x_k + u_k^T R u_k )
    such that  x_{k+1} = A x_k + B u_k,                                 (4)

with discount factor alpha ∈ (0, 1), Q = Q^T ⪰ 0, R = R^T ≻ 0. Assume the following form of the optimal cost-to-go at time step n, n ∈ {0, 1, ..., N}, for the discounted problem (4), where P_n^di = (P_n^di)^T ⪰ 0:

    J_n^{di,*}(x_n) = alpha^n x_n^T P_n^di x_n.

i) With the given form of the optimal cost-to-go of the discounted problem and using the principle of optimality, derive the recursion P_{n+1}^di → P_n^di.

ii) Show that the recursion for P^di coincides with the standard Riccati recursion

    P_n^un = Ã^T P_{n+1}^un Ã - Ã^T P_{n+1}^un B ( B^T P_{n+1}^un B + R̃ )^{-1} B^T P_{n+1}^un Ã + Q

for P^un of the undiscounted problem

    min_{X,U}  sum_{k=0}^{N-1} x_k^T Q x_k + u_k^T R u_k
    subject to  x_{k+1} = Ã x_k + B u_k,

with Ã = sqrt(alpha) A and R̃ = R/alpha.
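Not part of the exam sheet: the equivalence in c-ii) can be checked numerically by running both recursions side by side (A, B, Q, R, alpha, and the terminal weight are assumed example data, not from the exam):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))
B = rng.standard_normal((2, 1))
Q = np.eye(2)
R = np.array([[1.0]])
alpha = 0.9
N = 20

# Discounted recursion from the principle of optimality:
# P_n = Q + alpha*A'P_{n+1}A
#         - alpha^2 * A'P_{n+1}B (R + alpha*B'P_{n+1}B)^{-1} B'P_{n+1}A
P_di = Q.copy()                     # terminal weight (arbitrary for the check)

# Standard Riccati recursion with A~ = sqrt(alpha)*A and R~ = R/alpha.
At, Rt = np.sqrt(alpha) * A, R / alpha
P_un = Q.copy()

for _ in range(N):
    K = np.linalg.solve(R + alpha * B.T @ P_di @ B, B.T @ P_di @ A)
    P_di = Q + alpha * A.T @ P_di @ A - alpha**2 * A.T @ P_di @ B @ K
    Kt = np.linalg.solve(Rt + B.T @ P_un @ B, B.T @ P_un @ At)
    P_un = Q + At.T @ P_un @ At - At.T @ P_un @ B @ Kt

print(np.allclose(P_di, P_un))
```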

2. Part: Optimization

Question        | a) | b) | c) | Total
Max. Points     | 12 | 16 | 12 | 40
Achieved Points |    |    |    |

a) True or false?

i) Every subspace is a cone.
ii) Every affine set is a cone.
iii) Every cone is an affine set.
iv) The finite intersection of polytopes is always a polytope.
v) The finite union of polytopes is always a polytope.

vi) Let f_i(x) : R^n → R, i = 1, ..., N, be a set of N convex functions. Show that the set

    S := { x ∈ R^n | f_i(x) ≤ 0, i = 1, ..., N }

is convex.

b) Consider the following linear program

    min_x  c^T x
    subj. to  G x ≤ h,                                                  (5)

where c ∈ R^n, G ∈ R^{m×n}, h ∈ R^m. True or false?

i) Problem (5) is always convex.
ii) Its convexity depends on c.
iii) It is always feasible.
iv) Its feasibility depends on c.

v) Let the matrices in (5) be

    c = [ 1 ],   G = [  1   0 ],   h = [ 1 ]
        [ 1 ]        [  0   1 ]        [ 1 ]
                     [ -1   0 ]        [ 1 ]
                     [  0  -1 ]        [ 1 ].

Find the optimal solution of (5) and show that it satisfies the KKT conditions.
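Not part of the exam sheet: a KKT check as asked for in b-v) can be sketched numerically. The candidate primal/dual pair below is an assumption to be verified, not given in the exam:

```python
import numpy as np

# Data from b-v): the constraints describe the box -1 <= x <= 1.
c = np.array([1.0, 1.0])
G = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
h = np.ones(4)

# Candidate primal optimizer and dual multipliers (one per inequality row).
x_star = np.array([-1.0, -1.0])
lam = np.array([0.0, 0.0, 1.0, 1.0])

# KKT conditions for the LP min c'x s.t. Gx <= h:
primal_feasible = bool(np.all(G @ x_star <= h + 1e-12))
dual_feasible = bool(np.all(lam >= 0))
stationarity = np.allclose(c + G.T @ lam, 0)            # c + G'lambda = 0
complementarity = np.allclose(lam * (G @ x_star - h), 0)

print(primal_feasible, dual_feasible, stationarity, complementarity)
```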

vi) The optimal solution of v) remains optimal if we change the cost vector to c = [ 1  0 ]^T.

vii) The optimal solution of v) remains optimal if we change the right-hand side of the constraints to h = [ 1  2  1  1 ]^T.

c) Consider the following quadratic program with equality constraints

    min_x  x^T x
    subj. to  A x = b.                                                  (6)

i) Formulate the Lagrange function corresponding to (6).

ii) Formulate the dual function corresponding to (6).

iii) Formulate the dual optimization problem corresponding to (6) by minimizing (infimizing) over x.

iv) Let A = [ 1  0 ] and b = 1 in (6). Solve the primal and dual problem and show that the duality gap is zero.
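Not part of the exam sheet: the zero duality gap in c-iv) can be verified with a short calculation. For A = [1 0], b = 1, the Lagrangian is L(x, ν) = x^T x + ν([1 0]x - 1); minimizing over x gives x = -(ν/2)[1, 0]^T and hence the dual function g(ν) = -ν²/4 - ν:

```python
import numpy as np

a = np.array([1.0, 0.0])
b = 1.0

# Primal: the minimum-norm point on the line x1 = 1 is its projection.
x_star = np.array([1.0, 0.0])
p_star = x_star @ x_star               # primal optimal value

# Dual: maximize the concave g(nu) = -nu^2/4 - nu, so g'(nu) = -nu/2 - 1 = 0.
nu_star = -2.0
d_star = -nu_star**2 / 4 - nu_star     # dual optimal value

print(p_star, d_star, p_star - d_star)
```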

3. Part: Model Predictive Control

Question        | a) | b) | Total
Max. Points     | 17 | 8  | 25
Achieved Points |    |    |

Consider the following finite-horizon discrete-time optimal control problem with Q ⪰ 0, P ⪰ 0, R ≻ 0, and (A, B) controllable:

    V(x) = min_{u_0, ..., u_{N-1}}  sum_{k=0}^{N-1} ( x_k^T Q x_k + u_k^T R u_k ) + x_N^T P x_N
    subject to:  x_{k+1} = A x_k + B u_k,  x_0 = x,
                 C x_k + D u_k ≤ f,  for k ∈ {0, ..., N-1}.             (7)

a) Suppose that N = 3 and that the control inputs u_k are modeled as u_k = K x_k + v_k, where K is a constant matrix and the vector v_k is a decision variable in the optimization problem. Define

    v := [ v_0 ]
         [ v_1 ]
         [ v_2 ].

Find matrices E and S, vectors g and h, and a constant c such that the optimal control problem (7) can be rewritten as

    min_v  [ v^T S v + h^T v + c ]
    subject to:  E v ≤ g.                                               (8)

b) Assume that problem (7) has no constraints, and that one solves, at each time step, the optimization problem

    V(x) = min_v [ v^T S v + h^T v + c ]

with optimal solution v*(x) = (v_0*(x); v_1*(x); v_2*(x)). Assume that the matrix K is chosen such that (A + BK) is stable. Suggest a condition on the matrix P such that the closed-loop system

    x(k+1) = (A + BK) x(k) + B v_0*(x)

is stable.
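Not part of the exam sheet: the condensing asked for in a) can be sketched as follows. Under u = Kx + v the dynamics become x_{k+1} = (A + BK)x_k + B v_k, so the stacked states and inputs are affine in v, and substituting them into the cost and constraints of (7) yields S, h, c, E, g. All numerical data below (A, B, K, Q, R, P, C, D, f, x0) are assumed example values, not from the exam:

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-0.5, -1.0]])
Q = np.eye(2); P = np.eye(2); R = np.array([[1.0]])
Cc = np.array([[1.0, 0.0]]); Dc = np.array([[0.0]]); f = np.array([5.0])
N, n, m = 3, 2, 1
x0 = np.array([1.0, 0.0])

Ak = A + B @ K                        # closed-loop matrix under u = Kx + v

# Stacked prediction [x_0; ...; x_N] = Phi x_0 + Gam v.
Phi = np.vstack([np.linalg.matrix_power(Ak, i) for i in range(N + 1)])
Gam = np.zeros(((N + 1) * n, N * m))
for i in range(1, N + 1):
    for j in range(i):
        Gam[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(Ak, i-1-j) @ B

# Stacked inputs [u_0; ...; u_{N-1}] = U_x x_0 + U_v v, since u_k = K x_k + v_k.
T = np.hstack([np.kron(np.eye(N), K), np.zeros((N * m, n))])
U_x = T @ Phi
U_v = T @ Gam + np.eye(N * m)

Qbar = np.kron(np.eye(N + 1), Q); Qbar[-n:, -n:] = P
Rbar = np.kron(np.eye(N), R)

# Cost of (7) as a quadratic in v: v'Sv + h'v + c.
S = Gam.T @ Qbar @ Gam + U_v.T @ Rbar @ U_v
h = 2 * (Gam.T @ Qbar @ Phi + U_v.T @ Rbar @ U_x) @ x0
c = x0 @ (Phi.T @ Qbar @ Phi + U_x.T @ Rbar @ U_x) @ x0

# Constraints Cx_k + Du_k <= f for k = 0..N-1, rewritten as Ev <= g.
Cbar = np.kron(np.eye(N), Cc); Dbar = np.kron(np.eye(N), Dc)
E = Cbar @ Gam[:N * n] + Dbar @ U_v
g = np.tile(f, N) - (Cbar @ Phi[:N * n] + Dbar @ U_x) @ x0

# Sanity check: the condensed cost matches a direct simulation of (7).
v = np.array([0.3, -0.2, 0.1])
J_condensed = v @ S @ v + h @ v + c
x, J_direct = x0.copy(), 0.0
for k in range(N):
    u = K @ x + v[k*m:(k+1)*m]
    J_direct += x @ Q @ x + u @ R @ u
    x = A @ x + B @ u
J_direct += x @ P @ x
print(np.isclose(J_condensed, J_direct))
```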

4. Part: Parametric Linear Program / Hybrid MPC

Question        | a) | b) | Total
Max. Points     | 14 | 6  | 20
Achieved Points |    |    |

a) Consider the parametric linear program

    J_p*(x) = max_{z_1, z_2}  z_1 + x z_2
    s.t.  z_1 + z_2 = 1,
          z_1 ≥ 0,
          z_2 ≥ 0,                                                      (pplp)

with parameter x ∈ R.

i) Sketch the feasible set of (pplp) and the cost gradients for parameter values x ∈ {0, 1, 2}.

ii) Derive the optimizer function z*(x) = (z_1*(x), z_2*(x)) and the value function J_p*(x) of (pplp) for parameter values x ∈ [0, 2]. Hint: do it graphically.

iii) The dual program of (pplp) is given by

    J_d*(x) = min_λ  λ
    s.t.  λ ≥ 1,
          λ ≥ x.                                                        (dplp)

Derive the dual optimizer function λ*(x) and the dual value function J_d*(x) of (dplp) for parameter values x ∈ [0, 2].

iv) Compare J_p*(x) with J_d*(x). State the reason why or why not the value functions coincide.

b) Consider the discrete-time dynamic system

    x_{k+1} = A x_k + B u_k,  k ≥ 0,                                    (SYS)

with state x_k ∈ R^n and discrete input u_k ∈ {v, w}, where v, w are vectors in R^m.

i) Let us represent (SYS) as a mixed logical dynamical (MLD) system. For this, we introduce binary variables (δ_{1,k}, δ_{2,k}) ∈ {0, 1}² for every time step k ≥ 0 and rewrite (SYS) as

    x_{k+1} = A x_k + B_h [ δ_{1,k} ]
                          [ δ_{2,k} ],  k ≥ 0,
    c = d_1 δ_{1,k} + d_2 δ_{2,k},  k ≥ 0.                              (MLDSYS)

State the input matrix B_h ∈ R^{n×2} and the coefficients (c, d_1, d_2) ∈ R³ so that (MLDSYS) is an equivalent description of (SYS).
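Not part of the exam sheet: the graphical derivation in a-ii) and a-iii) can be cross-checked numerically. The feasible set of (pplp) is the segment between the vertices (1, 0) and (0, 1), so the LP maximum is attained at one of these vertices; the dual (dplp) asks for the smallest λ dominating both 1 and x:

```python
import numpy as np

vertices = np.array([[1.0, 0.0], [0.0, 1.0]])

def primal(x):
    # evaluate z1 + x*z2 at each vertex and keep the best one
    vals = vertices[:, 0] + x * vertices[:, 1]
    i = int(np.argmax(vals))
    return vertices[i], vals[i]

def dual(x):
    # smallest lambda with lambda >= 1 and lambda >= x
    lam = max(1.0, x)
    return lam, lam

for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
    z, Jp = primal(x)
    lam, Jd = dual(x)
    print(x, z, Jp, Jd)
```

For each parameter value the primal and dual values printed agree, consistent with strong duality for feasible, bounded linear programs.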

ii) Suppose now that we want to design a model predictive controller based on the MLD representation in (MLDSYS), i.e. at every sampling instant we solve

    min_{x, δ_1, δ_2}  (1/2) x_N^T P x_N + sum_{k=0}^{N-1} (1/2) x_k^T Q x_k + sum_{k=0}^{N-2} l_u(δ_{1,k+1}, δ_{1,k})
    s.t.  (MLDSYS),  k = 0, ..., N-1,
          f_p(δ_1) ≤ 0,
          x_0 = x(0),

where x(0) ∈ R^n is the initial state of the system and x, δ_1, δ_2 denote the sequences of states/binaries over the prediction horizon of length N. Design the functions l_u, f_p such that

- changing the discrete input gets penalized,
- input v is applied to the system at most N/2 times over the prediction horizon (assuming N is an even number).

Hint: It suffices to restrict l_u and f_p to the class of affine and quadratic functions.
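Not part of the exam sheet: one possible choice of l_u and f_p can be sketched as follows, assuming δ_{1,k} = 1 encodes applying input v at step k and ρ > 0 is an assumed penalty weight (other choices within the affine/quadratic class are equally valid):

```python
import numpy as np

rho = 1.0  # assumed switching-penalty weight

def l_u(d_next, d_now):
    # quadratic penalty on changing the discrete input between steps
    return rho * (d_next - d_now) ** 2

def f_p(delta1, N):
    # affine count constraint: <= 0 iff input v is applied at most N/2 times
    return np.sum(delta1) - N / 2

N = 4
delta1 = np.array([1, 0, 1, 0])       # v applied twice over a horizon of 4
print(l_u(1, 0), f_p(delta1, N))
```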