Nonlinear Programming Methods.S2 Quadratic Programming




Operations Research Models and Methods
Paul A. Jensen and Jonathan F. Bard

A linearly constrained optimization problem with a quadratic objective function is called a quadratic program (QP). Because of its many applications, quadratic programming is often viewed as a discipline in and of itself. More importantly, though, it forms the basis of several general nonlinear programming algorithms. We begin this section by examining the Karush-Kuhn-Tucker (KKT) conditions for the QP and see that they turn out to be a set of linear equalities and complementarity constraints. Much as in separable programming, a modified version of the simplex algorithm can be used to find solutions.

Problem Statement

The general quadratic program can be written as

    Minimize f(x) = cx + (1/2) x^T Q x
    subject to Ax ≤ b, x ≥ 0,

where c is an n-dimensional row vector describing the coefficients of the linear terms in the objective function, and Q is an (n × n) symmetric matrix describing the coefficients of the quadratic terms. If a constant term exists, it is dropped from the model. As in linear programming, the decision variables are denoted by the n-dimensional column vector x, and the constraints are defined by an (m × n) matrix A and an m-dimensional column vector b of right-hand-side coefficients. We assume that a feasible solution exists and that the constraint region is bounded. When the objective function f(x) is strictly convex for all feasible points, the problem has a unique local minimum which is also the global minimum. A sufficient condition to guarantee strict convexity is for Q to be positive definite.

Karush-Kuhn-Tucker Conditions

We now specialize the general first-order necessary conditions given in Section 11.3 to the quadratic program. These conditions are sufficient for a global minimum when Q is positive definite; otherwise, the most we can say is that they are necessary.
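Positive definiteness, and hence strict convexity of the objective, can be checked directly with Sylvester's criterion: Q is positive definite exactly when every leading principal minor is strictly positive. A minimal pure-Python sketch of this test (the function names are our own, not from the text):

```python
def leading_minors(Q):
    """Determinants of the leading principal submatrices of Q."""
    def det(M):
        # Laplace expansion along the first row (fine for small matrices)
        n = len(M)
        if n == 1:
            return M[0][0]
        total = 0.0
        for j in range(n):
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * M[0][j] * det(minor)
        return total
    return [det([row[:k] for row in Q[:k]]) for k in range(1, len(Q) + 1)]

def is_positive_definite(Q):
    # Sylvester's criterion: all leading principal minors > 0
    return all(m > 0 for m in leading_minors(Q))

Q = [[2.0, 0.0], [0.0, 8.0]]      # the Q matrix of Example 14 below
print(is_positive_definite(Q))    # True -> strictly convex objective
```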
Excluding the nonnegativity conditions, the Lagrangian function for the quadratic program is

    L(x, µ) = cx + (1/2) x^T Q x + µ(Ax − b),

where µ is an m-dimensional row vector. The Karush-Kuhn-Tucker conditions for a local minimum are given as follows.

    ∂L/∂x_j ≥ 0,        j = 1,..., n      c + x^T Q + µA ≥ 0               (12a)
    ∂L/∂µ_i ≤ 0,        i = 1,..., m      Ax − b ≤ 0                       (12b)
    x_j ∂L/∂x_j = 0,    j = 1,..., n      x^T (c^T + Qx + A^T µ^T) = 0     (12c)
    µ_i g_i(x) = 0,     i = 1,..., m      µ(Ax − b) = 0                    (12d)
    x_j ≥ 0,            j = 1,..., n      x ≥ 0                            (12e)
    µ_i ≥ 0,            i = 1,..., m      µ ≥ 0                            (12f)

To put (12a)-(12f) into a more manageable form we introduce nonnegative surplus variables y ∈ R^n to the inequalities in (12a) and nonnegative slack variables v ∈ R^m to the inequalities in (12b) to obtain the equations

    c^T + Qx + A^T µ^T − y = 0   and   Ax − b + v = 0.

The KKT conditions can now be written with the constants moved to the right-hand side.

    Qx + A^T µ^T − y = −c^T                  (13a)
    Ax + v = b                               (13b)
    x ≥ 0, µ ≥ 0, y ≥ 0, v ≥ 0               (13c)
    y^T x = 0, µv = 0                        (13d)

The first two expressions are linear equalities, the third restricts all the variables to be nonnegative, and the fourth prescribes complementary slackness.
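Conditions (13a)-(13b) are just a linear system M z = r in the stacked vector z = (x, µ, y, v), with block matrix M = [Q A^T −I 0; A 0 0 I] and right-hand side r = (−c^T, b). As an illustration (the assembly routine is ours, using the data of Example 14 below), the system can be built and checked in plain Python:

```python
def kkt_system(Q, A, c, b):
    """Assemble the linear KKT equations (13a)-(13b) as M z = r,
    where z stacks (x, mu, y, v).  A plain-Python sketch."""
    n, m = len(Q), len(A)
    M, r = [], []
    for i in range(n):                       # (13a): Qx + A^T mu - y = -c^T
        row = list(Q[i])
        row += [A[k][i] for k in range(m)]   # A^T block
        row += [-1.0 if j == i else 0.0 for j in range(n)]  # -I acting on y
        row += [0.0] * m                     # v does not appear in (13a)
        M.append(row)
        r.append(-c[i])
    for k in range(m):                       # (13b): Ax + v = b
        row = list(A[k]) + [0.0] * m + [0.0] * n
        row += [1.0 if j == k else 0.0 for j in range(m)]   # I acting on v
        M.append(row)
        r.append(b[k])
    return M, r

# Data of Example 14 below; z is ordered (x1, x2, mu1, mu2, y1, y2, v1, v2)
M, r = kkt_system(Q=[[2.0, 0.0], [0.0, 8.0]],
                  A=[[1.0, 1.0], [1.0, 0.0]],
                  c=[-8.0, -16.0], b=[5.0, 3.0])
z = [3.0, 2.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0]   # the optimal point
residuals = [sum(Mi[j] * z[j] for j in range(8)) - ri
             for Mi, ri in zip(M, r)]
print(residuals)   # all zeros: (13a)-(13b) hold at the optimum
```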

Solving for the Optimum

The simplex algorithm can be used to solve (13a)-(13d) by treating the complementary slackness conditions (13d) implicitly with a restricted basis entry rule. The procedure for setting up the linear programming model follows.

1. Let the structural constraints be Eqs. (13a) and (13b) defined by the KKT conditions.
2. If any of the right-hand-side values are negative, multiply the corresponding equation by −1.
3. Add an artificial variable to each equation.
4. Let the objective function be the sum of the artificial variables.
5. Put the resultant problem into simplex form.

The goal is to find the solution to the linear program that minimizes the sum of the artificial variables, with the additional requirement that the complementary slackness conditions be satisfied at each iteration. If the sum is zero, the solution will satisfy (13a)-(13d). To accommodate (13d), the rule for selecting the entering variable must be modified with the following relationships in mind.

    x_j and y_j are complementary for j = 1,..., n
    µ_i and v_i are complementary for i = 1,..., m

The entering variable will be the one whose reduced cost is most negative, provided that its complementary variable is not in the basis or would leave the basis on the same iteration. At the conclusion of the algorithm, the vector x defines the optimal solution and the vector µ defines the optimal dual variables.

This approach has been shown to work well when Q is positive definite, and requires computational effort comparable to that for a linear programming problem with m + n constraints, where m is the number of constraints and n is the number of variables in the QP. Positive semi-definite forms of the objective function, though, can present computational difficulties. Van de Panne (1975) presents an extensive discussion of the conditions that will yield a global optimum even when f(x) is not positive definite.
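The restricted entry rule itself is simple to state in code. A hypothetical sketch (the index conventions and names are ours): with the 2n + 2m variables indexed so that `complement[j]` gives the partner of variable j (x_j ↔ y_j, µ_i ↔ v_i), a candidate may enter only if its complement is nonbasic:

```python
def eligible_entering(candidates, basis, complement):
    """Filter entering candidates by the restricted entry rule:
    variable j may enter only if its complementary variable is not
    currently in the basis, so condition (13d) is never violated."""
    return [j for j in candidates if complement[j] not in basis]

# Sizing as in Example 14 (n = m = 2): variables 0..7 are ordered
# (x1, x2, mu1, mu2, y1, y2, v1, v2)
complement = {0: 4, 1: 5, 2: 6, 3: 7, 4: 0, 5: 1, 6: 2, 7: 3}
basis = {4}                                           # say y1 is basic
print(eligible_entering([0, 1], basis, complement))   # x1 blocked -> [1]
```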
The simplest practical approach to overcome any difficulties caused by semi-definiteness is to add a small constant to each of the diagonal elements of Q in such a way that the modified Q matrix becomes positive definite. Although the resultant solution will not be exact, the difference will be insignificant if the alterations are kept small.
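As a sketch of this perturbation (the value of the constant ε is a tuning choice, not prescribed by the text):

```python
def regularize(Q, eps=1e-6):
    """Return Q + eps*I: adding a small positive constant to each diagonal
    element turns a positive semi-definite Q into a positive definite one."""
    n = len(Q)
    return [[Q[i][j] + (eps if i == j else 0.0) for j in range(n)]
            for i in range(n)]

# A semi-definite example: the determinant is 0 before the perturbation
# and strictly positive after it
Q = [[1.0, 1.0], [1.0, 1.0]]
Qr = regularize(Q, eps=1e-6)
det = Qr[0][0] * Qr[1][1] - Qr[0][1] * Qr[1][0]
print(det > 0.0)   # True
```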

Example 14

Solve the following problem.

    Minimize f(x) = −8x1 − 16x2 + x1^2 + 4x2^2
    subject to x1 + x2 ≤ 5, x1 ≤ 3, x1 ≥ 0, x2 ≥ 0

Solution: The data and variable definitions are given below. As can be seen, the Q matrix is positive definite so the KKT conditions are necessary and sufficient for a global optimum.

    c^T = (−8, −16),  Q = | 2  0 |,  A = | 1  1 |,  b = | 5 |
                          | 0  8 |       | 1  0 |       | 3 |

    x = (x1, x2),  y = (y1, y2),  µ = (µ1, µ2),  v = (v1, v2)

The linear constraints (13a) and (13b) take the following form.

    2x1 + µ1 + µ2 − y1 = 8
    8x2 + µ1 − y2 = 16
    x1 + x2 + v1 = 5
    x1 + v2 = 3

To create the appropriate linear program, we add artificial variables to each constraint and minimize their sum.

    Minimize a1 + a2 + a3 + a4
    subject to 2x1 + µ1 + µ2 − y1 + a1 = 8
               8x2 + µ1 − y2 + a2 = 16
               x1 + x2 + v1 + a3 = 5
               x1 + v2 + a4 = 3
               all variables ≥ 0 and the complementarity conditions

Applying the modified simplex technique to this example yields the sequence of iterations given in Table 7. The optimal solution to the original problem is (x1*, x2*) = (3, 2).
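The reported optimum can be checked by direct substitution; the multiplier values µ = (0, 2) used below are read off the final simplex basis in Table 7:

```python
# Candidate optimum from Example 14 and its multipliers
x  = [3.0, 2.0]          # (x1*, x2*)
mu = [0.0, 2.0]          # dual values from the final basis in Table 7
y  = [0.0, 0.0]
v  = [0.0, 0.0]

f = -8 * x[0] - 16 * x[1] + x[0] ** 2 + 4 * x[1] ** 2

# Equations (13a)-(13b) specialized to this instance
eq = [2 * x[0] + mu[0] + mu[1] - y[0] - 8,
      8 * x[1] + mu[0] - y[1] - 16,
      x[0] + x[1] + v[0] - 5,
      x[0] + v[1] - 3]
# Complementary slackness (13d)
comp = [y[0] * x[0], y[1] * x[1], mu[0] * v[0], mu[1] * v[1]]

print(f)           # -31.0
print(eq, comp)    # all zeros: the KKT conditions hold
```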

Table 7. Simplex iterations for QP example

    Iteration  Basic variables     Solution       Objective  Entering  Leaving
    1          (a1, a2, a3, a4)    (8, 16, 5, 3)  32         x2        a2
    2          (a1, x2, a3, a4)    (8, 2, 3, 3)   14         x1        a3
    3          (a1, x2, x1, a4)    (2, 2, 3, 0)   2          µ1        a4
    4          (a1, x2, x1, µ1)    (2, 2, 3, 0)   2          µ2        a1
    5          (µ2, x2, x1, µ1)    (2, 2, 3, 0)   0          —         —