# Linear Programming, Lagrange Multipliers, and Duality Geoff Gordon



## Overview

This is a tutorial about some interesting math and geometry connected with constrained optimization. It is not primarily about algorithms: while it mentions one algorithm for linear programming, that algorithm is not new, and the math and geometry apply to other constrained optimization algorithms as well. The talk is organized around three increasingly sophisticated versions of the Lagrange multiplier theorem:

- the usual version, for optimizing smooth functions within smooth boundaries,
- an equivalent version based on saddle points, which we will generalize to
- the final version, for optimizing continuous functions over convex regions.

Between the second and third versions, we will detour through the geometry of convex cones. Finally, we will conclude with a practical example: a high-level description of the log-barrier method for solving linear programs.

## Lagrange Multipliers

Lagrange multipliers are a way to solve constrained optimization problems. For example, suppose we want to minimize the function

$$f(x, y) = x^2 + y^2$$

subject to the constraint

$$0 = g(x, y) = x + y - 2$$

Here are the constraint surface, the contours of $f$, and the solution.

## More Lagrange Multipliers

Notice that, at the solution, the contours of $f$ are tangent to the constraint surface. The simplest version of the Lagrange multiplier theorem says that this will always be the case for equality constraints: at the constrained optimum, if it exists, $\nabla f$ will be a multiple of $\nabla g$. The Lagrange multiplier theorem lets us translate the original constrained optimization problem into an ordinary system of simultaneous equations, at the cost of introducing an extra variable:

$$g(x, y) = 0 \qquad \nabla f(x, y) = p \, \nabla g(x, y)$$

The first equation states that $x$ and $y$ must satisfy the original constraint; the second adds the qualification that $\nabla f$ and $\nabla g$ must be parallel. The new variable $p$ is called a Lagrange multiplier.

## The Lagrangian

We can write this system of equations more compactly by defining the Lagrangian $L$:

$$L(x, y, p) = f(x, y) + p \, g(x, y) = x^2 + y^2 + p (x + y - 2)$$

The equations are then just $\nabla L = 0$, or in this case

$$x + y = 2 \qquad 2x = -p \qquad 2y = -p$$

The unique solution for our example is $x = 1$, $y = 1$, $p = -2$.
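As a quick check, the three stationarity equations form a small linear system that can be solved directly (a numpy sketch, not part of the original notebook):

```python
import numpy as np

# Solve grad L = 0 for L(x, y, p) = x^2 + y^2 + p(x + y - 2):
#   2x + p = 0,  2y + p = 0,  x + y = 2   (linear in x, y, p)
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, 2.0])
x, y, p = np.linalg.solve(A, b)   # (x, y, p) is approximately (1, 1, -2)
```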

## Multiple Constraints

The same technique allows us to solve problems with more than one constraint by introducing more than one Lagrange multiplier. For example, if we want to minimize $x^2 + y^2 + z^2$ subject to

$$x + y - 2 = 0 \qquad x + z - 2 = 0$$

we can write the Lagrangian

$$L(x, y, z, p, q) = x^2 + y^2 + z^2 + p (x + y - 2) + q (x + z - 2)$$

and the corresponding equations

$$0 = \partial L / \partial x = 2x + p + q$$
$$0 = \partial L / \partial y = 2y + p$$
$$0 = \partial L / \partial z = 2z + q$$
$$0 = \partial L / \partial p = x + y - 2$$
$$0 = \partial L / \partial q = x + z - 2$$

## Multiple Constraints: the Picture

The solution to the above equations is

$$\{p \to -4/3, \ q \to -4/3, \ x \to 4/3, \ y \to 2/3, \ z \to 2/3\}$$

Here are the two constraints, together with a level surface of the objective function. Neither constraint is tangent to the level surface; instead, the normal to the level surface is a linear combination of the normals to the two constraint surfaces (with coefficients given by $p$ and $q$).
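This solution and the geometric claim can be verified numerically. With the convention $L = f + p g + q h$, stationarity gives $\nabla f = -(p \nabla g + q \nabla h)$, a linear combination of the two constraint normals (a numpy sketch):

```python
import numpy as np

# Verify the stated solution of the two-constraint problem.
x, y, z, p, q = 4/3, 2/3, 2/3, -4/3, -4/3
grad_f = np.array([2*x, 2*y, 2*z])
n_g = np.array([1.0, 1.0, 0.0])   # normal of x + y - 2 = 0
n_h = np.array([1.0, 0.0, 1.0])   # normal of x + z - 2 = 0
# stationarity: grad f + p * n_g + q * n_h = 0
assert np.allclose(grad_f + p*n_g + q*n_h, 0)
# both constraints hold
assert np.isclose(x + y, 2) and np.isclose(x + z, 2)
```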

## Saddle Points

Lagrange multiplier theorem, version 2: the solution, if it exists, is always at a saddle point of the Lagrangian: no change in the original variables can decrease the Lagrangian, while no change in the multipliers can increase it. For example, consider minimizing $x^2$ subject to $x = 1$. The Lagrangian is

$$L(x, p) = x^2 + p (x - 1)$$

which has a saddle point at $x = 1$, $p = -2$. For fixed $p$, $L$ is a parabola with minimum at $x = -p/2$ (which is $1$ when $p = -2$). For fixed $x$, $L$ is a line with slope $x - 1$ (which is $0$ when $x = 1$).
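A brute-force grid check of the saddle-point property (a sketch; the grids are arbitrary choices):

```python
import numpy as np

# L(x, p) = x^2 + p(x - 1) has a saddle point at (x, p) = (1, -2):
# no x decreases L at p = -2, and no p changes L at x = 1.
def L(x, p):
    return x**2 + p*(x - 1)

xs = np.linspace(-3, 3, 601)
ps = np.linspace(-10, 10, 601)
assert np.all(L(xs, -2.0) >= L(1.0, -2.0))   # x = 1 minimizes L(., -2)
assert np.allclose(L(1.0, ps), L(1.0, -2.0)) # L(1, .) is flat: slope x - 1 = 0
```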

## Lagrangians as Games

Because the constrained optimum always occurs at a saddle point of the Lagrangian, we can view a constrained optimization problem as a game between two players: one player controls the original variables and tries to minimize the Lagrangian, while the other controls the multipliers and tries to maximize it. If the constrained optimization problem is well posed (that is, has a finite and achievable minimum), the resulting game has a finite value (which is equal to the value of the Lagrangian at its saddle point). On the other hand, if the constraints are unsatisfiable, the player who controls the Lagrange multipliers can win (i.e., force the value to $+\infty$), while if the objective function has no finite lower bound within the constraint region, the player who controls the original variables can win (i.e., force the value to $-\infty$). We will not consider the case where the problem has a finite but unachievable infimum (e.g., minimize $\exp(x)$ over the real line).

## Inequality Constraints

What if we want to minimize $x^2 + y^2$ subject to $x + y - 2 \ge 0$? We can use the same Lagrangian as before,

$$L(x, y, p) = x^2 + y^2 + p (x + y - 2)$$

but with the additional restriction that $p \le 0$. Now, as long as $x + y - 2 \ge 0$, the player who controls $p$ can't do anything: making $p$ more negative is disadvantageous, since it decreases the Lagrangian, while making $p$ more positive is not allowed. Unfortunately, when we add inequality constraints, the simple condition $\nabla L = 0$ is neither necessary nor sufficient to guarantee a solution to a constrained optimization problem. (The optimum might occur at a boundary.) This is why we introduced the saddle-point version of the Lagrange multiplier theorem. We can generalize the Lagrange multiplier theorem for inequality constraints, but we must use saddle points to do so. First, though, we need to look at the geometry of convex cones.
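A small numeric illustration of why the restricted $p$-player is powerless at a strictly feasible point (the point $(3, 3)$ and the grid are my own choices, not from the slides):

```python
import numpy as np

# With the constraint x + y - 2 >= 0 and the restriction p <= 0, check that
# at a strictly feasible point the best the p-player can do is p = 0.
def L(x, y, p):
    return x**2 + y**2 + p*(x + y - 2)

ps = np.linspace(-5, 0, 501)            # allowed multipliers p <= 0
vals = L(3, 3, ps)                      # (3, 3) has slack 3 + 3 - 2 = 4 > 0
assert np.argmax(vals) == len(ps) - 1   # maximum over p <= 0 is at p = 0 ...
assert np.isclose(vals.max(), 18.0)     # ... where L reduces to f(3, 3) = 18
```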

## Cones

If $g_1, g_2, \ldots$ are vectors, the set $\{\sum_i a_i g_i \mid a_i \ge 0\}$ of all their nonnegative linear combinations is a cone. The $g_i$ are the generators of the cone. The following picture shows two 2-dimensional cones which meet at the origin. The dark lines bordering each cone form a minimal set of generators for it; adding additional generating vectors from inside the cone wouldn't change the result. These particular two cones are duals of each other. The dual of a set $C$, written $C^*$, is the set of all vectors which have a nonpositive dot product with every vector in $C$. If $C$ is a cone, then $(C^*)^* = C$.
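Membership in the dual cone can be tested directly from the definition (a sketch; the generators below are a hypothetical example, not the cones in the picture):

```python
import numpy as np

# v is in C* iff v . g <= 0 for every generator g of C.
def in_dual(v, generators, tol=1e-12):
    return all(np.dot(v, g) <= tol for g in generators)

C = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # nonnegative orthant in 2-D
assert in_dual(np.array([-1.0, -2.0]), C)         # nonpositive orthant is in C*
assert not in_dual(np.array([1.0, -1.0]), C)      # positive dot product with (1, 0)
```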

## Special Cones

Any set of vectors can generate a cone. The null set generates a cone which contains only the origin; the dual of this cone is all of $\mathbf{R}^n$. Fewer than $n$ vectors generate a cone which is contained in a proper subspace of $\mathbf{R}^n$. If a cone contains both a (nonzero) vector and its negative, it is flat. Any linear subspace of $\mathbf{R}^n$ is an example of a flat cone. The following picture shows another flat cone, along with its dual (which is not flat). A cone which is not flat is pointed. The dual of a full-rank flat cone is a pointed cone which is not of full rank; the dual of a full-rank pointed cone is also a full-rank pointed cone. The dual of the positive orthant in $\mathbf{R}^n$ is the negative orthant. If we reflect the negative orthant around the origin, we get back the positive orthant again. Cones with this property (that is, $C = -C^*$) are called self-dual.

## Faces and Such

We have already seen the representation of a cone by its generators. Any vector in the minimal set of generators for a cone is called an edge. (The minimal set of generators is unique up to scaling.) Cones can also be represented by their faces, or bounding hyperplanes. Since each face must pass through the origin, we can describe a face completely by its normal vector. Conveniently, the face normals of any cone are the edges of its dual:
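In two dimensions this correspondence is easy to check directly (a sketch with hypothetical edges $(1, 0)$ and $(1, 1)$, not the cones pictured):

```python
import numpy as np

# Rotating each edge by 90 degrees (choosing the rotation pointing away from
# the cone) gives the face normals; each normal is then an edge of the dual.
g1, g2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])   # edges of C

def face_normal(edge, other):
    # the rotation of `edge` that has nonpositive dot product with the other edge
    n = np.array([edge[1], -edge[0]])
    return n if np.dot(n, other) <= 0 else -n

normals = [face_normal(g1, g2), face_normal(g2, g1)]
# each face normal lies in C*: nonpositive dot product with both generators of C
for n in normals:
    assert np.dot(n, g1) <= 1e-12 and np.dot(n, g2) <= 1e-12
```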

## Representations of Cones

Any linear transform of a cone is another cone. If we apply the matrix $A$ (not necessarily square) to the cone generated by $\{g_i \mid i = 1 \ldots m\}$, the result is the cone generated by $\{A g_i \mid i = 1 \ldots m\}$. In particular, if we write $G$ for the matrix with columns $g_i$, the cone $C$ generated by the $g_i$ is just the nonnegative orthant $\mathbf{R}^m_+$ transformed by $G$; that is, $C = G \mathbf{R}^m_+$. $G$ is called the generating matrix for $C$. For example, the narrower of the two cones on the previous slide is generated by the $2 \times 2$ matrix whose columns are its two edges. On the other hand, we sometimes have the bounding hyperplanes of $C$ rather than its edges. That is, we have a matrix $N$ whose columns are the $n$ face normals of $C$. In this case, $N$ is the generating matrix of $C^*$, and we can write $C$ as $(N \mathbf{R}^n_+)^*$; the bounding-hyperplane representation of our example cone is the matrix of its two face normals. Unfortunately, in higher dimensions it is difficult to translate between these two representations of a cone. For example, finding the bounding hyperplanes of a cone from its generators is equivalent to finding the convex hull of a set of points.
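For a square, invertible generating matrix, membership in $G \mathbf{R}^m_+$ reduces to solving a linear system and checking signs (a sketch with a hypothetical $G$):

```python
import numpy as np

# v lies in the cone G R^2_+ iff the coefficients a solving G a = v are nonnegative.
G = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # columns: hypothetical edges (1, 0) and (1, 1)

def in_cone(v):
    a = np.linalg.solve(G, v)
    return bool(np.all(a >= -1e-12))

assert in_cone(np.array([2.0, 1.0]))      # 1*(1,0) + 1*(1,1)
assert not in_cone(np.array([0.0, 1.0]))  # would need coefficient -1 on (1, 0)
```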

## Convex Polytopes as Cones

A convex polytope is a region formed by the intersection of some number of halfspaces. A cone is also an intersection of halfspaces, with the additional constraint that the halfspace boundaries must pass through the origin. With the addition of an extra variable to represent the constant term, we can represent any convex polytope as a cone: each edge of the cone corresponds to a vertex of the polytope, and each face of the cone corresponds to a face of the polytope. For example, to represent a 2-D equilateral triangle, we add an extra coordinate (call it $t$) and write the triangle as the intersection of the plane $t = 1$ with the cone generated by the three vertices, each lifted to the plane $t = 1$.
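A sketch of the construction, using a hypothetical right triangle in place of the slide's equilateral one:

```python
import numpy as np

# Lift each vertex (x, y) of the triangle (0,0), (2,0), (0,2) to (x, y, 1);
# a point lies in the triangle iff its lift (x, y, 1) lies in the cone the
# lifted vertices generate, i.e. has nonnegative cone coefficients.
V = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0],
              [1.0, 1.0, 1.0]])    # columns: vertices lifted to the plane t = 1

def in_triangle(x, y):
    a = np.linalg.solve(V, np.array([x, y, 1.0]))
    return bool(np.all(a >= -1e-12))  # the coefficients are barycentric coordinates

assert in_triangle(0.5, 0.5)
assert not in_triangle(2.0, 2.0)
```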

## Polytopes cont'd

Equivalently, we can represent the same triangle by its bounding hyperplanes: each line $a x + b y = d$ bounding the triangle becomes a bounding hyperplane $a x + b y - d t = 0$ of the cone. We can represent an unbounded polytope by allowing some of the cone's edges not to intersect the plane $t = 1$.

## Geometric Duality

The idea of duality for cones is almost the same as the standard idea of geometric duality. A pair of dual cones represents a pair of dual polytopes, with each vertex of one polytope corresponding to a face of the other and vice versa. Why "almost": usually the geometric dual is thought of as insensitive to location and scaling, while the size and location of the polytope represented by the dual cone depend on the size of the original polytope and its position relative to the origin. Also, strictly speaking, each cone must be the negative of the dual of the other, since by convention we intersect with the plane $t = 1$ rather than $t = -1$.

## The Lagrange Multiplier Theorem

We can now state the Lagrange multiplier theorem in its most general form, which tells how to minimize a function over an arbitrary convex polytope $N x + c \le 0$.

Lagrange multiplier theorem: the problem of finding $x$ to minimize $f(x)$ subject to $N x + c \le 0$ is equivalent to the problem of finding

$$\arg\min_x \max_{p \in \mathbf{R}^n_+} L(x, p)$$

where the Lagrangian $L$ is defined as

$$L(x, p) = f(x) + p^T (N x + c)$$

for a vector of Lagrange multipliers $p$. The Lagrange multiplier theorem uses properties of convex cones and duality to transform our original problem (involving an arbitrary polytope) to a problem which mentions only the very simple cone $\mathbf{R}^n_+$. ($\mathbf{R}^n_+$ is simple for two reasons: it is axis-parallel, and it is self-dual.) The trade-offs are that we have introduced extra variables and that we are now looking for a saddle point rather than a minimum.

## Linear Programming Duality

If $f(x)$ is a linear function (say $f^T x$) then the minimization problem is a linear program. The Lagrangian is $f^T x + p^T (N x + c)$ or, rearranging terms, $(f^T + p^T N) x + p^T c$. The latter is the Lagrangian for a new linear program (called the dual program):

$$\text{maximize } p^T c \quad \text{subject to} \quad N^T p + f = 0, \quad p \ge 0$$

Since $x$ was unrestricted in the original, or primal, program, it becomes a vector of Lagrange multipliers for an equality constraint in the dual program. Note that we minimize over $x$ but maximize over $p$. (Changing $\le$ to $\ge$ in the constraints changes $p$ to $-p$.)
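Weak duality is easy to verify numerically: for primal-feasible $x$ and dual-feasible $p$, $f^T x - p^T c = -p^T(Nx + c) \ge 0$, so $p^T c$ is a lower bound on the primal value. A sketch, using the small LP from the central-path example later in the talk (minimize $x + y$ subject to $x + 2y \ge 1$, $x, y \ge 0$) written as $N x + c \le 0$:

```python
import numpy as np

f = np.array([1.0, 1.0])
N = np.array([[-1.0, -2.0],     # -x - 2y + 1 <= 0   (x + 2y >= 1)
              [-1.0,  0.0],     # -x <= 0
              [ 0.0, -1.0]])    # -y <= 0
c = np.array([1.0, 0.0, 0.0])

x = np.array([0.0, 0.5])        # primal feasible (in fact optimal)
p = np.array([0.5, 0.5, 0.0])   # dual feasible: N^T p + f = 0, p >= 0
assert np.all(N @ x + c <= 1e-12)
assert np.allclose(N.T @ p + f, 0) and np.all(p >= 0)
assert np.isclose(f @ x, p @ c)  # equal objectives, so both are optimal
```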

## Duality cont'd

Even though the primal and dual Lagrangians are the same, the primal and dual programs are not (quite) the same problem: in one, we want $\min_x \max_p L$; in the other, we want $\max_p \min_x L$. It is a theorem (due to von Neumann) that these two values are the same (and so any $x, p$ which achieves one also achieves the other). For nonlinear optimization problems, $\inf_x \sup_p L$ and $\sup_p \inf_x L$ are not necessarily the same. The difference between the two is called a duality gap.

## Complementary Slackness

If $x, p$ is a saddle point of $L$, then $p^T (N x + c) \le 0$ (else $L(x, 2p) > L(x, p)$). Similarly, $p^T (N x + c) \ge 0$ (else $L(x, \frac{1}{2} p) > L(x, p)$). So $p^T (N x + c) = 0$ (the complementary slackness condition). Since, in general, $N x + c$ will be zero in at most $n$ coordinates, the remaining $m - n$ components of $p$ must be $0$. (In fact, barring degeneracy, the constrained optimum will be at a vertex of the feasible region, so $N x + c$ will be zero in exactly $n$ coordinates and $p$ will have exactly $n$ nonzeros.)
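A sketch of the componentwise check on a small LP (minimize $x + y$ subject to $x + 2y \ge 1$, $x, y \ge 0$, written as $N x + c \le 0$; the optimal pair is stated, not derived, here):

```python
import numpy as np

N = np.array([[-1.0, -2.0], [-1.0, 0.0], [0.0, -1.0]])
c = np.array([1.0, 0.0, 0.0])
x = np.array([0.0, 0.5])          # optimal primal point
p = np.array([0.5, 0.5, 0.0])     # optimal multipliers
slack = N @ x + c
assert np.allclose(p * slack, 0)  # p_i (N x + c)_i = 0 componentwise:
                                  # p_i is nonzero only where the constraint is tight
```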

## Prices

Suppose $x, p$ are a saddle point of $f^T x + p^T (N x + c)$. How much does an increase of $d$ in $f_i$ change the value of the optimum? The value of the optimum increases by at most $d \, x_i$, since the same $x$ will still be feasible (although perhaps no longer optimal). Similarly, a decrease of $d$ in $c_i$ decreases the optimum by at most $d \, p_i$. For this reason, dual variables are sometimes called shadow prices.
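For a tiny LP the shadow-price interpretation can be checked exactly (a sketch; the vertex enumeration below is valid only for this specific two-variable example):

```python
# Minimize x + y  s.t.  x + 2y >= t, x, y >= 0.  For t > 0 the optimum is
# min(t, t/2) = t/2, at the vertex (0, t/2).  Relaxing the constraint
# (decreasing the constant term by d, i.e. t -> t - d) lowers the optimum by
# exactly d * p1 here, where p1 = 1/2 is that constraint's multiplier.
def optimum(t):
    # only two candidate vertices: (t, 0) with value t, and (0, t/2) with value t/2
    return min(t, t / 2)

d, p1 = 0.1, 0.5
drop = optimum(1.0) - optimum(1.0 - d)
assert abs(drop - d * p1) < 1e-12
```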

## Log-barrier Methods

Commercial linear-programming solvers today use variants of the logarithmic barrier method, a descendant of an algorithm introduced in the 1980s by N. Karmarkar. Karmarkar's was the second polynomial-time algorithm to be discovered for linear programming, and the first practical one. (Khachiyan's ellipsoid algorithm was the first, but in practice simplex seems to beat it.) We want to find a saddle point of $L = f^T x + p^T (N x + c)$ with the constraint $p \ge 0$. Since even this simple constraint is inconvenient, consider what happens if we add an extra barrier term:

$$L_\mu = f^T x + p^T (N x + c) + \mu \sum_i \ln p_i$$

The effect of the barrier term is to make $L_\mu \to -\infty$ as $p_i \to 0$, so that the maximizing player automatically stays away from the boundary: we can ignore the constraints on $p$ and work entirely with a smooth function. The relative importance of the barrier term is controlled by the barrier parameter $\mu > 0$.

## Log-Barrier cont'd

For fixed $x$ and $\mu$, $L_\mu$ is strictly concave in $p$, and so has a unique maximum over $p$; for fixed $p$ and $\mu$, $L_\mu$ is linear in $x$. So for each $\mu$, $L_\mu$ defines a saddle-point problem in $x, p$. For large $\mu$, the problem is very smooth (and easy to solve); as $\mu \to 0$, $L_\mu \to L$ and $(x^*, p^*)_\mu \to (x^*, p^*)$. Since $L_\mu$ is smooth, we can find the saddle point by setting $\nabla L_\mu = 0$, or

$$N^T p + f = 0 \qquad N x + c = -\mu \, p^{-1}$$

where $p^{-1}$ means the componentwise inverse of $p$. The basic idea of the log-barrier method is to start with a large $\mu$ and follow the trajectory of solutions as $\mu \to 0$. This trajectory is called the central path.

## The Central Path

Here is a portion of the central path for the problem: minimize $x + y$ subject to

$$x + 2y \ge 1 \qquad x \ge 0 \qquad y \ge 0$$

This plot shows $x, y$ as a function of $\mu$:

## Central Path cont'd

Here are the multipliers $p, q, r$. Note that $p, q, r$ always satisfy the equality constraints of the dual problem, namely $p + r = 1$ and $q + 2r = 1$.

## Following the Central Path

Again, the optimality conditions are $\nabla L_\mu = 0$, or

$$N^T p + f = 0 \qquad N x + c = -\mu \, p^{-1}$$

If we start with a guess at $x, p$ for a particular $\mu$, we can refine the guess with one or more steps of Newton's method. After we're satisfied with the accuracy, we can decrease $\mu$ and start again (using our previous solution as an initial guess). (One could, although we won't, derive bounds on how close we need to be to the solution before we can safely decrease $\mu$.) With the appropriate (nonobvious!) strategy for reducing $\mu$ as much as possible after each Newton step, we have a polynomial-time algorithm. (In practice, we never use a provably polynomial strategy, since all the known ones are very conservative.)
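The whole procedure fits in a few lines for the example problem. This is a minimal sketch under the conventions above (damped Newton steps, a fixed geometric schedule for $\mu$), not a production interior-point method:

```python
import numpy as np

# Minimize x + y  s.t.  x + 2y >= 1, x, y >= 0, written as N x + c <= 0.
f = np.array([1.0, 1.0])
N = np.array([[-1.0, -2.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
c = np.array([1.0, 0.0, 0.0])

x = np.array([1.0, 1.0])   # strictly feasible start
p = np.ones(3)             # strictly positive multipliers
mu = 1.0
while mu > 1e-10:
    for _ in range(50):    # Newton's method on  N'p + f = 0,  N x + c + mu/p = 0
        F = np.concatenate([N.T @ p + f, N @ x + c + mu / p])
        if np.linalg.norm(F) < 1e-10:
            break
        J = np.block([[np.zeros((2, 2)), N.T],
                      [N, np.diag(-mu / p**2)]])
        step = np.linalg.solve(J, -F)
        dx, dp = step[:2], step[2:]
        # damp the step so multipliers and slacks stay strictly positive
        s, ds = -(N @ x + c), -(N @ dx)
        neg = [-v / d for v, d in zip(np.concatenate([p, s]),
                                      np.concatenate([dp, ds])) if d < 0]
        alpha = min(1.0, 0.9 * min(neg)) if neg else 1.0
        x, p = x + alpha * dx, p + alpha * dp
    mu *= 0.2              # shrink mu, warm-starting the next solve

# x approaches the primal optimum (0, 1/2);
# p approaches the dual optimum (1/2, 1/2, 0)
```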


### ! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. !-approximation algorithm.

Approximation Algorithms Chapter Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of

### Overview of Math Standards

Algebra 2 Welcome to math curriculum design maps for Manhattan- Ogden USD 383, striving to produce learners who are: Effective Communicators who clearly express ideas and effectively communicate with diverse

7.5 SYSTEMS OF INEQUALITIES Copyright Cengage Learning. All rights reserved. What You Should Learn Sketch the graphs of inequalities in two variables. Solve systems of inequalities. Use systems of inequalities

### 1. (a) Multiply by negative one to make the problem into a min: 170A 170B 172A 172B 172C Antonovics Foster Groves 80 88

Econ 172A, W2001: Final Examination, Possible Answers There were 480 possible points. The first question was worth 80 points (40,30,10); the second question was worth 60 points (10 points for the first

### Chapter 6. Cuboids. and. vol(conv(p ))

Chapter 6 Cuboids We have already seen that we can efficiently find the bounding box Q(P ) and an arbitrarily good approximation to the smallest enclosing ball B(P ) of a set P R d. Unfortunately, both

### Approximation Algorithms

Approximation Algorithms or: How I Learned to Stop Worrying and Deal with NP-Completeness Ong Jit Sheng, Jonathan (A0073924B) March, 2012 Overview Key Results (I) General techniques: Greedy algorithms

### We call this set an n-dimensional parallelogram (with one vertex 0). We also refer to the vectors x 1,..., x n as the edges of P.

Volumes of parallelograms 1 Chapter 8 Volumes of parallelograms In the present short chapter we are going to discuss the elementary geometrical objects which we call parallelograms. These are going to

### 3. Evaluate the objective function at each vertex. Put the vertices into a table: Vertex P=3x+2y (0, 0) 0 min (0, 5) 10 (15, 0) 45 (12, 2) 40 Max

SOLUTION OF LINEAR PROGRAMMING PROBLEMS THEOREM 1 If a linear programming problem has a solution, then it must occur at a vertex, or corner point, of the feasible set, S, associated with the problem. Furthermore,

### CHAPTER 9. Integer Programming

CHAPTER 9 Integer Programming An integer linear program (ILP) is, by definition, a linear program with the additional constraint that all variables take integer values: (9.1) max c T x s t Ax b and x integral

### Duality in General Programs. Ryan Tibshirani Convex Optimization 10-725/36-725

Duality in General Programs Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: duality in linear programs Given c R n, A R m n, b R m, G R r n, h R r : min x R n c T x max u R m, v R r b T

### Linear Programming. April 12, 2005

Linear Programming April 1, 005 Parts of this were adapted from Chapter 9 of i Introduction to Algorithms (Second Edition) /i by Cormen, Leiserson, Rivest and Stein. 1 What is linear programming? The first

### Walrasian Demand. u(x) where B(p, w) = {x R n + : p x w}.

Walrasian Demand Econ 2100 Fall 2015 Lecture 5, September 16 Outline 1 Walrasian Demand 2 Properties of Walrasian Demand 3 An Optimization Recipe 4 First and Second Order Conditions Definition Walrasian

### Lecture 2: Homogeneous Coordinates, Lines and Conics

Lecture 2: Homogeneous Coordinates, Lines and Conics 1 Homogeneous Coordinates In Lecture 1 we derived the camera equations λx = P X, (1) where x = (x 1, x 2, 1), X = (X 1, X 2, X 3, 1) and P is a 3 4

### Lecture 7: Finding Lyapunov Functions 1

Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.243j (Fall 2003): DYNAMICS OF NONLINEAR SYSTEMS by A. Megretski Lecture 7: Finding Lyapunov Functions 1

### Solving Systems of Equations with Absolute Value, Polynomials, and Inequalities

Solving Systems of Equations with Absolute Value, Polynomials, and Inequalities Solving systems of equations with inequalities When solving systems of linear equations, we are looking for the ordered pair

### Equilibrium computation: Part 1

Equilibrium computation: Part 1 Nicola Gatti 1 Troels Bjerre Sorensen 2 1 Politecnico di Milano, Italy 2 Duke University, USA Nicola Gatti and Troels Bjerre Sørensen ( Politecnico di Milano, Italy, Equilibrium

### Some representability and duality results for convex mixed-integer programs.

Some representability and duality results for convex mixed-integer programs. Santanu S. Dey Joint work with Diego Morán and Juan Pablo Vielma December 17, 2012. Introduction About Motivation Mixed integer

### Proximal mapping via network optimization

L. Vandenberghe EE236C (Spring 23-4) Proximal mapping via network optimization minimum cut and maximum flow problems parametric minimum cut problem application to proximal mapping Introduction this lecture:

### Support Vector Machines Explained

March 1, 2009 Support Vector Machines Explained Tristan Fletcher www.cs.ucl.ac.uk/staff/t.fletcher/ Introduction This document has been written in an attempt to make the Support Vector Machines (SVM),

### Linear Programming: Penn State Math 484 Lecture Notes. Christopher Griffin

Linear Programming: Penn State Math 484 Lecture Notes Version 1.8.3 Christopher Griffin 2009-2014 Licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License With

### Linear Dependence Tests

Linear Dependence Tests The book omits a few key tests for checking the linear dependence of vectors. These short notes discuss these tests, as well as the reasoning behind them. Our first test checks

### ! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm.

Approximation Algorithms 11 Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of three

### . P. 4.3 Basic feasible solutions and vertices of polyhedra. x 1. x 2

4. Basic feasible solutions and vertices of polyhedra Due to the fundamental theorem of Linear Programming, to solve any LP it suffices to consider the vertices (finitely many) of the polyhedron P of the

### Linear Equations ! 25 30 35\$ & " 350 150% & " 11,750 12,750 13,750% MATHEMATICS LEARNING SERVICE Centre for Learning and Professional Development

MathsTrack (NOTE Feb 2013: This is the old version of MathsTrack. New books will be created during 2013 and 2014) Topic 4 Module 9 Introduction Systems of to Matrices Linear Equations Income = Tickets!

### Chapter 4. Duality. 4.1 A Graphical Example

Chapter 4 Duality Given any linear program, there is another related linear program called the dual. In this chapter, we will develop an understanding of the dual linear program. This understanding translates

### Solving Systems of Linear Equations

LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how

### SOLUTIONS. f x = 6x 2 6xy 24x, f y = 3x 2 6y. To find the critical points, we solve

SOLUTIONS Problem. Find the critical points of the function f(x, y = 2x 3 3x 2 y 2x 2 3y 2 and determine their type i.e. local min/local max/saddle point. Are there any global min/max? Partial derivatives

### Module1. x 1000. y 800.

Module1 1 Welcome to the first module of the course. It is indeed an exciting event to share with you the subject that has lot to offer both from theoretical side and practical aspects. To begin with,

### 24. The Branch and Bound Method

24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no

### Chapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling

Approximation Algorithms Chapter Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should I do? A. Theory says you're unlikely to find a poly-time algorithm. Must sacrifice one

### Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization

Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization 2.1. Introduction Suppose that an economic relationship can be described by a real-valued

### Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.

This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra