Lecture 3. Linear Programming. 3B1B Optimization Michaelmas 2015 A. Zisserman. Extreme solutions. Simplex method. Interior point method. Integer programming and relaxation




The Optimization Tree

Linear Programming

The name is historical; it should really be called "linear optimization". The problem consists of three parts:

A linear function to be maximized:

  maximize f(x) = c_1 x_1 + c_2 x_2 + ... + c_n x_n

Problem constraints:

  subject to
  a_11 x_1 + a_12 x_2 + ... + a_1n x_n ≤ b_1
  a_21 x_1 + a_22 x_2 + ... + a_2n x_n ≤ b_2
  ...
  a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n ≤ b_m

Non-negative variables, e.g. x_1, x_2 ≥ 0

Linear Programming

The problem is usually expressed in matrix form, and then it becomes:

  maximize cᵀx subject to Ax ≤ b, x ≥ 0

where A is an m × n matrix. The objective function, cᵀx, and constraints, Ax ≤ b, are all linear functions of x. (Linear programming problems are convex, so a local optimum is the global optimum.)

Example 1

Suppose that a farmer has a piece of farm land, say A square kilometers large, to be planted with either wheat or barley or some combination of the two. The farmer has a limited permissible amount F of fertilizer and P of insecticide, each of which is required in different amounts per unit area for wheat (F_1, P_1) and barley (F_2, P_2). Let S_1 be the selling price of wheat and S_2 the price of barley, and denote the areas planted with wheat and barley by x_1 and x_2 respectively. The optimal number of square kilometers to plant with wheat vs. barley can be expressed as a linear programming problem.

(Figure: the field of area A divided into areas x_1 and x_2.)

Example 1 cont.

Maximize S_1 x_1 + S_2 x_2 (the revenue: this is the objective function)

subject to
  x_1 + x_2 ≤ A (limit on total area)
  F_1 x_1 + F_2 x_2 ≤ F (limit on fertilizer)
  P_1 x_1 + P_2 x_2 ≤ P (limit on insecticide)
  x_1 ≥ 0, x_2 ≥ 0 (cannot plant a negative area)

which in matrix form becomes: maximize cᵀx subject to Ax ≤ b, x ≥ 0, with
c = (S_1, S_2)ᵀ, x = (x_1, x_2)ᵀ, A = [1 1; F_1 F_2; P_1 P_2], b = (A, F, P)ᵀ.

Example 2: Max flow Given: a weighted directed graph, source s, destination t Interpret edge weights as capacities Goal: Find maximum flow from s to t Flow does not exceed capacity in any edge Flow at every vertex satisfies equilibrium [ flow in equals flow out ] e.g. oil flowing through pipes, internet routing

Example 2 cont. (Slide credit: Robert Sedgewick and Kevin Wayne)

Example 2 cont.

Example 3: shortest path Given: a weighted directed graph, with a single source s Distance from s to v: length of the shortest path from s to v Goal: Find distance (and shortest path) to every vertex e.g. plotting routes on Google maps

Application Minimize number of stops (lengths = 1) Minimize amount of time (positive lengths)

LP: why is it important?

We have seen examples of: allocating limited resources, network flow, shortest path. Others include: matching, assignment.

It is a widely applicable problem-solving model because non-negativity is the usual constraint on any variable that represents an amount of something, and one is often interested in bounds imposed by limited resources. It dominates the world of industry.

Linear Programming: 2D example

  max_{x_1, x_2} f(x_1, x_2)

Cost function: f(x_1, x_2) = 0·x_1 + 0.5·x_2

Inequality constraints:
  x_1 + x_2 ≤ 2
  x_1 ≤ 1.5
  x_1 + (8/3) x_2 ≤ 4
  x_1, x_2 ≥ 0

Feasible Region

Inequality constraints:
  x_1 + x_2 ≤ 2
  x_1 ≤ 1.5
  x_1 + (8/3) x_2 ≤ 4
  x_1, x_2 ≥ 0

(Figure: the feasible region in the (x_1, x_2) plane, bounded by the lines x_1 + x_2 = 2, x_1 = 1.5 and x_1 + (8/3) x_2 = 4.)

LP example

  max_{x_1, x_2} f(x_1, x_2)

Cost function: f(x_1, x_2) = 0·x_1 + 0.5·x_2, plotted as the plane z = 0.5 x_2

Inequality constraints:
  x_1 + x_2 ≤ 2
  x_1 ≤ 1.5
  x_1 + (8/3) x_2 ≤ 4
  x_1, x_2 ≥ 0

(Figure: the cost plane z = 0.5 x_2 over the feasible region; the maximum is attained at an extreme point.)

LP example

  max_{x_1, x_2} f(x_1, x_2)

Cost function: f(x_1, x_2) = 0·x_1 + 0.5·x_2

Inequality constraints: x_1 + x_2 ≤ 2; x_1 ≤ 1.5; x_1 + (8/3) x_2 ≤ 4; x_1, x_2 ≥ 0

(Figure: contour lines of the cost function over the feasible region, with the maximizing extreme point marked.)

LP example: change of cost function

  max_{x_1, x_2} f(x_1, x_2)

Cost function: f(x_1, x_2) = cᵀx = c_1 x_1 + c_2 x_2

Inequality constraints: x_1 + x_2 ≤ 2; x_1 ≤ 1.5; x_1 + (8/3) x_2 ≤ 4; x_1, x_2 ≥ 0

(Figure: contour lines for a general cost direction c; whatever direction c takes, the optimum lies at an extreme point of the feasible region.)

Linear Programming: optima at vertices

The key point is that, for any linear objective function, the optima occur only at the corners (vertices) of the feasible polygonal region, never in its interior. Similarly, in 3D the optima occur only at the vertices of a polyhedron (and in n dimensions at the vertices of a polytope). However, the optimum is not necessarily unique: it is possible to have a set of optimal solutions covering an edge or face of a polyhedron.

Cf. constrained quadratic optimization

(Figure: a quadratic cost surface z over the (x_1, x_2) feasible region, with an extreme point marked.)

Linear Programming: solution idea

So, even though LP is formulated in terms of continuous variables x, there are only a finite number of discrete possible solutions that need be explored. How to find the maximum? Try every vertex? There are too many in large problems. Instead, simply go from one vertex to the next, increasing the cost function each time, in an intelligent manner that avoids having to visit (and test) every vertex. This is the idea of the simplex algorithm.

Sketch solutions for optimization methods

We will look at two methods of solution:
1. Simplex method: tableau exploration of vertices, based on linear algebra
2. Interior point method: continuous optimization, with constraints cast as barriers

1. Simplex method

The optimum must be at an intersection of constraints. Intersections are easy to find: change the inequalities to equalities. Intersections are called basic solutions.

Some intersections are outside the feasible region (e.g. point f) and so need not be considered; the others, which are vertices of the feasible region, are called basic feasible solutions.

(Figure: the constraint lines x_1 + x_2 = 2, x_1 = 1.5 and x_1 + (8/3) x_2 = 4, with intersection points labelled a to f; f lies outside the feasible region.)

Worst complexity

There are

  C(m, n) = m! / ((m - n)! n!)

possible basic solutions, where m is the total number of constraints and n the dimension of the space. In this case, C(5, 2) = 5! / (3! 2!) = 10.

However, for large problems the number of solutions can be huge, and it is not realistic to explore them all.

(Figure: the 2D example again, showing the intersection points of the constraint lines.)
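For a problem this small, the basic solutions can be enumerated directly. Below is a minimal pure-Python sketch for the 2D example above (the helper names are illustrative): intersect each pair of the five constraint boundaries, keep the feasible intersections, and evaluate the objective at each. Note that two of the five boundary lines (x_1 = 0 and x_1 = 1.5) are parallel, so only nine of the ten pairs actually intersect.

```python
from itertools import combinations

# 2D example: maximize 0.5*x2 subject to a_i . x <= b_i
A = [(1.0, 1.0), (1.0, 0.0), (1.0, 8.0/3.0), (-1.0, 0.0), (0.0, -1.0)]
b = [2.0, 1.5, 4.0, 0.0, 0.0]
c = (0.0, 0.5)

def intersect(i, j):
    """Solve the 2x2 system a_i . x = b_i, a_j . x = b_j (None if parallel)."""
    (a1, a2), (a3, a4) = A[i], A[j]
    det = a1*a4 - a2*a3
    if abs(det) < 1e-12:
        return None
    return ((b[i]*a4 - a2*b[j]) / det, (a1*b[j] - b[i]*a3) / det)

def feasible(x):
    return all(a1*x[0] + a2*x[1] <= bi + 1e-9 for (a1, a2), bi in zip(A, b))

# basic solutions = pairwise intersections; feasible ones are the vertices
basic = [p for i, j in combinations(range(5), 2)
         if (p := intersect(i, j)) is not None]
vertices = [p for p in basic if feasible(p)]
best = max(vertices, key=lambda p: c[0]*p[0] + c[1]*p[1])
print(len(basic), len(vertices), best)   # 9 basic solutions, 5 vertices
```

Running this finds five basic feasible solutions (the vertices a to e in the figure) and selects (0, 1.5), where the objective reaches 0.75.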

The Simplex Algorithm

- Start from a basic feasible solution (i.e. a vertex of the feasible region)
- Consider all the vertices connected to the current one by an edge
- Choose the vertex which increases the cost function the most (and is still feasible, of course)
- Repeat until no further increases are possible

In practice this is very efficient and avoids visiting all vertices.
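A rough illustration of these steps is the classical tableau form of the simplex method for maximize cᵀx subject to Ax ≤ b, x ≥ 0 with b ≥ 0, so that the all-slack basis gives a starting vertex. This is a teaching sketch, not a production solver: real implementations add a Phase I for infeasible starts, anti-cycling rules, and more careful pivot selection.

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (assumes all b_i >= 0)."""
    m, n = len(A), len(c)
    # tableau rows [A | I | b], plus the reduced-cost row [-c | 0 | 0]
    T = [list(A[i]) + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-cj for cj in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))          # slack variables start basic
    while True:
        # entering column: most negative reduced cost (Dantzig rule)
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            break                          # no improving edge: optimal
        # leaving row: minimum ratio test over positive pivot entries
        rows = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-9]
        if not rows:
            raise ValueError("unbounded LP")
        _, row = min(rows)
        basis[row] = col
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col]:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
    x = [0.0] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, T[-1][-1]

# the 2D example from the slides: maximize 0.5*x2
x, val = simplex([0.0, 0.5],
                 [[1.0, 1.0], [1.0, 0.0], [1.0, 8.0/3.0]],
                 [2.0, 1.5, 4.0])
print(x, val)   # optimum at the vertex (0, 1.5), value 0.75
```

Each pivot moves the current basic feasible solution along an edge to an adjacent vertex with a better objective value, exactly the walk described above.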

Matlab LP function: linprog

linprog() for medium-scale problems uses the simplex algorithm.

Example: find x that minimizes f(x) = -5x_1 - 4x_2 - 6x_3, subject to
  x_1 - x_2 + x_3 ≤ 20
  3x_1 + 2x_2 + 4x_3 ≤ 42
  3x_1 + 2x_2 ≤ 30
  0 ≤ x_1, 0 ≤ x_2, 0 ≤ x_3.

>> f = [-5; -4; -6];
>> A = [1 -1 1
        3  2 4
        3  2 0];
>> b = [20; 42; 30];
>> lb = zeros(3,1);
>> x = linprog(f,A,b,[],[],lb);
Optimization terminated.
>> x
x =
    0.0000
   15.0000
    3.0000

MOSEK For large scale linear and quadratic optimization problems

2. Interior Point Method

- Solve the LP using continuous optimization methods
- Represent inequalities by barrier functions
- Follow a path through the interior of the feasible region to a vertex

(Figure: the path through the interior of the feasible region toward the optimal vertex, for cost direction c.)

Barrier function method

We wish to solve the following problem:

  minimize f(x) = cᵀx subject to a_iᵀ x ≤ b_i, i = 1, ..., m

The problem could be rewritten as:

  minimize f(x) + Σ_{i=1..m} I(a_iᵀ x - b_i)

where I is the indicator function

  I(u) = 0 for u ≤ 0, I(u) = ∞ for u > 0

The difficulty here is that I(u) is not differentiable.

Approximation via logarithmic barrier function

Approximate the indicator function by a logarithmic barrier:

  minimize f(x) - (1/t) Σ_{i=1..m} log(b_i - a_iᵀ x)

For t > 0, -(1/t) log(-u) is a smooth approximation of I(u), and the approximation improves as t → ∞.

(Figure: -(1/t) log(-u) compared with I(u) for increasing values of t.)

Minimizing the function for different values of t

Function f(x) to be minimized subject to -1 ≤ x ≤ 1. The minima converge to the constrained minimum.

(Figure: t f(x) - log(1 - x^2) for increasing values of t.)

Algorithm for Interior Point Method

The problem becomes:

  minimize t f(x) - Σ_{i=1..m} log(b_i - a_iᵀ x)

Algorithm:
- solve using Newton's method
- t ← μ t
- repeat until convergence

As t increases, this converges to the solution of the original problem.
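The loop above can be sketched concretely on the 2D example LP from earlier, written as a minimization with c = (0, -0.5) (i.e. maximizing 0.5 x_2). The starting point, μ = 10 and the iteration counts are illustrative choices, not tuned values; the inner solve is a damped Newton step on the barrier objective, with backtracking to stay strictly feasible.

```python
import math

# the 2D example: minimize c.x (= maximize 0.5*x2) over a_i . x <= b_i
A = [(1.0, 1.0), (1.0, 0.0), (1.0, 8.0/3.0), (-1.0, 0.0), (0.0, -1.0)]
b = [2.0, 1.5, 4.0, 0.0, 0.0]
c = (0.0, -0.5)

def phi(x, t):
    """Barrier objective t*c.x - sum log(slacks); +inf outside the region."""
    val = t * (c[0]*x[0] + c[1]*x[1])
    for (a1, a2), bi in zip(A, b):
        s = bi - a1*x[0] - a2*x[1]
        if s <= 0.0:
            return math.inf
        val -= math.log(s)
    return val

def newton_step(x, t):
    """One damped Newton step on phi(., t)."""
    g = [t*c[0], t*c[1]]
    h11 = h12 = h22 = 0.0
    for (a1, a2), bi in zip(A, b):
        s = bi - a1*x[0] - a2*x[1]
        g[0] += a1/s; g[1] += a2/s
        h11 += a1*a1/(s*s); h12 += a1*a2/(s*s); h22 += a2*a2/(s*s)
    det = h11*h22 - h12*h12
    dx = (-(h22*g[0] - h12*g[1])/det, -(h11*g[1] - h12*g[0])/det)
    step = 1.0
    while phi((x[0] + step*dx[0], x[1] + step*dx[1]), t) > phi(x, t):
        step *= 0.5          # backtrack to descend and stay feasible
        if step < 1e-12:
            return x
    return (x[0] + step*dx[0], x[1] + step*dx[1])

x, t, mu = (0.5, 0.5), 1.0, 10.0   # strictly feasible starting point
while t < 1e6:
    for _ in range(50):             # inner Newton iterations
        x = newton_step(x, t)
    t *= mu                         # tighten the barrier
print(x)   # follows the central path toward the vertex (0, 1.5)
```

Each outer iteration reuses the previous solution as a warm start, so the iterates trace the central path into the optimal vertex as t grows.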

Example Diagram copied from Boyd and Vandenberghe

Integer programming There are often situations where the solution is required to be an integer or have boolean values (0 or 1 only). For example: assignment problems scheduling problems matching problems Linear programming can also be used for these cases

Example: assignment problem

(Figure: bipartite graph of agents 1 to 5 and tasks 1 to 5.)

Objective: assign n agents to n tasks to minimize total cost.
Assignment constraints: each agent is assigned to one task only; each task is assigned to one agent only.

Cost matrix

(Figure: the agents × tasks cost matrix.)

Example: assignment problem

Problem specification:
- x_ij is the assignment of agent i to task j (can take values 0 or 1)
- c_ij is the (non-negative) cost of assigning agent i to task j

Minimize the total cost f(x) = Σ_ij x_ij c_ij over the n² variables x_ij.

Each agent i is assigned to one task only: only one entry in each row. Each task j is assigned to one agent only: only one entry in each column.

Example solution for x_ij (agents i as rows, tasks j as columns):

  0 1 0 0 0
  0 0 0 1 0
  0 0 1 0 0
  0 0 0 0 1
  1 0 0 0 0

Example: assignment problem

Linear programme formulation:

  min over x_ij of f(x) = Σ_ij x_ij c_ij

subject to
  Σ_j x_ij ≤ 1 for each i (inequalities: each agent assigned to at most one task)
  Σ_i x_ij = 1 for each j (equalities: each task assigned to exactly one agent)

This is a relaxation of the problem because the variables x_ij are not forced to take boolean values. However, it can be shown that the solution x_ij only takes the values 0 or 1.
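For intuition about what the relaxation is solving, here is a tiny brute-force check (the 4×4 cost matrix is made up for illustration; a real solver would use the LP relaxation or the Hungarian algorithm rather than trying all n! permutations):

```python
from itertools import permutations

# hypothetical 4x4 cost matrix: cost[i][j] = cost of agent i doing task j
cost = [[4, 1, 3, 2],
        [2, 0, 5, 3],
        [3, 2, 2, 4],
        [1, 3, 4, 2]]
n = len(cost)

def total(p):
    """Cost of the assignment agent i -> task p[i]."""
    return sum(cost[i][p[i]] for i in range(n))

# each permutation corresponds to a 0/1 matrix x_ij with exactly one
# entry per row and per column; brute force over all n! of them
best = min(permutations(range(n)), key=total)
print(best, total(best))   # -> (3, 1, 2, 0) 5
```

Every permutation is a feasible 0/1 point of the constraints above, and the theorem quoted on the slide says the LP relaxation recovers exactly such a permutation as its optimum.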

Agents to tasks assignment problem

(Figure: two scatter plots of agents and tasks in the plane; the optimal agents-to-tasks assignment, with cost 11.735499, links each agent to a task.)

Cost of assignment = distance between agent and task. E.g. application: tracking, correspondence.

Example application: tracking pedestrians Multiple Object Tracking using Flow Linear Programming, Berclaz, Fleuret & Fua, 2009

What is next?
- Convexity (when does a function have only a global minimum?)
- Robust cost functions
- Stochastic algorithms