Math 5593 Linear Programming Weeks 12/13

University of Colorado Denver, Fall 2013, Prof. Engau

1 Introduction
2 Polyhedral Theory
3 LP and Lagrangean Relaxation
4 Computational Methods and Algorithms

Integer Programming and Combinatorial Optimization
In practice, many decision variables must be limited to integers (people, products, etc.) or binary values (on/off, yes/no, etc.): var name integer / binary;
- (Binary) Integer Program (IP/BIP): max{c^T x : Ax ≤ b, x integer/binary}
- Mixed Integer (Linear) Program (MIP): max{c^T x + d^T y : Ax + By ≤ b, y integer/binary}
- Combinatorial Optimization Problem (COP): given a finite set N = {1, 2, ..., n}, weights c_j for all j ∈ N, and a family F ⊆ P(N) of subsets of N: max or min {Σ_{j∈S} c_j : S ∈ F}.
Solving IPs, BIPs, or MIPs as LPs and rounding fractional values to integers often does not work well and may destroy feasibility!

Review: Example Problems and Modeling Techniques
Many classic OR problems and models are inherently discrete:
- Easy: transportation, assignment, other network flows
- Difficult: TSP, scheduling, facility location, knapsack
In addition, integer or binary variables have modeling applications:
- Cardinality constraints: at least k (set covering constraint), at most k (set packing/budget), exactly k (set partitioning)
- Zero or minimum/maximum/range constraints: l y ≤ x ≤ u y
- Either-or disjunctions: A_1 x ≤ b_1 + M y, A_2 x ≤ b_2 + M(1 − y)
- If-then conditionals: A_1 x > b_1 − M y, A_2 x ≤ b_2 + M(1 − y)
- Fixed costs (in objective): min c_var^T x + c_fix^T y where x ≤ M y
- Piecewise linear functions/approximations: L_i y_i ≤ x_i ≤ L_{i+1} y_i
- Linearization of binary quadratic programs: for x ∈ {0,1}^n, x_i^2 = x_i and x_i x_j = x_ij with max{x_i + x_j − 1, 0} ≤ x_ij ≤ min{x_i, x_j} (see the small check below)
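
A tiny check of the last linearization (my own Python sketch, not from the slides): on binary points the two bounds coincide with the product, so they force x_ij = x_i x_j.

from itertools import product

# For binary x_i, x_j the product equals both the lower and the upper bound.
for xi, xj in product((0, 1), repeat=2):
    lower = max(0, xi + xj - 1)
    upper = min(xi, xj)
    assert lower == xi * xj == upper
    print(xi, xj, xi * xj, lower, upper)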

IP Basics: Polyhedral Theory and Formulations
Goal: Given a discrete (integer) set X ⊆ Z^n, find a continuous (polyhedral) formulation P ⊆ R^n for X such that X = P ∩ Z^n:
X = {x ∈ Z^n : Ax ≤ b} = {x ∈ R^n : Ax ≤ b} ∩ Z^n = P ∩ Z^n
Example: X = {x ∈ Z^2 : 7x_1 + 10x_2 ≤ 56, 0 ≤ x_1 ≤ 6, 0 ≤ x_2 ≤ 4}
P = {x ∈ R^2 : 7x_1 + 10x_2 ≤ 56, 0 ≤ x_1 ≤ 6, 0 ≤ x_2 ≤ 4}
[Figure: P is the gray shaded area; X are the grid points in P.]
P is a trivial formulation for X. Is P a good formulation for X? Perfect for the box constraints, poor for 7x_1 + 10x_2 ≤ 56.
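
A quick sanity check in Python (my own sketch): enumerate the grid points X inside the box and look at the fractional corner of P where 7x_1 + 10x_2 = 56 meets x_2 = 4, which is why P is poor near that constraint.

from fractions import Fraction

X = [(x1, x2) for x1 in range(0, 7) for x2 in range(0, 5) if 7*x1 + 10*x2 <= 56]
print(len(X), X[:5])              # the grid points of the gray region

# fractional extreme point of P on 7x1 + 10x2 = 56 at x2 = 4:
print(Fraction(56 - 10*4, 7), 4)  # (16/7, 4)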

Improving P: Better and Ideal Formulations
Def: A formulation P_1 for X ⊆ Z^n is better than P_2 if P_1 ⊂ P_2. Say P_1 strengthens or tightens P_2, or cuts off parts of P_2.
Example: X = {7x_1 + 10x_2 ≤ 56, 0 ≤ x_1 ≤ 6, 0 ≤ x_2 ≤ 4} ∩ Z^2
Idea: Improve a formulation by adding supporting constraints!
- Add x_1 + x_2 ≤ 7
- Add 2x_1 + 3x_2 ≤ 16
- But P must stay convex! Best possible: the convex hull!
The ideal formulation for X ⊆ Z^n is its convex hull P = conv(X):
- All extreme points of P are integer ("integer polyhedron").
- Can solve the IP max{c^T x : x ∈ X} as the LP max{c^T x : x ∈ P}.
- Unfortunately, finding conv(X) is often (at least) as hard as solving the IP!

When is Solving IP as Easy as LP: Integer Polyhedra!
Let A, b be integer and consider the IP max{c^T x : Ax = b, x ∈ Z^n_+}.
Question: When does the LP (x ∈ R^n_+) have an integer solution?
Answer: If P = {x ∈ R^n_+ : Ax = b} is an integer polyhedron!
Equivalent Answer: If the extreme points (bfs) of P are integer: (x_B, x_N) = (B^{-1}b, 0) ∈ Z^n_+ for all (or the optimal) basis matrices B of A.
Cramer's Rule: If A is nonsingular, then Ax = b has solution x_i = det(A_i)/det(A), where A_i is the matrix formed by replacing the ith column of A with b.
- We can apply Cramer's Rule to the basic system B x_B = b.
- Clear that det(B_i) is an integer, but we need det(B) ∈ {1, −1}.
Inverse Rule: B^{-1} = Adj(B)/det(B), where Adj(B) is the adjugate (adjunct) matrix (the transpose of the cofactor matrix C with C_ij = (−1)^{i+j} det(B_ij)).
- Again clear that Adj(B) is integer, so we need det(B) = ±1.
Result: If B is optimal and det(B) = ±1 (≠ 0), then LP solves IP.
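
A small numerical illustration of why det(B) = ±1 matters (my own sketch with numpy; the two basis matrices are made up for illustration): an integer basis with determinant ±1 has an integer inverse, so B^{-1}b is integer for every integer b, while other integer bases typically produce fractional basic solutions.

import numpy as np

B1 = np.array([[2, 3], [1, 2]])   # det = 1  (unimodular)
B2 = np.array([[2, 1], [1, 2]])   # det = 3  (not unimodular)
b = np.array([7, 4])

print(np.linalg.det(B1), np.linalg.solve(B1, b))   # about 1, integer solution [2. 1.]
print(np.linalg.det(B2), np.linalg.solve(B2, b))   # about 3, fractional solution [3.33... 0.33...]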

Integer Polyhedra and Totally Unimodular Matrices
Definition: A matrix A is called totally unimodular (TU) if every square submatrix B of A has determinant det(B) ∈ {0, +1, −1}.
- A = [1 1; 1 2] has det(A) = 1, but the 1×1 submatrix B = [2] has |det(B)| = 2 (not TU).
- A = [1 1; 1 1] has det(A) = 0 and is TU, but [1 −1; 1 1] is not.
- How about A = [1 0 1 1; 1 1 0 1; 1 1 1 0]? Not TU: find a violating submatrix!
- How about A = [1 0 1 1; 0 1 −1 0; −1 −1 0 −1]? TU! (good exercise)
Result: P = {x ∈ R^n_+ : Ax = b} is an integer polyhedron for all integer values of b for which it is bounded if and only if A is TU.
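
A brute-force check of this definition is easy to sketch for tiny matrices (a Python sketch; the helper name is mine, and it inspects every square submatrix, so it is only practical for very small A):

from itertools import combinations
import numpy as np

def is_totally_unimodular(A):
    A = np.asarray(A)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = round(float(np.linalg.det(A[np.ix_(rows, cols)])))
                if d not in (-1, 0, 1):
                    return False, (rows, cols, d)   # certificate submatrix
    return True, None

print(is_totally_unimodular([[1, 1], [1, 2]]))   # the 1x1 submatrix [2] violates TU
print(is_totally_unimodular([[1, 1], [1, 1]]))   # TU
print(is_totally_unimodular([[1, -1], [1, 1]]))  # determinant 2, not TU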

TU Matrices: Conditions and Examples
Verifying whether a matrix is TU is a challenging undertaking!
- There is an exponential number of submatrices to check.
- However, it is easy to show that a matrix is not TU (if we have already found or know a suitable certificate submatrix).
- For insiders: yes, that sounds a lot like NP-completeness.
However, some conditions and special cases might help:
- If A is TU, then all entries of A must satisfy a_ij ∈ {0, +1, −1}.
- A is TU if and only if A^T is TU, if and only if [A I] is TU.
- A is TU if a_ij ∈ {0, 1, −1}, each column j has at most two nonzero entries a_ij = ±1, and there is a partition of the rows I = I_1 ∪ I_2 such that for all columns with exactly two nonzero entries: Σ_{i∈I_1} a_ij − Σ_{i∈I_2} a_ij = 0.
Examples: A = [1 0 1 1; 0 1 1 0; 1 1 0 1] (not TU), A = [1 0 1 1; 0 1 −1 0; −1 −1 0 −1] (TU, with I_1 = I, I_2 = ∅)

TU Matrices: Conditions and Examples (Continued)
A special class of TU matrices are the node-arc incidence matrices of graphs G = (V, E) with node or vertex set V = {1, 2, ..., n} and arc or edge set E = {e_1, ..., e_m}: for every arc e_j = (i_1, i_2), let
a_ij = +1 if i = i_1 (arc e_j originates at node i_1), a_ij = −1 if i = i_2 (arc e_j terminates at node i_2), and a_ij = 0 otherwise.
Example: nodes {1, 2, 3} with arcs e_1 = (1, 3), e_2 = (2, 3), e_3 = (1, 2) give
A = [ 1 0 1; 0 1 −1; −1 −1 0 ]
Network matrices satisfy the partition condition (I_1 = I, I_2 = ∅).
Discrete network flow problems without "mean" constraints (assignment, transportation, shortest path, maximum flow) are easy. Covering, packing, or TSP constraints are "mean"!
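
A short sketch (my own; the arc list is the one reconstructed above) that assembles such a node-arc incidence matrix and verifies the partition condition with I_1 = I, I_2 = ∅, i.e., that every column sums to zero:

def incidence_matrix(num_nodes, arcs):
    A = [[0] * len(arcs) for _ in range(num_nodes)]
    for j, (tail, head) in enumerate(arcs):
        A[tail - 1][j] = 1     # arc originates at its tail node
        A[head - 1][j] = -1    # arc terminates at its head node
    return A

A = incidence_matrix(3, [(1, 3), (2, 3), (1, 2)])
for row in A:
    print(row)
print(all(sum(col) == 0 for col in zip(*A)))   # partition condition holds, so A is TU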

Improving P (Continued): Valid Inequalities
Let P = {x ∈ R^n : Ax ≤ b} be a formulation for X = P ∩ Z^n.
- If A is TU, then P = conv(X) is integer: the ideal formulation.
- The converse is not true in general (A is hardly ever TU).
Example: X = {7x_1 + 10x_2 ≤ 56, 0 ≤ x_1 ≤ 6, 0 ≤ x_2 ≤ 4} ∩ Z^2
conv(X) = {x_1 + x_2 ≤ 7, 2x_1 + 3x_2 ≤ 16, 0 ≤ x_1 ≤ 6, 0 ≤ x_2 ≤ 4}
Why drop 7x_1 + 10x_2 ≤ 56? Valid but dominated, hence redundant!
- An inequality (a, b) (or a^T x ≤ b) is a valid inequality (vi) for X if a^T x ≤ b for all x ∈ X.
- A vi (a_1, b_1) dominates (a_2, b_2) if there is λ > 0 such that a_1 ≥ λ a_2, b_1 ≤ λ b_2, and (a_1, b_1) ≠ λ(a_2, b_2).
- Note that two vis with (a_1, b_1) = λ(a_2, b_2) are the same if λ > 0.
- A vi is redundant if it is dominated by a linear combination of other vis. (Note: 1·(1, 1, 7) + 3·(2, 3, 16) = (7, 10, 55), which dominates (7, 10, 56).)

What are Good Valid Inequalities: Faces and Facets
Result: A vi (a_1, b_1) dominates another vi (a_2, b_2) if and only if {x ∈ R^n : a_1^T x ≤ b_1} ⊆ {x ∈ R^n : a_2^T x ≤ b_2}.
Result: Given a formulation P = {x ∈ R^n_+ : Ax ≤ b}, a vi (c, d) is redundant for P iff there is y ≥ 0 such that A^T y ≥ c and b^T y ≤ d.
Proof: LP Duality Theorem or Farkas' Lemma (as an exercise).
Clear that good vis are non-dominated and non-redundant.
Def: Given a vi (c, d) for P = {x ∈ R^n : Ax ≤ b}, a face is a set F = {x ∈ P : c^T x = d} (a face is proper if F ≠ ∅). A face of dimension dim(P) − 1 (usually n − 1) is called a facet.
- Intersections of P with a supporting hyperplane are faces.
- Face-defining vis are non-dominated, but could be redundant.
- Facet-defining vis are non-dominated and non-redundant.
- An optimal representation only needs the facet-defining inequalities!
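
A small numerical check of the redundancy Result for the running example (my own sketch using scipy.optimize.linprog; the data are the conv(X) constraints from the previous slide): 7x_1 + 10x_2 ≤ 56 is redundant because a suitable y ≥ 0 exists.

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],   #  x1 +  x2 <= 7
              [2.0, 3.0],   # 2x1 + 3x2 <= 16
              [1.0, 0.0],   #  x1       <= 6
              [0.0, 1.0]])  #        x2 <= 4
b = np.array([7.0, 16.0, 6.0, 4.0])
c, d = np.array([7.0, 10.0]), 56.0

# minimize b^T y  s.t.  A^T y >= c, y >= 0   (linprog uses <=, so negate that block)
res = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * len(b), method="highs")
print(res.x, res.fun)                        # e.g. y = (1, 3, 0, 0) with b^T y = 55
print("redundant:", res.success and res.fun <= d + 1e-9)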

3D Geometric Illustration: Faces, Edges, and Vertices
For n = 3, we distinguish three types of k-faces (k = 0, 1, 2):
- 2-faces: faces of maximal dimension n − 1 (polyhedral facets)
- 1-faces: edges (can be facets for 2-dimensional polygons)
- 0-faces: vertices (can still be facets for 1-dimensional lines)
This is part of Polyhedral Combinatorics!

Finding Good Valid Inequalities: Preprocessing
Consider the following IP and try to strengthen its formulation:
max 2x_1 + 3x_2 + x_3
s.t. 8x_1 + 4x_2 − 3x_3 ≤ 21, 3x_1 − 2x_2 + 4x_3 ≥ 16,
0 ≤ x_1 ≤ 5, 1 ≤ x_2 ≤ 4, 0 ≤ x_3 ≤ 3, x ∈ Z^3.
Idea: Use the box constraints to tighten lower or upper bounds!
Solve the first inequality for x_1 and use the bounds on x_2 and x_3:
8x_1 ≤ 21 − 4x_2 + 3x_3 ≤ 21 − 4(1) + 3(3) = 26
So x_1 ≤ 26/8 = 3.25 and thus x_1 ≤ 3 (because x_1 is integer).
Repeat for x_2 and x_3 (paying attention to the inequality signs):
4x_2 ≤ 21 − 8x_1 + 3x_3 ≤ 21 − 8(0) + 3(3) = 30
3x_3 ≥ 8x_1 + 4x_2 − 21 ≥ 8(0) + 4(1) − 21 = −17
So x_2 ≤ ⌊30/4⌋ = 7 and x_3 ≥ ⌈−17/3⌉ = −5: neither improves the current bounds!

Finding Good Valid Inequalities: Preprocessing (Cont.)
After preprocessing the first inequality, our new formulation is
P_1 = {8x_1 + 4x_2 − 3x_3 ≤ 21, 3x_1 − 2x_2 + 4x_3 ≥ 16, 0 ≤ x_1 ≤ 3, 1 ≤ x_2 ≤ 4, 0 ≤ x_3 ≤ 3}.
Give up? No way! Let's try the same for the second constraint:
3x_1 ≥ 16 + 2x_2 − 4x_3 ≥ 16 + 2(1) − 4(3) = 6
2x_2 ≤ 3x_1 + 4x_3 − 16 ≤ 3(3) + 4(3) − 16 = 5
4x_3 ≥ 16 − 3x_1 + 2x_2 ≥ 16 − 3(3) + 2(1) = 9
So x_1 ≥ ⌈6/3⌉ = 2, x_2 ≤ ⌊5/2⌋ = 2, x_3 ≥ ⌈9/4⌉ = 3, and we have improved to
P_2 = {8x_1 + 4x_2 − 3x_3 ≤ 21, 3x_1 − 2x_2 + 4x_3 ≥ 16, 2 ≤ x_1 ≤ 3, 1 ≤ x_2 ≤ 2, 3 ≤ x_3 ≤ 3}.
Only four solutions remain to check: (2, 1, 3), (2, 2, 3), (3, 1, 3), (3, 2, 3).
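
The whole preprocessing loop fits in a few lines of Python (my own sketch; the constraint signs are as reconstructed above), including the final enumeration of the surviving points:

from itertools import product
from math import ceil, floor

lo = {1: 0, 2: 1, 3: 0}
hi = {1: 5, 2: 4, 3: 3}

def tighten():
    # 8x1 + 4x2 - 3x3 <= 21: bound each variable using the others' box bounds
    hi[1] = min(hi[1], floor((21 - 4*lo[2] + 3*hi[3]) / 8))
    hi[2] = min(hi[2], floor((21 - 8*lo[1] + 3*hi[3]) / 4))
    lo[3] = max(lo[3], ceil((8*lo[1] + 4*lo[2] - 21) / 3))
    # 3x1 - 2x2 + 4x3 >= 16
    lo[1] = max(lo[1], ceil((16 + 2*lo[2] - 4*hi[3]) / 3))
    hi[2] = min(hi[2], floor((3*hi[1] + 4*hi[3] - 16) / 2))
    lo[3] = max(lo[3], ceil((16 - 3*hi[1] + 2*lo[2]) / 4))

for _ in range(3):          # a few passes suffice here
    tighten()
print(lo, hi)               # expect 2 <= x1 <= 3, 1 <= x2 <= 2, x3 = 3

best = None
for x1, x2, x3 in product(range(lo[1], hi[1]+1), range(lo[2], hi[2]+1), range(lo[3], hi[3]+1)):
    if 8*x1 + 4*x2 - 3*x3 <= 21 and 3*x1 - 2*x2 + 4*x3 >= 16:
        val = 2*x1 + 3*x2 + x3
        if best is None or val > best[0]:
            best = (val, (x1, x2, x3))
print(best)                 # the best feasible point among the four survivors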

Finding More Valid Inequalities: Preprocessing (Cont.)
Preprocessing also helps for formulations of binary programs:
X = {x ∈ R^4_+ : 0 ≤ (x_1, x_2, x_3, x_4) ≤ 1, 2x_1 − 3x_2 + x_3 − x_4 ≤ 0, −2x_1 + 5x_2 + 4x_3 + x_4 ≤ 6, 3x_2 + 2x_3 + 4x_4 ≥ 5} ∩ {0, 1}^4
Idea: Fix one variable value and look for hidden relationships!
Suppose that x_1 = 1 and solve the first inequality for x_2:
3x_2 ≥ 2x_1 + x_3 − x_4 ≥ 2(1) + 0 − 1 = 1
So x_1 = 1 implies x_2 ≥ ⌈1/3⌉ = 1 (if-then!) and thus x_1 ≤ x_2.
Suppose that x_2 = 1 and solve the second inequality for x_3:
4x_3 ≤ 6 + 2x_1 − 5x_2 − x_4 ≤ 6 + 2(1) − 5(1) − 0 = 3
So x_2 = 1 implies x_3 ≤ ⌊3/4⌋ = 0 and thus x_2 + x_3 ≤ 1.
Similarly, the third inequality implies x_2 + x_3 + x_4 ≥ 2.
Together with x_2 + x_3 ≤ 1, this forces x_4 = 1 and x_2 + x_3 = 1.
Only three solutions remain to check: (1, 1, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1).

Finding All (!) Valid Inequalities: Integer Rounding
To keep things simple, consider a single knapsack constraint:
2x_1 + 5x_2 + 4x_3 + x_4 ≤ 7 where x ∈ Z^4_+ or x ∈ {0, 1}^4
Idea: We can multiply an inequality by arbitrary positive numbers!
Example: Multiply by 1/4; then (2/4)x_1 + (5/4)x_2 + x_3 + (1/4)x_4 ≤ 7/4.
The constraint remains valid if we round the coefficients downward (since x ≥ 0):
⌊2/4⌋x_1 + ⌊5/4⌋x_2 + x_3 + ⌊1/4⌋x_4 = x_2 + x_3 ≤ 7/4
Now the LHS is integer, so we can also round down the RHS to ⌊7/4⌋ = 1:
x_2 + x_3 ≤ 1 where x ∈ Z^4_+ or x ∈ {0, 1}^4
Is the new constraint better than 2x_1 + 5x_2 + 4x_3 + x_4 ≤ 7? No: for example, (1, 1, 0, 1) is infeasible but satisfies the new constraint.
But: points such as (0, 7/5, 0, 0) and (0, 0, 7/4, 0) were feasible and are now cut off, so the new constraint tightens the formulation!

Gomory Cuts and the Chvátal-Gomory Procedure
Consider a formulation P = {x ∈ R^n_+ : Ax ≤ b} for X = P ∩ Z^n_+.
1. Let (a_i, b_i) be one of the inequalities of P, or a linear combination of them.
2. Let λ ≥ 0; then λ a_i^T x = Σ_{j=1}^n λ a_ij x_j ≤ λ b_i is the same as (a_i, b_i).
3. Since x ≥ 0, Σ_{j=1}^n ⌊λ a_ij⌋ x_j ≤ λ b_i is a vi for P and X.
4. Since the LHS is integer, Σ_{j=1}^n ⌊λ a_ij⌋ x_j ≤ ⌊λ b_i⌋ is valid for X.
5. The new vi may be invalid for P and improve the formulation for X!
Result: This simple procedure is, in principle, sufficient to generate all valid inequalities. The generated vis are called Gomory cuts. Given an arbitrary first formulation P for a set X = P ∩ Z^n_+, every valid inequality for X can be generated by applying the Chvátal-Gomory procedure a finite (!) number of times.
Proofs: Gomory 1958 (proof of concept), Chvátal 1973 (for bounded sets), Schrijver 1980 (for unbounded sets)
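
The rounding step is a one-liner with exact fractions (a Python sketch; the helper name is mine). It reproduces the knapsack cut from the previous slide and the 2x_1 + 3x_2 ≤ 16 cut used on the next slide:

from fractions import Fraction
from math import floor

def chvatal_gomory(a, b, lam):
    # valid inequality floor(lam*a)^T x <= floor(lam*b) for integer x >= 0
    lam = Fraction(lam)
    return [floor(lam * aj) for aj in a], floor(lam * b)

print(chvatal_gomory([2, 5, 4, 1], 7, Fraction(1, 4)))   # ([0, 1, 1, 0], 1)
print(chvatal_gomory([7, 10], 56, Fraction(3, 10)))      # ([2, 3], 16)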

Example: Gomory Cuts for the Rolling Mill Formulation
Box, polyhedron, and convex hull:
B = {x ∈ R^2_+ : x_1 ≤ 6, x_2 ≤ 4}
P = B ∩ {x : 7x_1 + 10x_2 ≤ 56}
C = B ∩ {x : 2x_1 + 3x_2 ≤ 16, x_1 + x_2 ≤ 7}
Preprocessing does not help: x_1 ≤ ⌊56/7⌋ = 8 and x_2 ≤ ⌊56/10⌋ = 5 are weaker than the box bounds.
Try Gomory with multiplier 1/3: ⌊7/3⌋x_1 + ⌊10/3⌋x_2 = 2x_1 + 3x_2 ≤ ⌊56/3⌋ = 18
With multiplier 3/10: ⌊(3/10)7⌋x_1 + ⌊(3/10)10⌋x_2 = 2x_1 + 3x_2 ≤ ⌊(3/10)56⌋ = 16
Exercise: Generate x_1 + x_2 ≤ 7. This needs a linear combination! Take 1/8 of 7x_1 + 10x_2 ≤ 56 plus 1/8 of x_1 ≤ 6:
⌊7/8 + 1/8⌋x_1 + ⌊10/8⌋x_2 = x_1 + x_2 ≤ ⌊62/8⌋ = 7

Binary Programs: Covers and Extended Formulations
Consider P = {x ∈ R^n_+ : Ax ≤ b} for binary X = P ∩ {0, 1}^n.
Let a_i be a row of A and consider the inequality Σ_{j=1}^n a_ij x_j ≤ b_i.
Find a cover C ⊆ {1, 2, ..., n} such that Σ_{j∈C} a_ij > b_i.
Then Σ_{j∈C} x_j ≤ |C| − 1 is called a cover inequality for X.
Ex: 3x_1 + 2x_2 + 4x_3 ≤ 5 has covers {1, 2, 3}, {1, 3}, {2, 3}:
x_1 + x_2 + x_3 ≤ 2, x_1 + x_3 ≤ 1, x_2 + x_3 ≤ 1.
Extended Formulation: rather than adding new constraints, add new variables and lift the problem into a higher-dimensional space!
Reformulation-Linearization Technique (Adams-Sherali 1990): Let x_k ∈ {0, 1} and multiply the inequalities by x_k and (1 − x_k):
Σ_{j=1}^n a_ij x_j x_k ≤ b_i x_k and Σ_{j=1}^n a_ij x_j (1 − x_k) ≤ b_i (1 − x_k).
Add x_jk = x_j x_k and max{0, x_j + x_k − 1} ≤ x_jk ≤ min{x_j, x_k}.
Result: Repeated RLT lift-and-project finds the convex hull!
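
A short sketch (my own) that enumerates the covers of one knapsack row and marks the minimal ones; indices are 0-based, so the output corresponds to the three covers listed above:

from itertools import combinations

def cover_inequalities(a, b):
    n, covers = len(a), []
    for k in range(1, n + 1):
        for C in combinations(range(n), k):
            if sum(a[j] for j in C) > b:
                # minimal covers give the strongest of these inequalities
                minimal = all(sum(a[j] for j in C) - a[i] <= b for i in C)
                covers.append((C, len(C) - 1, minimal))
    return covers

for C, rhs, minimal in cover_inequalities([3, 2, 4], 5):
    tag = "minimal" if minimal else "non-minimal"
    print(f"sum of x_j for j in {C} <= {rhs}   ({tag})")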

IP Optimality Conditions and Relaxation Gaps
Question: Given the IP max{c^T x : Ax ≤ b, x ∈ Z^n_+} and a feasible (integer) solution x, how do we know whether x is optimal?
- If z_LP = max{c^T x : Ax ≤ b, x ∈ R^n_+} and c^T x = z_LP (so z_LP = z_IP), or c^T x = ⌊z_LP⌋ if c is integer, then x is an optimal solution.
- Otherwise, we don't: there is no easy-to-check optimality condition! In practice, we use some sort of optimality (relaxation) gap.
Definition: If (P) is max{f(x) : x ∈ X}, then (R) max{g(x) : x ∈ Y} is a relaxation of (P) if X ⊆ Y and f(x) ≤ g(x) for all x ∈ X.
- If (R) is infeasible/bounded, then (P) is infeasible/bounded.
- If (R) has an optimal solution x* ∈ X with f(x*) = g(x*), then x* is also optimal for (P) and there is no relaxation gap: z_R − z_P = 0.
- In general, the relaxation gap of (R) for (P) is z_R − z_P ≥ 0.
Note: An ideal relaxation for the IP is the LP max{c^T x : x ∈ conv(X)}.

Linear Programming Relaxation
Consider z_IP = max{c^T x : Ax ≤ b, x ∈ Z^n_+} and its LP relaxation z_LP = max{c^T x : Ax ≤ b, x ∈ R^n_+}.
- The relaxation is clear: a larger (relaxed) feasible set, same objective.
- When is it tight (no gap)? If A is TU (and b is integer), or if we are lucky.
- Otherwise, try to improve the relaxation using valid inequalities.
Example: Combine the constraints into a single knapsack-type constraint with multipliers λ ≥ 0: z_LP(λ) = max{c^T x : Σ_i λ_i a_i^T x ≤ λ^T b}.
Drawback of LP relaxations: finding z_LP requires optimization! Although these are just LPs, solving many LPs is not very practical. A way to get good upper bounds more quickly is desirable. Duality! Dual feasible solutions give primal upper bounds.

Primal-Dual Bounds and Duality Gaps
Def: Problems (P) max{f(x) : x ∈ X} and (D) min{g(y) : y ∈ Y} are a (weak) primal-dual pair if f(x) ≤ g(y) for all x ∈ X, y ∈ Y.
- If one problem is unbounded, then the other is infeasible.
- If one problem is feasible, then the other is bounded: f(x) ≤ z_P ≤ z_D ≤ g(y) for all x ∈ X and y ∈ Y.
- The duality gap is z_D − z_P ≥ 0 (if there is no gap, the pair is strong).
Note: we get bounds from feasible solutions without optimization!
Lots of examples: LP duality (strong), max flows / min cuts in integer networks (strong), maximum-cardinality matchings / minimum node covers in bipartite graphs (strong, König's Theorem).
For the IP max{c^T x : Ax ≤ b, x ∈ Z^n} we can use the weak (LP) dual min{b^T y : A^T y ≥ c, y ∈ R^m_+}. Unfortunately, bounds from LP duals are often quite weak.

Lagrangean Relaxation and Dual Problem
Consider max{f(x) : g(x) ≤ 0, h(x) = 0} and relax the constraints:
L(λ, µ) = max_x f(x) − λ^T g(x) − µ^T h(x)
Exercise: Show that this is a relaxation for λ ≥ 0 (and any µ).
L(λ, µ) is the Lagrangean relaxation with L-multipliers (λ, µ).
The Lagrangean Dual Problem is the min-max problem
min_{λ≥0, µ} L(λ, µ) = min_{λ≥0, µ} max_x f(x) − λ^T g(x) − µ^T h(x)
For LP/IP max{c^T x : Ax ≤ b, x ∈ X} (X is R^n_+ or Z^n_+):
z_L = min_{λ≥0} L(λ) = min_{λ≥0} max_{x∈X} c^T x + λ^T (b − Ax).
- Easy to get bounds: evaluate the objective for some λ ≥ 0 and the maximizing x ∈ X.
- Not easy to optimize: uses subgradient and NLP methods.
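
A minimal sketch of these bounds (my own, in Python), dualizing the constraint 10x_1 + 7x_2 ≤ 40 of the branch-and-bound example used later in these notes and evaluating L(λ) by enumerating the finite set X = {x ∈ Z^2_+ : x_1 + x_2 ≤ 5}:

from itertools import product

X = [(x1, x2) for x1, x2 in product(range(6), repeat=2) if x1 + x2 <= 5]

def L(lam):
    # inner maximization over the (small, finite) set X
    return max(17*x1 + 12*x2 + lam*(40 - 10*x1 - 7*x2) for x1, x2 in X)

# crude grid search over the multiplier; any lam >= 0 already gives a valid upper bound
lams = [k / 100 for k in range(0, 501)]
best = min(lams, key=L)
print(best, L(best))   # roughly lam = 1.67 and bound 68.3; lam = 5/3 gives 205/3, the LP bound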

Generic (not Genetic!) Integer Programming Algorithm
Let X = {x ∈ Z^n : Ax ≤ b} and consider the IP max{c^T x : x ∈ X}.
0. Start with an initial formulation P = {x ∈ R^n : Ax ≤ b}.
1. Apply preprocessing to strengthen the formulation P.
2. Solve a relaxation for x ∈ R^n and an upper bound z̄.
3. If x ∈ Z^n and c^T x = z̄, then stop: x is optimal.
4. If x ∈ Z^n but c^T x < z̄, then x may be optimal: check dual bounds and their duality gaps (if small enough, stop).
5. If x ∉ Z^n, then x is not optimal. Try some of the following:
   - Apply a rounding heuristic to find x ∈ X and apply Step 4.
   - Find a cutting plane / vi for X that cuts off the infeasible x.
   - Select a fractional x_i ∉ Z and split the problem into two subproblems:
     P_1 = P ∩ {x ∈ R^n : x_i ≤ ⌊x_i⌋} and P_2 = P ∩ {x ∈ R^n : x_i ≥ ⌈x_i⌉}
6. Update the formulation(s) P (P_1, P_2) and go back to Step 1.
Or use Complete Enumeration: try all x ∈ X and pick the best!

Gomory's Fractional Cutting-Plane Method
Separation Problem: Given a formulation P = {x ∈ R^n_+ : Ax ≤ b} for X = P ∩ Z^n and a point x* ∈ P, either show that x* ∈ conv(X) or find a vi (c, d) such that c^T x ≤ d < c^T x* for all x ∈ conv(X).
Do relaxations ever have optimal solutions x* ∈ conv(X) but x* ∉ Z^n? Yes: if there are multiple optimal solutions along an (at least 1-dimensional) face (edge) of conv(X).
Will we ever encounter such solutions? Yes, if we use an interior-point method. Not when using the simplex method!
If x* ∉ Z^n_+ is found using the simplex method, then we know that
x_B = B^{-1}b − B^{-1}N x_N = b̄ − Σ_{j∈N} ā_j x_j.
Take any fractional x*_i = b̄_i ∉ Z and generate a valid Gomory cut:
x_i + Σ_{j∈N} ā_ij x_j = b̄_i   implies   x_i + Σ_{j∈N} ⌊ā_ij⌋ x_j ≤ ⌊b̄_i⌋.
The new inequality cuts off x* because x*_i = b̄_i > ⌊b̄_i⌋ and x*_N = 0.
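
The same rounding applied to one dictionary row, with exact fractions (a sketch; the helper name is mine, and the row is the one from the branch-and-bound example below):

from fractions import Fraction as F
from math import floor

def gomory_cut(abar, bbar):
    # from x_i + sum_j abar_j x_j = bbar:  x_i + sum_j floor(abar_j) x_j <= floor(bbar)
    return {j: floor(aj) for j, aj in abar.items()}, floor(bbar)

# Row x1 = 5/3 - (1/3)w1 + (7/3)w2, rewritten as x1 + (1/3)w1 - (7/3)w2 = 5/3:
cut, rhs = gomory_cut({"w1": F(1, 3), "w2": F(-7, 3)}, F(5, 3))
print(cut, rhs)   # {'w1': 0, 'w2': -3} and 1, i.e. x1 - 3*w2 <= 1
# Substituting w2 = 5 - x1 - x2 gives 4x1 + 3x2 <= 16 in the original variables,
# which cuts off the fractional LP solution (5/3, 10/3).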

Decomposition / Branch-and-Bound (Vanderbei 23.5)
Decomposition: Consider the problem z = max{f(x) : x ∈ X}.
- If X = ∪_i X_i and z_i = max{f(x) : x ∈ X_i}, then z = max_i z_i.
- If the X_i are singletons, then this is complete enumeration.
- Smart decomposition schemes use partial enumeration.
We will discuss branch-and-bound on the following example:
max 17x_1 + 12x_2
s.t. 10x_1 + 7x_2 ≤ 40
     x_1 + x_2 ≤ 5
     x_1, x_2 ∈ Z_+
Formulation: P_0 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5}
LP Solution: (x_1, x_2) = (5/3, 10/3)
Upper Bound: z̄_0 = 205/3 ≈ 68.33
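
A compact sketch of LP-based branch-and-bound for this example (my own Python, using scipy.optimize.linprog for the LP relaxations; the depth-first stack, tolerances, and variable names are my own choices). It reproduces the optimal solution (4, 0) with value 68 derived on the following slides:

import math
from scipy.optimize import linprog

c = [-17.0, -12.0]                 # linprog minimizes, so negate the objective
A = [[10.0, 7.0], [1.0, 1.0]]
b = [40.0, 5.0]

best_val, best_x = -math.inf, None
stack = [[(0, None), (0, None)]]   # one entry = list of per-variable (lo, hi) bounds

while stack:
    bounds = stack.pop()
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    if not res.success:
        continue                   # infeasible subproblem: prune
    x, val = res.x, -res.fun
    if val <= best_val + 1e-9:
        continue                   # LP bound cannot beat the incumbent: prune
    frac = [i for i, xi in enumerate(x) if abs(xi - round(xi)) > 1e-6]
    if not frac:
        best_val, best_x = val, [int(round(xi)) for xi in x]   # new incumbent
        continue
    i = frac[0]                    # branch on the first fractional variable
    lo, hi = bounds[i]
    down = [list(bd) for bd in bounds]
    up = [list(bd) for bd in bounds]
    down[i] = (lo, math.floor(x[i]))
    up[i] = (math.ceil(x[i]), hi)
    stack += [down, up]

print(best_x, best_val)            # expect [4, 0] with value 68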

Branch-and-Bound Example (First Branching)
Note that (x_1, x_2) = (5/3, 10/3) is fractional, so we can decompose:
P_1 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≤ 1}
P_2 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≥ 2}
First solve formulation P_1:
- LP Solution: (x_1, x_2) = (1, 4)
- Objective Value: z_1 = 65
- Feasible: incumbent best!
Then solve formulation P_2:
- LP Solution: (x_1, x_2) = (2, 20/7)
- Upper Bound: z̄_2 = 478/7 ≈ 68.29
- Can still beat the incumbent best!

Enumeration Tree: Status after First Branching

Branch-and-Bound Example (Second Branching)
For P_2, (x_1, x_2) = (2, 20/7) is fractional, so we decompose further:
P_3 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≥ 2, x_2 ≤ 2}
P_9 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≥ 2, x_2 ≥ 3}
First solve formulation P_3:
- LP Solution: (x_1, x_2) = (13/5, 2)
- Upper Bound: z̄_3 = 341/5 = 68.2
- Can still beat the incumbent best!
Then decide between two options:
- Breadth-first search: solve P_9.
- Depth-first search: decompose P_3 into P_4 (x_1 ≤ 2) and P_5 (x_1 ≥ 3).

Enumeration Tree: Status after Second Branching

Branch-and-Bound Example (Third Branching)
Applying a depth-first search, we continue to decompose P_3:
P_4 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 = 2, x_2 ≤ 2}
P_5 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≥ 3, x_2 ≤ 2}
First solve formulation P_4:
- LP Solution: (x_1, x_2) = (2, 2)
- Objective Value: z_4 = 58
- Feasible but not optimal.
Continue with formulation P_5:
- LP Solution: (x_1, x_2) = (3, 10/7)
- Upper Bound: z̄_5 = 477/7 ≈ 68.14
- Can still beat the incumbent best!

Enumeration Tree: Status after Third Branching

Branch-and-Bound Example (Fourth Branching)
For P_5, (x_1, x_2) = (3, 10/7) is fractional, so again we decompose:
P_6 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≥ 3, x_2 ≤ 1}
P_10 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≥ 3, x_2 = 2}
First solve formulation P_6:
- LP Solution: (x_1, x_2) = (33/10, 1)
- Upper Bound: z̄_6 = 681/10 = 68.1
- Can still beat the incumbent best!
Continue with the depth-first search:
- P_7 (x_1 ≤ 3) yields x = (3, 1) with z_7 = 63 (feasible but not optimal).
- P_8 (x_1 ≥ 4) yields x = (4, 0) with z_8 = 68 (best / optimal solution!).

Full Enumeration Tree with Problem Decomposition

Depth-First Search versus Breadth-First Search
Several reasons favor depth-first search when using branch-and-bound:
- Integer solutions tend to lie deep in the enumeration tree, and finding (good) solutions quickly can improve the method.
- If our current best solution has objective z and some LP bound satisfies z̄ < z, we can stop decomposing (prune) that branch.
- Also: if the algorithm crashes, at least you have something.
- Can use recursion to implement the method (fun exercise).
- Easy to restart the simplex method after adding a new constraint.
Example: The optimal dictionary for the initial LP relaxation over P_0 is
ζ = 205/3 − (5/3)w_1 − (1/3)w_2
x_1 = 5/3 − (1/3)w_1 + (7/3)w_2
x_2 = 10/3 + (1/3)w_1 − (10/3)w_2
For P_2 (x_1 ≥ 2), add the constraint w_3 = x_1 − 2 = −1/3 − (1/3)w_1 + (7/3)w_2.

Restarting Branch-and-Bound using Dual Simplex
After adding w_3 = x_1 − 2 = −1/3 − (1/3)w_1 + (7/3)w_2, the new dictionary is
ζ = 205/3 − (5/3)w_1 − (1/3)w_2
x_1 = 5/3 − (1/3)w_1 + (7/3)w_2
x_2 = 10/3 + (1/3)w_1 − (10/3)w_2
w_3 = −1/3 − (1/3)w_1 + (7/3)w_2
Because this dictionary is primal infeasible but dual feasible, we can use the dual simplex method and its pivot rules (check: w_3 leaves, w_2 enters):
ζ = 478/7 − (12/7)w_1 − (1/7)w_3
x_1 = 2 + w_3
x_2 = 20/7 − (1/7)w_1 − (10/7)w_3
w_2 = 1/7 + (1/7)w_1 + (3/7)w_3
The new dictionary is primal and dual feasible, so (x_1, x_2) = (2, 20/7) and z̄_2 = 478/7 are optimal for the LP relaxation over P_2. Quick!