# Algorithms for Integer Programming


Laura Galli, December 18, 2014

Unlike linear programming problems, integer programming problems are very difficult to solve. In fact, no efficient general algorithm is known for their solution. Given our inability to solve integer programming problems efficiently, it is natural to ask whether such problems are inherently hard. Complexity theory offers some insight on this question. It provides us with a class of problems with the following property: if a polynomial-time algorithm exists for any problem in this class, then all integer programming problems can be solved by a polynomial-time algorithm, but this is considered unlikely.

Algorithms for integer programming problems rely on two basic concepts:

- Relaxations
- Bounds

There are three main categories of algorithms for integer programming problems:

- Exact algorithms, which are guaranteed to find an optimal solution but may take an exponential number of iterations. They include cutting planes, branch-and-bound, and dynamic programming.
- Heuristic algorithms, which provide a suboptimal solution without a guarantee on its quality. Although the running time is not guaranteed to be polynomial, empirical evidence suggests that some of these algorithms find a good solution fast.
- Approximation algorithms, which provide in polynomial time a suboptimal solution together with a bound on the degree of suboptimality.

## 1 Optimality, Relaxation, and Bounds

Given an IP $z = \max\{c^T x : x \in X \subseteq \mathbb{Z}^n_+\}$, how is it possible to prove that a given point $x^*$ is optimal? Put differently, we are looking for optimality conditions that provide stopping criteria in an algorithm for IP. The naive but nonetheless important reply is that we need to find a lower bound $\underline{z} \le z$ and an upper bound $\overline{z} \ge z$ such that $\underline{z} = \overline{z} = z$. Practically, this means that any algorithm will find a decreasing sequence of upper bounds and an increasing sequence of lower bounds, and stop when $\overline{z} - \underline{z} \le \epsilon$.

Primal bounds: every feasible solution $x^* \in X$ provides a lower bound $\underline{z} = c^T x^* \le z$.

Dual bounds: the most important approach is by relaxation, the idea being to replace a difficult (max) IP by a simpler optimization problem whose optimal value is at least as large as $z$. For the relaxed problem to have this property, there are two obvious possibilities:

- enlarge the set of feasible solutions, so that one optimizes over a larger set, or
- replace the (max) objective function by a function that has the same or a larger value everywhere.

Def.: A problem $(RP)$ $z_R = \max\{f(x) : x \in T \subseteq \mathbb{R}^n\}$ is a relaxation of $(IP)$ $z = \max\{c(x) : x \in X \subseteq \mathbb{R}^n\}$ if:

1. $X \subseteq T$, and
2. $f(x) \ge c(x)$ for all $x \in X$.

If $RP$ is a relaxation of $IP$, then $z_R \ge z$, i.e., $z_R$ is an upper bound for $z$.

The question then arises of how to construct interesting relaxations:

- Linear programming relaxation: note that the definition of a better formulation is intimately related to that of linear programming relaxation. In particular, better formulations give tighter (dual) bounds.
- Combinatorial relaxation: the relaxation is an easy combinatorial problem that can be solved rapidly.
- Relaxation by elimination: suppose we are given an integer program IP in the form $z = \max\{c(x) : Ax \le b, x \in X \subseteq \mathbb{R}^n\}$. If the problem is too difficult to solve directly, one possibility is simply to drop the constraints $Ax \le b$. Clearly, the resulting problem $\max\{c(x) : x \in X \subseteq \mathbb{R}^n\}$ is a relaxation of IP.
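To make the bounding idea concrete, here is a small self-contained sketch (toy data, not from the original notes): for a 0-1 knapsack instance, any feasible solution gives a primal (lower) bound, brute-force enumeration gives the exact optimum, and the LP relaxation, solvable greedily by the profit-to-weight rule, gives a dual (upper) bound.

```python
from itertools import product

# Toy 0-1 knapsack instance (hypothetical data): max p.x s.t. w.x <= c.
p = [10, 7, 5, 4]    # profits
w = [6, 4, 3, 2]     # weights
c = 8                # capacity

# Primal bound: any feasible solution, e.g. item 0 alone, gives z_lb <= z.
z_lb = p[0]

# Exact optimum z by brute-force enumeration (only viable for tiny n).
z = max(sum(pj * xj for pj, xj in zip(p, x))
        for x in product((0, 1), repeat=len(p))
        if sum(wj * xj for wj, xj in zip(w, x)) <= c)

# Dual bound: the LP relaxation (0 <= x_j <= 1) is solved greedily by
# packing items in order of profit/weight ratio, splitting the last one.
order = sorted(range(len(p)), key=lambda j: p[j] / w[j], reverse=True)
cap, z_R = c, 0.0
for j in order:
    if w[j] <= cap:
        cap -= w[j]
        z_R += p[j]
    else:
        z_R += p[j] * cap / w[j]   # fractional part of the critical item
        break

assert z_lb <= z <= z_R            # primal bound <= optimum <= dual bound
print(z_lb, z, z_R)
```

On this instance the three values come out as 10, 14, and about 14.33, exhibiting the sandwich $\underline{z} \le z \le z_R$.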

- Lagrangian relaxation: an important extension of this idea is not just to drop complicating constraints, but to add them to the objective function with Lagrange multipliers, penalizing their violation. Let $z(u) = \max\{c(x) + u^T(b - Ax) : x \in X\}$. Then $z(u) \ge z$ for all $u \ge 0$.
- Surrogate relaxation: a subset of the constraints is replaced by a single constraint corresponding to their linear combination with a multiplier vector $u \ge 0$: $z(u) = \max\{c(x) : u^T(Ax) \le u^T b, x \in X\}$.

## 2 Exact methods

### 2.1 Cutting-Planes

Given an integer programming problem $ILP = \max\{c^T x : Ax \le b, x \in \mathbb{Z}^n\}$ and its continuous relaxation $LP = \max\{c^T x : Ax \le b, x \in \mathbb{R}^n\}$, a generic cutting-plane algorithm looks as follows:

1. Solve the linear programming relaxation LP. Let $x^*$ be an optimal solution.
2. If $x^*$ is integer, stop: $x^*$ is an optimal solution to ILP.
3. If not, add to LP a linear inequality that all integer solutions of ILP satisfy but $x^*$ does not; go to step 1.

### 2.2 LPs with too many rows: constraint separation

Note that the above cutting-plane scheme can be generalized to solve LP problems with a large number of constraints (e.g., exponentially many). We call this generalization dynamic constraint generation, or row generation:

1. Initialize the reduced master problem $\tilde{A}x \le \tilde{b}$ with a small subset of the constraints $Ax \le b$.
2. Solve the LP associated with the reduced master problem, $RMP = \max\{c^T x : \tilde{A}x \le \tilde{b}, x \in \mathbb{R}^n\}$, and obtain $x^*$.
3. If $x^*$ satisfies all constraints in $Ax \le b$, stop; otherwise add one (or some) violated constraints to $\tilde{A}x \le \tilde{b}$ and go to step 2.

The scheme is very simple: constraints are added only if needed. The crucial part is step 3, aka the separation problem, where we want to find a violated constraint (or prove that none exists). If the number of constraints is large, we cannot enumerate all the constraints explicitly and check them one by one, so we need to formulate the separation problem as an LP, or more likely as an ILP. It may sound strange to solve an ILP inside an algorithm that solves an LP, yet this scheme is quite efficient in practice!
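The row-generation loop can be sketched on a toy two-variable LP (illustrative only, not from the notes). Here the full problem has 100 constraints, the "LP solver" is a naive vertex enumeration standing in for a real simplex code, and separation is an explicit scan; in a real setting both subroutines would be proper LP/ILP computations.

```python
# Toy problem: maximize x + y over 0 <= x, y <= 10 subject to the
# 100 constraints x + k*y <= 10 + k, for k = 1..100.
family = [(1.0, float(k), 10.0 + k) for k in range(1, 101)]

def solve_box_lp(cons, B=10.0):
    """Maximize x + y s.t. a*x + b*y <= r for (a,b,r) in cons, 0 <= x,y <= B,
    by checking every intersection of two constraint/bound lines.
    A brute-force stand-in for an LP solver, valid only for 2 variables."""
    lines = cons + [(1.0, 0.0, B), (0.0, 1.0, B),
                    (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
    best = None
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            a1, b1, r1 = lines[i]
            a2, b2, r2 = lines[j]
            det = a1 * b2 - a2 * b1
            if abs(det) < 1e-12:
                continue                     # parallel lines: no vertex
            x = (r1 * b2 - r2 * b1) / det    # Cramer's rule
            y = (a1 * r2 - a2 * r1) / det
            if all(a * x + b * y <= r + 1e-9 for a, b, r in lines):
                if best is None or x + y > best[0] + best[1]:
                    best = (x, y)
    return best

def separate(pt):
    """Separation problem: return a most-violated constraint, or None."""
    a, b, r = max(family, key=lambda c: c[0] * pt[0] + c[1] * pt[1] - c[2])
    return (a, b, r) if a * pt[0] + b * pt[1] > r + 1e-9 else None

active = []                        # reduced master: start with no rows
while True:
    x_star = solve_box_lp(active)  # solve the RMP
    cut = separate(x_star)
    if cut is None:                # x* satisfies all constraints: optimal
        break
    active.append(cut)             # add the violated row and re-solve

print(x_star, len(active))
```

On this instance the loop stops at $x^* = (10, 1)$ after adding a single row of the 100: most constraints never need to be generated, which is exactly the point of the scheme.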
Note that, according to the scheme above, the optimal value $c^T x^*$ of any reduced master is always greater than or equal to the optimal value of the original LP problem, so even if we stop the procedure before the stopping condition is met, we still have a valid upper bound available.

### 2.3 LPs with too many columns: column generation

What about models where the number of variables is too large to deal with explicitly (e.g., exponentially many)? Again, the idea is simple: we consider only a subset of the variables. The process is called column generation, or variable pricing:

1. Initialize the reduced master problem with a small subset of the variables.
2. Solve the LP associated with the reduced master problem, $RMP = \max\{\tilde{c}^T x : \tilde{A}x \le b, x \in \mathbb{R}^n\}$, and obtain a primal solution $x^*$ (padding with 0 the values of the variables that do not appear in the restricted master). Let $y^*$ be the corresponding dual solution.
3. If $x^*$ is optimal for the original LP, i.e., $y^*$ is dual feasible, stop; otherwise add one (or some) columns with positive reduced cost, i.e., columns corresponding to violated dual constraints, and go to step 2.

Note that in the dynamic row generation algorithm we had to test feasibility of the solution $x^*$ of the RMP. Here, in the dynamic column generation algorithm, we need to test optimality of the solution $x^*$ of the RMP, i.e., we need to test whether there is any column with positive reduced cost. Testing optimality is equivalent to testing dual feasibility, so given a dual solution $y^*$ of the RMP, the quantity $c_j - \sum_{i=1}^m a_{ij} y^*_i$ is the reduced cost of variable $x_j$. Again, the crucial part is step 3, aka the pricing problem, where we want to find a violated dual constraint (or prove that none exists). If the number of dual constraints is large, we cannot enumerate them explicitly and check them one by one, so we need to formulate the pricing problem as an LP, or more likely as an ILP. And again, it may sound strange to solve an ILP inside an algorithm that solves an LP, yet this scheme is quite efficient in practice!
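The pricing test itself is a one-line formula. Here is a minimal sketch of just that step (the dual vector and the column pool are hypothetical data, not from the notes): given duals $y$ from the RMP, compute each candidate column's reduced cost and flag those violating a dual constraint.

```python
# Pricing step sketch: for a max problem, a column j with reduced cost
# c_j - sum_i a_ij * y_i > 0 corresponds to a violated dual constraint
# and is a candidate for entering the restricted master problem.
y = [0.5, 1.0, 0.0]            # hypothetical dual values from the RMP
column_pool = {                # columns not yet in the RMP:
    "A": ([1, 0, 1], 0.4),     # (coefficients a_ij, objective c_j)
    "B": ([1, 1, 0], 2.0),
    "C": ([0, 1, 1], 0.9),
}

def reduced_cost(a, c_j, y):
    return c_j - sum(a_ij * y_i for a_ij, y_i in zip(a, y))

entering = [name for name, (a, c_j) in column_pool.items()
            if reduced_cost(a, c_j, y) > 1e-9]
print(entering)                # columns to add to the RMP
```

With these numbers only column "B" prices out positively (reduced cost $2.0 - 1.5 = 0.5$); in practice the pool is implicit and this scan is replaced by an optimization problem over all columns.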
Note that during the column generation process the value $\tilde{c}^T x^*$ is always smaller than the optimal cost of the original LP, so we do not know whether this value is larger or smaller than the optimal cost of the original ILP. Hence, if we stop the procedure before the stopping condition is met, we do not have a valid upper bound.

### 2.4 Branch-and-Bound

Branch-and-bound uses a divide-and-conquer approach to explore the set of feasible solutions. However, instead of exploring the entire feasible set, it uses bounds on the optimal cost to avoid exploring certain parts of the set of feasible integer solutions. Let $X$ be the set of feasible integer solutions to the problem $\max\{c^T x : x \in X\}$. We partition $X$ into a finite collection of subsets $X_1, \ldots, X_k$ and solve separately each one of the subproblems $\max\{c^T x : x \in X_i\}$, $i = 1, \ldots, k$. We then compare the optimal solutions to the subproblems and choose the best one. Each subproblem may be almost as difficult as the original problem, and this suggests trying to solve each subproblem by means of the same method, that is, by splitting it into further subproblems, and so on. This is the branching part of the method and leads to a tree of subproblems.

We also assume that there is a fairly efficient algorithm which, for every $X_i$ of interest, computes an upper bound $\bar{z}(X_i)$ on the optimal cost of the corresponding subproblem. In the course of the algorithm, we will also occasionally solve certain subproblems to optimality, or simply evaluate the cost of certain feasible solutions. This allows us to maintain a lower bound $L$ on the optimal cost, which could be the cost of the best solution found thus far. The essence of the algorithm lies in the following observation: if the upper bound $\bar{z}(X_i)$ corresponding to a particular subproblem $X_i$ satisfies $\bar{z}(X_i) \le L$, then this subproblem need not be considered further, since the optimal solution to the subproblem is no better than the best feasible solution encountered thus far.

A generic branch-and-bound algorithm looks as follows:

1. Select an active subproblem $X_i$.
2. If the subproblem is infeasible, delete it; otherwise, compute the upper bound $\bar{z}(X_i)$ for the corresponding subproblem.
3. If $\bar{z}(X_i) \le L$, delete the subproblem.
4. If $\bar{z}(X_i) > L$, either obtain an optimal solution to the subproblem, or break it into further subproblems, which are added to the list of active subproblems.

Note that there are several free parameters in this algorithm:

1. There are several ways to choose the next subproblem, i.e., a rule for which subproblem to work on next, e.g., breadth-first search, depth-first search, best-first search.
2. There are several ways to obtain an upper bound, e.g., the continuous relaxation.
3. There are several ways of breaking a problem into subproblems, i.e., a rule for deciding how to branch (aka the branching rule).
In particular, an LP-based branch-and-bound scheme for a (max) ILP problem looks as follows:

1. Initialize the subproblem list with ILP and set $L = -\infty$.
2. If the list is empty, stop; otherwise select a subproblem $P$ from the list.
3. Solve the LP relaxation of $P$ and find an optimal solution $x^*$.
4. If infeasible, go to step 2.
5. If $x^*$ is integer, set $L = \max\{L, c^T x^*\}$ and go to step 2.

6. If $c^T x^* \le L$, go to step 2.
7. Select a variable $x_j$ such that $x^*_j$ is fractional, generate two subproblems, one with the constraint $x_j \le \lfloor x^*_j \rfloor$ and the other with the constraint $x_j \ge \lceil x^*_j \rceil$, add the two subproblems to the list, and go to step 2.

#### 2.4.1 Cut-and-Branch

In the 1970s and 1980s, people experimented with the following approach:

1. Solve an initial LP relaxation (by primal simplex).
2. Let $x^*$ be the solution. If $x^*$ is integer and feasible, output the optimal solution and stop.
3. Search for strong (possibly facet-defining) inequalities violated by $x^*$. If none are found, go to step 6.
4. Add the inequalities to the LP as cutting planes.
5. Re-solve the LP (by dual simplex) and go to step 2.
6. Run branch-and-bound (keeping the cutting planes in the ILP formulation).

Cut-and-branch has one disadvantage: if our ILP formulation has an exponential number of constraints (as in the TSP), then cut-and-branch might lead to an infeasible solution! This is because the final solution may be integer, but may violate a constraint that was not generated in the cutting-plane phase. In the 1980s, Grötschel, Padberg and coauthors used a modified version of cut-and-branch to solve quite large TSPs (up to 300 cities or so):

1. Run the cutting-plane algorithm.
2. If the solution represents a tour, stop.
3. Run branch-and-bound on the enlarged formulation.
4. If the integer solution represents a tour, stop.
5. Find one or more sub-tour elimination constraints that are violated by the integer solution. Add them to the ILP formulation and return to step 3.
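Returning to the LP-based branch-and-bound scheme of Section 2.4: here is a hedged, minimal Python sketch on a toy 0-1 knapsack instance (hypothetical data). The LP relaxation of each subproblem is solved greedily by the profit/weight ratio rule, which is exact for the knapsack LP; subproblems are selected depth-first, and branching is on the critical (fractional) item.

```python
import math

# Toy 0-1 knapsack instance: max p.x s.t. w.x <= c, x binary.
p = [10, 7, 5, 4]; w = [6, 4, 3, 2]; c = 8

def lp_bound(fixed):
    """Upper bound for the subproblem in which `fixed` maps item -> 0/1.
    Returns (bound, fractional item or None); bound is None if infeasible."""
    cap = c - sum(w[j] for j, v in fixed.items() if v == 1)
    if cap < 0:
        return None, None
    val = sum(p[j] for j, v in fixed.items() if v == 1)
    free = sorted((j for j in range(len(p)) if j not in fixed),
                  key=lambda j: p[j] / w[j], reverse=True)
    for j in free:
        if w[j] <= cap:
            cap -= w[j]; val += p[j]
        else:
            return val + p[j] * cap / w[j], j   # critical item: branch here
    return val, None                            # LP optimum is integer

L = -math.inf                   # lower bound = best integer value so far
subproblems = [dict()]          # active list, initialized with the root
while subproblems:
    fixed = subproblems.pop()   # depth-first subproblem selection
    ub, frac = lp_bound(fixed)
    if ub is None or ub <= L:   # infeasible, or pruned by bound
        continue
    if frac is None:            # integer LP optimum: update incumbent
        L = ub
    else:                       # branch on the fractional variable
        subproblems.append({**fixed, frac: 0})
        subproblems.append({**fixed, frac: 1})

print(L)   # optimal value of the toy instance
```

The pruning test `ub <= L` is exactly step 6 of the scheme, and the two dictionary pushes are step 7 with the floor/ceiling branches specialized to binary variables ($x_j = 0$ and $x_j = 1$).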

#### 2.4.2 Branch-and-Cut

The idea was: why not search for cutting planes at every node of the branch-and-bound tree? The term branch-and-cut was coined by Padberg & Rinaldi (1987, 1991), who used it to solve large TSP instances (up to 2000 cities or so). The main ingredients of branch-and-cut are:

- a linear programming solver (capable of both primal and dual simplex);
- a branch-and-bound shell which enables cutting planes to be added at any node of the tree;
- subroutines that search for strong cutting planes.

In other words, branch-and-cut utilizes cuts when solving the subproblems. In particular, we augment the formulation of the subproblems with additional cuts, in order to improve the bounds obtained from the linear programming relaxations.

#### 2.4.3 Branch-and-Price

Imagine that we have established that the LP relaxation of our (max) ILP can be solved using column generation. If we then want to solve the original ILP to proven optimality, we have to start branching! So we price at each node of the branch-and-bound tree, just in case more columns of positive reduced cost can be found. The name branch-and-price comes from Savelsbergh (1997).

### 2.5 Dynamic Programming

In the previous subsection, we introduced branch-and-bound, which is an exact, intelligent enumerative technique that attempts to avoid enumerating a large portion of the feasible integer solutions. In this section, we introduce another exact technique, called dynamic programming, that solves integer programming problems sequentially. We start by recalling a well-known algorithm for the shortest path problem in a digraph with no negative cycles: the Bellman-Ford algorithm. This is a label-correcting algorithm: at iteration $k$, label $\pi_j$ represents the cost of a shortest path from $s$ to $j$ with at most $k$ arcs. Its correctness is based on the so-called Bellman-Ford optimality conditions.
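Before stating the algorithm formally, here is a minimal Python sketch of this label-correcting scheme on a small hypothetical digraph (the graph data are illustrative, not from the notes):

```python
import math

def bellman_ford(vertices, arcs, s):
    """Label-correcting shortest paths from s.
    Returns (pi, pred), or None if a negative-weight cycle is detected."""
    pi = {v: math.inf for v in vertices}   # pi[v]: best known cost s -> v
    pred = {v: None for v in vertices}     # pred[v]: predecessor on path
    pi[s] = 0
    for _ in range(len(vertices) - 1):     # a path has at most |V|-1 arcs
        old = dict(pi)                     # labels of the previous round
        for i, j, c_ij in arcs:
            if pi[j] > old[i] + c_ij:      # label correction
                pi[j] = old[i] + c_ij
                pred[j] = i
    for i, j, c_ij in arcs:                # any still-improvable label
        if pi[j] > pi[i] + c_ij:           # reveals a negative cycle
            return None
    return pi, pred

arcs = [("s", "a", 4), ("s", "b", 2), ("b", "a", -1),
        ("a", "t", 3), ("b", "t", 6)]
pi, pred = bellman_ford({"s", "a", "b", "t"}, arcs, "s")
print(pi["t"])   # cost of a shortest s-t path
```

Note the negative arc ("b", "a", -1): a label-setting method like Dijkstra's would fail here, while the label-correcting scheme handles it as long as no negative cycle exists.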

Algorithm 1: Bellman-Ford algorithm, time complexity $O(|V| \cdot |A|)$.
Input: a digraph $G = (V, A)$ with arc costs $c$, no negative cycles, and a source vertex $s \in V$.
Output: shortest paths from $s$ to every $v \in V$, or a report that a negative-weight cycle exists.

    π(s) := 0; p(s) := nil;
    foreach v ∈ V \ {s} do
        π(v) := +∞; p(v) := nil;
    end
    for k := 1 to |V| − 1 do
        π′ := π;
        foreach (i, j) ∈ A do
            if π(j) > π′(i) + c_ij then
                π(j) := π′(i) + c_ij; p(j) := i;
            end
        end
    end
    foreach (i, j) ∈ A do
        if π(j) > π(i) + c_ij then
            report that a negative-weight cycle exists
        end
    end

Optimality conditions: given a graph $G = (V, A)$ with costs $c_{ij}$, $(i, j) \in A$, and source $s$, the labels $\pi_v$, $v \in V$, represent the costs of shortest paths from $s$ to $v$ if and only if the $\pi_v$ are the lengths of feasible paths from $s$ to $v$ and $\pi_j \le \pi_i + c_{ij}$ for all $(i, j) \in A$.

The algorithm was designed as an application of dynamic programming; in fact, the optimality conditions can also be proved in terms of the dynamic programming optimality principle for shortest paths. Dynamic programming is based on Bellman's Optimality Principle: regardless of the decisions taken to enter a particular state in a particular stage, the remaining decisions made for leaving that stage must constitute an optimal policy. Consequence: if we have entered the final state of an optimal policy, we can trace it back.

Guidelines for constructing dynamic programming algorithms:

1. View the choice of a feasible solution as a sequence of decisions occurring in stages, so that the total cost is the sum of the costs of the individual decisions.
2. Define the state as a summary of all relevant past decisions.
3. Determine which state transitions are possible. Let the cost of each state transition be the cost of the corresponding decision.
4. Define the objective function and write a recursion on the optimal cost from the origin state to a destination state.

## 3 Heuristics

A heuristic algorithm is any algorithm that finds a feasible solution. Typically, heuristics are fast (polynomial) algorithms. For some problems, even feasibility is NP-complete; in such cases a heuristic algorithm generally cannot guarantee to find a feasible solution. Heuristic algorithms can be classified in the following way:

- Constructive: start from an empty solution and iteratively add new elements to the current partial solution, until a complete solution is found (greedy, optimization based, implicit enumeration based).
- Local search: start from an initial feasible solution and iteratively try to improve it by slightly modifying it; stop when no improving adjustment is possible (local optimum).
- Meta-heuristics: an evolution of local search algorithms that escape local optima using special techniques and try to avoid cycling in the execution of the algorithm.

Typically, the quality of the solutions found improves as we move from constructive algorithms, to local search, and finally to meta-heuristics.

### 3.1 Constructive algorithms

Constructive algorithms construct a feasible solution starting only from the input data. For some problems where feasibility is NP-complete, they try to find a solution "as feasible as possible".

#### 3.1.1 Greedy

The idea is to start from an empty solution and iteratively construct a solution using an expansion criterion that allows one to make a convenient choice at each iteration, subject to the problem constraints. Often the expansion criterion is based on sorting the elements according to a score that represents the quality of the choice. This way, we try to reward at each iteration the most promising choice. The performance of the algorithm generally improves if the scores are updated dynamically as the algorithm proceeds. Note that no choice, once taken, can be changed! Here are some examples.

Greedy for KP-01:

1. $S := \emptyset$, $\bar{c} := c$.
2. If $w_j > \bar{c}$ for all $j \notin S$, stop.
3. Among $j \notin S$ with $w_j \le \bar{c}$, determine $j^*$ having maximum score $k(j) := p_j / w_j$, set $S := S \cup \{j^*\}$, $\bar{c} := \bar{c} - w_{j^*}$; go to step 2.

Greedy for SCP:

1. $S := \emptyset$, $M := \{1, \ldots, m\}$.
2. If $M = \emptyset$, stop.
3. Among $j \notin S$, determine $j^*$ having minimum score $k(j) := c_j / \sum_{i=1}^m a_{ij}$, set $S := S \cup \{j^*\}$, $M := M \setminus \{i : a_{ij^*} = 1\}$; go to step 2.

#### 3.1.2 Optimization based

These are iterative heuristics where at each iteration the choice is based on the solution of an optimization problem that is easier than the original one. The most common ones are heuristics based on relaxations, where the scores correspond to the information obtained from the solution of the relaxation (e.g., optimal solution, reduced costs, Lagrangian costs, etc.). The following scheme has proved to be successful in practice for many classes of ILP problems:

1. Let $P$ be the ILP problem to be solved.

2. Solve the LP relaxation of $P$ and obtain solution $x^*$.
3. If $x^*$ is integer, stop.
4. Fix some variables $x_j$ to an integer value $a_j$ suggested by $x^*$ and add the corresponding constraint $x_j = a_j$ to $P$; go to step 2.

Clearly, depending on the fixing rule at step 4 we obtain different algorithms and solutions. Here are some examples.

ContHeur for KP-01:

1. $S := \emptyset$, $\bar{c} := c$, $R := \{1, \ldots, n\}$.
2. If $R = \emptyset$, stop.
3. Solve C(KP01) for $j \in R$ with capacity $\bar{c}$; let $x^*$ denote the corresponding optimal solution.
4. Among $j \in R$, determine $j^*$ having maximum score $x^*_j$, set $S := S \cup \{j^*\}$, $\bar{c} := \bar{c} - w_{j^*}$, $R := R \setminus (\{j^*\} \cup \{j : w_j > \bar{c}\})$; go to step 2.

LagrCostHeur for KP-01:

1. $S := \emptyset$, $\bar{c} := c$.
2. If $w_j > \bar{c}$ for all $j \notin S$, stop.
3. Among $j \notin S$ with $w_j \le \bar{c}$, determine $j^*$ having maximum Lagrangian profit $\bar{p}_j := p_j - \lambda w_j$, set $S := S \cup \{j^*\}$, $\bar{c} := \bar{c} - w_{j^*}$; go to step 2.

ContHeur for SCP:

1. $S := \emptyset$, $M := \{1, \ldots, m\}$.
2. If $M = \emptyset$, stop.
3. Solve C(SCP) with the additional constraints $x_j = 1$ for $j \in S$; let $x^*$ denote the corresponding optimal solution.
4. Among $j \notin S$, determine $j^*$ having maximum score $x^*_j$, set $S := S \cup \{j^*\}$, $M := M \setminus \{i : a_{ij^*} = 1\}$; go to step 2.

LagrCostHeur for SCP:

1. $S := \emptyset$, $M := \{1, \ldots, m\}$.
2. If $M = \emptyset$, stop.
3. Among $j \notin S$, determine $j^*$ having minimum Lagrangian cost $\bar{c}_j = c_j - \sum_{i=1}^m a_{ij} \lambda_i$, set $S := S \cup \{j^*\}$, $M := M \setminus \{i : a_{ij^*} = 1\}$; go to step 2.

LagrSolHeur for SCP:

1. $S := \emptyset$, $M := \{1, \ldots, m\}$.
2. For $j = 1, \ldots, n$ do: $\bar{c}_j = c_j - \sum_{i=1}^m a_{ij} \lambda_i$; if $\bar{c}_j \le 0$, set $S := S \cup \{j\}$, $M := M \setminus \{i : a_{ij} = 1\}$.
3. If $M = \emptyset$, stop.
4. For each $i \in M$ do:
   - $c^* = \min\{\bar{c}_j : a_{ij} = 1\}$;
   - $u_i = u_i + c^*$;
   - for each $j \notin S$ such that $a_{ij} = 1$: $\bar{c}_j := \bar{c}_j - c^*$; if $\bar{c}_j = 0$, set $S := S \cup \{j\}$, $M := M \setminus \{l : a_{lj} = 1\}$.

#### 3.1.3 Implicit enumeration based

These heuristics are based on the partial execution of an implicit enumeration algorithm (branch-and-bound). The most common version consists in stopping the branch-and-bound algorithm after a certain time limit. Another possibility is to limit the number of subproblems generated at each node, keeping only the most promising ones.

### 3.2 Local Search

These algorithms start from an initial solution (the current solution) and try to improve it with small adjustments. The algorithm terminates when no adjustment can improve the current solution (local optimum). They define a neighborhood of a feasible solution $x$ as a function $N : x \to N(x)$, where $N(x)$ is a subset of the feasible set. In other words, the neighborhood $N(x)$ is defined by the set of available modifications that can be applied to $x$ maintaining feasibility. The basic scheme of a local search algorithm (for a maximization problem) looks as follows:

1. if $\exists\, x' \in N(x) : f(x') > f(x)$ then

2. $x := x'$
3. if $f(x) = UB$ stop ($x$ is optimal)
4. go to step 1
5. else
6. stop (local optimum)
7. endif

Here are some examples.

Local Search for KP-01:

1. Initialize solution $S$.
2. Consider the following moves:
   - $S \cup \{j\}$, $j \notin S$ (insertion of one item);
   - $S \cup \{j\} \setminus \{i\}$, $j \notin S$, $i \in S$ (exchange between two items).
3. Let $R$ be the best among these (feasible) moves.
4. If $\sum_{j \in R} p_j \le \sum_{j \in S} p_j$, stop (local optimum).
5. Else $S := R$; go to step 2.

Local Search for SCP:

1. Initialize solution $S$.
2. Consider the following moves:
   - $S \setminus \{j\}$, $j \in S$ (removal of one column);
   - $S \setminus \{j\} \cup \{i\}$, $j \in S$, $i \notin S$ (exchange of two columns).
3. Let $R$ be the best among these (feasible) moves.
4. If $\sum_{j \in R} c_j \ge \sum_{j \in S} c_j$, stop (local optimum).
5. Else $S := R$; go to step 2.
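The KP-01 local search above can be sketched in Python on a toy instance (hypothetical data, not from the notes); the moves are exactly the insertions and exchanges of step 2, restricted to feasible sets.

```python
# Toy 0-1 knapsack instance: max p.x s.t. w.x <= c, x binary.
p = [10, 7, 5, 4]; w = [6, 4, 3, 2]; c = 8

def profit(S):
    return sum(p[j] for j in S)

def weight(S):
    return sum(w[j] for j in S)

def local_search(S):
    """Repeat: take the best feasible insertion/exchange move;
    stop at a local optimum (no move improves the profit)."""
    while True:
        out = [j for j in range(len(p)) if j not in S]
        moves = [S | {j} for j in out]                      # insertions
        moves += [(S | {j}) - {i} for j in out for i in S]  # exchanges
        feasible = [R for R in moves if weight(R) <= c]
        if not feasible:
            return S
        R = max(feasible, key=profit)    # best neighbor in N(S)
        if profit(R) <= profit(S):
            return S                     # local optimum reached
        S = R

S = local_search({3})   # start from the single-item solution {3}
print(sorted(S), profit(S))
```

Starting from {3}, the first iteration inserts item 0 and the search then stops: no further insertion fits and every exchange loses profit, so {0, 3} is a local optimum (here it happens to be globally optimal as well, which is not guaranteed in general).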


Analysis of Approximation Algorithms for k-set Cover using Factor-Revealing Linear Programs Stavros Athanassopoulos, Ioannis Caragiannis, and Christos Kaklamanis Research Academic Computer Technology Institute

### Chapter 3 INTEGER PROGRAMMING 3.1 INTRODUCTION. Robert Bosch. Michael Trick

Chapter 3 INTEGER PROGRAMMING Robert Bosch Oberlin College Oberlin OH, USA Michael Trick Carnegie Mellon University Pittsburgh PA, USA 3.1 INTRODUCTION Over the last 20 years, the combination of faster

### Cloud Branching. Timo Berthold. joint work with Domenico Salvagnin (Università degli Studi di Padova)

Cloud Branching Timo Berthold Zuse Institute Berlin joint work with Domenico Salvagnin (Università degli Studi di Padova) DFG Research Center MATHEON Mathematics for key technologies 21/May/13, CPAIOR

### Research Paper Business Analytics. Applications for the Vehicle Routing Problem. Jelmer Blok

Research Paper Business Analytics Applications for the Vehicle Routing Problem Jelmer Blok Applications for the Vehicle Routing Problem Jelmer Blok Research Paper Vrije Universiteit Amsterdam Faculteit

### How to speed-up hard problem resolution using GLPK?

How to speed-up hard problem resolution using GLPK? Onfroy B. & Cohen N. September 27, 2010 Contents 1 Introduction 2 1.1 What is GLPK?.......................................... 2 1.2 GLPK, the best one?.......................................

### Computer Algorithms. NP-Complete Problems. CISC 4080 Yanjun Li

Computer Algorithms NP-Complete Problems NP-completeness The quest for efficient algorithms is about finding clever ways to bypass the process of exhaustive search, using clues from the input in order

### Routing in Line Planning for Public Transport

Konrad-Zuse-Zentrum für Informationstechnik Berlin Takustraße 7 D-14195 Berlin-Dahlem Germany MARC E. PFETSCH RALF BORNDÖRFER Routing in Line Planning for Public Transport Supported by the DFG Research

### COORDINATION PRODUCTION AND TRANSPORTATION SCHEDULING IN THE SUPPLY CHAIN ABSTRACT

Technical Report #98T-010, Department of Industrial & Mfg. Systems Egnieering, Lehigh Univerisity (1998) COORDINATION PRODUCTION AND TRANSPORTATION SCHEDULING IN THE SUPPLY CHAIN Kadir Ertogral, S. David

### Introduction to Support Vector Machines. Colin Campbell, Bristol University

Introduction to Support Vector Machines Colin Campbell, Bristol University 1 Outline of talk. Part 1. An Introduction to SVMs 1.1. SVMs for binary classification. 1.2. Soft margins and multi-class classification.

### THE SCHEDULING OF MAINTENANCE SERVICE

THE SCHEDULING OF MAINTENANCE SERVICE Shoshana Anily Celia A. Glass Refael Hassin Abstract We study a discrete problem of scheduling activities of several types under the constraint that at most a single

### Testing LTL Formula Translation into Büchi Automata

Testing LTL Formula Translation into Büchi Automata Heikki Tauriainen and Keijo Heljanko Helsinki University of Technology, Laboratory for Theoretical Computer Science, P. O. Box 5400, FIN-02015 HUT, Finland

### Het inplannen van besteld ambulancevervoer (Engelse titel: Scheduling elected ambulance transportation)

Technische Universiteit Delft Faculteit Elektrotechniek, Wiskunde en Informatica Delft Institute of Applied Mathematics Het inplannen van besteld ambulancevervoer (Engelse titel: Scheduling elected ambulance

### Permutation Betting Markets: Singleton Betting with Extra Information

Permutation Betting Markets: Singleton Betting with Extra Information Mohammad Ghodsi Sharif University of Technology ghodsi@sharif.edu Hamid Mahini Sharif University of Technology mahini@ce.sharif.edu

### A Linear Programming Based Method for Job Shop Scheduling

A Linear Programming Based Method for Job Shop Scheduling Kerem Bülbül Sabancı University, Manufacturing Systems and Industrial Engineering, Orhanlı-Tuzla, 34956 Istanbul, Turkey bulbul@sabanciuniv.edu

### Proximal mapping via network optimization

L. Vandenberghe EE236C (Spring 23-4) Proximal mapping via network optimization minimum cut and maximum flow problems parametric minimum cut problem application to proximal mapping Introduction this lecture:

### A Branch and Bound Algorithm for Solving the Binary Bi-level Linear Programming Problem

A Branch and Bound Algorithm for Solving the Binary Bi-level Linear Programming Problem John Karlof and Peter Hocking Mathematics and Statistics Department University of North Carolina Wilmington Wilmington,

### Efficient and Robust Allocation Algorithms in Clouds under Memory Constraints

Efficient and Robust Allocation Algorithms in Clouds under Memory Constraints Olivier Beaumont,, Paul Renaud-Goud Inria & University of Bordeaux Bordeaux, France 9th Scheduling for Large Scale Systems

### ASSIGNMENT 4 PREDICTIVE MODELING AND GAINS CHARTS

DATABASE MARKETING Fall 2015, max 24 credits Dead line 15.10. ASSIGNMENT 4 PREDICTIVE MODELING AND GAINS CHARTS PART A Gains chart with excel Prepare a gains chart from the data in \\work\courses\e\27\e20100\ass4b.xls.

### A Branch-Cut-and-Price Approach to the Bus Evacuation Problem with Integrated Collection Point and Shelter Decisions

A Branch-Cut-and-Price Approach to the Bus Evacuation Problem with Integrated Collection Point and Shelter Decisions Marc Goerigk, Bob Grün, and Philipp Heßler Fachbereich Mathematik, Technische Universität

### Basic Components of an LP:

1 Linear Programming Optimization is an important and fascinating area of management science and operations research. It helps to do less work, but gain more. Linear programming (LP) is a central topic

### Chapter 15: Dynamic Programming

Chapter 15: Dynamic Programming Dynamic programming is a general approach to making a sequence of interrelated decisions in an optimum way. While we can describe the general characteristics, the details

### Fairness in Routing and Load Balancing

Fairness in Routing and Load Balancing Jon Kleinberg Yuval Rabani Éva Tardos Abstract We consider the issue of network routing subject to explicit fairness conditions. The optimization of fairness criteria

### On the effect of forwarding table size on SDN network utilization

IBM Haifa Research Lab On the effect of forwarding table size on SDN network utilization Rami Cohen IBM Haifa Research Lab Liane Lewin Eytan Yahoo Research, Haifa Seffi Naor CS Technion, Israel Danny Raz

Adaptive Linear Programming Decoding Mohammad H. Taghavi and Paul H. Siegel ECE Department, University of California, San Diego Email: (mtaghavi, psiegel)@ucsd.edu ISIT 2006, Seattle, USA, July 9 14, 2006

### 3.3. Solving Polynomial Equations. Introduction. Prerequisites. Learning Outcomes

Solving Polynomial Equations 3.3 Introduction Linear and quadratic equations, dealt within Sections 3.1 and 3.2, are members of a class of equations, called polynomial equations. These have the general

### A Column Generation Model for Truck Routing in the Chilean Forest Industry

A Column Generation Model for Truck Routing in the Chilean Forest Industry Pablo A. Rey Escuela de Ingeniería Industrial, Facultad de Ingeniería, Universidad Diego Portales, Santiago, Chile, e-mail: pablo.rey@udp.cl

### 1 Description of The Simpletron

Simulating The Simpletron Computer 50 points 1 Description of The Simpletron In this assignment you will write a program to simulate a fictional computer that we will call the Simpletron. As its name implies

### IEOR 4404 Homework #2 Intro OR: Deterministic Models February 14, 2011 Prof. Jay Sethuraman Page 1 of 5. Homework #2

IEOR 4404 Homework # Intro OR: Deterministic Models February 14, 011 Prof. Jay Sethuraman Page 1 of 5 Homework #.1 (a) What is the optimal solution of this problem? Let us consider that x 1, x and x 3

### Embedded Systems 20 BF - ES

Embedded Systems 20-1 - Multiprocessor Scheduling REVIEW Given n equivalent processors, a finite set M of aperiodic/periodic tasks find a schedule such that each task always meets its deadline. Assumptions:

### Dantzig-Wolfe bound and Dantzig-Wolfe cookbook

Dantzig-Wolfe bound and Dantzig-Wolfe cookbook thst@man.dtu.dk DTU-Management Technical University of Denmark 1 Outline LP strength of the Dantzig-Wolfe The exercise from last week... The Dantzig-Wolfe

### December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B. KITCHENS

December 4, 2013 MATH 171 BASIC LINEAR ALGEBRA B KITCHENS The equation 1 Lines in two-dimensional space (1) 2x y = 3 describes a line in two-dimensional space The coefficients of x and y in the equation

### Concepts of digital forensics

Chapter 3 Concepts of digital forensics Digital forensics is a branch of forensic science concerned with the use of digital information (produced, stored and transmitted by computers) as source of evidence

### Solution of a Large-Scale Traveling-Salesman Problem

Chapter 1 Solution of a Large-Scale Traveling-Salesman Problem George B. Dantzig, Delbert R. Fulkerson, and Selmer M. Johnson Introduction by Vašek Chvátal and William Cook The birth of the cutting-plane

### Bicolored Shortest Paths in Graphs with Applications to Network Overlay Design

Bicolored Shortest Paths in Graphs with Applications to Network Overlay Design Hongsik Choi and Hyeong-Ah Choi Department of Electrical Engineering and Computer Science George Washington University Washington,

### Embedded Systems 20 REVIEW. Multiprocessor Scheduling

Embedded Systems 0 - - Multiprocessor Scheduling REVIEW Given n equivalent processors, a finite set M of aperiodic/periodic tasks find a schedule such that each task always meets its deadline. Assumptions:

### Optimization in R n Introduction

Optimization in R n Introduction Rudi Pendavingh Eindhoven Technical University Optimization in R n, lecture Rudi Pendavingh (TUE) Optimization in R n Introduction ORN / 4 Some optimization problems designing

### A Survey of Very Large-Scale Neighborhood Search Techniques

A Survey of Very Large-Scale Neighborhood Search Techniques Ravindra K. Ahuja Department of Industrial & Systems Engineering University of Florida Gainesville, FL 32611, USA ahuja@ufl.edu Özlem Ergun Operations

### This paper introduces a new method for shift scheduling in multiskill call centers. The method consists of

MANUFACTURING & SERVICE OPERATIONS MANAGEMENT Vol. 10, No. 3, Summer 2008, pp. 411 420 issn 1523-4614 eissn 1526-5498 08 1003 0411 informs doi 10.1287/msom.1070.0172 2008 INFORMS Simple Methods for Shift

### Optimization models for targeted offers in direct marketing: exact and heuristic algorithms

Optimization models for targeted offers in direct marketing: exact and heuristic algorithms Fabrice Talla Nobibon, Roel Leus and Frits C.R. Spieksma {Fabrice.TallaNobibon; Roel.Leus; Frits.Spieksma}@econ.kuleuven.be

### Single machine parallel batch scheduling with unbounded capacity

Workshop on Combinatorics and Graph Theory 21th, April, 2006 Nankai University Single machine parallel batch scheduling with unbounded capacity Yuan Jinjiang Department of mathematics, Zhengzhou University

### 1.204 Lecture 10. Greedy algorithms: Job scheduling. Greedy method

1.204 Lecture 10 Greedy algorithms: Knapsack (capital budgeting) Job scheduling Greedy method Local improvement method Does not look at problem globally Takes best immediate step to find a solution Useful

### Facility Location: Discrete Models and Local Search Methods

Facility Location: Discrete Models and Local Search Methods Yury KOCHETOV Sobolev Institute of Mathematics, Novosibirsk, Russia Abstract. Discrete location theory is one of the most dynamic areas of operations

### Branch-and-Price for the Truck and Trailer Routing Problem with Time Windows

Branch-and-Price for the Truck and Trailer Routing Problem with Time Windows Sophie N. Parragh Jean-François Cordeau October 2015 Branch-and-Price for the Truck and Trailer Routing Problem with Time Windows

### Chapter 10: Network Flow Programming

Chapter 10: Network Flow Programming Linear programming, that amazingly useful technique, is about to resurface: many network problems are actually just special forms of linear programs! This includes,

### Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs

CSE599s: Extremal Combinatorics November 21, 2011 Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs Lecturer: Anup Rao 1 An Arithmetic Circuit Lower Bound An arithmetic circuit is just like

CS787: Advanced Algorithms Topic: Online Learning Presenters: David He, Chris Hopman 17.3.1 Follow the Perturbed Leader 17.3.1.1 Prediction Problem Recall the prediction problem that we discussed in class.

### Analysis of Algorithms I: Optimal Binary Search Trees

Analysis of Algorithms I: Optimal Binary Search Trees Xi Chen Columbia University Given a set of n keys K = {k 1,..., k n } in sorted order: k 1 < k 2 < < k n we wish to build an optimal binary search

### Optimal Scheduling for Dependent Details Processing Using MS Excel Solver

BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 8, No 2 Sofia 2008 Optimal Scheduling for Dependent Details Processing Using MS Excel Solver Daniela Borissova Institute of

### Combining (Integer) Linear Programming Techniques and Metaheuristics for Combinatorial Optimization

Combining (Integer) Linear Programming Techniques and Metaheuristics for Combinatorial Optimization Günther R. Raidl 1 and Jakob Puchinger 2 1 Institute of Computer Graphics and Algorithms, Vienna University

### Nonlinear Optimization: Algorithms 3: Interior-point methods

Nonlinear Optimization: Algorithms 3: Interior-point methods INSEAD, Spring 2006 Jean-Philippe Vert Ecole des Mines de Paris Jean-Philippe.Vert@mines.org Nonlinear optimization c 2006 Jean-Philippe Vert,

### Incremental Network Design with Shortest Paths

Incremental Network Design with Shortest Paths Tarek Elgindy, Andreas T. Ernst, Matthew Baxter CSIRO Mathematics Informatics and Statistics, Australia Martin W.P. Savelsbergh University of Newcastle, Australia

### Equilibrium computation: Part 1

Equilibrium computation: Part 1 Nicola Gatti 1 Troels Bjerre Sorensen 2 1 Politecnico di Milano, Italy 2 Duke University, USA Nicola Gatti and Troels Bjerre Sørensen ( Politecnico di Milano, Italy, Equilibrium

### Can linear programs solve NP-hard problems?

Can linear programs solve NP-hard problems? p. 1/9 Can linear programs solve NP-hard problems? Ronald de Wolf Linear programs Can linear programs solve NP-hard problems? p. 2/9 Can linear programs solve