Math 5593 Linear Programming Weeks 12/13


1 Math 5593 Linear Programming Weeks 12/13
University of Colorado Denver, Fall 2013, Prof. Engau

Outline: 1. Introduction  2. Polyhedral Theory  3. LP and Lagrangean Relaxation  4. Computational Methods and Algorithms

2 Integer Programming and Combinatorial Optimization

In practice, many decision variables must be limited to integers (people, products, etc.) or binary values (on/off, yes/no, etc.): var name integer/binary;

- (Binary) Integer Program (IP/BIP): max{c^T x : Ax ≤ b, x integer/binary}
- Mixed Integer (Linear) Program (MIP): max{c^T x + d^T y : Ax + By ≤ b, y integer/binary}
- Combinatorial Optimization Problem (COP): Given a finite set N = {1, 2, ..., n}, weights c_j for all j ∈ N, and a family F ⊆ P(N) of subsets of N: max or min {Σ_{j ∈ S} c_j : S ∈ F}.

Solving IPs, BIPs, or MIPs as LPs and rounding fractional values to integers often does not work well and may destroy feasibility!
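The COP definition can be tried out directly with a tiny brute-force solver; the set N, the weights, and the family F below are made-up illustrative data:

```python
from itertools import combinations

def solve_cop(weights, family, maximize=True):
    """Brute-force COP: pick the set S in the family F with the best
    total weight sum_{j in S} c_j."""
    pick = max if maximize else min
    return pick(family, key=lambda S: sum(weights[j] for j in S))

# hypothetical data: N = {0, 1, 2, 3}, F = all subsets with at most 2 elements
weights = {0: 5, 1: -2, 2: 7, 3: 3}
family = [S for r in range(3) for S in combinations(sorted(weights), r)]

best = solve_cop(weights, family)
print(best, sum(weights[j] for j in best))  # (0, 2) 12
```

Complete enumeration like this only works for tiny N, which is exactly why the polyhedral machinery on the following slides matters.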

3 Review: Example Problems and Modeling Techniques

Many classic OR problems and models are inherently discrete:
- "Easy": transportation, assignment, other network flows
- "Difficult": TSP, scheduling, facility location, knapsack

In addition, integer or binary variables have modeling applications:
- Cardinality constraints: at least k (set covering), at most k (set packing/budget), exactly k (set partitioning)
- Zero or minimum/maximum/range constraints: l y ≤ x ≤ u y
- Either-or disjunctions: A_1 x ≤ b_1 + M y, A_2 x ≤ b_2 + M(1 - y)
- If-then conditionals: A_1 x > b_1 - M y, A_2 x ≤ b_2 + M(1 - y)
- Fixed costs (in objective): min c_var^T x + c_fix^T y where x ≤ M y
- Piecewise linear functions/approximations: Σ_i L_i y_i ≤ x ≤ Σ_i L_{i+1} y_i with Σ_i y_i = 1
- Linearization of binary quadratic programs: for x ∈ {0, 1}^n, x_i^2 = x_i; substitute x_i x_j = x_ij with max{x_i + x_j - 1, 0} ≤ x_ij ≤ min{x_i, x_j}
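The last linearization can be checked exhaustively: on binary points, the lower and upper bounds coincide and pin x_ij to the product x_i x_j. A quick sanity check:

```python
# exhaustive check of the binary-product linearization
for xi in (0, 1):
    for xj in (0, 1):
        lower = max(xi + xj - 1, 0)
        upper = min(xi, xj)
        assert lower == upper == xi * xj
print("x_ij is pinned to x_i * x_j on binary points")
```

Between binary points the two bounds differ, which is why the LP relaxation of the linearized problem is not automatically tight.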

4 IP Basics: Polyhedral Theory and Formulations

Goal: Given a discrete (integer) set X ⊆ Z^n, find a continuous (polyhedral) formulation P ⊆ R^n for X such that X = P ∩ Z^n:
X = {x ∈ Z^n : Ax ≤ b} = {x ∈ R^n : Ax ≤ b} ∩ Z^n = P ∩ Z^n

Ex: X = {x ∈ Z^2 : 7x_1 + 10x_2 ≤ 56, 0 ≤ x_1 ≤ 6, 0 ≤ x_2 ≤ 4}
P = {x ∈ R^2 : 7x_1 + 10x_2 ≤ 56, 0 ≤ x_1 ≤ 6, 0 ≤ x_2 ≤ 4}

[figure: P is the gray shaded area; X are the grid points in P]

P is a trivial formulation for X. Is P a good formulation for X? Perfect for the box constraints; poor for 7x_1 + 10x_2 ≤ 56.

5 Improving P: Better and Ideal Formulations

Def: A formulation P_1 for X ⊆ Z^n is better than P_2 if P_1 ⊂ P_2. We say P_1 strengthens or tightens P_2, or cuts off parts of P_2.

Example: X = {7x_1 + 10x_2 ≤ 56, 0 ≤ x_1 ≤ 6, 0 ≤ x_2 ≤ 4} ∩ Z^2

Idea: Improve a formulation by adding supporting constraints!
- Add x_1 + x_2 ≤ 7
- Add 2x_1 + 3x_2 ≤ 16
- But P must stay convex!
- Best possible: the convex hull!

The ideal formulation for X ⊆ Z^n is its convex hull P = conv(X)!
- All extreme points of P are integer ("integer polyhedron").
- Can solve the IP max{c^T x : x ∈ X} as the LP max{c^T x : x ∈ P}.
- Unfortunately, finding conv(X) is often (at least) as hard as solving the IP!
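Both claims are easy to verify by brute force for this example: every integer point of X satisfies the two added inequalities, while a fractional corner of the trivial formulation P violates x_1 + x_2 ≤ 7 and is cut off. A quick check:

```python
from fractions import Fraction as F

# X = {x in Z^2 : 7x1 + 10x2 <= 56, 0 <= x1 <= 6, 0 <= x2 <= 4}
X = [(x1, x2) for x1 in range(7) for x2 in range(5) if 7*x1 + 10*x2 <= 56]
print(len(X))  # 27 integer points

# both added inequalities are valid: no point of X is cut off ...
assert all(x1 + x2 <= 7 for x1, x2 in X)
assert all(2*x1 + 3*x2 <= 16 for x1, x2 in X)

# ... but the fractional corner (6, 7/5) of P violates x1 + x2 <= 7
corner = (F(6), F(7, 5))
assert 7*corner[0] + 10*corner[1] == 56 and corner[0] + corner[1] > 7
```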

6 When is Solving IP as Easy as LP: Integer Polyhedra!

Let A, b be integer and consider the IP max{c^T x : Ax = b, x ∈ Z^n_+}.

Question: When does the LP relaxation (x ∈ R^n_+) have an integer solution?
Answer: If P = {x ∈ R^n_+ : Ax = b} is an integer polyhedron!
Equivalent answer: If the extreme points (bfs) of P are integer:
(x_B, x_N) = (B^{-1} b, 0) ∈ Z^n_+ for all (or just the optimal) basis matrices B of A.

Cramer's Rule: If Ax = b with A nonsingular, then x_i = det(A_i)/det(A), where A_i is the matrix formed by replacing the i-th column of A with b.
- We can apply Cramer's Rule to the basic system B x_B = b.
- Clear that det(B_i) is an integer, but we need det(B) ∈ {-1, 1}.

Inverse Rule: B^{-1} = Adj(B)/det(B), where Adj(B) is the adjugate (adjunct) matrix (the transpose of the cofactor matrix C with C_ij = (-1)^{i+j} det(B_ij)).
- Again clear that Adj(B) is integer, so we need det(B) ∈ {-1, 1}.

Result: If B is an optimal basis and det(B) ∈ {-1, 1}, then the LP solves the IP.
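Cramer's Rule makes the determinant argument concrete; a small sketch with made-up 2x2 basis matrices (with det(B) = 1 the basic solution is integer for every integer b; with det(B) = 4 it need not be):

```python
from fractions import Fraction

def solve2(B, b):
    """Solve a 2x2 system B x = b exactly via Cramer's Rule."""
    det = B[0][0]*B[1][1] - B[0][1]*B[1][0]
    det1 = b[0]*B[1][1] - B[0][1]*b[1]   # column 1 replaced by b
    det2 = B[0][0]*b[1] - b[0]*B[1][0]   # column 2 replaced by b
    return Fraction(det1, det), Fraction(det2, det)

x = solve2([[2, 1], [1, 1]], [5, 3])    # det(B) = 1: integer solution
print(x)                                 # (2, 1)
y = solve2([[2, 0], [0, 2]], [1, 1])    # det(B) = 4: fractional solution
print(y)                                 # (1/2, 1/2)
```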

7 Integer Polyhedra and Totally Unimodular Matrices

Definition: A matrix A is called totally unimodular (TU) if every square submatrix B of A has determinant det(B) ∈ {-1, 0, 1}.

- A = [1 1; 1 2] has det(A) = 1, but its 1x1 submatrix B = [2] has det(B) = 2 (not TU).
- A matrix can have det(A) = 0 and still be TU, e.g., A = [1 1; 1 1]; entries in {-1, 0, 1} alone do not suffice, e.g., [1 -1; 1 1] has determinant 2 and is not TU.
- How about A = [matrix omitted in transcription]? Not TU: find a certificate submatrix!
- How about A = [matrix omitted in transcription]? TU! (good exercise)

Result: P = {x ∈ R^n_+ : Ax = b} is an integer polyhedron for all integer values of b for which it is bounded if and only if A is TU.
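The definition can be implemented verbatim as a brute-force check over all square submatrices (exponentially many, but fine for tiny matrices); the first test matrix is the non-TU example from above:

```python
from itertools import combinations

def det(M):
    """Integer determinant by Laplace expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def is_tu(A):
    """Brute-force total unimodularity: every square submatrix must have
    determinant in {-1, 0, +1}."""
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if det([[A[i][j] for j in cols] for i in rows]) not in (-1, 0, 1):
                    return False
    return True

print(is_tu([[1, 1], [1, 2]]))        # False: the 1x1 submatrix [2] fails
print(is_tu([[1, 1, 0], [0, 1, 1]]))  # True: an interval matrix
```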

8 TU Matrices: Conditions and Examples

Verifying whether a matrix is TU is a challenging undertaking!
- There is an exponential number of submatrices to check.
- However, it is easy to show that a matrix is not TU if we have already found (or know) a suitable certificate submatrix.
- For insiders: yes, that sounds a lot like NP-completeness.

However, some conditions and special cases might help:
- If A is TU, then all entries of A must satisfy a_ij ∈ {-1, 0, 1}.
- A is TU if and only if A^T is TU if and only if [A I] is TU.
- A is TU if a_ij ∈ {-1, 0, 1}, each column j has at most two nonzero entries, and there is a partition of the rows I = I_1 ∪ I_2 such that for every column j with exactly two nonzero entries: Σ_{i ∈ I_1} a_ij - Σ_{i ∈ I_2} a_ij = 0.

Examples: [matrix omitted in transcription] (not TU), [matrix omitted in transcription] (TU with I_1 = I, I_2 = ∅)

9 TU Matrices: Conditions and Examples (Continued)

A special class of TU matrices are node-arc incidence matrices of directed graphs G = (V, E) with node or vertex set V = {1, 2, ..., n} and arc or edge set E = {e_1, ..., e_m}: for every arc e_j = (i_1, i_2), let

a_ij = +1 if i = i_1 (arc e_j originates at node i_1),
       -1 if i = i_2 (arc e_j terminates at node i_2),
        0 otherwise.

[figure: a small example digraph with arcs e_1, e_2, e_3 and its incidence matrix]

- Network matrices satisfy the partition condition (I_1 = I, I_2 = ∅).
- Discrete network flow problems without "mean" constraints (assignment, transportation, shortest-path, maximum-flow) are easy.
- Covering, packing, or TSP constraints are "mean"!
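Building the incidence matrix of a concrete digraph (here a hypothetical 3-node cycle plus one chord, since the slide's example did not survive transcription) and checking the partition condition with I_1 = I, I_2 = ∅ is straightforward: every column has exactly one +1 and one -1, so each column sums to zero.

```python
def incidence(n, arcs):
    """Node-arc incidence matrix: +1 in the row of the tail node,
    -1 in the row of the head node of each arc."""
    A = [[0] * len(arcs) for _ in range(n)]
    for j, (tail, head) in enumerate(arcs):
        A[tail][j] = 1
        A[head][j] = -1
    return A

# small example digraph: 3-cycle 0->1->2->0 plus chord 0->2
A = incidence(3, [(0, 1), (1, 2), (2, 0), (0, 2)])
print(A)  # [[1, 0, -1, 1], [-1, 1, 0, 0], [0, -1, 1, -1]]

# partition condition with I1 = all rows, I2 = empty: every column
# with two nonzeros (here: all of them) sums to zero over I1
assert all(sum(A[i][j] for i in range(3)) == 0 for j in range(4))
```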

10 Improving P (Continued): Valid Inequalities

Let P = {x ∈ R^n : Ax ≤ b} be a formulation for X = P ∩ Z^n.
- If A is TU, then P = conv(X) is integer: the ideal formulation.
- The converse is not true in general! (A is hardly ever TU.)

Example: X = {7x_1 + 10x_2 ≤ 56, 0 ≤ x_1 ≤ 6, 0 ≤ x_2 ≤ 4} ∩ Z^2
conv(X) = {x_1 + x_2 ≤ 7, 2x_1 + 3x_2 ≤ 16, 0 ≤ x_1 ≤ 6, 0 ≤ x_2 ≤ 4}

Why drop 7x_1 + 10x_2 ≤ 56? Valid but dominated: redundant!
- An inequality (a, b) (i.e., a^T x ≤ b) is a valid inequality (vi) for X if a^T x ≤ b for all x ∈ X.
- A vi (a_1, b_1) dominates (a_2, b_2) if there is λ > 0 such that a_1 ≥ λ a_2, b_1 ≤ λ b_2, and (a_1, b_1) ≠ λ(a_2, b_2).
- Note that two vis with (a_1, b_1) = λ(a_2, b_2) for some λ > 0 are the same.
- A vi is redundant if it is dominated by a nonnegative linear combination of other vis. (Note: 1·(1, 1, 7) + 3·(2, 3, 16) = (7, 10, 55).)
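The redundancy note can be verified arithmetically: the combination 1·(1, 1, 7) + 3·(2, 3, 16) yields (7, 10, 55), which dominates (7, 10, 56) with λ = 1 (same coefficients, smaller right-hand side):

```python
# combine the two hull inequalities: 1*(x1 + x2 <= 7) + 3*(2x1 + 3x2 <= 16)
a = [1*1 + 3*2, 1*1 + 3*3]
b = 1*7 + 3*16
print(a, b)  # [7, 10] 55

# (7, 10, 55) dominates (7, 10, 56) with lambda = 1: a1 >= a2 and b1 <= b2
assert a == [7, 10] and b <= 56
```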

11 What are Good Valid Inequalities: Faces and Facets

Result: A vi (a_1, b_1) dominates another vi (a_2, b_2) if and only if {x ∈ R^n : a_1^T x ≤ b_1} ⊆ {x ∈ R^n : a_2^T x ≤ b_2}.

Result: Given a formulation P = {x ∈ R^n_+ : Ax ≤ b}, a vi (c, d) is redundant for P iff there is y ≥ 0 such that A^T y ≥ c and b^T y ≤ d.
Proof: LP Duality Theorem or Farkas' Lemma (as exercise).

Clearly, good vis are non-dominated and non-redundant.

Def: Given a vi (c, d) for P = {x ∈ R^n : Ax ≤ b}, a face is a set F = {x ∈ P : c^T x = d} (a face is proper if ∅ ≠ F ≠ P).
- A face of dimension dim(P) - 1 (usually n - 1) is called a facet.
- Intersections of P with a supporting hyperplane are faces.
- Face-defining vis are non-dominated but could be redundant.
- Facet-defining vis are non-dominated and non-redundant.
- An optimal representation only needs the facet-defining inequalities!

12 3D Geometric Illustration: Faces, Edges, and Vertices

For n = 3, we distinguish three types of k-faces (k = 0, 1, 2):
- 2-faces: faces of maximal dimension n - 1 (polyhedral facets)
- 1-faces: edges (can be facets for 2-dimensional polygons)
- 0-faces: vertices (can still be facets for 1-dimensional lines)

This is part of Polyhedral Combinatorics!

13 Finding Good Valid Inequalities: Preprocessing

Consider the following IP and try to strengthen its formulation:
max 2x_1 + 3x_2 + x_3
s.t. 8x_1 + 4x_2 - 3x_3 ≤ 21, 3x_1 - 2x_2 + 4x_3 ≥ 16,
     0 ≤ x_1 ≤ 5, 1 ≤ x_2 ≤ 4, 0 ≤ x_3 ≤ 3, x ∈ Z^3.

Idea: Use the box constraints to tighten lower or upper bounds!

Solve the first inequality for x_1 and use the bounds on x_2 and x_3:
8x_1 ≤ 21 - 4x_2 + 3x_3 ≤ 21 - 4(1) + 3(3) = 26
So x_1 ≤ 26/8 = 3.25 and thus x_1 ≤ 3 (because x_1 is integer).

Repeat for x_2 and x_3 (paying attention to the inequality signs):
4x_2 ≤ 21 - 8x_1 + 3x_3 ≤ 21 - 8(0) + 3(3) = 30
3x_3 ≥ 8x_1 + 4x_2 - 21 ≥ 8(0) + 4(1) - 21 = -17
So x_2 ≤ 30/4 = 7.5 and x_3 ≥ -17/3 ≈ -5.67: neither improves the bounds!

14 Finding Good Valid Inequalities: Preprocessing (Cont.)

After preprocessing the first inequality, our new formulation is
P_1 = {8x_1 + 4x_2 - 3x_3 ≤ 21, 3x_1 - 2x_2 + 4x_3 ≥ 16, 0 ≤ x_1 ≤ 3, 1 ≤ x_2 ≤ 4, 0 ≤ x_3 ≤ 3}.

Give up? No way! Let's try the same for the second constraint:
3x_1 ≥ 16 + 2x_2 - 4x_3 ≥ 16 + 2(1) - 4(3) = 6
2x_2 ≤ 3x_1 + 4x_3 - 16 ≤ 3(3) + 4(3) - 16 = 5
4x_3 ≥ 16 - 3x_1 + 2x_2 ≥ 16 - 3(3) + 2(1) = 9

So x_1 ≥ 6/3 = 2, x_2 ≤ 5/2 = 2.5 (so x_2 ≤ 2), and x_3 ≥ 9/4 = 2.25 (so x_3 = 3), and we have improved to
P_2 = {8x_1 + 4x_2 - 3x_3 ≤ 21, 3x_1 - 2x_2 + 4x_3 ≥ 16, 2 ≤ x_1 ≤ 3, 1 ≤ x_2 ≤ 2, 3 ≤ x_3 ≤ 3}.

Only four solutions to check: (2, 1, 3), (2, 2, 3), (3, 1, 3), (3, 2, 3).
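Both preprocessing rounds can be automated as integer bound propagation; a minimal sketch (the ≥ constraint is negated into ≤ form, and the loop simply repeats the pass until the bounds stabilize):

```python
import math

def tighten(cons, lb, ub):
    """One pass of bound propagation over constraints a^T x <= b: bound
    each term a_j x_j by b minus the minimum activity of the remaining
    terms, then round to integer bounds."""
    for a, b in cons:
        for j, aj in enumerate(a):
            if aj == 0:
                continue
            rest = sum(min(ai * lb[i], ai * ub[i])
                       for i, ai in enumerate(a) if i != j)
            if aj > 0:
                ub[j] = min(ub[j], math.floor((b - rest) / aj))
            else:
                lb[j] = max(lb[j], math.ceil((b - rest) / aj))

# 8x1 + 4x2 - 3x3 <= 21 and 3x1 - 2x2 + 4x3 >= 16 (negated into <= form)
cons = [([8, 4, -3], 21), ([-3, 2, -4], -16)]
lb, ub = [0, 1, 0], [5, 4, 3]
for _ in range(3):                # iterate to a fixed point
    tighten(cons, lb, ub)
print(lb, ub)                     # [2, 1, 3] [3, 2, 3]
```

One pass already reproduces the bounds 2 ≤ x_1 ≤ 3, 1 ≤ x_2 ≤ 2, x_3 = 3 derived by hand above.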

15 Finding More Valid Inequalities: Preprocessing (Cont.)

Preprocessing also helps for formulations of binary programs:
X = {x ∈ {0, 1}^4 : 2x_1 - 3x_2 + x_3 - x_4 ≤ 0, 2x_1 + 5x_2 + 4x_3 + x_4 ≤ 6, 3x_2 + 2x_3 + 4x_4 ≥ 5}

Idea: Fix one variable value and look for hidden relationships!

Suppose that x_1 = 1 and solve the first inequality for x_2:
3x_2 ≥ 2x_1 + x_3 - x_4 ≥ 2(1) + 0 - 1 = 1
So x_1 = 1 implies x_2 ≥ 1/3, i.e., x_2 = 1 (if-then!), and thus x_1 ≤ x_2.

Suppose that x_2 = 1 and solve the second inequality for x_3:
4x_3 ≤ 6 - 2x_1 - 5x_2 - x_4 ≤ 6 - 2(0) - 5(1) - 0 = 1
So x_2 = 1 implies x_3 ≤ 1/4, i.e., x_3 = 0, and thus x_2 + x_3 ≤ 1.

Similarly, the third inequality implies x_2 + x_3 + x_4 ≥ 2.
Together with x_2 + x_3 ≤ 1, then x_4 = 1 and x_2 + x_3 = 1.

Only three solutions to check: (1, 1, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1).
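Enumerating {0, 1}^4 confirms the preprocessing: every feasible point is among the three remaining candidates (the check also shows that the candidate (1, 1, 0, 1) itself violates the second constraint, so only two of the three are actually feasible):

```python
from itertools import product

feasible = [x for x in product((0, 1), repeat=4)
            if 2*x[0] - 3*x[1] + x[2] - x[3] <= 0
            and 2*x[0] + 5*x[1] + 4*x[2] + x[3] <= 6
            and 3*x[1] + 2*x[2] + 4*x[3] >= 5]
print(feasible)  # [(0, 0, 1, 1), (0, 1, 0, 1)]

# every feasible point is among the three candidates left by preprocessing
candidates = {(1, 1, 0, 1), (0, 1, 0, 1), (0, 0, 1, 1)}
assert set(feasible) <= candidates
```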

16 Finding All (!) Valid Inequalities: Integer Rounding

To keep things simple, consider a single knapsack constraint:
2x_1 + 5x_2 + 4x_3 + x_4 ≤ 7 where x ∈ Z^4_+ or x ∈ {0, 1}^4

Idea: We can multiply the inequality by arbitrary positive numbers!
Example: Multiply by 1/4, then (2/4)x_1 + (5/4)x_2 + (4/4)x_3 + (1/4)x_4 ≤ 7/4.

The constraint remains valid if we round the coefficients downward (since x ≥ 0):
⌊2/4⌋x_1 + ⌊5/4⌋x_2 + ⌊4/4⌋x_3 + ⌊1/4⌋x_4 = x_2 + x_3 ≤ 7/4

Now the LHS is integer, so we can also round down the RHS to ⌊7/4⌋ = 1:
x_2 + x_3 ≤ 1 where x ∈ Z^4_+ or x ∈ {0, 1}^4

Is the new constraint better than 2x_1 + 5x_2 + 4x_3 + x_4 ≤ 7?
No: (1, 1, 0, 1) is infeasible but satisfies the new constraint.
But: (0, 7/5, 0, 0) and (0, 0, 7/4, 0) were feasible and are now cut off, so the new constraint tightens the formulation!
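The rounding step is a one-line function; the check below also confirms by enumeration that the resulting cut is valid for every nonnegative integer point of the knapsack:

```python
from fractions import Fraction as F
from math import floor

def cg_round(a, b, lam):
    """Integer rounding of a single constraint a^T x <= b with x >= 0:
    multiply by lam > 0, then round coefficients and right-hand side down."""
    return [floor(lam * aj) for aj in a], floor(lam * b)

a, b = [2, 5, 4, 1], 7
cut_a, cut_b = cg_round(a, b, F(1, 4))
print(cut_a, cut_b)  # [0, 1, 1, 0] 1  -->  x2 + x3 <= 1

# the cut is valid for every nonnegative integer point of the knapsack
for x in ((x1, x2, x3, x4) for x1 in range(8) for x2 in range(8)
          for x3 in range(8) for x4 in range(8)):
    if sum(ai*xi for ai, xi in zip(a, x)) <= b:
        assert sum(ci*xi for ci, xi in zip(cut_a, x)) <= cut_b
print("valid")
```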

17 Gomory Cuts and the Chvátal-Gomory Procedure

Consider a formulation P = {x ∈ R^n_+ : Ax ≤ b} for X = P ∩ Z^n_+.
1. Let (a_i, b_i) be one of, or a linear combination of, the inequalities in P.
2. For λ ≥ 0, λ a_i^T x = Σ_{j=1}^n λ a_ij x_j ≤ λ b_i is the same inequality as (a_i, b_i).
3. Since x ≥ 0, Σ_{j=1}^n ⌊λ a_ij⌋ x_j ≤ λ b_i is a vi for P and X.
4. Since the LHS is integer, Σ_{j=1}^n ⌊λ a_ij⌋ x_j ≤ ⌊λ b_i⌋ is valid for X.
5. The new vi may be invalid for P and improve the formulation for X!

Result: This simple procedure is sufficient to generate all valid inequalities, in principle. The generated vis are called Gomory cuts.

Given an arbitrary first formulation P for a set X = P ∩ Z^n_+, every valid inequality for X can be generated by applying the Chvátal-Gomory procedure a finite (!) number of times.

Proofs: Gomory 1958 (proof of concept), Chvátal 1973 (for bounded sets), Schrijver 1980 (for unbounded sets)

18 Example: Gomory Cuts for Rolling Mill Formulation

Box, polyhedron, and convex hull:
B = {x ∈ R^2_+ : x_1 ≤ 6, x_2 ≤ 4}
P = B ∩ {x : 7x_1 + 10x_2 ≤ 56}
C = B ∩ {x : 2x_1 + 3x_2 ≤ 16, x_1 + x_2 ≤ 7}

[figure: box B, polyhedron P, and convex hull C]

Preprocessing does not help: x_1 ≤ 56/7 = 8 and x_2 ≤ 56/10 = 5.6 do not improve the box bounds.

Try Gomory with λ = 1/3: ⌊7/3⌋x_1 + ⌊10/3⌋x_2 = 2x_1 + 3x_2 ≤ ⌊56/3⌋ = 18.
Better, with λ = 3/10: ⌊21/10⌋x_1 + ⌊30/10⌋x_2 = 2x_1 + 3x_2 ≤ ⌊168/10⌋ = 16.

Exercise: Generate x_1 + x_2 ≤ 7. Need a linear combination! For example,
(1/10)(7x_1 + 10x_2 ≤ 56) + (3/10)(x_1 ≤ 6) gives (7/10 + 3/10)x_1 + x_2 ≤ 56/10 + 18/10 = 7.4, so x_1 + x_2 ≤ ⌊7.4⌋ = 7.
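These cuts can be reproduced with exact rational arithmetic; the multipliers below are the ones worked out above:

```python
from fractions import Fraction as F
from math import floor

def cg_cut(ineqs, mults):
    """Chvatal-Gomory cut from a nonnegative combination of inequalities
    a^T x <= b: round the combined coefficients and right-hand side down."""
    n = len(ineqs[0][0])
    a = [sum(u * coeffs[j] for u, (coeffs, rhs) in zip(mults, ineqs))
         for j in range(n)]
    b = sum(u * rhs for u, (coeffs, rhs) in zip(mults, ineqs))
    return [floor(c) for c in a], floor(b)

# rolling-mill formulation: 7x1 + 10x2 <= 56, x1 <= 6, x2 <= 4
P = [([7, 10], 56), ([1, 0], 6), ([0, 1], 4)]

print(cg_cut(P, [F(1, 3), 0, 0]))          # ([2, 3], 18)
print(cg_cut(P, [F(3, 10), 0, 0]))         # ([2, 3], 16)
print(cg_cut(P, [F(1, 10), F(3, 10), 0]))  # ([1, 1], 7)
```

The last call reproduces the exercise: combining the knapsack row with the bound x_1 ≤ 6 and rounding yields x_1 + x_2 ≤ 7.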

19 Binary Programs: Covers and Extended Formulation

Consider P = {x ∈ R^n_+ : Ax ≤ b} for binary X = P ∩ {0, 1}^n.
- Let a_i be a row of A and consider the inequality Σ_{j=1}^n a_ij x_j ≤ b_i.
- Find a cover C ⊆ {1, 2, ..., n} such that Σ_{j ∈ C} a_ij > b_i.
- Then Σ_{j ∈ C} x_j ≤ |C| - 1 is called a cover inequality for X.

Ex: 3x_1 + 2x_2 + 4x_3 ≤ 5 has covers {1, 2, 3}, {1, 3}, {2, 3}:
x_1 + x_2 + x_3 ≤ 2, x_1 + x_3 ≤ 1, x_2 + x_3 ≤ 1.

Extended formulation: rather than adding new constraints, add new variables and lift the problem into a higher-dimensional space!

Reformulation-Linearization Technique (RLT, Adams-Sherali 1990):
- Let x_k ∈ {0, 1} and multiply the inequalities by x_k and (1 - x_k):
  Σ_{j=1}^n a_ij x_j x_k ≤ b_i x_k and Σ_{j=1}^n a_ij x_j (1 - x_k) ≤ b_i (1 - x_k).
- Add x_jk = x_j x_k and max{0, x_j + x_k - 1} ≤ x_jk ≤ min{x_j, x_k}.

Result: Repeated RLT lift-and-project finds the convex hull!
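Covers are easy to enumerate by brute force; a sketch for the example constraint (with 0-based indices):

```python
from itertools import combinations

def covers(a, b):
    """All covers of the binary knapsack a^T x <= b: index sets whose
    coefficients sum to more than b."""
    n = len(a)
    return [set(C) for k in range(1, n + 1)
            for C in combinations(range(n), k)
            if sum(a[j] for j in C) > b]

# 3x1 + 2x2 + 4x3 <= 5
cs = covers([3, 2, 4], 5)
print(cs)  # [{0, 2}, {1, 2}, {0, 1, 2}]
# each cover C yields the inequality sum_{j in C} x_j <= |C| - 1
```

Note that {0, 1} (i.e., {1, 2} in the slide's 1-based indexing) is not a cover because 3 + 2 = 5 is not strictly greater than 5.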

20 IP Optimality Conditions and Relaxation Gaps

Question: Given the IP max{c^T x : Ax ≤ b, x ∈ Z^n_+} and a feasible (integer) solution x, how do we know whether x is optimal?
- If z_LP = max{c^T x : Ax ≤ b, x ∈ R^n_+} and c^T x = z_IP = z_LP (or c^T x = ⌊z_LP⌋ if c is integer), then x is an optimal solution.
- Otherwise, we don't: there is no easy-to-check optimality condition!
- In practice, we use some sort of optimality (relaxation) gap.

Definition: If (P) is max{f(x) : x ∈ X}, then (R) max{g(x) : x ∈ Y} is a relaxation for (P) if X ⊆ Y and f(x) ≤ g(x) for all x ∈ X.
- If (R) is infeasible/bounded, then (P) is infeasible/bounded.
- If (R) has an optimal solution x* ∈ X with f(x*) = g(x*), then x* is also optimal for (P) and there is no relaxation gap: z_R - z_P = 0.
- In general, the relaxation gap of (R) for (P) is z_R - z_P ≥ 0.

Note: An ideal relaxation for an IP is the LP max{c^T x : x ∈ conv(X)}.

21 Linear Programming Relaxation

Consider z_IP = max{c^T x : Ax ≤ b, x ∈ Z^n_+} and its LP relaxation
z_LP = max{c^T x : Ax ≤ b, x ∈ R^n_+}.
- The relaxation is clear: larger (relaxed) feasible set, same objective.
- When is it tight (no gap)? If A is TU and b is integer, or if we get lucky.
- Otherwise, try to improve the relaxation using valid inequalities.

Example: Aggregate the constraints into a single knapsack-type (surrogate) relaxation with multipliers λ ≥ 0:
z_LP(λ) = max{c^T x : Σ_{i=1}^m λ_i a_i^T x ≤ Σ_{i=1}^m λ_i b_i}.

Drawback of LP relaxations: finding z_LP requires optimization!
- Although these are just LPs, solving many LPs is not very practical.
- A way to get good upper bounds more quickly is desirable: duality! Dual feasible solutions give primal upper bounds.

22 Primal-Dual Bounds and Duality Gaps

Def: Problems (P) max{f(x) : x ∈ X} and (D) min{g(y) : y ∈ Y} are a (weak) primal-dual pair if f(x) ≤ g(y) for all x ∈ X, y ∈ Y.
- If one problem is unbounded, then the other is infeasible.
- If one problem is feasible, then the other is bounded:
  f(x) ≤ z_P ≤ z_D ≤ g(y) for all x ∈ X and y ∈ Y.
- The duality gap is z_D - z_P ≥ 0 (if there is no gap, the pair is strong).

Note: we get bounds from feasible solutions without any optimization!

Lots of examples: LP duality (strong), max flows/min cuts in integer networks (strong), max cardinality matchings/min node covers in bipartite graphs (strong, König's Theorem).

For the IP max{c^T x : Ax ≤ b, x ∈ Z^n} we can use the weak (LP) dual min{b^T y : A^T y ≥ c, y ∈ R^m_+}. Unfortunately, bounds from LP duals are often quite weak.
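Weak duality gives a bound without any optimization; a sketch using the branch-and-bound example IP max{17x_1 + 12x_2 : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x ∈ Z^2_+} from the later slides and one hand-picked dual feasible y:

```python
from fractions import Fraction as F

# IP: max 17x1 + 12x2 s.t. 10x1 + 7x2 <= 40, x1 + x2 <= 5, x in Z^2_+
c, A, b = [17, 12], [[10, 7], [1, 1]], [40, 5]

# any y >= 0 with A^T y >= c is dual feasible and gives the bound b^T y
y = [F(5, 3), F(1, 3)]
assert all(yi >= 0 for yi in y)
assert all(sum(A[i][j] * y[i] for i in range(2)) >= c[j] for j in range(2))

bound = sum(bi * yi for bi, yi in zip(b, y))
print(bound)  # 205/3 (about 68.33), an upper bound on z_IP = 68
```

This particular y happens to be dual optimal, so the bound matches the LP relaxation value; any other dual feasible y would give a (weaker) bound just as cheaply.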

23 Lagrangean Relaxation and Dual Problem

Consider max{f(x) : g(x) ≤ 0, h(x) = 0} and relax the constraints:
L(λ, μ) = max_x f(x) - λ^T g(x) - μ^T h(x)

Exercise: Show that this is a relaxation for λ ≥ 0 (and any μ).
- L(λ, μ) is the Lagrangean relaxation with L-multipliers (λ, μ).
- The Lagrangean dual problem is the min-max problem
  min_{λ ≥ 0, μ} L(λ, μ) = min_{λ ≥ 0, μ} max_x f(x) - λ^T g(x) - μ^T h(x).

For the LP/IP max{c^T x : Ax ≤ b, x ∈ X} (X is R^n_+ or Z^n_+):
z_L = min_{λ ≥ 0} L(λ) = min_{λ ≥ 0} max_{x ∈ X} c^T x + λ^T (b - Ax).
- Easy to get bounds: evaluate L(λ) for any λ ≥ 0 (the inner maximization over x ∈ X should be easy).
- Not easy to optimize: uses subgradient and NLP methods.
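For a small example the inner maximization can be done by enumeration; here the knapsack constraint of the branch-and-bound example IP from the later slides is relaxed, while X keeps the easy constraint x_1 + x_2 ≤ 5:

```python
from fractions import Fraction as F

# IP: max 17x1 + 12x2 s.t. 10x1 + 7x2 <= 40 (relaxed into the objective),
# with X = {x in Z^2_+ : x1 + x2 <= 5} kept as an explicit, easy set
X = [(x1, x2) for x1 in range(6) for x2 in range(6) if x1 + x2 <= 5]

def L(lam):
    """Lagrangean bound: max over x in X of c^T x + lam * (40 - a^T x)."""
    return max(17*x1 + 12*x2 + lam * (40 - 10*x1 - 7*x2) for x1, x2 in X)

print(L(F(17, 10)))  # 137/2 = 68.5, an upper bound on z_IP = 68
assert all(L(F(k, 10)) >= 68 for k in range(40))  # every lam >= 0 is a bound
```

Minimizing L(λ) over λ ≥ 0 (the Lagrangean dual) would normally use a subgradient method; here the grid check just confirms that every multiplier yields a valid upper bound.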

24 Generic (not Genetic!) Integer Programming Algorithm

Let X = {x ∈ Z^n : Ax ≤ b} and consider the IP max{c^T x : x ∈ X}.
0. Start with an initial formulation P = {x ∈ R^n : Ax ≤ b}.
1. Apply preprocessing to strengthen the formulation P.
2. Solve a relaxation for x* ∈ R^n and an upper bound z*.
3. If x* ∈ Z^n and c^T x* = z*, then stop: x* is optimal.
4. If x* ∈ Z^n but c^T x* < z*, then x* may be optimal: check dual bounds and their duality gaps (if small enough, stop).
5. If x* ∉ Z^n, then x* is not optimal. Try some of the following:
   - Apply a rounding heuristic to find x ∈ X and apply Step 4.
   - Find a cutting plane / vi for X that cuts off the infeasible x*.
   - Select a fractional x*_i ∉ Z and split the problem into two subproblems:
     P_1 = P ∩ {x ∈ R^n : x_i ≤ ⌊x*_i⌋},  P_2 = P ∩ {x ∈ R^n : x_i ≥ ⌈x*_i⌉}.
6. Update the formulation(s) P (P_1, P_2) and go back to Step 1.

Or use complete enumeration: try all x ∈ X and pick the best!

25 Gomory's Fractional Cutting-Plane Method

Separation problem: Given a formulation P = {x ∈ R^n_+ : Ax ≤ b} for X = P ∩ Z^n and x* ∈ P, either show that x* ∈ conv(X) or find a vi (c, d) such that c^T x ≤ d < c^T x* for all x ∈ conv(X).

Do relaxations ever have optimal solutions x* ∈ conv(X) but x* ∉ Z^n? Yes: if there are multiple optimal solutions along an at least 1-dimensional face (e.g., an edge) of conv(X).

Will we ever encounter such solutions? Yes, if we use an interior-point method; not when using the simplex method!

If x* ∉ Z^n_+ is found using the simplex method, then we know that
x_B = B^{-1} b - B^{-1} N x_N = b̄ - Σ_{j ∈ N} ā_j x_j.

Take any fractional x*_i = b̄_i ∉ Z and generate a valid Gomory cut:
x_i + Σ_{j ∈ N} ā_ij x_j = b̄_i  implies  x_i + Σ_{j ∈ N} ⌊ā_ij⌋ x_j ≤ ⌊b̄_i⌋.

The new inequality cuts off x* because x*_i = b̄_i > ⌊b̄_i⌋ and x*_N = 0.
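For the branch-and-bound example IP from the later slides, the simplex row for x_1 (recomputed here from the optimal basis {x_1, x_2}) is fractional, and the recipe above yields a cut; translated back to the original variables it reads 4x_1 + 3x_2 ≤ 16:

```python
from fractions import Fraction as F
from math import floor

# Simplex row for basic x1 in max{17x1 + 12x2 : 10x1 + 7x2 + w1 = 40,
# x1 + x2 + w2 = 5}:  x1 + (1/3)w1 - (7/3)w2 = 5/3  (fractional!)
row = {'w1': F(1, 3), 'w2': F(-7, 3)}
rhs = F(5, 3)

# Gomory cut: round the nonbasic coefficients and the rhs down
cut = {v: floor(a) for v, a in row.items()}
print(cut, floor(rhs))  # {'w1': 0, 'w2': -3} and 1:  x1 - 3*w2 <= 1

# substitute w2 = 5 - x1 - x2 to express the cut in the original variables:
# x1 - 3*(5 - x1 - x2) <= 1  <=>  4x1 + 3x2 <= 16
for x1 in range(5):
    for x2 in range(6):
        if 10*x1 + 7*x2 <= 40 and x1 + x2 <= 5:
            assert 4*x1 + 3*x2 <= 16       # valid for all integer points
assert 4*F(5, 3) + 3*F(10, 3) > 16         # cuts off the LP optimum (5/3, 10/3)
```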

26 Decomposition / Branch-and-Bound (Vanderbei 23.5)

Decomposition: Consider the problem z = max{f(x) : x ∈ X}.
- If X = ∪_i X_i and z_i = max{f(x) : x ∈ X_i}, then z = max_i z_i.
- If the X_i are singletons, then this is complete enumeration.
- Smart decomposition schemes use partial enumeration.

We will discuss branch-and-bound on the following example:
max 17x_1 + 12x_2
s.t. 10x_1 + 7x_2 ≤ 40
     x_1 + x_2 ≤ 5
     x_1, x_2 ∈ Z_+

Formulation: P_0 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5}
LP solution: (x_1, x_2) = (5/3, 10/3)
Upper bound: z_0 = 205/3 ≈ 68.33

27 Branch-and-Bound Example (First Branching)

Note that (x_1, x_2) = (5/3, 10/3) is fractional, so we can decompose:
P_1 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≤ 1}
P_2 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≥ 2}

First solve formulation P_1:
LP solution: (x_1, x_2) = (1, 4)
Objective value: z_1 = 65
Feasible: incumbent best!

Then solve formulation P_2:
LP solution: (x_1, x_2) = (2, 20/7)
Upper bound: z_2 = 478/7 ≈ 68.29
Can still beat the incumbent best!

28 Enumeration Tree: Status after First Branching

29 Branch-and-Bound Example (Second Branching)

For P_2, (x_1, x_2) = (2, 20/7) is fractional, so we decompose further:
P_3 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≥ 2, x_2 ≤ 2}
P_9 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≥ 2, x_2 ≥ 3}

First solve formulation P_3:
LP solution: (x_1, x_2) = (13/5, 2)
Upper bound: z_3 = 341/5 = 68.2
Can still beat the incumbent best!

Then decide between two options:
- Breadth-first search: solve P_9 next.
- Depth-first search: decompose P_3 into P_4 (x_1 ≤ 2) and P_5 (x_1 ≥ 3).

30 Enumeration Tree: Status after Second Branching

31 Branch-and-Bound Example (Third Branching)

Applying a depth-first search, we continue to decompose P_3:
P_4 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 = 2, x_2 ≤ 2}
P_5 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≥ 3, x_2 ≤ 2}

First solve formulation P_4:
LP solution: (x_1, x_2) = (2, 2)
Objective value: z_4 = 58
Feasible but not optimal (the incumbent 65 is better).

Continue with formulation P_5:
LP solution: (x_1, x_2) = (3, 10/7)
Upper bound: z_5 = 477/7 ≈ 68.14
Can still beat the incumbent best!

32 Enumeration Tree: Status after Third Branching

33 Branch-and-Bound Example (Fourth Branching)

For P_5, (x_1, x_2) = (3, 10/7) is fractional, so again we decompose:
P_6 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≥ 3, x_2 ≤ 1}
P_10 = {x ∈ R^2_+ : 10x_1 + 7x_2 ≤ 40, x_1 + x_2 ≤ 5, x_1 ≥ 3, x_2 = 2}

First solve formulation P_6:
LP solution: (x_1, x_2) = (33/10, 1)
Upper bound: z_6 = 681/10 = 68.1
Can still beat the incumbent best!

Continue with the depth-first search:
- P_7 (x_1 ≤ 3) yields x = (3, 1) with z_7 = 63 (feasible but not optimal).
- P_8 (x_1 ≥ 4) yields x = (4, 0) with z_8 = 68 (best / optimal solution!).

34 Full Enumeration Tree with Problem Decomposition
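The whole search above can be replayed in code; a minimal sketch that solves each LP relaxation exactly by enumerating vertices as pairwise constraint intersections (adequate only for this bounded 2-variable example):

```python
from fractions import Fraction as F
from itertools import combinations

def lp_max(cons, c):
    """Maximize c^T x over {x in R^2 : a^T x <= b for (a, b) in cons} by
    checking all pairwise constraint intersections; assumes the feasible
    region is bounded (true here thanks to x >= 0 and x1 + x2 <= 5)."""
    best = None
    for (a1, b1), (a2, b2) in combinations(cons, 2):
        d = a1[0]*a2[1] - a1[1]*a2[0]
        if d == 0:
            continue                              # parallel constraints
        x = (F(b1*a2[1] - b2*a1[1], d), F(a1[0]*b2 - a2[0]*b1, d))
        if all(a[0]*x[0] + a[1]*x[1] <= b for a, b in cons):
            z = c[0]*x[0] + c[1]*x[1]
            if best is None or z > best[0]:
                best = (z, x)
    return best                                   # None if infeasible

def branch_and_bound(cons, c):
    best_x, best_z = None, None
    stack = [cons]
    while stack:
        sub = stack.pop()
        sol = lp_max(sub, c)
        if sol is None:
            continue                              # infeasible: prune
        z, x = sol
        if best_z is not None and z <= best_z:
            continue                              # bound cannot beat incumbent
        frac = [i for i in (0, 1) if x[i].denominator != 1]
        if not frac:
            best_x, best_z = (int(x[0]), int(x[1])), z   # new incumbent
            continue
        i = frac[0]
        f = x[i].numerator // x[i].denominator    # floor of the fractional value
        e = [(1, 0), (0, 1)][i]
        stack.append(sub + [(e, f)])                       # branch x_i <= floor
        stack.append(sub + [((-e[0], -e[1]), -(f + 1))])   # branch x_i >= floor + 1
    return best_x, best_z

P0 = [((10, 7), 40), ((1, 1), 5), ((-1, 0), 0), ((0, -1), 0)]
bx, bz = branch_and_bound(P0, (17, 12))
print(bx, bz)  # (4, 0) 68
```

With a last-in-first-out stack this performs a depth-first search, matching the order of the slides, and returns the optimal solution (4, 0) with value 68.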

35 Depth-First Search versus Breadth-First Search

Several reasons for depth-first search when using branch-and-bound:
- Integer solutions tend to lie deep in the enumeration tree, and finding (good) solutions quickly can improve the method.
- If our current best solution has objective z and some subproblem's LP bound satisfies z̄ ≤ z, we can stop decomposing (prune) that branch.
- Also: if the algorithm crashes, at least you have something.
- Can use recursion to implement the method (fun exercise).
- Easy to restart the simplex method after adding a new constraint.

Example: The optimal dictionary for the initial LP relaxation over P_0 is
ζ = 205/3 - (5/3)w_1 - (1/3)w_2
x_1 = 5/3 - (1/3)w_1 + (7/3)w_2
x_2 = 10/3 + (1/3)w_1 - (10/3)w_2

For P_2 (x_1 ≥ 2), add the constraint w_3 = x_1 - 2 = -1/3 - (1/3)w_1 + (7/3)w_2.

36 Restarting Branch-and-Bound using Dual Simplex

After adding w_3 = x_1 - 2 = -1/3 - (1/3)w_1 + (7/3)w_2, the new dictionary is
ζ = 205/3 - (5/3)w_1 - (1/3)w_2
x_1 = 5/3 - (1/3)w_1 + (7/3)w_2
x_2 = 10/3 + (1/3)w_1 - (10/3)w_2
w_3 = -1/3 - (1/3)w_1 + (7/3)w_2

Because this dictionary is primal infeasible but dual feasible, we can use the dual simplex method and its pivot rules (check: w_3 leaves, w_2 enters):
ζ = 478/7 - (12/7)w_1 - (1/7)w_3
x_1 = 2 + w_3
x_2 = 20/7 - (1/7)w_1 - (10/7)w_3
w_2 = 1/7 + (1/7)w_1 + (3/7)w_3

The new dictionary is primal and dual feasible, so (x_1, x_2) = (2, 20/7) with ζ = 478/7 ≈ 68.29 is optimal for the LP relaxation over P_2. Quick!
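The pivot can be replayed with exact rational arithmetic; a sketch that solves the w_3 row for the entering variable w_2 and substitutes it into the other rows (the dictionary coefficients are recomputed from the basis {x_1, x_2}):

```python
from fractions import Fraction as F

# dictionary rows over the nonbasic pair (w1, w2): value = c0 + c1*w1 + c2*w2
rows = {
    'zeta': (F(205, 3), F(-5, 3), F(-1, 3)),
    'x1':   (F(5, 3),   F(-1, 3), F(7, 3)),
    'x2':   (F(10, 3),  F(1, 3),  F(-10, 3)),
    'w3':   (F(-1, 3),  F(-1, 3), F(7, 3)),   # branching row w3 = x1 - 2
}

# dual simplex pivot: w3 leaves (negative constant), w2 enters
c0, c1, c2 = rows['w3']
w2 = (-c0 / c2, -c1 / c2, 1 / c2)      # w2 = 1/7 + (1/7)w1 + (3/7)w3

new = {'w2': w2}
for name in ('zeta', 'x1', 'x2'):
    d0, d1, d2 = rows[name]            # substitute the w2 expression
    new[name] = (d0 + d2*w2[0], d1 + d2*w2[1], d2*w2[2])

print(new['x1'][0], new['x2'][0], new['zeta'][0])  # 2 20/7 478/7
```

Setting the nonbasic pair (w_1, w_3) to zero reads off the new basic solution (2, 20/7) with objective 478/7, exactly as in the dictionary above.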


1 Linear Programming. 1.1 Introduction. Problem description: motivate by min-cost flow. bit of history. everything is LP. NP and conp. P breakthrough. 1 Linear Programming 1.1 Introduction Problem description: motivate by min-cost flow bit of history everything is LP NP and conp. P breakthrough. general form: variables constraints: linear equalities

More information

Recovery of primal solutions from dual subgradient methods for mixed binary linear programming; a branch-and-bound approach

Recovery of primal solutions from dual subgradient methods for mixed binary linear programming; a branch-and-bound approach MASTER S THESIS Recovery of primal solutions from dual subgradient methods for mixed binary linear programming; a branch-and-bound approach PAULINE ALDENVIK MIRJAM SCHIERSCHER Department of Mathematical

More information

Applied Algorithm Design Lecture 5

Applied Algorithm Design Lecture 5 Applied Algorithm Design Lecture 5 Pietro Michiardi Eurecom Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 1 / 86 Approximation Algorithms Pietro Michiardi (Eurecom) Applied Algorithm Design

More information

Optimization Modeling for Mining Engineers

Optimization Modeling for Mining Engineers Optimization Modeling for Mining Engineers Alexandra M. Newman Division of Economics and Business Slide 1 Colorado School of Mines Seminar Outline Linear Programming Integer Linear Programming Slide 2

More information

Convex Programming Tools for Disjunctive Programs

Convex Programming Tools for Disjunctive Programs Convex Programming Tools for Disjunctive Programs João Soares, Departamento de Matemática, Universidade de Coimbra, Portugal Abstract A Disjunctive Program (DP) is a mathematical program whose feasible

More information

Similarity and Diagonalization. Similar Matrices

Similarity and Diagonalization. Similar Matrices MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that

More information

An Introduction to Linear Programming

An Introduction to Linear Programming An Introduction to Linear Programming Steven J. Miller March 31, 2007 Mathematics Department Brown University 151 Thayer Street Providence, RI 02912 Abstract We describe Linear Programming, an important

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

Systems of Linear Equations

Systems of Linear Equations Systems of Linear Equations Beifang Chen Systems of linear equations Linear systems A linear equation in variables x, x,, x n is an equation of the form a x + a x + + a n x n = b, where a, a,, a n and

More information

Permutation Betting Markets: Singleton Betting with Extra Information

Permutation Betting Markets: Singleton Betting with Extra Information Permutation Betting Markets: Singleton Betting with Extra Information Mohammad Ghodsi Sharif University of Technology ghodsi@sharif.edu Hamid Mahini Sharif University of Technology mahini@ce.sharif.edu

More information

Chapter 13: Binary and Mixed-Integer Programming

Chapter 13: Binary and Mixed-Integer Programming Chapter 3: Binary and Mixed-Integer Programming The general branch and bound approach described in the previous chapter can be customized for special situations. This chapter addresses two special situations:

More information

Integrating Benders decomposition within Constraint Programming

Integrating Benders decomposition within Constraint Programming Integrating Benders decomposition within Constraint Programming Hadrien Cambazard, Narendra Jussien email: {hcambaza,jussien}@emn.fr École des Mines de Nantes, LINA CNRS FRE 2729 4 rue Alfred Kastler BP

More information

Classification of Cartan matrices

Classification of Cartan matrices Chapter 7 Classification of Cartan matrices In this chapter we describe a classification of generalised Cartan matrices This classification can be compared as the rough classification of varieties in terms

More information

IEOR 4404 Homework #2 Intro OR: Deterministic Models February 14, 2011 Prof. Jay Sethuraman Page 1 of 5. Homework #2

IEOR 4404 Homework #2 Intro OR: Deterministic Models February 14, 2011 Prof. Jay Sethuraman Page 1 of 5. Homework #2 IEOR 4404 Homework # Intro OR: Deterministic Models February 14, 011 Prof. Jay Sethuraman Page 1 of 5 Homework #.1 (a) What is the optimal solution of this problem? Let us consider that x 1, x and x 3

More information

A Column-Generation and Branch-and-Cut Approach to the Bandwidth-Packing Problem

A Column-Generation and Branch-and-Cut Approach to the Bandwidth-Packing Problem [J. Res. Natl. Inst. Stand. Technol. 111, 161-185 (2006)] A Column-Generation and Branch-and-Cut Approach to the Bandwidth-Packing Problem Volume 111 Number 2 March-April 2006 Christine Villa and Karla

More information

Polytope Examples (PolyComp Fukuda) Matching Polytope 1

Polytope Examples (PolyComp Fukuda) Matching Polytope 1 Polytope Examples (PolyComp Fukuda) Matching Polytope 1 Matching Polytope Let G = (V,E) be a graph. A matching in G is a subset of edges M E such that every vertex meets at most one member of M. A matching

More information

Outline. NP-completeness. When is a problem easy? When is a problem hard? Today. Euler Circuits

Outline. NP-completeness. When is a problem easy? When is a problem hard? Today. Euler Circuits Outline NP-completeness Examples of Easy vs. Hard problems Euler circuit vs. Hamiltonian circuit Shortest Path vs. Longest Path 2-pairs sum vs. general Subset Sum Reducing one problem to another Clique

More information

In this paper we present a branch-and-cut algorithm for

In this paper we present a branch-and-cut algorithm for SOLVING A TRUCK DISPATCHING SCHEDULING PROBLEM USING BRANCH-AND-CUT ROBERT E. BIXBY Rice University, Houston, Texas EVA K. LEE Georgia Institute of Technology, Atlanta, Georgia (Received September 1994;

More information

Optimization in R n Introduction

Optimization in R n Introduction Optimization in R n Introduction Rudi Pendavingh Eindhoven Technical University Optimization in R n, lecture Rudi Pendavingh (TUE) Optimization in R n Introduction ORN / 4 Some optimization problems designing

More information

Scheduling of Mixed Batch-Continuous Production Lines

Scheduling of Mixed Batch-Continuous Production Lines Université Catholique de Louvain Faculté des Sciences Appliquées Scheduling of Mixed Batch-Continuous Production Lines Thèse présentée en vue de l obtention du grade de Docteur en Sciences Appliquées par

More information

Special Situations in the Simplex Algorithm

Special Situations in the Simplex Algorithm Special Situations in the Simplex Algorithm Degeneracy Consider the linear program: Maximize 2x 1 +x 2 Subject to: 4x 1 +3x 2 12 (1) 4x 1 +x 2 8 (2) 4x 1 +2x 2 8 (3) x 1, x 2 0. We will first apply the

More information

Linear Algebra Notes

Linear Algebra Notes Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note

More information

Scheduling Shop Scheduling. Tim Nieberg

Scheduling Shop Scheduling. Tim Nieberg Scheduling Shop Scheduling Tim Nieberg Shop models: General Introduction Remark: Consider non preemptive problems with regular objectives Notation Shop Problems: m machines, n jobs 1,..., n operations

More information

This exposition of linear programming

This exposition of linear programming Linear Programming and the Simplex Method David Gale This exposition of linear programming and the simplex method is intended as a companion piece to the article in this issue on the life and work of George

More information

What is Linear Programming?

What is Linear Programming? Chapter 1 What is Linear Programming? An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x to

More information

Algorithm Design and Analysis

Algorithm Design and Analysis Algorithm Design and Analysis LECTURE 27 Approximation Algorithms Load Balancing Weighted Vertex Cover Reminder: Fill out SRTEs online Don t forget to click submit Sofya Raskhodnikova 12/6/2011 S. Raskhodnikova;

More information

Max-Min Representation of Piecewise Linear Functions

Max-Min Representation of Piecewise Linear Functions Beiträge zur Algebra und Geometrie Contributions to Algebra and Geometry Volume 43 (2002), No. 1, 297-302. Max-Min Representation of Piecewise Linear Functions Sergei Ovchinnikov Mathematics Department,

More information

Linear Programming I

Linear Programming I Linear Programming I November 30, 2003 1 Introduction In the VCR/guns/nuclear bombs/napkins/star wars/professors/butter/mice problem, the benevolent dictator, Bigus Piguinus, of south Antarctica penguins

More information

Duality of linear conic problems

Duality of linear conic problems Duality of linear conic problems Alexander Shapiro and Arkadi Nemirovski Abstract It is well known that the optimal values of a linear programming problem and its dual are equal to each other if at least

More information

Linear Programming. April 12, 2005

Linear Programming. April 12, 2005 Linear Programming April 1, 005 Parts of this were adapted from Chapter 9 of i Introduction to Algorithms (Second Edition) /i by Cormen, Leiserson, Rivest and Stein. 1 What is linear programming? The first

More information

Unit 18 Determinants

Unit 18 Determinants Unit 18 Determinants Every square matrix has a number associated with it, called its determinant. In this section, we determine how to calculate this number, and also look at some of the properties of

More information

arxiv:1203.1525v1 [math.co] 7 Mar 2012

arxiv:1203.1525v1 [math.co] 7 Mar 2012 Constructing subset partition graphs with strong adjacency and end-point count properties Nicolai Hähnle haehnle@math.tu-berlin.de arxiv:1203.1525v1 [math.co] 7 Mar 2012 March 8, 2012 Abstract Kim defined

More information

11. APPROXIMATION ALGORITHMS

11. APPROXIMATION ALGORITHMS 11. APPROXIMATION ALGORITHMS load balancing center selection pricing method: vertex cover LP rounding: vertex cover generalized load balancing knapsack problem Lecture slides by Kevin Wayne Copyright 2005

More information

Can linear programs solve NP-hard problems?

Can linear programs solve NP-hard problems? Can linear programs solve NP-hard problems? p. 1/9 Can linear programs solve NP-hard problems? Ronald de Wolf Linear programs Can linear programs solve NP-hard problems? p. 2/9 Can linear programs solve

More information

Equilibrium computation: Part 1

Equilibrium computation: Part 1 Equilibrium computation: Part 1 Nicola Gatti 1 Troels Bjerre Sorensen 2 1 Politecnico di Milano, Italy 2 Duke University, USA Nicola Gatti and Troels Bjerre Sørensen ( Politecnico di Milano, Italy, Equilibrium

More information

DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH

DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH DETERMINANTS IN THE KRONECKER PRODUCT OF MATRICES: THE INCIDENCE MATRIX OF A COMPLETE GRAPH CHRISTOPHER RH HANUSA AND THOMAS ZASLAVSKY Abstract We investigate the least common multiple of all subdeterminants,

More information

Lecture 11: 0-1 Quadratic Program and Lower Bounds

Lecture 11: 0-1 Quadratic Program and Lower Bounds Lecture : - Quadratic Program and Lower Bounds (3 units) Outline Problem formulations Reformulation: Linearization & continuous relaxation Branch & Bound Method framework Simple bounds, LP bound and semidefinite

More information

Several Views of Support Vector Machines

Several Views of Support Vector Machines Several Views of Support Vector Machines Ryan M. Rifkin Honda Research Institute USA, Inc. Human Intention Understanding Group 2007 Tikhonov Regularization We are considering algorithms of the form min

More information

7 Gaussian Elimination and LU Factorization

7 Gaussian Elimination and LU Factorization 7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method

More information

A Constraint Programming based Column Generation Approach to Nurse Rostering Problems

A Constraint Programming based Column Generation Approach to Nurse Rostering Problems Abstract A Constraint Programming based Column Generation Approach to Nurse Rostering Problems Fang He and Rong Qu The Automated Scheduling, Optimisation and Planning (ASAP) Group School of Computer Science,

More information

LECTURE: INTRO TO LINEAR PROGRAMMING AND THE SIMPLEX METHOD, KEVIN ROSS MARCH 31, 2005

LECTURE: INTRO TO LINEAR PROGRAMMING AND THE SIMPLEX METHOD, KEVIN ROSS MARCH 31, 2005 LECTURE: INTRO TO LINEAR PROGRAMMING AND THE SIMPLEX METHOD, KEVIN ROSS MARCH 31, 2005 DAVID L. BERNICK dbernick@soe.ucsc.edu 1. Overview Typical Linear Programming problems Standard form and converting

More information

Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs

Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs CSE599s: Extremal Combinatorics November 21, 2011 Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs Lecturer: Anup Rao 1 An Arithmetic Circuit Lower Bound An arithmetic circuit is just like

More information

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions.

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions. Chapter 1 Vocabulary identity - A statement that equates two equivalent expressions. verbal model- A word equation that represents a real-life problem. algebraic expression - An expression with variables.

More information

Guessing Game: NP-Complete?

Guessing Game: NP-Complete? Guessing Game: NP-Complete? 1. LONGEST-PATH: Given a graph G = (V, E), does there exists a simple path of length at least k edges? YES 2. SHORTEST-PATH: Given a graph G = (V, E), does there exists a simple

More information

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year. This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra

More information

Complexity Theory. IE 661: Scheduling Theory Fall 2003 Satyaki Ghosh Dastidar

Complexity Theory. IE 661: Scheduling Theory Fall 2003 Satyaki Ghosh Dastidar Complexity Theory IE 661: Scheduling Theory Fall 2003 Satyaki Ghosh Dastidar Outline Goals Computation of Problems Concepts and Definitions Complexity Classes and Problems Polynomial Time Reductions Examples

More information

Two-Stage Stochastic Linear Programs

Two-Stage Stochastic Linear Programs Two-Stage Stochastic Linear Programs Operations Research Anthony Papavasiliou 1 / 27 Two-Stage Stochastic Linear Programs 1 Short Reviews Probability Spaces and Random Variables Convex Analysis 2 Deterministic

More information

A Branch and Bound Algorithm for Solving the Binary Bi-level Linear Programming Problem

A Branch and Bound Algorithm for Solving the Binary Bi-level Linear Programming Problem A Branch and Bound Algorithm for Solving the Binary Bi-level Linear Programming Problem John Karlof and Peter Hocking Mathematics and Statistics Department University of North Carolina Wilmington Wilmington,

More information

Collinear Points in Permutations

Collinear Points in Permutations Collinear Points in Permutations Joshua N. Cooper Courant Institute of Mathematics New York University, New York, NY József Solymosi Department of Mathematics University of British Columbia, Vancouver,

More information

Lecture 4: Partitioned Matrices and Determinants

Lecture 4: Partitioned Matrices and Determinants Lecture 4: Partitioned Matrices and Determinants 1 Elementary row operations Recall the elementary operations on the rows of a matrix, equivalent to premultiplying by an elementary matrix E: (1) multiplying

More information

Minimizing costs for transport buyers using integer programming and column generation. Eser Esirgen

Minimizing costs for transport buyers using integer programming and column generation. Eser Esirgen MASTER STHESIS Minimizing costs for transport buyers using integer programming and column generation Eser Esirgen DepartmentofMathematicalSciences CHALMERS UNIVERSITY OF TECHNOLOGY UNIVERSITY OF GOTHENBURG

More information

Solution of Linear Systems

Solution of Linear Systems Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start

More information

International Doctoral School Algorithmic Decision Theory: MCDA and MOO

International Doctoral School Algorithmic Decision Theory: MCDA and MOO International Doctoral School Algorithmic Decision Theory: MCDA and MOO Lecture 2: Multiobjective Linear Programming Department of Engineering Science, The University of Auckland, New Zealand Laboratoire

More information

Solving Linear Systems, Continued and The Inverse of a Matrix

Solving Linear Systems, Continued and The Inverse of a Matrix , Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

More information

Statistical machine learning, high dimension and big data

Statistical machine learning, high dimension and big data Statistical machine learning, high dimension and big data S. Gaïffas 1 14 mars 2014 1 CMAP - Ecole Polytechnique Agenda for today Divide and Conquer principle for collaborative filtering Graphical modelling,

More information

Operation Count; Numerical Linear Algebra

Operation Count; Numerical Linear Algebra 10 Operation Count; Numerical Linear Algebra 10.1 Introduction Many computations are limited simply by the sheer number of required additions, multiplications, or function evaluations. If floating-point

More information

Largest Fixed-Aspect, Axis-Aligned Rectangle

Largest Fixed-Aspect, Axis-Aligned Rectangle Largest Fixed-Aspect, Axis-Aligned Rectangle David Eberly Geometric Tools, LLC http://www.geometrictools.com/ Copyright c 1998-2016. All Rights Reserved. Created: February 21, 2004 Last Modified: February

More information

Near Optimal Solutions

Near Optimal Solutions Near Optimal Solutions Many important optimization problems are lacking efficient solutions. NP-Complete problems unlikely to have polynomial time solutions. Good heuristics important for such problems.

More information

Chapter 17. Orthogonal Matrices and Symmetries of Space

Chapter 17. Orthogonal Matrices and Symmetries of Space Chapter 17. Orthogonal Matrices and Symmetries of Space Take a random matrix, say 1 3 A = 4 5 6, 7 8 9 and compare the lengths of e 1 and Ae 1. The vector e 1 has length 1, while Ae 1 = (1, 4, 7) has length

More information

Determinants in the Kronecker product of matrices: The incidence matrix of a complete graph

Determinants in the Kronecker product of matrices: The incidence matrix of a complete graph FPSAC 2009 DMTCS proc (subm), by the authors, 1 10 Determinants in the Kronecker product of matrices: The incidence matrix of a complete graph Christopher R H Hanusa 1 and Thomas Zaslavsky 2 1 Department

More information

Solutions to Homework 6

Solutions to Homework 6 Solutions to Homework 6 Debasish Das EECS Department, Northwestern University ddas@northwestern.edu 1 Problem 5.24 We want to find light spanning trees with certain special properties. Given is one example

More information

Lecture 3: Finding integer solutions to systems of linear equations

Lecture 3: Finding integer solutions to systems of linear equations Lecture 3: Finding integer solutions to systems of linear equations Algorithmic Number Theory (Fall 2014) Rutgers University Swastik Kopparty Scribe: Abhishek Bhrushundi 1 Overview The goal of this lecture

More information

Multi-layer MPLS Network Design: the Impact of Statistical Multiplexing

Multi-layer MPLS Network Design: the Impact of Statistical Multiplexing Multi-layer MPLS Network Design: the Impact of Statistical Multiplexing Pietro Belotti, Antonio Capone, Giuliana Carello, Federico Malucelli Tepper School of Business, Carnegie Mellon University, Pittsburgh

More information