Duality for Mixed Integer Linear Programming


1 Duality for Mixed Integer Linear Programming

TED RALPHS and MENAL GÜZELSOY
ISE Department, COR@L Lab, Lehigh University
University of Newcastle, Newcastle, Australia, 13 May 2009

Thanks: Work supported in part by the National Science Foundation

Ralphs and Güzelsoy (Lehigh University) Duality for MILP 1 / 56

2 Outline

1 Duality Theory
  Introduction
  IP Duality
2 The Subadditive Dual
  Formulation
  Properties
3 Constructing Dual Functions
  The Value Function
  Gomory's Procedure
  Branch-and-Bound Method
  Branch-and-Cut

3 What is Duality?

It is difficult to give a general definition of mathematical duality, though mathematics is replete with various notions of it:
  Set Theory and Logic (De Morgan's Laws)
  Geometry (Pascal's Theorem & Brianchon's Theorem)
  Combinatorics (Graph Coloring)
The duality we are interested in is a sort of functional duality. We define a generic optimization problem to be a mapping f : X → R, where X is the set of possible inputs and f(x) is the result. Duality may then be defined as a method of transforming a given primal problem to an associated dual problem such that the dual problem yields a bound on the primal problem, and applying a related transformation to the dual produces the primal again. In many cases, we would also like to require that the dual bound be close to the primal result for a specific input of interest.

4 Duality in Mathematical Programming

In mathematical programming, the input is the problem data (e.g., the constraint matrix, right-hand side, and cost vector for a linear program). We view the primal and the dual as parametric problems, but some data is held constant.

Uses of the Dual in Mathematical Programming
  If the dual is easier to evaluate, we can use it to obtain a bound on the primal optimal value.
  We can also use the dual to perform sensitivity analysis on the parameterized primal input data.
  Finally, we can also use the dual to warm start a solution procedure based on evaluation of the dual.

5 Duality in Integer Programming

We will initially be interested in the mixed integer linear program (MILP) instance

  z_IP = min_{x in S} cx,    (P)

where c in R^n, S = {x in Z^r_+ x R^{n-r}_+ : Ax = b} with A in Q^{m x n}, b in R^m. We call this instance the base primal instance. To construct a dual, we need a parameterized version of this instance. For reasons that will become clear, the most relevant parameterization is of the right-hand side. The value function (or primal function) of the base primal instance (P) is

  z(d) = min_{x in S(d)} cx,

where for a given d in R^m, S(d) = {x in Z^r_+ x R^{n-r}_+ : Ax = d}. We let z(d) = +inf if d is not in Omega = {d in R^m : S(d) != empty}.

6 Example: Value Function

  z_IP = min (1/2)x_1 + 2x_3 + x_4
  s.t. x_1 - (3/2)x_2 + x_3 - x_4 = b
  and x_1, x_2 in Z_+, x_3, x_4 in R_+.

Figure: the value function z(d).
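As a quick sanity check, the value function of this small instance can be evaluated by brute force: for fixed integer values of x_1 and x_2, the cheapest continuous correction has a closed form. This is an illustrative sketch only; the enumeration bound M is an arbitrary truncation, adequate for small |d|:

```python
def z(d, M=20):
    """Brute-force value function of
       min (1/2)x1 + 2x3 + x4  s.t.  x1 - (3/2)x2 + x3 - x4 = d,
       x1, x2 in Z+, x3, x4 in R+.
    For fixed x1, x2 the continuous part must satisfy x3 - x4 = t with
    t = d - x1 + (3/2)x2; the cheapest choice costs 2t if t >= 0
    (use x3 only) and -t if t < 0 (use x4 only)."""
    best = float("inf")
    for x1 in range(M + 1):
        for x2 in range(M + 1):
            t = d - x1 + 1.5 * x2
            cost = 0.5 * x1 + (2 * t if t >= 0 else -t)
            best = min(best, cost)
    return best

# z(0) = 0 (take x = 0); z(1) = 1/2 (take x1 = 1)
```

This reproduces individual points of the plotted value function and is handy for checking the dual functions that appear on the later slides.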

7 Dual Functions

A dual function F : R^m → R is one that satisfies F(d) ≤ z(d) for all d in R^m. How to select such a function? We may choose one that is easy to construct/evaluate and/or for which F(b) ≈ z(b). This results in the base dual instance

  z_D = max {F(b) : F(d) ≤ z(d), d in R^m, F in Y^m},

where Y^m is a subset of {f | f : R^m → R}. We call F* strong for this instance if F* is a feasible dual function and F*(b) = z(b). This dual instance always has a solution F* that is strong if the value function is bounded and Y^m = {f | f : R^m → R}. Why?

8 The LP Relaxation Dual Function

It is easy to obtain a feasible dual function for any MILP. Consider the value function of the LP relaxation of the primal problem:

  F_LP(d) = max_{v in R^m} {vd : vA ≤ c}.

By linear programming duality theory, we have F_LP(d) ≤ z(d) for all d in R^m. Of course, F_LP is not necessarily strong.

9 Example: LP Dual Function

  F_LP(d) = max vd
  s.t.  v ≤ 1/2
        -(3/2)v ≤ 0
        v ≤ 2
        -v ≤ 1
        v in R,

which can be written explicitly as

  F_LP(d) = 0 for d ≤ 0, and (1/2)d for d > 0.

Figure: z(d) and F_LP(d).
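With a single constraint the LP dual has one variable v, so the feasible region {v : va ≤ c} is just an interval and F_LP(d) is attained at an endpoint. A small sketch under that single-row assumption, with the column data (a_j, c_j) of the example:

```python
def lp_dual_interval(cols):
    """Intersect the dual constraints v*a_j <= c_j into an interval [lo, hi]."""
    lo, hi = float("-inf"), float("inf")
    for a, c in cols:
        if a > 0:
            hi = min(hi, c / a)
        elif a < 0:
            lo = max(lo, c / a)
        # a == 0 gives the vacuous constraint 0 <= c
    return lo, hi

# columns (a_j, c_j) of the example MILP
cols = [(1, 0.5), (-1.5, 0.0), (1, 2.0), (-1, 1.0)]
lo, hi = lp_dual_interval(cols)          # the interval [0, 1/2]

def F_LP(d):
    """F_LP(d) = max{v*d : lo <= v <= hi}, attained at an interval endpoint."""
    return max(lo * d, hi * d)
```

Evaluating F_LP on a few points matches the closed form above: 0 for d ≤ 0 and d/2 for d > 0.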

10 The Subadditive Dual

By considering that

  F(d) ≤ z(d) for all d in R^m
  ⟺ F(d) ≤ cx for all x in S(d), for all d in R^m
  ⟺ F(Ax) ≤ cx for all x in Z^r_+ x R^{n-r}_+,

the generalized dual problem can be rewritten as

  z_D = max {F(b) : F(Ax) ≤ cx, x in Z^r_+ x R^{n-r}_+, F in Y^m}.

Can we further restrict Y^m and still guarantee a strong dual solution?
  The class of linear functions? NO!
  The class of convex functions? NO!
  The class of subadditive functions? YES!

11 The Subadditive Dual

Let a function F be defined over a domain V. Then F is subadditive if F(v_1) + F(v_2) ≥ F(v_1 + v_2) for all v_1, v_2, v_1 + v_2 in V. Note that the value function z is subadditive over Omega. Why?

If Y^m = Γ^m = {F : R^m → R : F is subadditive, F(0) = 0}, we can rewrite the dual problem above as the subadditive dual

  z_D = max F(b)
        F(a_j) ≤ c_j,   j = 1, ..., r,
        F̄(a_j) ≤ c_j,   j = r + 1, ..., n, and
        F in Γ^m,

where the function F̄ is defined by

  F̄(d) = lim sup_{δ → 0+} F(δd)/δ,  d in R^m.

Here, F̄ is the upper d-directional derivative of F at zero.
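Subadditivity is easy to test empirically. The sketch below checks the condition F(v_1) + F(v_2) ≥ F(v_1 + v_2) on a grid of pairs; the ceiling function passes, while the floor function fails (⌊1/2⌋ + ⌊1/2⌋ = 0 < 1 = ⌊1⌋):

```python
import math

def is_subadditive_on_grid(F, lo=-5.0, hi=5.0, step=0.25):
    """Check F(v1) + F(v2) >= F(v1 + v2) for all grid pairs whose sum
    also lies in [lo, hi]. A passing check is evidence, not a proof."""
    n = int(round((hi - lo) / step))
    pts = [lo + k * step for k in range(n + 1)]
    for v1 in pts:
        for v2 in pts:
            if lo <= v1 + v2 <= hi:
                if F(v1) + F(v2) < F(v1 + v2) - 1e-9:
                    return False
    return True

# ceil is subadditive; floor is not
assert is_subadditive_on_grid(math.ceil)
assert not is_subadditive_on_grid(math.floor)
```

The same check can be applied to candidate dual functions before verifying the dual constraints F(a_j) ≤ c_j.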

12 Example: Upper D-directional Derivative

The upper d-directional derivative can be interpreted as the slope of the value function in direction d at 0.

Figure: z(d) and its upper d-directional derivative at zero for the example.

13 Weak Duality

Weak Duality Theorem. Let x be a feasible solution to the primal problem and let F be a feasible solution to the subadditive dual. Then F(b) ≤ cx.

Proof.

Corollary. For the primal problem and its subadditive dual:
  1 If the primal problem (resp., the dual) is unbounded, then the dual problem (resp., the primal) is infeasible.
  2 If the primal problem (resp., the dual) is infeasible, then the dual problem (resp., the primal) is infeasible or unbounded.

14 Strong Duality

Strong Duality Theorem. If the primal problem (resp., the dual) has a finite optimum, then so does the subadditive dual problem (resp., the primal) and they are equal.

Outline of the Proof. Show that the value function z, or an extension of z, is a feasible dual function. Note that z satisfies the dual constraints.
  If Omega = R^m: z in Γ^m.
  If Omega ⊊ R^m: there is an extension z_e in Γ^m with z_e(d) = z(d) for d in Omega and z_e(d) < inf for d in R^m.

15 Example: Subadditive Dual

For our IP instance, the subadditive dual problem is

  max F(b)
      F(1) ≤ 1/2
      F(-3/2) ≤ 0
      F̄(1) ≤ 2
      F̄(-1) ≤ 1
      F in Γ^1,

and we have the following feasible dual functions:
  1 F_1(d) = d/2 is an optimal dual function for b in {0, 1, 2, ...}.
  2 F_2(d) = 0 is an optimal function for b in {..., -3, -3/2, 0}.
  3 F_3(d) = max{...} is an optimal function for b in [0, 1/4] ∪ [1, 5/4] ∪ [2, 9/4] ∪ ....
  4 F_4(d) = max{...} is an optimal function for b in ... ∪ [-7/2, -3] ∪ [-2, -3/2] ∪ [-1, 0].

16 Example: Feasible Dual Functions

Figure: z(d) and the feasible dual functions F(d).

Notice how different dual solutions are optimal for some right-hand sides and not for others. Only the value function is optimal for all right-hand sides.

17 Farkas' Lemma (Pure Integer)

For the primal problem, exactly one of the following holds:
  1 S != empty;
  2 there is an F in Γ^m with F(a_j) ≥ 0, j = 1, ..., n, and F(b) < 0.

Proof. Let c = 0 and apply the strong duality theorem to the subadditive dual.

18 Complementary Slackness (Pure Integer)

For a given right-hand side b, let x* and F* be feasible solutions to the primal and the subadditive dual problems, respectively. Then x* and F* are optimal solutions if and only if
  1 x*_j (c_j - F*(a_j)) = 0, j = 1, ..., n, and
  2 F*(b) = sum_{j=1}^n F*(a_j) x*_j.

Proof. For an optimal pair we have

  F*(b) = F*(Ax*) = sum_{j=1}^n F*(a_j) x*_j = cx*.

19 Example: Pure Integer Case

If we require integrality of all variables in our previous example, then the value function becomes the one shown below.

Figure: the value function z(d) of the pure integer restriction.

20 Constructing Dual Functions

Explicit construction
  The Value Function
  Generating Functions
Relaxations
  Lagrangian Relaxation
  Quadratic Lagrangian Relaxation
  Corrected Linear Dual Functions
Primal Solution Algorithms
  Cutting Plane Method
  Branch-and-Bound Method
  Branch-and-Cut Method

21 Properties of the Value Function

  It is subadditive over Omega.
  It is piecewise polyhedral.
  For an ILP, it can be obtained by a finite number of limited operations on elements of the RHS:
    (i) rational multiplication
    (ii) nonnegative combination
    (iii) rounding
    (iv) taking the minimum
  Operations (i)-(iii) generate the Chvátal functions; adding (iv) generates the Gomory functions.
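The four operations compose naturally, so Chvátal and Gomory functions can be illustrated directly as closures. The particular functions below are invented for illustration, not taken from the talk; exact rationals avoid floating-point rounding in the ceiling step:

```python
from fractions import Fraction
from math import ceil

# (i) rational multiplication, (ii) nonnegative combination,
# (iii) rounding up, (iv) taking the minimum
f1 = lambda d: Fraction(1, 2) * d                # (i)  a linear rational function
f2 = lambda d: ceil(f1(d))                       # (iii) a Chvátal function
f3 = lambda d: 2 * f2(d) + Fraction(1, 3) * d    # (ii)  still a Chvátal function
g  = lambda d: min(f2(d), f3(d))                 # (iv)  a Gomory function

# e.g. at d = 3: f2(3) = ceil(3/2) = 2, f3(3) = 2*2 + 1 = 5, so g(3) = 2
```

Functions built this way are automatically subadditive with g(0) = 0, which is what makes the class a natural home for dual solutions.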

22 The Value Function for MILPs

There is a one-to-one correspondence between ILP instances and Gomory functions. The Jeroslow Formula shows that the value function of a MILP can also be computed by taking the minimum of different values of a single Gomory function, with a correction term to account for the continuous variables. The value function of the earlier example takes this form:

  z(d) = min{ max{...}, max{...} + ... },

a minimum of two Gomory-function terms with continuous corrections.

23 Jeroslow Formula

Let the set E consist of the index sets of dual feasible bases of the linear program

  min { (1/M) c_C x_C : (1/M) A_C x_C = b, x ≥ 0 },

where M in Z_+ is such that for any E in E, M A_E^{-1} a_j in Z^m for all j in I.

Theorem (Jeroslow Formula). There is a g in G^m such that

  z(d) = min_{E in E} g(⌊d⌋_E) + v_E (d - ⌊d⌋_E)

for all d in R^m with S(d) != empty, where for E in E, ⌊d⌋_E = A_E ⌊A_E^{-1} d⌋ and v_E is the corresponding basic feasible solution.

24 The Single Constraint Case

Let us now consider an MILP with a single constraint for the purposes of illustration:

  min cx, x in S,    (P1)

with c in R^n, S = {x in Z^r_+ x R^{n-r}_+ : ax = b}, a in Q^n, b in R. The value function of (P1) is

  z(d) = min_{x in S(d)} cx,

where for a given d in R, S(d) = {x in Z^r_+ x R^{n-r}_+ : ax = d}.

Assumptions: Let I = {1, ..., r}, C = {r + 1, ..., n}, N = I ∪ C,
  N_+ = {i in N : a_i > 0} and N_- = {i in N : a_i < 0},
  z(0) = 0, so that z : R → R ∪ {+inf},
  r < n, that is, |C| ≥ 1, so that z : R → R.

25 Example

  min 3x_1 + (7/2)x_2 + 3x_3 + 6x_4 + 7x_5 + 5x_6
  s.t. 6x_1 + 5x_2 - 4x_3 + 2x_4 - 7x_5 + x_6 = b
  and x_1, x_2, x_3 in Z_+, x_4, x_5, x_6 in R_+.    (SP)

Figure: the value function z of (SP).
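The slopes η, ζ and their continuous counterparts η_C, ζ_C that appear on the next slide can be read off the column ratios c_j/a_j: the asymptotic slope for d → +inf is the cheapest ratio among columns with a_j > 0, and for d → -inf the largest ratio among columns with a_j < 0. A sketch, assuming the columns of (SP) are as reconstructed above:

```python
from fractions import Fraction as Fr

# columns (a_j, c_j, is_continuous) of (SP)
cols = [(6, Fr(3), False), (5, Fr(7, 2), False), (-4, Fr(3), False),
        (2, Fr(6), True), (-7, Fr(7), True), (1, Fr(5), True)]

def slopes(cols, continuous_only=False):
    """Slope as d -> +inf (eta) and d -> -inf (zeta), optionally
    restricted to the continuous columns (eta_C, zeta_C)."""
    sel = [(a, c) for a, c, cont in cols if cont or not continuous_only]
    eta = min(c / a for a, c in sel if a > 0)
    zeta = max(c / a for a, c in sel if a < 0)
    return eta, zeta

eta, zeta = slopes(cols)              # 1/2 and -3/4
eta_C, zeta_C = slopes(cols, True)    # 3 and -1
```

These four numbers match the values η = 1/2, ζ = -3/4, η_C = 3, ζ_C = -1 quoted on the following slide.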

26 Example (cont'd)

η = 1/2, ζ = -3/4, η_C = 3 and ζ_C = -1:

Figure: z, F_U, and F_L.

  {η = η_C} ⟺ {z(d) = F_U(d) = F_L(d) for all d in R_+}
  {ζ = ζ_C} ⟺ {z(d) = F_U(d) = F_L(d) for all d in R_-}

27 Observations

Consider d_U^+, d_U^-, d_L^+, d_L^-:

Figure: the points d_L^-, d_U^+ and their integer multiples on the d-axis.

The relation between F_U and the linear segments of z: {η_C, ζ_C}.

28 Redundant Variables

Let T ⊆ C be such that
  t_+ in T if and only if η_C < inf and η_C = c_{t_+} / a_{t_+}, and
  t_- in T if and only if ζ_C > -inf and ζ_C = c_{t_-} / a_{t_-},
and define

  ν(d) = min  c_I x_I + c_T x_T
         s.t. a_I x_I + a_T x_T = d,
              x_I in Z^I_+, x_T in R^T_+.

Then ν(d) = z(d) for all d in R. The variables in C\T are redundant: z can be represented with at most 2 continuous variables.

29 Example

  min x_1 - (3/4)x_2 + (3/4)x_3
  s.t. (5/4)x_1 - 2x_2 + (1/2)x_3 = b,
  x_1, x_2 in Z_+, x_3 in R_+.

η_C = 3/2, ζ_C = -inf. Each discontinuous point d_i satisfies d_i - ((5/4)y^i_1 - 2y^i_2) = 0 for some y^i in Z^2_+, and each linear segment has the slope η_C = 3/2.

30 Jeroslow Formula

Let M in Z_+ be such that for any t in T, M a_j / a_t in Z for all j in I. Then there is a Gomory function g such that

  z(d) = min_{t in T} { g(⌊d⌋_t) + (c_t / a_t)(d - ⌊d⌋_t) },  where ⌊d⌋_t = (a_t / M) ⌊Md / a_t⌋,  d in R.

Such a Gomory function can be obtained from the value function of a related PILP. For t in T, setting

  ω_t(d) = g(⌊d⌋_t) + (c_t / a_t)(d - ⌊d⌋_t),  d in R,

we can write z(d) = min_{t in T} ω_t(d), d in R.

31 Piecewise Linearity and Continuity

For t in T, ω_t is piecewise linear with finitely many linear segments on any closed interval, and each of those linear segments has a slope of η_C if t = t_+ or ζ_C if t = t_-.
  ω_{t_+} is continuous from the right, ω_{t_-} is continuous from the left.
  ω_{t_+} and ω_{t_-} are both lower semicontinuous.

Theorem. z is piecewise linear with finitely many linear segments on any closed interval, and each of those linear segments has a slope of η_C or ζ_C.
  (Meyer 1975) z is lower semicontinuous.
  η_C < inf if and only if z is continuous from the right.
  ζ_C > -inf if and only if z is continuous from the left.
  Both η_C and ζ_C are finite if and only if z is continuous everywhere.

32 Maximal Subadditive Extension

Let f : [0, h] → R, h > 0, be subadditive with f(0) = 0. The maximal subadditive extension of f from [0, h] to R_+ is

  f_S(d) = f(d)                              if d in [0, h],
           inf_{C in C(d)} sum_{ρ in C} f(ρ)  if d > h,

where C(d) is the set of all finite collections {ρ_1, ..., ρ_R} such that ρ_i in [0, h], i = 1, ..., R, and sum_{i=1}^R ρ_i = d. Each collection {ρ_1, ..., ρ_R} is called an h-partition of d. We can also extend a subadditive function f : [h, 0] → R, h < 0, to R_- similarly.

(Bruckner 1960) f_S is subadditive and if g is any other subadditive extension of f from [0, h] to R_+, then g ≤ f_S (maximality).
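On a discrete grid the extension can be computed by dynamic programming: restricting h-partitions to grid points and splitting off one piece ρ in (0, h] at a time gives the Bellman recursion g[d] = min_ρ { f(ρ) + g[d − ρ] }. This is a sketch; the grid step is an assumption of the discretization (it yields an upper approximation of the true infimum, exact when the minimizing partitions lie on the grid):

```python
import math

def maximal_extension_on_grid(f, h, d_max, step):
    """Approximate the maximal subadditive extension f_S of f from [0, h]
    on the grid {0, step, 2*step, ...} up to d_max.  For d <= h keep f(d);
    for d > h take the cheapest grid h-partition via a Bellman recursion."""
    n = int(round(d_max / step))
    hsteps = int(round(h / step))
    g = {}
    for k in range(n + 1):
        x = k * step
        if x <= h:
            g[k] = f(x)
        else:
            g[k] = min(f(j * step) + g[k - j] for j in range(1, hsteps + 1))
    return g

# extend f = ceil from [0, 2]: at d = 3.0 (index 6) the cheapest
# 2-partition, e.g. 2 + 1, gives 3 = ceil(3)
g = maximal_extension_on_grid(math.ceil, h=2.0, d_max=4.0, step=0.5)
```

For the ceiling seed the extension reproduces the ceiling itself, consistent with maximality.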

33 Extending the Value Function

Suppose we use z itself as the seed function. Observe that we can then change the inf to a min.

Lemma. Let the function f : [0, h] → R be defined by f(d) = z(d) for d in [0, h]. Then

  f_S(d) = z(d)                              if d in [0, h],
           min_{C in C(d)} sum_{ρ in C} z(ρ)  if d > h.

For any h > 0, z(d) ≤ f_S(d) for all d in R_+. Observe that for d in R_+, f_S(d) → z(d) as h → inf. Is there an h < inf such that f_S(d) = z(d) for all d in R_+?

34 Extending the Value Function (cont.)

Yes! For large enough h, maximal extension produces the value function itself.

Theorem. Let d_r = max{a_i : i in N} and d_l = min{a_i : i in N}, and let the functions f_r and f_l be the maximal subadditive extensions of z from the intervals [0, d_r] and [d_l, 0] to R_+ and R_-, respectively. Let

  F(d) = f_r(d), d in R_+,
         f_l(d), d in R_-;

then z = F.

Outline of the Proof.
  z ≤ F: by construction.
  z ≥ F: using MILP duality, F is dual feasible.

In other words, the value function is completely encoded by the breakpoints in [d_l, d_r] and the slopes.

35 General Procedure

We will construct the value function in two steps:
  Construct the value function on [d_l, d_r].
  Extend the value function to the entire real line from [d_l, d_r].

Assumptions: We assume η_C and ζ_C are finite, and we construct the value function over R_+ only. These assumptions are only needed to simplify the presentation.

Constructing the Value Function on [0, d_r]:
  If both η_C and ζ_C are finite, the value function is continuous and the slopes of the linear segments alternate between η_C and ζ_C.
  For d_1, d_2 in [0, d_r], if z(d_1) and z(d_2) are connected by a line with slope η_C or ζ_C, then z is linear over [d_1, d_2] with the respective slope (subadditivity).
  With these observations, we can formulate a finite algorithm to evaluate z in [d_l, d_r].

36 Example (cont'd)

d_r = 6:

Figure: evaluating z in [0, 6] by alternating segments of slope η_C and ζ_C.

41 Example (cont'd)

Extending the value function of (SP) from [0, 12] to [0, 18]:

Figure: z, F_U, F_L, and the extension Υ.

43 Example (cont'd)

Extending the value function of (SP) from [0, 18] to [0, 24]:

Figure: z, F_U, F_L, and the extension Υ.

45 General Case

For E in E, setting

  ω_E(d) = g(⌊d⌋_E) + v_E (d - ⌊d⌋_E),  d in R^m with S(d) != empty,

we can write

  z(d) = min_{E in E} ω_E(d),  d in R^m with S(d) != empty.

Many of our previous results can be extended to the general case in the obvious way. Similarly, we can use maximal subadditive extensions to construct the value function. However, an obvious combinatorial explosion occurs. Therefore, we consider using single-row relaxations to get a subadditive approximation.

46 Gomory's Procedure

There is a Chvátal function that is optimal to the subadditive dual of an ILP with RHS b in Omega_IP and z_IP(b) > -inf. The procedure: in iteration k, we solve the following LP

  z^{k-1}_IP(b) = min  cx
                  s.t. Ax = b,
                       sum_{j=1}^n f_i(a_j) x_j ≥ f_i(b),  i = 1, ..., k - 1,
                       x ≥ 0.

The kth cut, k > 1, is dependent on the RHS and written as

  f_k(d) = ⌈ sum_{i=1}^m λ^{k-1}_i d_i + sum_{i=1}^{k-1} λ^{k-1}_{m+i} f_i(d) ⌉,

where λ^{k-1} = (λ^{k-1}_1, ..., λ^{k-1}_{m+k-1}) ≥ 0.

47 Gomory's Procedure (cont.)

Assume that b in Omega_IP, z_IP(b) > -inf, and the algorithm terminates after k + 1 iterations. If u^k is the optimal dual solution to the LP in the final iteration, then

  F_k(d) = sum_{i=1}^m u^k_i d_i + sum_{i=1}^k u^k_{m+i} f_i(d)

is a Chvátal function with F_k(b) = z_IP(b), and furthermore, it is optimal to the subadditive ILP dual problem.

48 Gomory's Procedure (cont.)

Example: Consider an integer program with two constraints, and suppose that the following is the first constraint:

  x_1 + 2x_2 + x_3 - x_4 = 3.

At the first iteration, Gomory's procedure can be used to derive the valid inequality

  ⌈1/2⌉x_1 + ⌈1⌉x_2 + ⌈1/2⌉x_3 + ⌈-1/2⌉x_4 ≥ ⌈3/2⌉

by scaling the constraint with λ = 1/2. In other words, we obtain x_1 + x_2 + x_3 ≥ 2. Then the corresponding function is

  F_1(d) = ⌈d_1/2⌉ + 0·d_2 = ⌈d_1/2⌉.

By continuing Gomory's procedure, one can then build up an optimal dual function in this way.
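The rounding step generalizes to any multiplier λ. A sketch of a Chvátal-Gomory cut generator for an equality row over nonnegative integer variables, using exact rational arithmetic; the coefficients below are those of the example constraint as reconstructed above:

```python
from fractions import Fraction
from math import ceil

def cg_cut(a, b, lam):
    """Given an equality row a*x = b over nonnegative integer variables
    and a multiplier lam, round the scaled row up coefficient-wise to get
    the valid inequality  sum_j ceil(lam*a_j) x_j >= ceil(lam*b)."""
    coeffs = [ceil(lam * aj) for aj in a]
    rhs = ceil(lam * b)
    return coeffs, rhs

# the example row x1 + 2x2 + x3 - x4 = 3 with lam = 1/2
coeffs, rhs = cg_cut([1, 2, 1, -1], 3, Fraction(1, 2))
# yields coefficients [1, 1, 1, 0] and rhs 2, i.e. x1 + x2 + x3 >= 2
```

Using `Fraction` rather than floats matters here: the ceiling of an exact rational like -1/2 is unambiguous, while floating-point noise could flip a boundary coefficient.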

49 Branch-and-Bound Method

Assume that the primal problem is solved to optimality and let T be the set of leaf nodes of the search tree. Thus, we've solved the LP relaxation of the following problem at each node t in T:

  z^t(b) = min  cx
           s.t. x in S^t(b),

where S^t(b) = {x in Z^r_+ x R^{n-r}_+ : Ax = b, x ≥ l^t, x ≤ u^t} and u^t, l^t in Z^r are the branching bounds applied to the integer variables. Let (v^t, v̲^t, v̄^t) be
  the dual feasible solution used to prune node t, if t is pruned by bound;
  a dual feasible solution (that can be obtained from its parent) to node t, if t is pruned by infeasibility.

Then

  F_BB(d) = min_{t in T} { v^t d + v̲^t l^t - v̄^t u^t }

is an optimal solution to the generalized dual problem.
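Once the leaf duals are collected, F_BB is just a pointwise minimum of affine functions of d. A sketch with hypothetical leaf data: the numbers are invented for illustration, and in practice v^t, v̲^t, v̄^t, l^t, u^t come from the solver's leaf LPs, with the bound term v̲^t l^t − v̄^t u^t precomputed into a single constant per leaf:

```python
# each leaf t contributes the affine function  v_t * d + const_t,
# where const_t collapses the bound term (hypothetical numbers)
leaves = [
    (0.5, -0.25),   # leaf pruned by bound
    (1.0, -1.00),   # leaf pruned by infeasibility (dual from its parent)
]

def F_BB(d):
    """Dual function from branch-and-bound: the minimum of the leaves'
    affine dual bounds, valid for every right-hand side d."""
    return min(v * d + const for v, const in leaves)

# e.g. F_BB(4) = min(0.5*4 - 0.25, 1.0*4 - 1.0) = 1.75
```

Note that F_BB is strong only at the right-hand side b for which the tree was built; for other d it is merely a valid lower bound, as the slide on branch-and-cut discusses next.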

50 Branch-and-Cut Method

It is thus easy to get a strong dual function from branch-and-bound. Note, however, that it's not subadditive in general. To obtain a subadditive function, we can include the variable bounds explicitly as constraints, but then the function may not be strong. For branch-and-cut, we have to take care of the cuts:
  Case 1: Do we know the subadditive representation of each cut?
  Case 2: Do we know the RHS dependency of each cut?
  Case 3: Otherwise, we can use some proximity results or the variable bounds.

51 Case 1

If we know the subadditive representation of each cut: at a node t, we solve the LP relaxation of the following problem

  z^t(b) = min  cx
           s.t. Ax = b,
                x ≥ l^t, x ≤ g^t,
                H^t x ≥ h^t,
                x in Z^r_+ x R^{n-r}_+,

where g^t, l^t in R^r are the branching bounds applied to the integer variables and H^t x ≥ h^t is the set of added cuts in the form

  sum_{j in I} F^t_k(a^k_j) x_j + sum_{j in N\I} F̄^t_k(a^k_j) x_j ≥ F^t_k(σ^k(b)),  k = 1, ..., ν(t),

with
  ν(t): the number of cuts generated so far,
  a^k_j, j = 1, ..., n: the columns of the problem that the kth cut is constructed from,
  σ^k(b): the mapping of b to the RHS of the corresponding problem.

52 Case 1 (cont.)

Let T be the set of leaf nodes and let u^t, u̲^t, ū^t, and w^t be the dual feasible solution used to prune t in T. Then

  F(d) = min_{t in T} { u^t d + u̲^t l^t - ū^t g^t + sum_{k=1}^{ν(t)} w^t_k F^t_k(σ^k(d)) }

is an optimal dual function, that is, z(b) = F(b). Again, we obtain a subadditive function if the variables are bounded. However, we may not know the subadditive representation of each cut.

53 Case 2

If we know the RHS dependencies of each cut:
  We know them for Gomory fractional cuts.
  Knapsack cuts?
  Mixed-integer Gomory cuts?
Then we do the same analysis as before.

54 Case 3

In the absence of subadditive representations or RHS dependencies: for each node t in T, let ĥ^t be such that

  ĥ^t_k = sum_{j=1}^n h^t_{kj} x̂_j  with  x̂_j = l^t_j if h^t_{kj} ≥ 0, g^t_j otherwise,  k = 1, ..., ν(t),

where h^t_{kj} is the kth entry of column h^t_j. Furthermore, define

  h̄^t = sum_{j=1}^n h^t_j x̄_j  with  x̄_j = l^t_j if w^t h^t_j ≥ 0, g^t_j otherwise.

Then the function

  F(d) = min_{t in T} { u^t d + u̲^t l^t - ū^t g^t + max{ w^t h̄^t, w^t ĥ^t } }

is a feasible dual solution and hence F(b) yields a lower bound. This approach is the easiest and can be used for bounded MILPs (e.g., binary programs); however, it is unlikely to yield an effective dual feasible solution for general MILPs.

55 Conclusions

It is possible to generalize the duality concepts that are familiar to us from linear programming. However, it is extremely difficult to make any of it practical. There are some isolated cases where this theory has been applied in practice, but they are few and far between. We have a number of projects aimed in this direction, so stay tuned...


More information

Chapter 13: Binary and Mixed-Integer Programming

Chapter 13: Binary and Mixed-Integer Programming Chapter 3: Binary and Mixed-Integer Programming The general branch and bound approach described in the previous chapter can be customized for special situations. This chapter addresses two special situations:

More information

Nonlinear Optimization: Algorithms 3: Interior-point methods

Nonlinear Optimization: Algorithms 3: Interior-point methods Nonlinear Optimization: Algorithms 3: Interior-point methods INSEAD, Spring 2006 Jean-Philippe Vert Ecole des Mines de Paris Jean-Philippe.Vert@mines.org Nonlinear optimization c 2006 Jean-Philippe Vert,

More information

Linear Programming Notes VII Sensitivity Analysis

Linear Programming Notes VII Sensitivity Analysis Linear Programming Notes VII Sensitivity Analysis 1 Introduction When you use a mathematical model to describe reality you must make approximations. The world is more complicated than the kinds of optimization

More information

Nonlinear Programming Methods.S2 Quadratic Programming

Nonlinear Programming Methods.S2 Quadratic Programming Nonlinear Programming Methods.S2 Quadratic Programming Operations Research Models and Methods Paul A. Jensen and Jonathan F. Bard A linearly constrained optimization problem with a quadratic objective

More information

International Doctoral School Algorithmic Decision Theory: MCDA and MOO

International Doctoral School Algorithmic Decision Theory: MCDA and MOO International Doctoral School Algorithmic Decision Theory: MCDA and MOO Lecture 2: Multiobjective Linear Programming Department of Engineering Science, The University of Auckland, New Zealand Laboratoire

More information

Linear Programming. Widget Factory Example. Linear Programming: Standard Form. Widget Factory Example: Continued.

Linear Programming. Widget Factory Example. Linear Programming: Standard Form. Widget Factory Example: Continued. Linear Programming Widget Factory Example Learning Goals. Introduce Linear Programming Problems. Widget Example, Graphical Solution. Basic Theory:, Vertices, Existence of Solutions. Equivalent formulations.

More information

Linear Programming. Solving LP Models Using MS Excel, 18

Linear Programming. Solving LP Models Using MS Excel, 18 SUPPLEMENT TO CHAPTER SIX Linear Programming SUPPLEMENT OUTLINE Introduction, 2 Linear Programming Models, 2 Model Formulation, 4 Graphical Linear Programming, 5 Outline of Graphical Procedure, 5 Plotting

More information

Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1

Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1 Further Study on Strong Lagrangian Duality Property for Invex Programs via Penalty Functions 1 J. Zhang Institute of Applied Mathematics, Chongqing University of Posts and Telecommunications, Chongqing

More information

Optimization in R n Introduction

Optimization in R n Introduction Optimization in R n Introduction Rudi Pendavingh Eindhoven Technical University Optimization in R n, lecture Rudi Pendavingh (TUE) Optimization in R n Introduction ORN / 4 Some optimization problems designing

More information

26 Linear Programming

26 Linear Programming The greatest flood has the soonest ebb; the sorest tempest the most sudden calm; the hottest love the coldest end; and from the deepest desire oftentimes ensues the deadliest hate. Th extremes of glory

More information

Linear Programming: Chapter 11 Game Theory

Linear Programming: Chapter 11 Game Theory Linear Programming: Chapter 11 Game Theory Robert J. Vanderbei October 17, 2007 Operations Research and Financial Engineering Princeton University Princeton, NJ 08544 http://www.princeton.edu/ rvdb Rock-Paper-Scissors

More information

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm.

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. #-approximation algorithm. Approximation Algorithms 11 Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of three

More information

Approximation Algorithms

Approximation Algorithms Approximation Algorithms or: How I Learned to Stop Worrying and Deal with NP-Completeness Ong Jit Sheng, Jonathan (A0073924B) March, 2012 Overview Key Results (I) General techniques: Greedy algorithms

More information

An Introduction to Linear Programming

An Introduction to Linear Programming An Introduction to Linear Programming Steven J. Miller March 31, 2007 Mathematics Department Brown University 151 Thayer Street Providence, RI 02912 Abstract We describe Linear Programming, an important

More information

Chapter 11 Monte Carlo Simulation

Chapter 11 Monte Carlo Simulation Chapter 11 Monte Carlo Simulation 11.1 Introduction The basic idea of simulation is to build an experimental device, or simulator, that will act like (simulate) the system of interest in certain important

More information

5.1 Bipartite Matching

5.1 Bipartite Matching CS787: Advanced Algorithms Lecture 5: Applications of Network Flow In the last lecture, we looked at the problem of finding the maximum flow in a graph, and how it can be efficiently solved using the Ford-Fulkerson

More information

Lecture 1: Course overview, circuits, and formulas

Lecture 1: Course overview, circuits, and formulas Lecture 1: Course overview, circuits, and formulas Topics in Complexity Theory and Pseudorandomness (Spring 2013) Rutgers University Swastik Kopparty Scribes: John Kim, Ben Lund 1 Course Information Swastik

More information

A Branch and Bound Algorithm for Solving the Binary Bi-level Linear Programming Problem

A Branch and Bound Algorithm for Solving the Binary Bi-level Linear Programming Problem A Branch and Bound Algorithm for Solving the Binary Bi-level Linear Programming Problem John Karlof and Peter Hocking Mathematics and Statistics Department University of North Carolina Wilmington Wilmington,

More information

Noncommercial Software for Mixed-Integer Linear Programming

Noncommercial Software for Mixed-Integer Linear Programming Noncommercial Software for Mixed-Integer Linear Programming J. T. Linderoth T. K. Ralphs December, 2004. Revised: January, 2005. Abstract We present an overview of noncommercial software tools for the

More information

Zeros of Polynomial Functions

Zeros of Polynomial Functions Zeros of Polynomial Functions The Rational Zero Theorem If f (x) = a n x n + a n-1 x n-1 + + a 1 x + a 0 has integer coefficients and p/q (where p/q is reduced) is a rational zero, then p is a factor of

More information

PYTHAGOREAN TRIPLES KEITH CONRAD

PYTHAGOREAN TRIPLES KEITH CONRAD PYTHAGOREAN TRIPLES KEITH CONRAD 1. Introduction A Pythagorean triple is a triple of positive integers (a, b, c) where a + b = c. Examples include (3, 4, 5), (5, 1, 13), and (8, 15, 17). Below is an ancient

More information

Error Bound for Classes of Polynomial Systems and its Applications: A Variational Analysis Approach

Error Bound for Classes of Polynomial Systems and its Applications: A Variational Analysis Approach Outline Error Bound for Classes of Polynomial Systems and its Applications: A Variational Analysis Approach The University of New South Wales SPOM 2013 Joint work with V. Jeyakumar, B.S. Mordukhovich and

More information

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. !-approximation algorithm.

! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. !-approximation algorithm. Approximation Algorithms Chapter Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of

More information

Max-Min Representation of Piecewise Linear Functions

Max-Min Representation of Piecewise Linear Functions Beiträge zur Algebra und Geometrie Contributions to Algebra and Geometry Volume 43 (2002), No. 1, 297-302. Max-Min Representation of Piecewise Linear Functions Sergei Ovchinnikov Mathematics Department,

More information

Convex Programming Tools for Disjunctive Programs

Convex Programming Tools for Disjunctive Programs Convex Programming Tools for Disjunctive Programs João Soares, Departamento de Matemática, Universidade de Coimbra, Portugal Abstract A Disjunctive Program (DP) is a mathematical program whose feasible

More information

Chapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling

Chapter 11. 11.1 Load Balancing. Approximation Algorithms. Load Balancing. Load Balancing on 2 Machines. Load Balancing: Greedy Scheduling Approximation Algorithms Chapter Approximation Algorithms Q. Suppose I need to solve an NP-hard problem. What should I do? A. Theory says you're unlikely to find a poly-time algorithm. Must sacrifice one

More information

Solving Quadratic Equations

Solving Quadratic Equations 9.3 Solving Quadratic Equations by Using the Quadratic Formula 9.3 OBJECTIVES 1. Solve a quadratic equation by using the quadratic formula 2. Determine the nature of the solutions of a quadratic equation

More information

Sensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS

Sensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS Sensitivity Analysis 3 We have already been introduced to sensitivity analysis in Chapter via the geometry of a simple example. We saw that the values of the decision variables and those of the slack and

More information

Module1. x 1000. y 800.

Module1. x 1000. y 800. Module1 1 Welcome to the first module of the course. It is indeed an exciting event to share with you the subject that has lot to offer both from theoretical side and practical aspects. To begin with,

More information

Linear Programming: Theory and Applications

Linear Programming: Theory and Applications Linear Programming: Theory and Applications Catherine Lewis May 11, 2008 1 Contents 1 Introduction to Linear Programming 3 1.1 What is a linear program?...................... 3 1.2 Assumptions.............................

More information

56:171 Operations Research Midterm Exam Solutions Fall 2001

56:171 Operations Research Midterm Exam Solutions Fall 2001 56:171 Operations Research Midterm Exam Solutions Fall 2001 True/False: Indicate by "+" or "o" whether each statement is "true" or "false", respectively: o_ 1. If a primal LP constraint is slack at the

More information

10. Proximal point method

10. Proximal point method L. Vandenberghe EE236C Spring 2013-14) 10. Proximal point method proximal point method augmented Lagrangian method Moreau-Yosida smoothing 10-1 Proximal point method a conceptual algorithm for minimizing

More information

What is Linear Programming?

What is Linear Programming? Chapter 1 What is Linear Programming? An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x to

More information

Applied Algorithm Design Lecture 5

Applied Algorithm Design Lecture 5 Applied Algorithm Design Lecture 5 Pietro Michiardi Eurecom Pietro Michiardi (Eurecom) Applied Algorithm Design Lecture 5 1 / 86 Approximation Algorithms Pietro Michiardi (Eurecom) Applied Algorithm Design

More information

Optimization Modeling for Mining Engineers

Optimization Modeling for Mining Engineers Optimization Modeling for Mining Engineers Alexandra M. Newman Division of Economics and Business Slide 1 Colorado School of Mines Seminar Outline Linear Programming Integer Linear Programming Slide 2

More information

Full and Complete Binary Trees

Full and Complete Binary Trees Full and Complete Binary Trees Binary Tree Theorems 1 Here are two important types of binary trees. Note that the definitions, while similar, are logically independent. Definition: a binary tree T is full

More information

Undergraduate Notes in Mathematics. Arkansas Tech University Department of Mathematics

Undergraduate Notes in Mathematics. Arkansas Tech University Department of Mathematics Undergraduate Notes in Mathematics Arkansas Tech University Department of Mathematics An Introductory Single Variable Real Analysis: A Learning Approach through Problem Solving Marcel B. Finan c All Rights

More information

Linear Programming. April 12, 2005

Linear Programming. April 12, 2005 Linear Programming April 1, 005 Parts of this were adapted from Chapter 9 of i Introduction to Algorithms (Second Edition) /i by Cormen, Leiserson, Rivest and Stein. 1 What is linear programming? The first

More information

THE SCHEDULING OF MAINTENANCE SERVICE

THE SCHEDULING OF MAINTENANCE SERVICE THE SCHEDULING OF MAINTENANCE SERVICE Shoshana Anily Celia A. Glass Refael Hassin Abstract We study a discrete problem of scheduling activities of several types under the constraint that at most a single

More information

Integer Programming Formulation

Integer Programming Formulation Integer Programming Formulation 1 Integer Programming Introduction When we introduced linear programs in Chapter 1, we mentioned divisibility as one of the LP assumptions. Divisibility allowed us to consider

More information

Solutions Of Some Non-Linear Programming Problems BIJAN KUMAR PATEL. Master of Science in Mathematics. Prof. ANIL KUMAR

Solutions Of Some Non-Linear Programming Problems BIJAN KUMAR PATEL. Master of Science in Mathematics. Prof. ANIL KUMAR Solutions Of Some Non-Linear Programming Problems A PROJECT REPORT submitted by BIJAN KUMAR PATEL for the partial fulfilment for the award of the degree of Master of Science in Mathematics under the supervision

More information

Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

More information

Minimizing costs for transport buyers using integer programming and column generation. Eser Esirgen

Minimizing costs for transport buyers using integer programming and column generation. Eser Esirgen MASTER STHESIS Minimizing costs for transport buyers using integer programming and column generation Eser Esirgen DepartmentofMathematicalSciences CHALMERS UNIVERSITY OF TECHNOLOGY UNIVERSITY OF GOTHENBURG

More information

Mathematics Course 111: Algebra I Part IV: Vector Spaces

Mathematics Course 111: Algebra I Part IV: Vector Spaces Mathematics Course 111: Algebra I Part IV: Vector Spaces D. R. Wilkins Academic Year 1996-7 9 Vector Spaces A vector space over some field K is an algebraic structure consisting of a set V on which are

More information

A Note on the Bertsimas & Sim Algorithm for Robust Combinatorial Optimization Problems

A Note on the Bertsimas & Sim Algorithm for Robust Combinatorial Optimization Problems myjournal manuscript No. (will be inserted by the editor) A Note on the Bertsimas & Sim Algorithm for Robust Combinatorial Optimization Problems Eduardo Álvarez-Miranda Ivana Ljubić Paolo Toth Received:

More information

Adaptive Linear Programming Decoding

Adaptive Linear Programming Decoding Adaptive Linear Programming Decoding Mohammad H. Taghavi and Paul H. Siegel ECE Department, University of California, San Diego Email: (mtaghavi, psiegel)@ucsd.edu ISIT 2006, Seattle, USA, July 9 14, 2006

More information

Nan Kong, Andrew J. Schaefer. Department of Industrial Engineering, Univeristy of Pittsburgh, PA 15261, USA

Nan Kong, Andrew J. Schaefer. Department of Industrial Engineering, Univeristy of Pittsburgh, PA 15261, USA A Factor 1 2 Approximation Algorithm for Two-Stage Stochastic Matching Problems Nan Kong, Andrew J. Schaefer Department of Industrial Engineering, Univeristy of Pittsburgh, PA 15261, USA Abstract We introduce

More information

Scheduling a sequence of tasks with general completion costs

Scheduling a sequence of tasks with general completion costs Scheduling a sequence of tasks with general completion costs Francis Sourd CNRS-LIP6 4, place Jussieu 75252 Paris Cedex 05, France Francis.Sourd@lip6.fr Abstract Scheduling a sequence of tasks in the acceptation

More information

Solving Linear Programs

Solving Linear Programs Solving Linear Programs 2 In this chapter, we present a systematic procedure for solving linear programs. This procedure, called the simplex method, proceeds by moving from one feasible solution to another,

More information

Largest Fixed-Aspect, Axis-Aligned Rectangle

Largest Fixed-Aspect, Axis-Aligned Rectangle Largest Fixed-Aspect, Axis-Aligned Rectangle David Eberly Geometric Tools, LLC http://www.geometrictools.com/ Copyright c 1998-2016. All Rights Reserved. Created: February 21, 2004 Last Modified: February

More information

On Minimal Valid Inequalities for Mixed Integer Conic Programs

On Minimal Valid Inequalities for Mixed Integer Conic Programs On Minimal Valid Inequalities for Mixed Integer Conic Programs Fatma Kılınç Karzan June 27, 2013 Abstract We study mixed integer conic sets involving a general regular (closed, convex, full dimensional,

More information

Dynamic TCP Acknowledgement: Penalizing Long Delays

Dynamic TCP Acknowledgement: Penalizing Long Delays Dynamic TCP Acknowledgement: Penalizing Long Delays Karousatou Christina Network Algorithms June 8, 2010 Karousatou Christina (Network Algorithms) Dynamic TCP Acknowledgement June 8, 2010 1 / 63 Layout

More information

A Lagrangian-DNN Relaxation: a Fast Method for Computing Tight Lower Bounds for a Class of Quadratic Optimization Problems

A Lagrangian-DNN Relaxation: a Fast Method for Computing Tight Lower Bounds for a Class of Quadratic Optimization Problems A Lagrangian-DNN Relaxation: a Fast Method for Computing Tight Lower Bounds for a Class of Quadratic Optimization Problems Sunyoung Kim, Masakazu Kojima and Kim-Chuan Toh October 2013 Abstract. We propose

More information

TOPIC 4: DERIVATIVES

TOPIC 4: DERIVATIVES TOPIC 4: DERIVATIVES 1. The derivative of a function. Differentiation rules 1.1. The slope of a curve. The slope of a curve at a point P is a measure of the steepness of the curve. If Q is a point on the

More information

3. Evaluate the objective function at each vertex. Put the vertices into a table: Vertex P=3x+2y (0, 0) 0 min (0, 5) 10 (15, 0) 45 (12, 2) 40 Max

3. Evaluate the objective function at each vertex. Put the vertices into a table: Vertex P=3x+2y (0, 0) 0 min (0, 5) 10 (15, 0) 45 (12, 2) 40 Max SOLUTION OF LINEAR PROGRAMMING PROBLEMS THEOREM 1 If a linear programming problem has a solution, then it must occur at a vertex, or corner point, of the feasible set, S, associated with the problem. Furthermore,

More information

Chapter 4. Duality. 4.1 A Graphical Example

Chapter 4. Duality. 4.1 A Graphical Example Chapter 4 Duality Given any linear program, there is another related linear program called the dual. In this chapter, we will develop an understanding of the dual linear program. This understanding translates

More information

A Constraint Programming based Column Generation Approach to Nurse Rostering Problems

A Constraint Programming based Column Generation Approach to Nurse Rostering Problems Abstract A Constraint Programming based Column Generation Approach to Nurse Rostering Problems Fang He and Rong Qu The Automated Scheduling, Optimisation and Planning (ASAP) Group School of Computer Science,

More information

Real Roots of Univariate Polynomials with Real Coefficients

Real Roots of Univariate Polynomials with Real Coefficients Real Roots of Univariate Polynomials with Real Coefficients mostly written by Christina Hewitt March 22, 2012 1 Introduction Polynomial equations are used throughout mathematics. When solving polynomials

More information

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year. This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra

More information

. P. 4.3 Basic feasible solutions and vertices of polyhedra. x 1. x 2

. P. 4.3 Basic feasible solutions and vertices of polyhedra. x 1. x 2 4. Basic feasible solutions and vertices of polyhedra Due to the fundamental theorem of Linear Programming, to solve any LP it suffices to consider the vertices (finitely many) of the polyhedron P of the

More information

the points are called control points approximating curve

the points are called control points approximating curve Chapter 4 Spline Curves A spline curve is a mathematical representation for which it is easy to build an interface that will allow a user to design and control the shape of complex curves and surfaces.

More information

Basic Components of an LP:

Basic Components of an LP: 1 Linear Programming Optimization is an important and fascinating area of management science and operations research. It helps to do less work, but gain more. Linear programming (LP) is a central topic

More information

Two-Stage Stochastic Linear Programs

Two-Stage Stochastic Linear Programs Two-Stage Stochastic Linear Programs Operations Research Anthony Papavasiliou 1 / 27 Two-Stage Stochastic Linear Programs 1 Short Reviews Probability Spaces and Random Variables Convex Analysis 2 Deterministic

More information

Actually Doing It! 6. Prove that the regular unit cube (say 1cm=unit) of sufficiently high dimension can fit inside it the whole city of New York.

Actually Doing It! 6. Prove that the regular unit cube (say 1cm=unit) of sufficiently high dimension can fit inside it the whole city of New York. 1: 1. Compute a random 4-dimensional polytope P as the convex hull of 10 random points using rand sphere(4,10). Run VISUAL to see a Schlegel diagram. How many 3-dimensional polytopes do you see? How many

More information

A NEW LOOK AT CONVEX ANALYSIS AND OPTIMIZATION

A NEW LOOK AT CONVEX ANALYSIS AND OPTIMIZATION 1 A NEW LOOK AT CONVEX ANALYSIS AND OPTIMIZATION Dimitri Bertsekas M.I.T. FEBRUARY 2003 2 OUTLINE Convexity issues in optimization Historical remarks Our treatment of the subject Three unifying lines of

More information

1 Homework 1. [p 0 q i+j +... + p i 1 q j+1 ] + [p i q j ] + [p i+1 q j 1 +... + p i+j q 0 ]

1 Homework 1. [p 0 q i+j +... + p i 1 q j+1 ] + [p i q j ] + [p i+1 q j 1 +... + p i+j q 0 ] 1 Homework 1 (1) Prove the ideal (3,x) is a maximal ideal in Z[x]. SOLUTION: Suppose we expand this ideal by including another generator polynomial, P / (3, x). Write P = n + x Q with n an integer not

More information

Discuss the size of the instance for the minimum spanning tree problem.

Discuss the size of the instance for the minimum spanning tree problem. 3.1 Algorithm complexity The algorithms A, B are given. The former has complexity O(n 2 ), the latter O(2 n ), where n is the size of the instance. Let n A 0 be the size of the largest instance that can

More information

T ( a i x i ) = a i T (x i ).

T ( a i x i ) = a i T (x i ). Chapter 2 Defn 1. (p. 65) Let V and W be vector spaces (over F ). We call a function T : V W a linear transformation form V to W if, for all x, y V and c F, we have (a) T (x + y) = T (x) + T (y) and (b)

More information

Chapter 2 Solving Linear Programs

Chapter 2 Solving Linear Programs Chapter 2 Solving Linear Programs Companion slides of Applied Mathematical Programming by Bradley, Hax, and Magnanti (Addison-Wesley, 1977) prepared by José Fernando Oliveira Maria Antónia Carravilla A

More information