ISyE 6661: Topics Covered


ISyE 6661: Topics Covered

1. Optimization fundamentals: 1.5 lectures
2. LP Geometry (Chpt. 2): 5 lectures
3. The Simplex Method (Chpt. 3): 4 lectures
4. LP Duality (Chpt. 4): 4 lectures
5. Sensitivity Analysis (Chpt. 5): 3 lectures
6. Large-scale LP (Chpt. 6): 1.5 lectures
7. Computational complexity and the Ellipsoid method (Chpt. 8): 2 lectures
8. Interior Point Algorithms (Chpt. 9): 5 lectures

1. Fundamentals of Optimization

The generic optimization problem: (P): min{f(x) : x ∈ X}.

Weierstrass Theorem: If f is continuous and X is compact, then problem (P) has an optimal solution.

If f is a convex function and X is a convex set, then (P) is a convex program.

Theorem: If x* is a local optimal solution of the convex program (P), then it is also a global optimal solution.
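
To make the local-equals-global statement concrete, here is a minimal sketch (not part of the original slides) that minimizes a convex quadratic over a compact convex set with SciPy; the objective, constraint data, and starting point are made up for illustration.

```python
# Minimal sketch (not from the slides): minimize a convex quadratic over the
# compact convex set X = {x : 0 <= x <= 1, x1 + x2 <= 1}. Because f is convex
# and X is convex, the local minimizer returned by the solver is also global.
import numpy as np
from scipy.optimize import minimize, LinearConstraint

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2          # convex objective
box = [(0.0, 1.0), (0.0, 1.0)]                               # 0 <= x <= 1
lin = LinearConstraint(np.array([[1.0, 1.0]]), -np.inf, 1.0)  # x1 + x2 <= 1

res = minimize(f, x0=np.array([0.1, 0.1]), method="SLSQP",
               bounds=box, constraints=[lin])
print(res.x, res.fun)   # local optimum found = global optimum, by convexity
```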

2. Linear Programming Geometry

LP in standard form: (P): min{c^T x : Ax = b, x ≥ 0}.

LP involves minimizing a linear function over the polyhedral set X = {x : Ax = b, x ≥ 0}.

Basic building blocks of a polyhedral set: extreme points and extreme rays.

Theorem (Algebraic characterization of extreme points): A vector x is an extreme point of X iff it is a Basic Feasible Solution, i.e., there is a partitioning A = [B N] (with B square and nonsingular) such that x_B = B^{-1}b and x_N = 0.

Theorem (Algebraic characterization of extreme rays): A vector d ≥ 0 is an extreme ray of X iff it is a Non-negative Basic Direction, i.e., there is a partitioning A = [B N] (with B square and nonsingular) such that d = α [ -B^{-1}A_j ; e_j ] for some column A_j of N and α > 0.
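
As an illustration of the algebraic characterization above, a minimal sketch (not from the slides) that recovers the basic solution associated with a chosen basis; the matrix A, right-hand side b, and basis indices here are hypothetical data.

```python
# Minimal sketch (not from the slides): recover the basic solution associated
# with a basis B of A = [B N] for X = {x : Ax = b, x >= 0}, and test feasibility.
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])       # hypothetical data
b = np.array([4.0, 6.0])

basic = [0, 1]                              # column indices forming B (assumed nonsingular)
B = A[:, basic]
x = np.zeros(A.shape[1])
x[basic] = np.linalg.solve(B, b)            # x_B = B^{-1} b, x_N = 0

if np.all(x[basic] >= 0):
    print("basic feasible solution (extreme point):", x)
else:
    print("basic but infeasible solution:", x)
```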

2. Linear Programming Geometry (contd.)

The Representation Theorem: Let x^1, ..., x^k and d^1, ..., d^l be the extreme points and extreme rays of X, respectively. Then
X = { x : x = Σ_{i=1}^{k} λ_i x^i + Σ_{j=1}^{l} μ_j d^j, Σ_{i=1}^{k} λ_i = 1, λ_i ≥ 0 ∀i, μ_j ≥ 0 ∀j }.

To prove the above result, we used:

The Separation Theorem: Let S be a non-empty closed convex set, and x* ∉ S. Then there exists a vector c s.t. c^T x* < c^T x for all x ∈ S.

Theorem (Cor. of Rep. Thm.):
(a) An LP min{c^T x : x ∈ X} has an optimal solution iff c^T d^j ≥ 0 for all extreme rays d^j, j = 1, ..., l.
(b) Extreme point optimality: If an LP has an optimal solution, then there exists an extreme point that is optimal.

3. The Simplex Method

Basic idea: Move from one extreme point (bfs) to another while improving the objective.

Given a bfs x^k with basis B, move along one of the j-th basic directions (j ∈ N): d^j_B = -B^{-1}A_j, d^j_N = e_j.

If x^k is non-degenerate, then d^j is a feasible direction, i.e., it allows a positive step move.

If c^T d^j < 0 then d^j is an improving direction. Note c^T d^j = c_j - c_B^T B^{-1} A_j = c̄_j (the reduced cost).

If no improving direction exists, i.e., c̄_j ≥ 0 for all j ∈ N, the current solution is optimal; Stop.

Choose an improving basic direction d^j, j ∈ N, and move to x^{k+1} = x^k + α d^j, where α ≥ 0 is such that x^{k+1} ≥ 0.

If d^j ≥ 0 then α = +∞, implying that the problem is unbounded; Stop.
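
To make the step above concrete, here is a minimal sketch (not from the slides) of one pivot in the spirit of the revised simplex method; the helper name simplex_step and the Dantzig entering rule are illustrative choices, not the course's prescribed implementation.

```python
# Minimal sketch (not from the slides): one pivot of the simplex step described
# above, for min{c^T x : Ax = b, x >= 0} with a given list of basic indices.
import numpy as np

def simplex_step(A, b, c, basic):
    """One pivot; returns the updated basis list, or a status string."""
    m, n = A.shape
    nonbasic = [j for j in range(n) if j not in basic]
    B_inv = np.linalg.inv(A[:, basic])
    x_B = B_inv @ b                                   # current bfs values
    y = c[basic] @ B_inv                              # simplex multipliers c_B^T B^{-1}
    red = {j: c[j] - y @ A[:, j] for j in nonbasic}   # reduced costs c̄_j
    j = min(red, key=red.get)                         # most negative reduced cost
    if red[j] >= 0:
        return "optimal"
    d_B = -B_inv @ A[:, j]                            # basic part of direction d^j
    if np.all(d_B >= 0):
        return "unbounded"
    # ratio test: largest alpha keeping x_B + alpha * d_B >= 0
    ratios = [(-x_B[i] / d_B[i], i) for i in range(m) if d_B[i] < 0]
    _, i_out = min(ratios)
    basic[i_out] = j                                  # A_j enters, the blocking column leaves
    return basic
```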

3. The Simplex Method (contd.)

Theorem: x^{k+1} is an adjacent bfs to x^k, with basis B̂ = B ∪ {A_j} \ {A_l}, where l is some basic variable that becomes nonbasic.

Degeneracy, i.e., when a basic variable has a value of zero, is a problem. If x^k is degenerate, α could be zero, i.e., the basis changes from B to B̂ but x^{k+1} = x^k; this can cause stalling or cycling. It can be dealt with by properly choosing j and l (e.g., the lexicographic rule).

Theorem: The Simplex method (with proper pivot rules) solves LP in a finite number of iterations.

3. The Simplex Method (contd.)

Revised Simplex and Tableau implementations.

Initializing the Simplex method:
Two-phase Simplex
Big-M method

4. Duality

Standard-form primal-dual LP pair:
Primal: v_P = min{c^T x : Ax = b, x ≥ 0}
Dual:   v_D = max{b^T y : A^T y ≤ c}

Recipe for writing the dual problem for general LPs.

Weak Duality Theorem: v_D ≤ v_P.
Proof of WD: By construction of the dual problem.

Strong Duality Theorem: If either problem has a finite optimal value, then v_D = v_P.
Proof 1 of SD: From the Simplex Method (c_B^T B^{-1} are the optimal dual variables).
Proof 2 of SD: From the theorems of alternatives (Farkas Lemma).
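
For reference (not on the original slide), the weak duality argument for this standard-form pair fits in one line; a sketch in LaTeX:

```latex
% Sketch of the weak duality argument for the standard-form pair above.
% Let x be primal feasible (Ax = b, x >= 0) and y be dual feasible (A^T y <= c).
\[
  b^\top y \;=\; (Ax)^\top y \;=\; x^\top (A^\top y) \;\le\; x^\top c \;=\; c^\top x ,
\]
% where the inequality uses x >= 0 together with A^T y <= c.
% Taking the supremum over dual-feasible y and the infimum over primal-feasible x
% gives v_D <= v_P.
```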

4. Duality (contd.)

Farkas Lemma: Let A ∈ R^{m×n} and b ∈ R^m. Then exactly one of the following two systems (a or b) is feasible:
(a) Ax = b, x ≥ 0
(b) A^T y ≥ 0, b^T y < 0.
Proof: Use the Separating Hyperplane theorem.

See different forms of Farkas Lemma.

From Duality to Polyhedral theory:
An immediate proof of Farkas Lemma.
A simple proof of the Representation Thm.
Converse to Rep. Thm.: The convex hull of a finite number of points is a polytope.

4. Duality (contd.)

LP Optimality Conditions (Cor. of SD): A pair (x*, y*) is primal-dual optimal iff
Ax* = b, x* ≥ 0   (Primal Feasibility)
A^T y* ≤ c        (Dual Feasibility)
x*_j (c_j - A_j^T y*) = 0 ∀j   (Complementary Slackness).

Relation between non-degeneracy and uniqueness amongst primal and dual optimal solutions.

The Dual Simplex Algorithm: A basis B is primal feasible (PF) if B^{-1}b ≥ 0 and dual feasible (DF) if c^T - c_B^T B^{-1} A ≥ 0.
Start with a basis that is DF but not PF.
Select a basic variable with negative value (< 0) to leave the basis (move towards PF).
Select an entering variable to maintain DF.
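
A minimal sketch (not from the slides) of how the three optimality conditions can be checked numerically for a candidate primal-dual pair; the helper name and tolerance are illustrative.

```python
# Minimal sketch (not from the slides): check primal feasibility, dual feasibility,
# and complementary slackness for a candidate pair (x, y) of min{c^T x : Ax = b, x >= 0}.
import numpy as np

def is_optimal_pair(A, b, c, x, y, tol=1e-9):
    primal_feas = np.allclose(A @ x, b, atol=tol) and np.all(x >= -tol)
    slack = c - A.T @ y                              # dual slacks s = c - A^T y
    dual_feas = np.all(slack >= -tol)
    comp_slack = np.all(np.abs(x * slack) <= tol)    # x_j * s_j = 0 for all j
    return primal_feas and dual_feas and comp_slack
```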

4. Duality (contd.)

Dual Simplex is not analogous to applying Primal Simplex to the Dual problem.

When to use Dual Simplex over Primal Simplex?

Generalized Duality: The dual of v_P = min{c^T x : Ax ≥ b, x ∈ X} is v_D = max{L(y) : y ≥ 0}, where L(y) := min{c^T x + y^T(b - Ax) : x ∈ X}.

5. Sensitivity Analysis

Consider the LP z* = min{c^T x : Ax = b, x ≥ 0}. An instance of the LP is given by the data (n, m, c, A, b).

If the optimal solution x* is non-degenerate, then the i-th dual variable represents y*_i = ∂z*/∂b_i, i = 1, ..., m.

Local Sensitivity Analysis:
(a) How do the optimal solution x* and the optimal value z* behave under small perturbations of the problem data (n, m, c, A, b)?
(b) How to efficiently recover the new optimal solution and optimal value after the perturbation?
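
A minimal sketch (not from the slides) that illustrates y*_i ≈ ∂z*/∂b_i numerically with scipy.optimize.linprog; the small instance is made up, and the remark about reading the duals directly assumes the HiGHS-based methods available in recent SciPy versions.

```python
# Minimal sketch (not from the slides): estimate dz*/db_i by finite differences and
# compare with the optimal dual variables, on a small hypothetical instance.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0, 0.0, 0.0])
A_eq = np.array([[1.0, 1.0, 1.0, 0.0],
                 [1.0, 3.0, 0.0, 1.0]])
b_eq = np.array([4.0, 6.0])

def opt_value(b):
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * 4, method="highs")
    return res.fun

delta = 1e-6
z0 = opt_value(b_eq)
for i in range(len(b_eq)):
    e_i = np.eye(len(b_eq))[i]
    fd = (opt_value(b_eq + delta * e_i) - z0) / delta   # finite-difference estimate of dz*/db_i
    print(f"i={i}: dz*/db_{i} ≈ {fd:.4f}")
# With the HiGHS methods, res.eqlin.marginals reports these sensitivities directly
# as the optimal dual variables (assuming a non-degenerate optimal solution).
```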

5. Sensitivity Analysis (contd.)

Adding a new variable: The current basis remains PF. So check DF (the reduced cost of the new variable) and use Primal Simplex to re-optimize if needed.

Adding a new constraint: The current basis remains DF. Check PF, and use Dual Simplex to re-optimize if needed.

Perturbing b → b + δd: The current basis remains DF and PF over a computable range of δ. Outside this range, we have DF but not PF, so use Dual Simplex to re-optimize.

Perturbing c → c + δd: The current basis remains DF and PF over a computable range of δ. Outside this range, we have PF but not DF, so use Primal Simplex to re-optimize.
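
A minimal sketch (not from the slides) of the "computable range of δ" for the right-hand-side perturbation b → b + δd: the basis stays primal feasible exactly while B^{-1}(b + δd) ≥ 0. The helper name is illustrative.

```python
# Minimal sketch (not from the slides): range of δ over which the current basis B
# remains primal feasible under b → b + δ·d, i.e. B^{-1}(b + δd) >= 0 componentwise.
import numpy as np

def delta_range_for_rhs(B, b, d):
    xB = np.linalg.solve(B, b)        # current basic values B^{-1} b
    dB = np.linalg.solve(B, d)        # B^{-1} d
    lo, hi = -np.inf, np.inf
    for xi, di in zip(xB, dB):
        if di > 0:                    # xi + δ·di >= 0  →  δ >= -xi/di
            lo = max(lo, -xi / di)
        elif di < 0:                  # xi + δ·di >= 0  →  δ <= -xi/di
            hi = min(hi, -xi / di)
    return lo, hi                     # basis stays PF (hence optimal) for δ in [lo, hi]
```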

5. Sensitivity Analysis (contd.)

Perturbing A_j → A_j + δd where j ∈ N: The current basis remains DF and PF over a computable range of δ. Outside this range, we have PF but not DF, so use Primal Simplex to re-optimize.

Perturbing A_j → A_j + δd where j ∈ B: The current basis remains DF and PF over a computable range of δ. Outside this range, both PF and DF may be affected.

Global behavior of value functions:
(a) F(b) = min{c^T x : Ax = b, x ≥ 0} is a convex function of b, and the dual solution y* is a subgradient of F(b) at b.
(b) G(c) = min{c^T x : Ax = b, x ≥ 0} is a concave function of c, and x* is a subgradient of G(c) at c.

6. Large-Scale LP

Column Generation: The Cutting Stock Problem; Dantzig-Wolfe decomposition.

Row Generation: Benders decomposition.

7. Computational Complexity of LP

A problem (class) is easy if there exists an algorithm whose computational effort required to solve any instance of the problem is bounded by some polynomial of the size of that instance (i.e., if there exists a polynomial-time algorithm for the problem).

Is LP easy? The Simplex method may require an exponential number (in the number of variables) of iterations! Klee-Minty (1972).

Yudin and Nemirovskii (1977) developed the Ellipsoid method and showed that general convex programs are easy, and Khachiyan (1979) used it to show that LP is indeed easy.

7. The Ellipsoid Method for LP

The Ellipsoid method answers the following question: Is X = {x ∈ R^n : Ax ≤ b} = ∅?

Assume: if X ≠ ∅ then 0 < v ≤ vol(X) ≤ V.

We have a Separation Oracle S(x, X) which returns 0 if x ∈ X; otherwise it returns a vector a ≠ 0 such that a^T y > a^T x for all y ∈ X.

0. Find an ellipsoid E_0(x^0) ⊇ X. Set k = 0.
1. If S(x^k, X) = 0, Stop: X ≠ ∅. If vol(E_k(x^k)) < v, Stop: X = ∅.
2. If S(x^k, X) = a_k, then X ⊆ H_k := {x : a_k^T x ≥ a_k^T x^k}. Find E_{k+1}(x^{k+1}) such that E_{k+1}(x^{k+1}) ⊇ E_k(x^k) ∩ H_k ⊇ X and vol(E_{k+1}(x^{k+1})) / vol(E_k(x^k)) < e^{-1/(2(n+1))}.
3. Set k ← k + 1 and go to step 1.
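
For concreteness, a minimal sketch (not from the slides) of the standard rank-one update commonly used in step 2, with E_k represented as {x : (x - x^k)^T D_k^{-1} (x - x^k) ≤ 1}; it assumes n ≥ 2 and a nonzero separating vector.

```python
# Minimal sketch (not from the slides): standard ellipsoid update for step 2.
# E_k = {x : (x - xk)^T Dk^{-1} (x - xk) <= 1}; a is the separating vector a_k,
# and the half-space kept is {x : a^T x >= a^T xk}.
import numpy as np

def ellipsoid_update(xk, Dk, a):
    n = xk.size
    Da = Dk @ a
    denom = np.sqrt(a @ Da)
    x_next = xk + Da / ((n + 1) * denom)             # shift the center into the kept half-space
    D_next = (n**2 / (n**2 - 1.0)) * (Dk - (2.0 / (n + 1)) * np.outer(Da, Da) / (a @ Da))
    return x_next, D_next                            # volume shrinks by a factor < e^{-1/(2(n+1))}
```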

7. The Ellipsoid Method for LP (contd.)

The numbers v and V depend on n and U (the largest number in the data (A, b)).

Theorem: The Ellipsoid method answers the question "Is X = {x ∈ R^n : Ax ≤ b} = ∅?" in O(n^6 log(nU)) iterations.

7. The Ellipsoid Method for LP (contd.)

Easily modified for optimization of a linear function over polyhedra. Polynomial complexity is preserved.

Note: the complexity does not depend on the number of constraints in X.

Equivalence of Separation and Optimization: The description of X may involve an exponential number of constraints. However, as long as we have a polynomial-time Separation Oracle, the Ellipsoid algorithm guarantees that optimization of a linear function over X is still polynomial time!

8. Interior Point Methods

min{c^T x : x ∈ X}

Basic idea: Given x^k ∈ int(X), find a direction d^k and a step size α_k s.t. x^k + α_k d^k =: x^{k+1} ∈ int(X) and c^T x^{k+1} < c^T x^k. Continue until some termination criterion is met.

The algorithms differ w.r.t. the choice of d^k, α_k and the termination criterion.

May need some preprocessing to guarantee that an optimal solution exists.

The algorithms are convergent: lim_{k→∞} x^k = x*. A good criterion for finite termination is needed.

8. Interior Point Methods: The Affine Scaling Method

Basic idea: Given x^k ∈ int(X), construct an ellipsoid E_k(x^k) ⊆ int(X). Choose x^{k+1} = argmin{c^T x : x ∈ E_k}.

Based on the fact that the minimizer of a linear form over an ellipsoid can be found analytically.

Not proven to be polynomial time.
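
A minimal sketch (not from the slides) of one affine-scaling iteration in a common long-step variant; it assumes A has full row rank, x is strictly feasible, and the fraction-to-the-boundary parameter beta is an illustrative choice.

```python
# Minimal sketch (not from the slides): one long-step affine-scaling iteration for
# min{c^T x : Ax = b, x > 0}. The direction is the steepest-descent direction after
# the scaling x -> X_k^{-1} x, projected onto {d : A d = 0}.
import numpy as np

def affine_scaling_step(A, c, x, beta=0.5):
    X2 = np.diag(x**2)                               # X_k^2, with X_k = diag(x^k)
    w = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)    # dual estimate
    d = -X2 @ (c - A.T @ w)                          # descent direction satisfying A d = 0
    if np.all(d >= 0):
        raise ValueError("problem appears unbounded along d")
    # fraction of the largest step that keeps x strictly positive
    alpha = beta * min(-x[i] / d[i] for i in range(len(x)) if d[i] < 0)
    return x + alpha * d
```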

8. Interior Point Methods: The Primal Path Following (Barrier) Method

We want to solve P: min{c^T x : Ax = b, x ≥ 0}.

Use a penalty function to prevent iterates from approaching the boundary of the polyhedron. Reduce the penalty as the iterates approach an optimal solution (on the boundary).

Given μ > 0, the barrier problem is P(μ): min{f_μ(x) := c^T x - μ Σ_{j=1}^{n} log(x_j) : Ax = b}.

8. Interior Point Methods: The Barrier Method

For any μ > 0 the function f_μ(x) is strictly convex, so the problem P(μ) has a unique optimal solution x(μ).

For any μ > 0, x(μ) ∈ int(X), where X = {x : Ax = b, x ≥ 0}.

As μ → +∞, x(μ) tends to the analytic center of X. As μ → 0, x(μ) → x*.

The set of solutions {x(μ) : μ ∈ (0, ∞)} is known as the Central Path.

How do we find x(μ) (at least approximately)?

Aside: NLP Optimality Conditions

NLP: min{f(x) : Ax = b, x ≥ 0}
LP(x*): min{∇f(x*)^T x : Ax = b, x ≥ 0}

Theorem: If x* is an optimal solution of NLP, then x* is an optimal solution of LP(x*).
Theorem: If f is convex, then x* is an optimal solution of NLP iff x* is an optimal solution of LP(x*).

Theorem: If x* is an optimal solution of NLP, then x* solves the KKT system
Ax = b, x ≥ 0
A^T y + s = ∇f(x), s ≥ 0
x_j s_j = 0, j = 1, ..., n.
Theorem: If f is convex, then x* is an optimal solution of NLP iff x* solves the KKT system above.

8. Interior Point Methods: The Barrier Method (contd.)

x(μ) is a solution of the KKT system for the barrier problem:
Ax = b, x > 0
A^T y + s = c, s > 0
x_j s_j = μ, j = 1, ..., n.

The system is nonlinear and difficult to solve. We are content with β-approximate solutions (0 < β < 1):
Ax = b, x > 0
A^T y + s = c, s > 0
Σ_{j=1}^{n} (x_j s_j / μ - 1)^2 ≤ β^2.

For fixed β, lim_{μ→0} x_β(μ) = lim_{μ→0} x(μ) = x*.

8. Interior Point Methods: The Barrier Method (contd.)

Let β = 1/2. Start with some μ_k > 0 and a β-approximation x^k of x(μ_k).

Linearize the KKT system around x^k and solve it to get the new solution x^{k+1}.

It can be shown that x^{k+1} is a β-approximation of x(μ_{k+1}) with μ_{k+1} = (1 - α/√n) μ_k for a suitable constant α > 0.

Continue until the duality gap (x^k)^T s^k ≤ ε.
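
A minimal sketch (not from the slides) of the linearized-KKT (Newton) step described above, written as one dense linear solve for readability; practical codes eliminate ds and dx and solve the smaller normal-equations system instead.

```python
# Minimal sketch (not from the slides): one Newton step on the perturbed KKT system
# Ax = b, A^T y + s = c, x_j s_j = μ, taken from a current point (x, y, s) with x, s > 0.
import numpy as np

def newton_step(A, b, c, x, y, s, mu):
    m, n = A.shape
    # Linearized KKT system in the unknowns (dx, dy, ds).
    J = np.zeros((2 * n + m, 2 * n + m))
    J[:m, :n] = A                                   # A dx            = b - Ax
    J[m:m + n, n:n + m] = A.T                       # A^T dy + ds     = c - A^T y - s
    J[m:m + n, n + m:] = np.eye(n)
    J[m + n:, :n] = np.diag(s)                      # S dx + X ds     = μe - XSe
    J[m + n:, n + m:] = np.diag(x)
    rhs = np.concatenate([b - A @ x,
                          c - A.T @ y - s,
                          mu * np.ones(n) - x * s])
    d = np.linalg.solve(J, rhs)
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
    return x + dx, y + dy, s + ds                   # full step; a damped step may be needed
```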

8. Interior Point Methods: The Barrier Method (contd.)

Theorem: The barrier algorithm reduces the duality gap from ε_0 to ε in O(√n log(ε_0/ε)) iterations.

Not Covered: Network Flow Problems

A very important class of problems. The constraint matrix has a very special structure, called a network matrix.

Specialized Simplex-type algorithms are strongly polynomial time. E.g., Transportation and Assignment Problems.

What's Next? Optimization Courses in Spring 2004

ISyE 6662: Optimization II. Ph.D.-level class on Integer Programming and Network Flows. Offered by Prof. Ergun.
ISyE 8871: Integer Programming. Advanced Ph.D.-level class on Integer Programming. Offered by Prof. Nemhauser.
ISyE 6663: Optimization III. Nonlinear programming theory for Ph.D. students. Offered by Prof. Nemirovskii.
ISyE 8813: Advanced Ph.D. class on Interior Point Methods. Offered by Prof. Nemirovskii.
ISyE 6669: Deterministic Optimization (MS level).
ISyE 6673: Financial Optimization Models (MS level). Offered by Prof. Sokol.
ISyE 6679: Computational Methods in Optimization. Offered by Prof. Barnes.
