Mathematics of Operations Research
Andrew Tulloch
Trinity College, The University of Cambridge
Contents

1 Generalities
2 Constrained Optimization: Lagrangian Multipliers; Lagrange Dual; Supporting Hyperplanes
3 Linear Programming: Convexity and Strong Duality; Linear Programs; Linear Program Duality; Complementary Slackness
4 Simplex Method: Basic Solutions; Extreme Points and Optimal Solutions; The Simplex Tableau; The Simplex Method in Tableau Form
5 Advanced Simplex Procedures: The Two-Phase Simplex Method; Gomory's Cutting Plane Method
6 Complexity of Problems and Algorithms: Asymptotic Complexity
7 The Complexity of Linear Programming: A Lower Bound for the Simplex Method; The Idea for a New Method
8 Graphs and Flows: Introduction; Minimal Cost Flows
9 Transportation and Assignment Problems
10 Non-Cooperative Games
11 Strategic Equilibrium
12 Bibliography
1 Generalities
2 Constrained Optimization

Minimize $f(x)$ subject to $h(x) = b$, $x \in X$.

- Objective function $f \colon \mathbb{R}^n \to \mathbb{R}$.
- Vector $x \in \mathbb{R}^n$ of decision variables.
- Functional constraint $h(x) = b$, where $h \colon \mathbb{R}^n \to \mathbb{R}^m$ and $b \in \mathbb{R}^m$.
- Regional constraint $x \in X$, where $X \subseteq \mathbb{R}^n$.

Definition 2.1. The feasible set is $X(b) = \{x \in X : h(x) = b\}$.

An inequality of the form $g(x) \le b$ can be written as $g(x) + z = b$, where $z \in \mathbb{R}^m$, called a slack variable, carries the regional constraint $z \ge 0$.

2.1 Lagrangian Multipliers

Definition 2.2. Define the Lagrangian of a problem as
$$L(x, \lambda) = f(x) - \lambda^T (h(x) - b) \qquad (2.1)$$
where $\lambda \in \mathbb{R}^m$ is a vector of Lagrange multipliers.

Theorem 2.3 (Lagrangian Sufficiency Theorem). Let $x \in X$ and $\lambda \in \mathbb{R}^m$ be such that
$$L(x, \lambda) = \inf_{x' \in X} L(x', \lambda) \qquad (2.2)$$
and $h(x) = b$. Then $x$ is optimal for $P$.
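A small numeric illustration of the Lagrangian Sufficiency Theorem may help; the instance below is my own, not from the notes: minimise $f(x) = x_1^2 + x_2^2$ subject to $x_1 + x_2 = b$ with $X = \mathbb{R}^2$, taking $b = 2$.

```python
# Assumed toy instance of the Lagrangian Sufficiency Theorem (not from the notes):
# minimise f(x) = x1^2 + x2^2 subject to x1 + x2 = b, with X = R^2 and b = 2.

def L(x1, x2, lam, b):
    """Lagrangian L(x, lam) = f(x) - lam * (h(x) - b)."""
    return x1**2 + x2**2 - lam * (x1 + x2 - b)

b = 2.0
lam = b            # stationarity of L over R^2 gives x1 = x2 = lam/2; h(x) = b forces lam = b
x1 = x2 = lam / 2

# Check that (x1, x2) minimises L(., lam) over X against a coarse grid of competitors.
candidates = [(u / 10, v / 10) for u in range(-50, 51) for v in range(-50, 51)]
assert all(L(x1, x2, lam, b) <= L(u, v, lam, b) + 1e-12 for u, v in candidates)

# (x1, x2) is also feasible, hence optimal by the theorem, with value b^2 / 2.
assert x1 + x2 == b
print(x1, x2, x1**2 + x2**2)   # minimiser (1.0, 1.0) with value 2.0
```

Here $L(u, v) = (u-1)^2 + (v-1)^2 + 2$, so the grid check confirms the analytic minimiser.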
Proof.
$$\min_{x' \in X(b)} f(x') = \min_{x' \in X(b)} \left[ f(x') - \lambda^T (h(x') - b) \right] \qquad (2.3)$$
$$\ge \min_{x' \in X} \left[ f(x') - \lambda^T (h(x') - b) \right] \qquad (2.4)$$
$$= f(x) - \lambda^T (h(x) - b) \qquad (2.5)$$
$$= f(x). \qquad (2.6)$$

2.2 Lagrange Dual

Definition 2.4. Let
$$\varphi(b) = \inf_{x \in X(b)} f(x). \qquad (2.7)$$
Define the Lagrange dual function $g \colon \mathbb{R}^m \to \mathbb{R}$ with
$$g(\lambda) = \inf_{x \in X} L(x, \lambda). \qquad (2.8)$$
Then, for all $\lambda \in \mathbb{R}^m$,
$$\inf_{x \in X(b)} f(x) = \inf_{x \in X(b)} L(x, \lambda) \ge \inf_{x \in X} L(x, \lambda) = g(\lambda). \qquad (2.9)$$
That is, $g(\lambda)$ is a lower bound on the optimal value. This motivates the dual problem: maximize $g(\lambda)$ subject to $\lambda \in Y$, where $Y = \{\lambda \in \mathbb{R}^m : g(\lambda) > -\infty\}$.

Theorem 2.5 (Weak Duality). From (2.9), the optimal value of the primal is always at least the optimal value of the dual.

2.3 Supporting Hyperplanes

Fix $b \in \mathbb{R}^m$ and consider $\varphi$ as a function of $c \in \mathbb{R}^m$. Further consider the hyperplane given by $\alpha \colon \mathbb{R}^m \to \mathbb{R}$ with
$$\alpha(c) = \beta - \lambda^T (b - c). \qquad (2.10)$$
Now, try to find $\varphi(b)$ as follows.
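Weak duality (2.9) can be checked concretely; the toy LP below is an assumed example, not from the notes: $\min\{x_1 + 2x_2 : x_1 + x_2 = 1,\ x \ge 0\}$, whose optimum is $1$ at $x = (1, 0)$.

```python
# Numeric check of weak duality (2.9) on an assumed toy LP (not from the notes):
# min{ x1 + 2 x2 : x1 + x2 = 1, x >= 0 }, primal optimum 1 at x = (1, 0).

def g(lam):
    # L(x, lam) = (1 - lam) x1 + (2 - lam) x2 + lam over x >= 0;
    # the infimum is finite (and equals lam) iff both reduced costs are >= 0.
    return lam if (1 - lam >= 0 and 2 - lam >= 0) else float("-inf")

primal_value = 1.0
for k in range(-40, 41):
    lam = k / 10
    assert g(lam) <= primal_value + 1e-12   # every g(lam) is a lower bound
print(max(g(k / 10) for k in range(-40, 41)))   # best bound 1.0: zero duality gap here
```

For this instance the best dual bound matches the primal value, so the bound in (2.9) is tight.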
(i) For each $\lambda$, find
$$\beta_\lambda = \max\{\beta : \alpha(c) \le \varphi(c) \text{ for all } c \in \mathbb{R}^m\}. \qquad (2.11)$$
(ii) Choose $\lambda$ to maximize $\beta_\lambda$.

Definition 2.6. Call $\alpha \colon \mathbb{R}^m \to \mathbb{R}$ a supporting hyperplane to $\varphi$ at $b$ if
$$\alpha(c) = \varphi(b) - \lambda^T (b - c) \qquad (2.12)$$
and
$$\varphi(c) \ge \varphi(b) - \lambda^T (b - c) \qquad (2.13)$$
for all $c \in \mathbb{R}^m$.

Theorem 2.7. The following are equivalent:
(i) there exists a (non-vertical) supporting hyperplane to $\varphi$ at $b$;
(ii) the problem satisfies strong duality, i.e. $\varphi(b) = \max_{\lambda \in Y} g(\lambda)$.
3 Linear Programming

3.1 Convexity and Strong Duality

Definition 3.1. Let $S \subseteq \mathbb{R}^n$. $S$ is convex if for all $\delta \in [0, 1]$, $x, y \in S$ implies $\delta x + (1 - \delta) y \in S$. $f \colon S \to \mathbb{R}$ is convex if for all $x, y \in S$ and $\delta \in [0, 1]$,
$$\delta f(x) + (1 - \delta) f(y) \ge f(\delta x + (1 - \delta) y).$$
Visually, the region above the graph of the function (its epigraph) is a convex set.

Definition 3.2. $x \in S$ is an interior point of $S$ if there exists $\varepsilon > 0$ such that $\{y : \|y - x\|_2 \le \varepsilon\} \subseteq S$. $x \in S$ is an extreme point of $S$ if for all $y, z \in S$ and $\delta \in (0, 1)$, $x = \delta y + (1 - \delta) z$ implies $x = y = z$.

Theorem 3.3 (Supporting Hyperplane). Suppose that $\varphi$ is convex and $b$ lies in the interior of the set of points where $\varphi$ is finite. Then there is a (non-vertical) supporting hyperplane to $\varphi$ at $b$.

Theorem 3.4. Let $X(b) = \{x \in X : h(x) \le b\}$ and $\varphi(b) = \inf_{x \in X(b)} f(x)$. Then $\varphi$ is convex if $X$, $f$, and $h$ are convex.

Proof. Let $b_1, b_2 \in \mathbb{R}^m$ be such that $\varphi(b_1), \varphi(b_2)$ are defined. Let $\delta \in [0, 1]$ and $b = \delta b_1 + (1 - \delta) b_2$. Consider $x_1 \in X(b_1)$, $x_2 \in X(b_2)$ and let $x = \delta x_1 + (1 - \delta) x_2$.
By convexity of $X$, $x \in X$. By convexity of $h$,
$$h(x) = h(\delta x_1 + (1 - \delta) x_2) \le \delta h(x_1) + (1 - \delta) h(x_2) \le \delta b_1 + (1 - \delta) b_2 = b.$$
Thus $x \in X(b)$, and by convexity of $f$,
$$\varphi(b) \le f(x) = f(\delta x_1 + (1 - \delta) x_2) \le \delta f(x_1) + (1 - \delta) f(x_2).$$
Taking the infimum over $x_1 \in X(b_1)$ and $x_2 \in X(b_2)$ gives $\varphi(b) \le \delta \varphi(b_1) + (1 - \delta) \varphi(b_2)$.

3.2 Linear Programs

Definition 3.5. The general form of a linear program is
$$\min\{c^T x : Ax \ge b,\ x \ge 0\}. \qquad (3.1)$$
The standard form of a linear program is
$$\min\{c^T x : Ax = b,\ x \ge 0\}. \qquad (3.2)$$

3.3 Linear Program Duality

Introduce slack variables to turn problem $P$ into the form
$$\min\{c^T x : Ax - z = b,\ x, z \ge 0\}. \qquad (3.3)$$
We have $X = \{(x, z) : x, z \ge 0\} \subseteq \mathbb{R}^{n + m}$. The Lagrangian is
$$L((x, z), \lambda) = c^T x - \lambda^T (Ax - z - b) \qquad (3.4)$$
$$= (c^T - \lambda^T A) x + \lambda^T z + \lambda^T b \qquad (3.5)$$
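The general-to-standard-form rewrite in (3.1)–(3.3) is purely mechanical, so a short sketch may help; the instance data here is assumed, not from the notes:

```python
# Sketch: rewriting the general form min{ c^T x : Ax >= b, x >= 0 } into the
# standard form by appending surplus variables z with Ax - z = b (assumed data).
A = [[1.0, 2.0],
     [3.0, 1.0]]
b = [4.0, 5.0]
c = [1.0, 1.0]

m, n = len(A), len(A[0])
# Standard-form data: variables (x, z), constraint matrix [A | -I].
A_std = [row + [-1.0 if i == j else 0.0 for j in range(m)] for i, row in enumerate(A)]
c_std = c + [0.0] * m
assert len(A_std[0]) == n + m and c_std[-m:] == [0.0] * m

# Any x feasible for the general form extends to a standard-form point via z = Ax - b.
x = [2.0, 1.0]                            # here Ax = (4, 7) >= b
z = [sum(ai * xi for ai, xi in zip(row, x)) - bi for row, bi in zip(A, b)]
assert all(zi >= 0 for zi in z)           # z = (0, 2)
point = x + z
for row, bi in zip(A_std, b):             # [A | -I](x, z) = b holds exactly
    assert abs(sum(aij * pj for aij, pj in zip(row, point)) - bi) < 1e-12
```

The objective is unchanged because the surplus variables get zero cost.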
and has a finite minimum if and only if
$$\lambda \in Y = \{\lambda : c^T - \lambda^T A \ge 0,\ \lambda \ge 0\}. \qquad (3.6)$$
For a fixed $\lambda \in Y$, the minimum of $L$ is attained when $(c^T - \lambda^T A) x = 0$ and $\lambda^T z = 0$, and thus
$$g(\lambda) = \inf_{(x, z) \in X} L((x, z), \lambda) = \lambda^T b. \qquad (3.7)$$
We obtain the dual problem
$$\max\{\lambda^T b : A^T \lambda \le c,\ \lambda \ge 0\}, \qquad (3.8)$$
and it can be shown (exercise) that the dual of the dual of a linear program is the original program.

3.4 Complementary Slackness

Theorem 3.6. Let $x$ and $\lambda$ be feasible for $P$ and its dual. Then $x$ and $\lambda$ are optimal if and only if
$$(c^T - \lambda^T A) x = 0 \qquad (3.9)$$
$$\lambda^T (Ax - b) = 0. \qquad (3.10)$$

Proof. If $x, \lambda$ are optimal, then
$$c^T x = \lambda^T b \quad \text{(by strong duality)} \qquad (3.11)$$
$$= \inf_{x' \in X} \left( c^T x' - \lambda^T (A x' - b) \right) \le c^T x - \lambda^T (Ax - b) \le c^T x \quad \text{(by primal and dual feasibility).} \qquad (3.12)$$
Then the inequalities must be equalities. Thus
$$\lambda^T b = c^T x - \lambda^T (Ax - b) = \underbrace{(c^T - \lambda^T A) x}_{= 0} + \lambda^T b \qquad (3.13)$$
and
$$c^T x - \underbrace{\lambda^T (Ax - b)}_{= 0} = c^T x. \qquad (3.14)$$
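Theorem 3.6 can be verified numerically on a small pair of problems; the instance is an assumed example, not from the notes: primal $\min\{2x_1 + 3x_2 : x_1 + x_2 \ge 1,\ x \ge 0\}$ with dual $\max\{\lambda : \lambda \le 2,\ \lambda \le 3,\ \lambda \ge 0\}$.

```python
# Complementary slackness check on an assumed toy primal/dual pair (not from the
# notes): primal min{ 2x1 + 3x2 : x1 + x2 >= 1, x >= 0 }, dual max{ lam : lam <= 2, lam >= 0 }.
x = [1.0, 0.0]       # primal optimum, objective value 2
lam = 2.0            # dual optimum, objective value 2

reduced_costs = [2 - lam * 1, 3 - lam * 1]   # c^T - lam^T A = (0, 1)
surplus = (x[0] + x[1]) - 1                   # Ax - b = 0

# (3.9): a variable can be positive only where its reduced cost vanishes.
assert sum(rc * xi for rc, xi in zip(reduced_costs, x)) == 0
# (3.10): a multiplier can be positive only where its constraint is tight.
assert lam * surplus == 0
# Consequently the two objective values agree, certifying optimality.
assert 2 * x[0] + 3 * x[1] == lam * 1
```

Note how $x_2 = 0$ pairs with its nonzero reduced cost, exactly as (3.9) demands.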
Conversely, if $(c^T - \lambda^T A) x = 0$ and $\lambda^T (Ax - b) = 0$, then
$$c^T x = c^T x - \lambda^T (Ax - b) = (c^T - \lambda^T A) x + \lambda^T b = \lambda^T b, \qquad (3.15)$$
and so by weak duality, $x$ and $\lambda$ are optimal.
4 Simplex Method

4.1 Basic Solutions

Maximize $c^T x$ subject to $Ax = b$, $x \ge 0$, with $A \in \mathbb{R}^{m \times n}$. Call a solution $x \in \mathbb{R}^n$ of $Ax = b$ basic if it has at most $m$ non-zero entries, that is, there exists $B \subseteq \{1, \dots, n\}$ with $|B| = m$ and $x_i = 0$ if $i \notin B$. A basic solution $x$ with $x \ge 0$ is called a basic feasible solution (bfs).

4.2 Extreme Points and Optimal Solutions

We make the following assumptions:
(i) The rows of $A$ are linearly independent.
(ii) Every set of $m$ columns of $A$ is linearly independent.
(iii) Every basic solution is non-degenerate, that is, it has exactly $m$ non-zero entries.

Theorem 4.1. $x$ is a bfs of $Ax = b$ if and only if it is an extreme point of the set $X(b) = \{x : Ax = b,\ x \ge 0\}$.

Theorem 4.2. If the problem has a finite optimum (feasible and bounded), then it has an optimal solution that is a bfs.
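Basic solutions can be enumerated directly for small instances by solving each $m \times m$ subsystem; the data below is an assumed example with $m = 2$, $n = 4$, not from the notes:

```python
from itertools import combinations

# Brute-force enumeration of basic solutions of Ax = b (assumed data, m = 2):
# pick m columns, solve the 2x2 subsystem by Cramer's rule, set the rest to zero.
A = [[1.0, 1.0, 1.0, 0.0],
     [1.0, 2.0, 0.0, 1.0]]
b = [4.0, 6.0]
m, n = 2, 4

def basic_solution(cols):
    a, c_ = A[0][cols[0]], A[0][cols[1]]
    d, e = A[1][cols[0]], A[1][cols[1]]
    det = a * e - c_ * d
    if abs(det) < 1e-12:
        return None                          # dependent columns: no basis here
    xB = [(b[0] * e - c_ * b[1]) / det,      # Cramer's rule for the 2x2 system
          (a * b[1] - b[0] * d) / det]
    x = [0.0] * n
    x[cols[0]], x[cols[1]] = xB
    return x

basic = [x for cols in combinations(range(n), m)
         if (x := basic_solution(cols)) is not None]
bfs = [x for x in basic if all(xi >= 0 for xi in x)]
print(len(basic), len(bfs))   # 6 basic solutions, 4 of them feasible
```

By Theorem 4.1 the four feasible ones are exactly the extreme points of $X(b)$, so an optimizer need only compare finitely many candidates (Theorem 4.2).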
4.3 The Simplex Tableau

Let $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$. Let $B$ be a basis (in the bfs sense) and $x \in \mathbb{R}^n$ such that $Ax = b$. Then
$$A_B x_B + A_N x_N = b, \qquad (4.1)$$
where $A_B \in \mathbb{R}^{m \times m}$ and $A_N \in \mathbb{R}^{m \times (n - m)}$ respectively consist of the columns of $A$ indexed by $B$ and those not indexed by $B$. Moreover, if $x$ is a basic solution, then there is a basis $B$ such that $x_N = 0$ and $A_B x_B = b$, and if $x$ is a basic feasible solution, there is a basis $B$ such that $x_N = 0$, $A_B x_B = b$, and $x_B \ge 0$.

For every $x$ with $Ax = b$ and every basis $B$, we have
$$x_B = A_B^{-1} (b - A_N x_N), \qquad (4.2)$$
as we assume that $A_B$ has full rank. Thus,
$$f(x) = c^T x = c_B^T x_B + c_N^T x_N \qquad (4.3)$$
$$= c_B^T A_B^{-1} (b - A_N x_N) + c_N^T x_N \qquad (4.4)$$
$$= c_B^T A_B^{-1} b + (c_N^T - c_B^T A_B^{-1} A_N) x_N. \qquad (4.5)$$
Assume we can guarantee that $A_B^{-1} b \ge 0$. Then $x^*$ with $x^*_B = A_B^{-1} b$ and $x^*_N = 0$ is a bfs, with
$$f(x^*) = c_B^T A_B^{-1} b. \qquad (4.6)$$
Assume that we are maximizing $c^T x$. There are two different cases:
(i) If $c_N^T - c_B^T A_B^{-1} A_N \le 0$, then $f(x) \le c_B^T A_B^{-1} b$ for every feasible $x$, so $x^*$ is optimal.
(ii) If $(c_N^T - c_B^T A_B^{-1} A_N)_i > 0$ for some $i$, then we can increase the objective value by increasing the corresponding entry $(x_N)_i$.

4.4 The Simplex Method in Tableau Form
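The two cases in 4.3 drive the tableau iteration directly. Below is a compact sketch for $\max\{c^T x : Ax \le b,\ x \ge 0\}$ with $b \ge 0$ (so the slack basis is an initial bfs); the instance and the first-positive-column pivot choice are assumptions of mine, not the notes' presentation:

```python
# Tableau sketch of the simplex iteration in 4.3 for max{ c^T x : Ax <= b, x >= 0 }
# with b >= 0. Instance and pivoting rule are assumed, not from the notes.

def simplex(A, b, c):
    m, n = len(A), len(c)
    # Rows [A | I | b]; the last row holds the reduced costs [c | 0 | 0].
    T = [A[i] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]] for i in range(m)]
    T.append(c + [0.0] * (m + 1))
    basis = list(range(n, n + m))        # start from the slack basis
    while True:
        # Entering column: any positive reduced cost (case (ii) in 4.3).
        piv_col = next((j for j in range(n + m) if T[-1][j] > 1e-9), None)
        if piv_col is None:              # case (i): the current bfs is optimal
            return -T[-1][-1], basis
        # Leaving row by the minimum-ratio rule.
        ratios = [(T[i][-1] / T[i][piv_col], i) for i in range(m) if T[i][piv_col] > 1e-9]
        if not ratios:
            raise ValueError("unbounded")
        _, piv_row = min(ratios)
        # Pivot: scale the pivot row, then eliminate the column elsewhere.
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):
            if i != piv_row and T[i][piv_col]:
                f = T[i][piv_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[piv_row])]
        basis[piv_row] = piv_col

value, basis = simplex([[1.0, 1.0], [2.0, 1.0]], [4.0, 6.0], [3.0, 2.0])
print(value)   # optimum 10.0, attained at x = (2, 2)
```

Each pivot moves between adjacent bfs's, and termination at case (i) certifies optimality via (4.5).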
5 Advanced Simplex Procedures

5.1 The Two-Phase Simplex Method

Finding an initial bfs is easy if the constraints have the form $Ax \le b$ with $b \ge 0$: after adding slack variables, $Ax + z = b$, and $(x, z) = (0, b)$ is a bfs.

5.2 Gomory's Cutting Plane Method

Used in integer programming (IP): a linear program where in addition some of the variables are required to be integral. Assume that for a given integer program we have found an optimal fractional solution $x^*$ with basis $B$, and let $\bar{a}_{ij} = (A_B^{-1} A_j)_i$ and $\bar{a}_{i0} = (A_B^{-1} b)_i$ be the entries of the final tableau. If $x^*$ is not integral, then for some row $i$, $\bar{a}_{i0}$ is not integral. For every feasible solution $x$,
$$x_i + \sum_{j \in N} \lfloor \bar{a}_{ij} \rfloor x_j \le x_i + \sum_{j \in N} \bar{a}_{ij} x_j = \bar{a}_{i0}. \qquad (5.2)$$
If $x$ is integral, then the left hand side is integral as well, and the inequality must still hold if the right hand side is rounded down. Thus,
$$x_i + \sum_{j \in N} \lfloor \bar{a}_{ij} \rfloor x_j \le \lfloor \bar{a}_{i0} \rfloor. \qquad (5.3)$$
Then, we can add this constraint to the problem and solve the augmented program. One can show that this procedure converges in a finite number of steps.
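Deriving one cut of the form (5.3) is just coefficient rounding; the tableau row below is an assumed example, not from the notes:

```python
import math

# Sketch of deriving one Gomory cut (5.3) from an assumed final-tableau row
# x_i + sum_j abar_ij x_j = abar_i0 with fractional right-hand side.
abar = {"x3": 0.5, "x4": 1.25}   # nonbasic coefficients abar_ij (assumed values)
rhs = 3.75                        # abar_i0; fractional, so row i yields a cut

assert rhs != math.floor(rhs)
cut_coeffs = {j: math.floor(v) for j, v in abar.items()}   # round coefficients down
cut_rhs = math.floor(rhs)                                   # round the RHS down
print(cut_coeffs, cut_rhs)   # {'x3': 0, 'x4': 1} 3, i.e. x_i + x4 <= 3

# Every integral feasible x satisfies the cut, while the current fractional bfs
# (nonbasic variables at zero, so x_i = 3.75) violates it: 3.75 > 3.
assert rhs > cut_rhs
```

Appending this inequality and re-solving is exactly the augmentation step the notes describe.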
6 Complexity of Problems and Algorithms

6.1 Asymptotic Complexity

We measure complexity as a function of input size. The input of a linear programming problem $c \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$ can be represented in $(n + mn + m) \cdot k$ bits if we represent each number using $k$ bits.

For two functions $f \colon \mathbb{R} \to \mathbb{R}$ and $g \colon \mathbb{R} \to \mathbb{R}$, write
$$f(n) = O(g(n)) \qquad (6.1)$$
if there exist $c, n_0$ such that for all $n \ge n_0$,
$$f(n) \le c \cdot g(n). \qquad (6.2)$$
Similarly, $f(n) = \Omega(g(n))$ if $f(n) \ge c \cdot g(n)$ for all $n \ge n_0$, and $f(n) = \Theta(g(n))$ if both hold.
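The definition in (6.1)–(6.2) asks for explicit witnesses $c$ and $n_0$; a small assumed example (not from the notes) makes this concrete:

```python
# Illustrating (6.1)-(6.2) with explicit witnesses (assumed example, not from
# the notes): f(n) = 3n^2 + 10n is O(n^2), witnessed by c = 4 and n0 = 10.
f = lambda n: 3 * n**2 + 10 * n
g = lambda n: n**2
c, n0 = 4, 10

# For n >= 10 we have 10n <= n^2, hence f(n) = 3n^2 + 10n <= 4n^2 = c * g(n).
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
# Below n0 the bound may fail, which is exactly why the definition allows n0.
assert f(1) > c * g(1)
```

The same witnesses with the inequality reversed would demonstrate $f(n) = \Omega(n^2)$, hence $f(n) = \Theta(n^2)$.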
7 The Complexity of Linear Programming

7.1 A Lower Bound for the Simplex Method

Theorem 7.1. There exists an LP of size $O(n^2)$, a pivoting rule, and an initial bfs such that the simplex method requires $2^n - 1$ iterations.

Proof. Consider the unit cube in $\mathbb{R}^n$, given by the constraints $0 \le x_i \le 1$ for $i = 1, \dots, n$. Define a spanning path inductively as follows. In dimension 1, go from $x_1 = 0$ to $x_1 = 1$. In dimension $k$, set $x_k = 0$ and follow the path for dimensions $1, \dots, k - 1$; then set $x_k = 1$ and follow the path for dimensions $1, \dots, k - 1$ backwards. The objective $x_n$ increases only once along this path. Instead consider the perturbed unit cube given by the constraints
$$\varepsilon \le x_1 \le 1, \qquad \varepsilon x_{i-1} \le x_i \le 1 - \varepsilon x_{i-1},$$
with $\varepsilon \in (0, \tfrac{1}{2})$.

7.2 The Idea for a New Method

$$\min\{c^T x : Ax = b,\ x \ge 0\} \qquad (7.1)$$
$$\max\{b^T \lambda : A^T \lambda \le c\} \qquad (7.2)$$
By strong duality, each of these problems has a bounded optimal
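The inductively defined spanning path in the proof is the reflected Gray code; a small sketch (my own, not from the notes) generates it and checks the properties the proof uses:

```python
# The spanning path in the proof of Theorem 7.1, generated recursively: it is
# the reflected Gray code on the vertices of the unit cube (sketch, not from
# the notes).

def spanning_path(n):
    if n == 1:
        return [(0,), (1,)]
    prev = spanning_path(n - 1)
    # x_n = 0 along the lower-dimensional path, then x_n = 1 along it reversed.
    return [p + (0,) for p in prev] + [p + (1,) for p in reversed(prev)]

path = spanning_path(4)
assert len(path) == 16 and len(set(path)) == 16          # visits every vertex once
assert all(sum(a != b for a, b in zip(u, v)) == 1        # each step moves along one edge
           for u, v in zip(path, path[1:]))
# The objective x_n increases exactly once along this path, as the proof notes.
assert sum(v[-1] - u[-1] == 1 for u, v in zip(path, path[1:])) == 1
```

A path of $2^n$ vertices gives $2^n - 1$ edge steps, matching the iteration count in the theorem.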
solution if and only if the following set of constraints is feasible:
$$c^T x = b^T \lambda \qquad (7.3)$$
$$Ax = b \qquad (7.4)$$
$$x \ge 0 \qquad (7.5)$$
$$A^T \lambda \le c \qquad (7.6)$$
It is thus enough to decide, for a given $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$, whether $\{x \in \mathbb{R}^n : Ax \ge b\} \ne \emptyset$.

Definition 7.2. A symmetric matrix $D \in \mathbb{R}^{n \times n}$ is called positive definite if $x^T D x > 0$ for every $x \in \mathbb{R}^n$ with $x \ne 0$. Equivalently, all eigenvalues of the matrix are strictly positive.

Definition 7.3. A set $E \subseteq \mathbb{R}^n$ given by
$$E = E(z, D) = \{x \in \mathbb{R}^n : (x - z)^T D^{-1} (x - z) \le 1\} \qquad (7.7)$$
for a positive definite symmetric $D \in \mathbb{R}^{n \times n}$ and $z \in \mathbb{R}^n$ is called an ellipsoid with center $z$.

Let $P = \{x \in \mathbb{R}^n : Ax \ge b\}$ for some $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. To decide whether $P = \emptyset$, we generate a sequence $\{E_t\}$ of ellipsoids $E_t$ with centers $x_t$. If $x_t \in P$, then $P$ is non-empty and the method stops. If $x_t \notin P$, then one of the constraints is violated, so there exists a row $j$ of $A$ such that $a_j^T x_t < b_j$. Therefore, $P$ is contained in the half-space $\{x \in \mathbb{R}^n : a_j^T x \ge a_j^T x_t\}$, and in particular in the intersection of this half-space with $E_t$, which we will call a half-ellipsoid.

Theorem 7.4. Let $E = E(z, D)$ be an ellipsoid in $\mathbb{R}^n$ and $a \in \mathbb{R}^n$, $a \ne 0$. Consider the half-space $H = \{x \in \mathbb{R}^n : a^T x \ge a^T z\}$, and let
$$z' = z + \frac{1}{n + 1} \frac{D a}{\sqrt{a^T D a}}, \qquad (7.8)$$
$$D' = \frac{n^2}{n^2 - 1} \left( D - \frac{2}{n + 1} \frac{D a a^T D}{a^T D a} \right). \qquad (7.9)$$
Then $D'$ is symmetric and positive definite, and therefore $E' = E(z', D')$ is an ellipsoid. Moreover, $E \cap H \subseteq E'$ and $\mathrm{Vol}(E') < e^{-\frac{1}{2(n + 1)}} \mathrm{Vol}(E)$.
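One step of the update in Theorem 7.4 can be computed by hand for $n = 2$; the starting ellipsoid (the unit ball) and cut direction below are assumed, not from the notes:

```python
import math

# One step of the ellipsoid update (7.8)-(7.9) for n = 2, starting from the
# unit ball (D = I, z = 0) with an assumed cut direction a = (1, 0).
n = 2
z = [0.0, 0.0]
D = [[1.0, 0.0], [0.0, 1.0]]
a = [1.0, 0.0]

Da = [sum(D[i][j] * a[j] for j in range(n)) for i in range(n)]
aDa = sum(a[i] * Da[i] for i in range(n))
z_new = [z[i] + Da[i] / ((n + 1) * math.sqrt(aDa)) for i in range(n)]
D_new = [[(n**2 / (n**2 - 1)) * (D[i][j] - 2 * Da[i] * Da[j] / ((n + 1) * aDa))
          for j in range(n)] for i in range(n)]

# Here z' = (1/3, 0) and D' = diag(4/9, 4/3); since both D and D' are diagonal,
# the volume ratio is sqrt(det D' / det D), which must beat e^{-1/(2(n+1))}.
ratio = math.sqrt(D_new[0][0] * D_new[1][1])
assert ratio < math.exp(-1 / (2 * (n + 1)))
print(z_new, ratio)
```

The new ellipsoid is squeezed along the cut direction $a$ and stretched slightly in the orthogonal direction, yet its volume still shrinks by a constant factor, which is what drives the polynomial iteration bound.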
8 Graphs and Flows

8.1 Introduction

Consider a directed graph (network) $G = (V, E)$, with $V$ the set of vertices and $E \subseteq V \times V$ a set of edges. The graph is undirected if $E$ is symmetric.

8.2 Minimal Cost Flows

(Fill in this stuff from lectures.)
9 Transportation and Assignment Problems
10 Non-Cooperative Games

Theorem 10.1 (von Neumann, 1928). Let $P \in \mathbb{R}^{m \times n}$. Then
$$\max_{x \in X} \min_{y \in Y} p(x, y) = \min_{y \in Y} \max_{x \in X} p(x, y). \qquad (10.1)$$
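The minimax identity (10.1) can be checked numerically on a small game; the payoff matrix below (matching pennies) is an assumed example, not from the notes:

```python
# Grid-search check of the minimax theorem (10.1) on an assumed 2x2 zero-sum
# game, matching pennies: P = [[1, -1], [-1, 1]], whose value is 0.
P = [[1, -1], [-1, 1]]

def p(x, y):
    """Expected payoff p(x, y) = x^T P y for mixed strategies x, y."""
    return sum(x[i] * P[i][j] * y[j] for i in range(2) for j in range(2))

grid = [(k / 100, 1 - k / 100) for k in range(101)]   # mixed strategies on a grid
maximin = max(min(p(x, y) for y in grid) for x in grid)
minimax = min(max(p(x, y) for y in grid) for x in grid)

# Both sides of (10.1) agree (up to grid resolution and rounding) on the value 0,
# attained at x = y = (1/2, 1/2).
assert abs(maximin) < 1e-9 and abs(minimax) < 1e-9
```

Since $p(x, y) = (2x_1 - 1)(2y_1 - 1)$ here, any deviation from the half-half mix is punishable, which is why the value is 0.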
11 Strategic Equilibrium

Definition 11.1. $x \in X$ is a best response to $y \in Y$ if $p(x, y) = \max_{x' \in X} p(x', y)$. $(x, y) \in X \times Y$ is a Nash equilibrium if $x$ is a best response to $y$ and $y$ is a best response to $x$.

Theorem 11.2. $(x, y) \in X \times Y$ is an equilibrium of the matrix game $P$ if and only if
$$\min_{y' \in Y} p(x, y') = \max_{x' \in X} \min_{y' \in Y} p(x', y') \qquad (11.1)$$
and
$$\max_{x' \in X} p(x', y) = \min_{y' \in Y} \max_{x' \in X} p(x', y'). \qquad (11.2)$$

Theorem 11.3. Let $(x, y), (x', y') \in X \times Y$ be equilibria of the matrix game with payoff matrix $P$. Then $p(x, y) = p(x', y')$, and $(x, y')$ and $(x', y)$ are equilibria as well.

Theorem 11.4 (Nash, 1951). Every bimatrix game has an equilibrium.
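The best-response definition specialises neatly to pure strategies, where equilibria can be found by enumeration; the bimatrix game below (prisoner's dilemma payoffs) is an assumed example, not from the notes:

```python
from itertools import product

# Pure-strategy best-response check (Definition 11.1) on an assumed bimatrix
# game with prisoner's-dilemma payoffs; row player gets A, column player gets B.
A = [[3, 0], [5, 1]]   # row player's payoffs
B = [[3, 5], [0, 1]]   # column player's payoffs

def is_equilibrium(i, j):
    best_row = all(A[i][j] >= A[k][j] for k in range(2))   # i is a best response to j
    best_col = all(B[i][j] >= B[i][k] for k in range(2))   # j is a best response to i
    return best_row and best_col

equilibria = [(i, j) for i, j in product(range(2), range(2)) if is_equilibrium(i, j)]
print(equilibria)   # [(1, 1)]: mutual defection is the unique pure equilibrium
```

Theorem 11.4 guarantees an equilibrium in mixed strategies in general; this game happens to have one already in pure strategies.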
12 Bibliography
More informationThe Multiplicative Weights Update method
Chapter 2 The Multiplicative Weights Update method The Multiplicative Weights method is a simple idea which has been repeatedly discovered in fields as diverse as Machine Learning, Optimization, and Game
More informationa 11 x 1 + a 12 x 2 + + a 1n x n = b 1 a 21 x 1 + a 22 x 2 + + a 2n x n = b 2.
Chapter 1 LINEAR EQUATIONS 1.1 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,..., a n, b are given
More informationMethods for Finding Bases
Methods for Finding Bases Bases for the subspaces of a matrix Row-reduction methods can be used to find bases. Let us now look at an example illustrating how to obtain bases for the row space, null space,
More informationDefinition 11.1. Given a graph G on n vertices, we define the following quantities:
Lecture 11 The Lovász ϑ Function 11.1 Perfect graphs We begin with some background on perfect graphs. graphs. First, we define some quantities on Definition 11.1. Given a graph G on n vertices, we define
More informationChapter 4. Duality. 4.1 A Graphical Example
Chapter 4 Duality Given any linear program, there is another related linear program called the dual. In this chapter, we will develop an understanding of the dual linear program. This understanding translates
More informationLinear Programming: Chapter 11 Game Theory
Linear Programming: Chapter 11 Game Theory Robert J. Vanderbei October 17, 2007 Operations Research and Financial Engineering Princeton University Princeton, NJ 08544 http://www.princeton.edu/ rvdb Rock-Paper-Scissors
More informationNo: 10 04. Bilkent University. Monotonic Extension. Farhad Husseinov. Discussion Papers. Department of Economics
No: 10 04 Bilkent University Monotonic Extension Farhad Husseinov Discussion Papers Department of Economics The Discussion Papers of the Department of Economics are intended to make the initial results
More informationSimplex method summary
Simplex method summary Problem: optimize a linear objective, subject to linear constraints 1. Step 1: Convert to standard form: variables on right-hand side, positive constant on left slack variables for
More informationNotes V General Equilibrium: Positive Theory. 1 Walrasian Equilibrium and Excess Demand
Notes V General Equilibrium: Positive Theory In this lecture we go on considering a general equilibrium model of a private ownership economy. In contrast to the Notes IV, we focus on positive issues such
More informationDATA ANALYSIS II. Matrix Algorithms
DATA ANALYSIS II Matrix Algorithms Similarity Matrix Given a dataset D = {x i }, i=1,..,n consisting of n points in R d, let A denote the n n symmetric similarity matrix between the points, given as where
More informationLinear Programming. Solving LP Models Using MS Excel, 18
SUPPLEMENT TO CHAPTER SIX Linear Programming SUPPLEMENT OUTLINE Introduction, 2 Linear Programming Models, 2 Model Formulation, 4 Graphical Linear Programming, 5 Outline of Graphical Procedure, 5 Plotting
More informationby the matrix A results in a vector which is a reflection of the given
Eigenvalues & Eigenvectors Example Suppose Then So, geometrically, multiplying a vector in by the matrix A results in a vector which is a reflection of the given vector about the y-axis We observe that
More information1 if 1 x 0 1 if 0 x 1
Chapter 3 Continuity In this chapter we begin by defining the fundamental notion of continuity for real valued functions of a single real variable. When trying to decide whether a given function is or
More informationMATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix.
MATH 304 Linear Algebra Lecture 18: Rank and nullity of a matrix. Nullspace Let A = (a ij ) be an m n matrix. Definition. The nullspace of the matrix A, denoted N(A), is the set of all n-dimensional column
More information! Solve problem to optimality. ! Solve problem in poly-time. ! Solve arbitrary instances of the problem. !-approximation algorithm.
Approximation Algorithms Chapter Approximation Algorithms Q Suppose I need to solve an NP-hard problem What should I do? A Theory says you're unlikely to find a poly-time algorithm Must sacrifice one of
More informationThe Graphical Method: An Example
The Graphical Method: An Example Consider the following linear program: Maximize 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2 0, where, for ease of reference,
More informationActually Doing It! 6. Prove that the regular unit cube (say 1cm=unit) of sufficiently high dimension can fit inside it the whole city of New York.
1: 1. Compute a random 4-dimensional polytope P as the convex hull of 10 random points using rand sphere(4,10). Run VISUAL to see a Schlegel diagram. How many 3-dimensional polytopes do you see? How many
More informationOptimal shift scheduling with a global service level constraint
Optimal shift scheduling with a global service level constraint Ger Koole & Erik van der Sluis Vrije Universiteit Division of Mathematics and Computer Science De Boelelaan 1081a, 1081 HV Amsterdam The
More informationMATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets.
MATH 304 Linear Algebra Lecture 20: Inner product spaces. Orthogonal sets. Norm The notion of norm generalizes the notion of length of a vector in R n. Definition. Let V be a vector space. A function α
More informationThe Max-Distance Network Creation Game on General Host Graphs
The Max-Distance Network Creation Game on General Host Graphs 13 Luglio 2012 Introduction Network Creation Games are games that model the formation of large-scale networks governed by autonomous agents.
More informationInner Product Spaces
Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and
More informationSpecial Situations in the Simplex Algorithm
Special Situations in the Simplex Algorithm Degeneracy Consider the linear program: Maximize 2x 1 +x 2 Subject to: 4x 1 +3x 2 12 (1) 4x 1 +x 2 8 (2) 4x 1 +2x 2 8 (3) x 1, x 2 0. We will first apply the
More informationLinear Programming Notes VII Sensitivity Analysis
Linear Programming Notes VII Sensitivity Analysis 1 Introduction When you use a mathematical model to describe reality you must make approximations. The world is more complicated than the kinds of optimization
More information