Linear Programming I



November 30, 2003

1 Introduction

In the VCR/guns/nuclear bombs/napkins/star wars/professors/butter/mice problem, the benevolent dictator Bigus Piguinus of the south Antarctica penguins (having 24 million penguins under his control) has to decide how to allocate the empire's resources to the maximal benefit of his penguins. In particular, he has to decide how to allocate the money in next year's budget. For example, buying a nuclear bomb has a tremendous positive effect on security (the ability to destroy yourself completely together with your enemy is considered to induce a peaceful feeling of security in most people). Guns, on the other hand, have a lesser effect. Piguina (the state) has to supply a certain level of security. Thus, the allocation should be such that

    x_gun + 1000 x_bomb >= 1000,

where x_gun is the number of guns constructed and x_bomb is the number of nuclear bombs constructed. On the other hand,

    1000 x_gun + 1000000 x_bomb <= x_security,

where x_security is the total amount Piguina is willing to spend on security, 1000 is the price of producing a single gun, and 1000000 is the price of manufacturing one nuclear bomb. There are many other constraints of this type, and Bigus Piguinus would like to satisfy them all while minimizing the total money allocated to such spending (the less spent on the budget, the larger the tax cut). More formally, we have a large number of variables x_1, ..., x_n and a large system of linear inequalities:

    a_11 x_1 + ... + a_1n x_n <= b_1
    a_21 x_1 + ... + a_2n x_n <= b_2
    ...
    a_m1 x_1 + ... + a_mn x_n <= b_m
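To make the formal statement concrete, here is a minimal sketch (in Python; the function names are mine, not from the notes) that checks whether a candidate assignment x satisfies every inequality sum_j a_ij x_j <= b_i, and evaluates a linear objective. A ">=" constraint such as the security one is encoded by negating both sides.

```python
def is_feasible(A, b, x):
    """Check whether the assignment x satisfies every inequality
    sum_j A[i][j] * x[j] <= b[i]."""
    return all(
        sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i
        for row, b_i in zip(A, b)
    )

def objective(c, x):
    """Evaluate the objective function c_1 x_1 + ... + c_n x_n."""
    return sum(c_j * x_j for c_j, x_j in zip(c, x))

# The security constraint from the penguin story,
# x_gun + 1000 x_bomb >= 1000, rewritten as -x_gun - 1000 x_bomb <= -1000:
A = [[-1, -1000]]
b = [-1000]
print(is_feasible(A, b, [0, 1]))    # one bomb suffices -> True
print(is_feasible(A, b, [500, 0]))  # 500 guns do not   -> False
```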

We would like to decide if there is an assignment of values to x_1, ..., x_n for which all these inequalities are satisfied. Since there might be an infinite number of such solutions, we ask for the solution that maximizes the quantity

    c_1 x_1 + ... + c_n x_n.

This quantity is known as the objective function of the linear program. Question: How do we compute such a solution? This is known as linear programming.

1.1 History

Linear programming can be traced back to the early 19th century. It really started in 1939, when L. V. Kantorovich noticed the importance of a certain type of linear programming problem. Unfortunately, for several years Kantorovich's work was unknown in the west and unnoticed in the east. In 1947, Dantzig invented the simplex method for solving the LP problems arising in US Air Force planning. T. C. Koopmans, also in 1947, showed that LPs provide the right model for the analysis of classical economic theories. In 1975, both Koopmans and Kantorovich received the Nobel Prize in Economics. Dantzig probably did not get it because his work was too mathematical.

2 On the geometry of the simplex algorithm

Assume that we have an LP with d variables and m constraints, that we know a vertex of the feasible region, and that we have the direction of the objective function. For example (as the drawing in the original notes shows), if we are at a vertex p, we can improve the situation by moving to an adjacent vertex q with a better objective value. In general, a vertex of the polytope is defined by d constraints (since we are in d dimensions). Every two vertices u and v that have d - 1 common constraints are adjacent, in the sense that there is an edge on the boundary of the polytope connecting them. To see this, let l be the intersection of the boundaries of those d - 1 constraints. Clearly, l is a line, and its intersection with the feasible region is a segment whose endpoints are u and v. As such, if we are at u but the vertex v has a better value for the objective function, we move our current solution there, updating the current set of constraints to reflect this. So, essentially, we have a graph G whose nodes are the vertices of the polytope, where two vertices are connected if they have d - 1 common constraints, and we walk on this graph, all the time moving from the current vertex to a neighbor with a better value. In the end, we reach the global optimum.

3 The simplex algorithm

3.1 LP - making all variables non-negative

We are given an LP:

    maximize    sum_{j=1}^n c_j x_j
    subject to  sum_{j=1}^n a_ij x_j <= b_i    for i = 1, 2, ..., m,

where any variable can take any real value. We would like to rewrite it so that every variable is non-negative. This is easy to do, by replacing a variable x_i by x_i^+ - x_i^-. Namely, x_i = x_i^+ - x_i^-, where x_i^+ >= 0 and x_i^- >= 0. Thus, the (trivial) constraint 2x + y <= 5 would be replaced by

    2x^+ - 2x^- + y^+ - y^- <= 5
    x^+ >= 0,  x^- >= 0,  y^+ >= 0,  y^- >= 0.

Thus, we have:

Lemma 3.1 Given an instance I of LP, one can rewrite it into an equivalent LP in which all the variables must be non-negative.
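The rewriting of Lemma 3.1 is purely mechanical: each coefficient of x_i splits into a pair of coefficients for x_i^+ and x_i^-. A minimal sketch (the function name is mine):

```python
def split_free_variables(c, A):
    """Rewrite an LP with free variables into one where every variable is
    non-negative, replacing each x_j by x_j_plus - x_j_minus.

    Each objective coefficient c_j becomes the pair (c_j, -c_j), and each
    constraint coefficient a_ij becomes the pair (a_ij, -a_ij)."""
    c2 = [v for cj in c for v in (cj, -cj)]
    A2 = [[v for aij in row for v in (aij, -aij)] for row in A]
    return c2, A2

# The trivial constraint 2x + y <= 5 from the notes becomes
# 2x+ - 2x- + y+ - y- <= 5, with all four new variables >= 0:
c2, A2 = split_free_variables([0, 0], [[2, 1]])
print(A2)  # [[2, -2, 1, -1]]
```

Any solution of the new LP maps back to the original by x_j = x_j^+ - x_j^-, and conversely, so the two LPs are equivalent.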

3.2 Standard form

    maximize    sum_{j=1}^n c_j x_j
    subject to  sum_{j=1}^n a_ij x_j <= b_i    for i = 1, 2, ..., m,
                x_j >= 0                       for j = 1, ..., n.

Setting

    c = (c_1, ..., c_n)^T,    x = (x_1, ..., x_n)^T,

    A = ( a_11  a_12  ...  a_1n )
        ( a_21  a_22  ...  a_2n )
        ( ...                   )
        ( a_m1  a_m2  ...  a_mn )

this becomes:

    maximize    c^T x
    subject to  A x <= b
                x >= 0.

In the following, we are going to do a long, long sequence of rewritings.

Slack form. One can rewrite an LP instance so that every inequality becomes an equality, and all variables must be non-negative:

    maximize    c^T x
    subject to  A x = b
                x >= 0.

To do that, we introduce new variables (slack variables), rewriting an inequality

    sum_{i=1}^n a_i x_i <= b
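In matrix terms, adding one slack variable per inequality appends an identity block to the constraint matrix, turning A x <= b into [A | I] (x, s)^T = b with s >= 0. A minimal sketch (the function name is mine):

```python
def add_slack_variables(A):
    """Turn the system A x <= b into equalities by appending one slack
    variable per inequality:  a_i . x + x_{n+i} = b_i,  x_{n+i} >= 0.
    Returns the extended coefficient matrix [A | I]; b is unchanged."""
    m = len(A)
    return [row + [1 if i == j else 0 for j in range(m)]
            for i, row in enumerate(A)]

# One inequality in two variables gains one slack column:
print(add_slack_variables([[2, 3]]))  # [[2, 3, 1]]
```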

by

    x_{n+1} = b - sum_{i=1}^n a_i x_i,
    x_{n+1} >= 0.

In fact, we now have a special variable for each inequality in the LP. These variables are special, and are called basic variables. The variables on the right side are the nonbasic variables (original, isn't it?). This form is called slack form. It is defined by the following parameters:

    N - set of indices of nonbasic variables
    B - set of indices of basic variables
    n = |N| - number of original variables
    m = |B| - number of basic variables (i.e., number of inequalities)
    A - the matrix of coefficients
    b, c - two vectors of constants
    v - a constant (the current offset of the objective function)
    N ∪ B = {1, ..., n + m}

The slack form itself is a tuple (N, B, A, b, c, v), where the objective function to maximize is

    z = v + sum_{j in N} c_j x_j

and the constraints are

    x_i = b_i - sum_{j in N} a_ij x_j    for i in B.

Of course, all variables x_i must be non-negative (i.e., x_i >= 0).

Exercise 3.2 Show that any linear program can be transformed into an equivalent slack form.

Example 3.3 Consider the slack form

    z   = 28 - x_3/6  - x_5/6  - 2x_6/3
    x_1 = 8  + x_3/6  + x_5/6  - x_6/3
    x_2 = 4  - 8x_3/3 - 2x_5/3 + x_6/3

    x_4 = 18 - x_3/2 + x_5/2

Here we have B = {1, 2, 4} and N = {3, 5, 6}, with

    b = (b_1, b_2, b_4)^T = (8, 4, 18)^T,

    A = ( a_13  a_15  a_16 )   ( -1/6  -1/6   1/3 )
        ( a_23  a_25  a_26 ) = (  8/3   2/3  -1/3 )
        ( a_43  a_45  a_46 )   (  1/2  -1/2   0   )

    c = (c_3, c_5, c_6)^T = (-1/6, -1/6, -2/3)^T,    v = 28.

Note that the indices depend on N and B. Note also that the entries of A are the negations of the coefficients as they appear in the slack form.

3.3 Geometric interpretation of linear programming

Each inequality defines a halfspace. The feasible region is the intersection of those halfspaces. This region is known as a convex polytope, or just a polytope.

3.4 A concrete example

    maximize    5x_1 + 4x_2 + 3x_3
    subject to  2x_1 + 3x_2 +  x_3 <= 5
                4x_1 +  x_2 + 2x_3 <= 11
                3x_1 + 4x_2 + 2x_3 <= 8
                x_1, x_2, x_3 >= 0

Next, we introduce slack variables, for example rewriting 2x_1 + 3x_2 + x_3 <= 5 as the constraints w_1 >= 0 and w_1 = 5 - 2x_1 - 3x_2 - x_3:

    maximize z = 5x_1 + 4x_2 + 3x_3
    subject to
        w_1 = 5  - 2x_1 - 3x_2 -  x_3
        w_2 = 11 - 4x_1 -  x_2 - 2x_3
        w_3 = 8  - 3x_1 - 4x_2 - 2x_3
        x_1, x_2, x_3, w_1, w_2, w_3 >= 0

where w_1, w_2, w_3 are the slack variables. Note that they are currently also the basic variables. We start from the trivial solution of the slack representation: x_1 = x_2 = x_3 = 0, giving w_1 = 5, w_2 = 11 and w_3 = 8. This is a feasible solution, with z = 0.
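The trivial solution can be checked mechanically: setting the nonbasic variables to zero, each basic variable simply takes the value b_i. A small sketch for the concrete example above (the function name is mine):

```python
# The concrete example's constraints in slack form: w_i = b_i - sum_j a_ij x_j.
b = [5, 11, 8]
A = [[2, 3, 1],
     [4, 1, 2],
     [3, 4, 2]]
c = [5, 4, 3]

def slack_values(A, b, x):
    """Values of the basic (slack) variables for a nonbasic assignment x."""
    return [bi - sum(aij * xj for aij, xj in zip(row, x))
            for row, bi in zip(A, b)]

x = [0, 0, 0]                               # the trivial solution
w = slack_values(A, b, x)
z = sum(cj * xj for cj, xj in zip(c, x))
print(w, z)  # [5, 11, 8] 0 -- feasible, since all slacks are non-negative
```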

How can we improve this solution? For x_1 = x_2 = x_3 = 0 we have w_1 = 5, w_2 = 11, w_3 = 8, and z = 0. To improve the value of the objective function, let us increase the value of x_1. The good news: the objective function increases. The bad news: the solution might stop being feasible. The solution: increase x_1 as much as possible. So, with x_2 = x_3 = 0, we have

    w_1 = 5  - 2x_1 - 3x_2 -  x_3 = 5  - 2x_1
    w_2 = 11 - 4x_1 -  x_2 - 2x_3 = 11 - 4x_1
    w_3 = 8  - 3x_1 - 4x_2 - 2x_3 = 8  - 3x_1

We want to increase x_1 as much as possible, as long as w_1, w_2, w_3 remain non-negative. Formally, w_1 = 5 - 2x_1 >= 0, w_2 = 11 - 4x_1 >= 0, and w_3 = 8 - 3x_1 >= 0. This implies x_1 <= 2.5, x_1 <= 11/4 = 2.75, and x_1 <= 8/3 = 2.66... So, by how much can we increase the value of x_1? We must take the strictest condition, namely x_1 = 2.5. Plugging this into the system, the current solution becomes x_1 = 2.5, x_2 = 0, x_3 = 0, with w_1 = 0, w_2 = 1, w_3 = 0.5, and the objective function is z = 5x_1 + 4x_2 + 3x_3 = 12.5. Namely, we found a better solution. What happened? One zero nonbasic variable (i.e., x_1) became non-zero, and one basic variable (i.e., w_1) became zero.
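Taking the strictest condition is the so-called ratio test, and it can be sketched in a few lines (the function name is mine). Only rows in which the entering variable has a positive coefficient limit the increase; if no row does, the LP is unbounded.

```python
def max_increase(A, b, j):
    """Ratio test: the largest value the entering variable x_j can take
    while every basic variable b_i - a_ij * x_j stays non-negative.
    Only rows with a positive coefficient a_ij constrain the increase."""
    ratios = [bi / row[j] for row, bi in zip(A, b) if row[j] > 0]
    return min(ratios) if ratios else float("inf")  # inf => LP unbounded

A = [[2, 3, 1],
     [4, 1, 2],
     [3, 4, 2]]
b = [5, 11, 8]
print(max_increase(A, b, 0))  # min(5/2, 11/4, 8/3) = 2.5
```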

    Simplex( LP: a linear system of inequalities )
        Transform LP into slack form; let L be the resulting slack form.
        Compute L' <- Feasible(L)    (as described in Section 3.5)
        x <- StartSolution(L')
        x' <- SimplexInner(L', x)    (*)
        if the objective value of x' is > 0 then return "no solution"
        x'' <- SimplexInner(L, x')
        return x''

    Figure 1: The simplex algorithm.

Let us rewrite the first equality (the one for w_1) so that x_1 is on the left side:

    x_1 = 2.5 - 0.5 w_1 - 1.5 x_2 - 0.5 x_3

The problem is that x_1 still appears on the right side of the equations for w_2 and w_3. How do we solve this? We substitute x_1 into the other equations, and get:

    z   = 12.5 - 2.5 w_1 - 3.5 x_2 + 0.5 x_3
    x_1 = 2.5  - 0.5 w_1 - 1.5 x_2 - 0.5 x_3
    w_2 = 1    + 2 w_1   + 5 x_2
    w_3 = 0.5  + 1.5 w_1 + 0.5 x_2 - 0.5 x_3

Note that we obtain the current solution by setting w_1 = x_2 = x_3 = 0. We repeat the above procedure for x_3. (Why x_3? Because the coefficient of x_3 in the objective function is positive.) Checking carefully, it follows that the maximum we can increase x_3 to is 1 (why?). We rewrite the last equality for x_3, and get

    x_3 = 1 + 3 w_1 + x_2 - 2 w_3.

Substituting this into the LP, we get:

    z   = 13 - w_1   - 3 x_2 - w_3
    x_1 = 2  - 2 w_1 - 2 x_2 + w_3
    w_2 = 1  + 2 w_1 + 5 x_2
    x_3 = 1  + 3 w_1 + x_2   - 2 w_3

What should we do next? Nothing: we have reached the optimal solution. Indeed, all the coefficients in the objective function are negative (or zero). As such, the trivial solution (all nonbasic variables set to zero) is maximal, since they must all be non-negative, and increasing their values only decreases the value of the objective function. So we had better stop.
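The hand computation above - pick a nonbasic variable with a positive objective coefficient, apply the ratio test, pivot, repeat - can be sketched as a small tableau implementation. This is not the notes' SimplexInner procedure (its details are deferred to the next lecture) but a minimal illustration of the same pivoting process; the function name and representation are mine, and it assumes b >= 0 so the trivial solution is feasible. Exact rational arithmetic avoids rounding issues.

```python
from fractions import Fraction

def simplex(A, b, c):
    """Tableau simplex for: maximize c.x subject to A x <= b, x >= 0,
    assuming b >= 0 (so the all-zeros solution is feasible).
    Returns (optimal value, solution vector)."""
    m, n = len(A), len(A[0])
    # Rows of the dictionary: variables 0..n-1 are the originals,
    # n..n+m-1 the slacks; the last column is the right-hand side.
    rows = [[Fraction(v) for v in row] + [Fraction(0)] * m + [Fraction(bi)]
            for row, bi in zip(A, b)]
    for i in range(m):
        rows[i][n + i] = Fraction(1)
    obj = [Fraction(v) for v in c] + [Fraction(0)] * (m + 1)
    basis = list(range(n, n + m))
    while True:
        # Entering variable: first nonbasic with a positive coefficient.
        j = next((k for k in range(n + m) if obj[k] > 0), None)
        if j is None:
            break  # no improving variable: optimal
        # Ratio test over rows where x_j's coefficient is positive.
        cands = [(rows[i][-1] / rows[i][j], i)
                 for i in range(m) if rows[i][j] > 0]
        if not cands:
            raise ValueError("LP is unbounded")
        _, r = min(cands)
        # Pivot: x_j enters the basis, basis[r] leaves.
        piv = rows[r][j]
        rows[r] = [v / piv for v in rows[r]]
        for i in range(m):
            if i != r and rows[i][j] != 0:
                f = rows[i][j]
                rows[i] = [v - f * w for v, w in zip(rows[i], rows[r])]
        f = obj[j]
        obj = [v - f * w for v, w in zip(obj, rows[r])]
        basis[r] = j
    x = [Fraction(0)] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = rows[i][-1]
    return -obj[-1], x

# The concrete example from the notes:
val, x = simplex([[2, 3, 1], [4, 1, 2], [3, 4, 2]], [5, 11, 8], [5, 4, 3])
print(val, x)  # optimal value 13 at x = (2, 0, 1), matching the pivots above
```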

3.5 Starting somewhere

OK. We have transformed a linear programming problem into slack form. Intuitively, what the simplex algorithm is going to do is start from a feasible solution and walk around the feasible region until it reaches the best possible point as far as the objective function is concerned. But maybe the linear program L is not feasible at all (i.e., no solution exists). Let L be (in slack form):

    maximize    z = v + sum_{j in N} c_j x_j
    subject to  x_i = b_i - sum_{j in N} a_ij x_j    for i in B,

where all variables x_i must be non-negative (i.e., x_i >= 0). Clearly, if we set x_i = 0 for all i in N, then this determines the values of the basic variables. If they are all non-negative, we are done, as we have found a feasible solution. The problem is that some of them might be negative. We generate a new LP problem L' = Feasible(L):

    minimize    x_0
    subject to  x_i = x_0 + b_i - sum_{j in N} a_ij x_j    for i in B,

where all variables, including x_0, must be non-negative. Clearly, if we pick x_j = 0 for all j in N (all the nonbasic variables), and a large enough value for x_0, then all the basic variables are non-negative, and as such we have found a feasible solution for L'. Let StartSolution(L') denote this easily computable feasible solution. We can now use the SimplexInner algorithm to find the optimal solution to L' (because we have a feasible solution to start from!).

Lemma 3.4 L is feasible if and only if the optimal objective value of L' is zero.

Proof: A feasible solution to L is immediately an optimal solution to L' with x_0 = 0, and vice versa: given a solution to L' with x_0 = 0, we can transform it into a feasible solution to L by removing x_0.

The simplex algorithm is presented in Figure 1. Thus, in the following, we have to describe SimplexInner - a procedure that solves an LP in slack form when we start from a feasible solution.
One technicality ignored above is that the starting solution we have for L', generated by StartSolution(L'), is not legal as far as the slack form is concerned, because the nonbasic variable x_0 is assigned a non-zero value. However, this can easily be resolved by immediately pivoting on x_0 when we execute (*) in Figure 1. Namely, we first try to decrease x_0 as much as possible. Details are delayed until the next lecture, when we describe the SimplexInner algorithm in more detail.
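The observation that a large enough x_0 makes all basic variables non-negative can be made concrete. With every nonbasic variable set to zero, each basic variable in L' equals x_0 + b_i, so x_0 = max(0, -min_i b_i) suffices. A small sketch of StartSolution (the function name is mine, not the notes'):

```python
def start_solution(b):
    """Phase I starting value for the auxiliary variable x0 in
    L' : minimize x0 s.t. x_i = x0 + b_i - sum_j a_ij x_j, all x_i >= 0.
    With every nonbasic x_j = 0, each basic variable equals x0 + b_i,
    so x0 = max(0, -min_i b_i) makes them all non-negative."""
    x0 = max(0, -min(b))
    return x0, [x0 + bi for bi in b]

# A slack form with a negative b_i has an infeasible trivial solution;
# raising x0 to 4 repairs it:
x0, basics = start_solution([5, -4, 2])
print(x0, basics)  # 4 [9, 0, 6]
```

If the optimum of L' then drives x_0 back to zero, Lemma 3.4 yields a feasible starting solution for L itself.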