MS&E310 Lecture Note #05: The Simplex Method
Yinyu Ye, Department of Management Science and Engineering, Stanford University, Stanford, CA 94305, U.S.A. http://www.stanford.edu/~yyye (LY, Chapters 2.3-2.5, 3.1-3.4)

Geometry of Linear Programming (LP)
Consider the following LP problem:
maximize x1 + 2x2
subject to x1 <= 1, x2 <= 1, x1 + x2 <= 1.5, x1, x2 >= 0.

LP Geometry Depicted in Two-Variable Space
Each corner point has a normal direction cone. If the direction of c is contained in the normal cone of a corner point, then that point is optimal.
Figure 1: Feasible region bounded by constraints with normals a1, ..., a5, together with objective contours.

Solution (decision, point): any specification of values for all decision variables, regardless of whether it is a desirable or even allowable choice.
Feasible Solution: a solution for which all the constraints are satisfied.
Feasible Region (constraint set, feasible set): the collection of all feasible solutions.
Interior, Boundary, and Face.
Extreme Point or Corner Point or Vertex.
Objective Function Contour: iso-profit (or iso-cost) line.
Optimal Solution: a feasible solution that has the most favorable objective value.
Optimal Objective Value: the value of the objective function evaluated at an optimal solution.
Active Constraint: a binding constraint.

Definition of Face and Extreme Points
Let P be a polyhedron in R^n. Then F is a face of P if and only if there is a vector b for which F is the set of points attaining max{b^T y : y in P}, provided this maximum is finite. A polyhedron has only finitely many faces; each face is a nonempty polyhedron.
A vector y in P is an extreme point or a vertex of P if y is not a convex combination of two distinct feasible points.

Theory of LP
All LP problems fall into one of the following three classes:
Problem is infeasible: the feasible region is empty.
Problem is unbounded: the feasible region is unbounded in the optimizing direction.
Problem is feasible and bounded: there exists an optimal solution or optimizer. There may be a unique optimizer or multiple optimizers. All optimizers lie on a face of the feasible region, and there is always at least one corner (extreme-point) optimizer if that face has a corner. If a corner point is no worse than all its adjacent or neighboring corners, then it is optimal.

History of the Simplex Method
George B. Dantzig's Simplex Method for LP stands as one of the most significant algorithmic achievements of the 20th century. It is now over 50 years old and still going strong.
The basic idea of the simplex method is to confine the search to corner points of the feasible region (of which there are only finitely many) in a most intelligent way. In contrast, interior-point methods move in the interior of the feasible region, hoping to bypass many corner points on the boundary of the region.
The key for the simplex method is to make computers see corner points; the key for interior-point methods is to stay in the interior of the feasible region.

From Geometry to Algebra
How to make a computer recognize a corner point?
How to make a computer tell that two corners are neighboring?
How to make a computer terminate and declare optimality?
How to make a computer identify a better neighboring corner?

LP Standard Form
minimize c1 x1 + c2 x2 + ... + cn xn
subject to a11 x1 + a12 x2 + ... + a1n xn = b1,
a21 x1 + a22 x2 + ... + a2n xn = b2,
...
am1 x1 + am2 x2 + ... + amn xn = bm,
xj >= 0, j = 1, 2, ..., n.
Equivalently,
(LP) minimize c^T x subject to Ax = b, x >= 0.

Reduction to the Standard Form
Eliminating free variables: substitute with the difference of two nonnegative variables, x = x+ - x-, where x+, x- >= 0.
Eliminating inequalities: add slack variables,
a^T x <= b becomes a^T x + s = b, s >= 0,
a^T x >= b becomes a^T x - s = b, s >= 0.
Eliminating upper bounds: move them to the constraints, e.g., x <= 3 becomes x + s = 3, s >= 0.

Eliminating nonzero lower bounds: shift the decision variables, e.g., for x >= 3 substitute x := x - 3 so the new variable is nonnegative.
Change max c^T x to min -c^T x.
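
As a small illustration (not from the notes), here is a sketch of this reduction in Python with numpy; the helper name to_standard_form is mine, and it handles the inequality form max c^T x, Ax <= b, x >= 0 used for the example that follows, by appending slack variables and negating the objective.

import numpy as np

def to_standard_form(c, A, b):
    """Convert  max c^T x  s.t.  A x <= b, x >= 0
    into  min c_std^T z  s.t.  A_std z = b, z >= 0  by appending slack variables."""
    m = A.shape[0]
    A_std = np.hstack([A, np.eye(m)])          # one slack column per inequality row
    c_std = np.concatenate([-c, np.zeros(m)])  # max c^T x  becomes  min -c^T x
    return c_std, A_std, b

# The inequality form of the example in these notes: max x1 + 2 x2, x1 <= 1, x2 <= 1, x1 + x2 <= 1.5
c_std, A_std, b = to_standard_form(np.array([1., 2.]),
                                   np.array([[1., 0.], [0., 1.], [1., 1.]]),
                                   np.array([1., 1., 1.5]))
print(c_std)   # [-1. -2.  0.  0.  0.]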

An LP Example in Standard Form
minimize -x1 - 2x2
subject to x1 + x3 = 1
x2 + x4 = 1
x1 + x2 + x5 = 1.5
x1, x2, x3, x4, x5 >= 0.
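
For concreteness, here is a sketch of this example's data as numpy arrays (the names c, A, b are mine); the computations on later slides can be checked against it.

import numpy as np

# Standard-form data for the example: min c^T x  s.t.  A x = b, x >= 0
c = np.array([-1., -2., 0., 0., 0.])
A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.]])
b = np.array([1., 1., 1.5])

# For instance, the point (x1, x2) = (0, 0) together with its slack values is feasible:
x = np.array([0., 0., 1., 1., 1.5])
print(np.allclose(A @ x, b), np.all(x >= 0), c @ x)   # True True 0.0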

Basic and Basic Feasible Solution (BFS)
In the LP standard form, suppose we select m linearly independent columns of A, denoted by the index set B, and solve A_B x_B = b for the m-vector x_B. By setting the variables x_N of x corresponding to the remaining columns of A equal to zero, we obtain a solution x of Ax = b. Then x is said to be a basic solution to (LP) with respect to the basis A_B. The components of x_B are called basic variables and those of x_N are called nonbasic variables.
Two basic solutions are adjacent if they differ by exactly one basic (or nonbasic) variable.
If a basic solution satisfies x_B >= 0, then x is called a basic feasible solution (BFS), and it is an extreme point of the feasible region. If one or more components of x_B have value zero, x is said to be degenerate.
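
A sketch of how a computer "sees" a basic solution, given the example data and a chosen index set B (0-based indices; the function name basic_solution is mine):

import numpy as np

def basic_solution(A, b, B):
    """Return the basic solution of A x = b whose basic index set is B (0-based)."""
    x = np.zeros(A.shape[1])
    x[B] = np.linalg.solve(A[:, B], b)    # solve A_B x_B = b; nonbasic variables stay 0
    return x

A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.]])
b = np.array([1., 1., 1.5])

x = basic_solution(A, b, [0, 1, 2])       # basis {x1, x2, x3}
print(x, bool(np.all(x >= 0)))            # [0.5 1.  0.5 0.  0. ] True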

Geometry vs Algebra
Theorem 1. Consider the polyhedron in the standard LP form. Then a basic feasible solution and a corner point are equivalent; the former is algebraic and the latter is geometric.
In the LP example:
Basis    (x1, x2)    Feasible?
3,4,5    (0, 0)      yes
1,4,5    (1, 0)      yes
3,4,1    (1.5, 0)    no
3,2,5    (0, 1)      yes
3,4,2    (0, 1.5)    no
1,2,3    (0.5, 1)    yes
1,2,4    (1, 0.5)    yes
1,2,5    (1, 1)      no
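
The table can be reproduced mechanically by enumerating all column triples of A; a sketch under the same example data (variable names are mine):

import numpy as np
from itertools import combinations

A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.]])
b = np.array([1., 1., 1.5])

# Try every choice of 3 columns; keep those whose submatrix A_B is nonsingular (a basis).
for cols in combinations(range(5), 3):
    cols = list(cols)
    A_B = A[:, cols]
    if abs(np.linalg.det(A_B)) < 1e-12:
        continue                                    # linearly dependent columns: not a basis
    x = np.zeros(5)
    x[cols] = np.linalg.solve(A_B, b)
    status = "feasible" if np.all(x >= -1e-12) else "infeasible"
    print([j + 1 for j in cols], (round(x[0], 2), round(x[1], 2)), status)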

Theorem 2 (The Fundamental Theorem of LP in Algebraic Form). Given (LP) and (LD) where A has full row rank m: i) if there is a feasible solution, there is a basic feasible solution; ii) if there is an optimal solution, there is an optimal basic solution. (Text p. 38.)
The simplex method proceeds from one BFS (a corner point of the feasible region) to an adjacent or neighboring one, in such a way as to continuously improve the value of the objective function.

Rounding to a Basic Feasible Solution for LP
Step 1: Start with any feasible solution x^0 and, without loss of generality, assume x^0 > 0. Let k = 0 and A^0 = A.
Step 2: Find any d != 0 with A^k d = 0, and let x^{k+1} := x^k + alpha*d, where alpha is chosen so that x^{k+1} >= 0 and at least one component of x^{k+1} equals 0.
Step 3: Eliminate the variable(s) in x^{k+1} and column(s) in A^k corresponding to x^{k+1}_j = 0, and let the new matrix be A^{k+1}.
Step 4: Return to Step 2.
The process terminates when Step 2 finds no nonzero d, that is, when the remaining columns are linearly independent; the current x is then a basic feasible solution.
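
A sketch of this rounding ("purification") procedure with numpy and scipy; the helper name round_to_bfs and the tolerance are mine, and scipy.linalg.null_space is used to find a direction d with A^k d = 0.

import numpy as np
from scipy.linalg import null_space

def round_to_bfs(A, b, x, tol=1e-9):
    """Drive a feasible x (A x = b, x >= 0) to a basic feasible solution."""
    x = x.astype(float).copy()
    while True:
        support = np.where(x > tol)[0]        # indices of the currently positive variables
        N = null_space(A[:, support])
        if N.shape[1] == 0:                   # columns independent: x is already basic
            return x
        d = np.zeros_like(x)
        d[support] = N[:, 0]                  # a direction with A d = 0, zero off the support
        if np.all(d >= -tol):                 # ensure some component will decrease
            d = -d
        neg = d < -tol
        alpha = np.min(-x[neg] / d[neg])      # largest step keeping x + alpha d >= 0
        x = np.maximum(x + alpha * d, 0.0)    # at least one positive variable hits 0

A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.]])
b = np.array([1., 1., 1.5])
x0 = np.array([0.5, 0.5, 0.5, 0.5, 0.5])      # a strictly positive feasible point
print(round_to_bfs(A, b, x0))                 # a BFS of the example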

Neighboring Basic Solutions
Two basic solutions are neighboring or adjacent if they differ by exactly one basic (or nonbasic) variable.
A basic feasible solution is optimal if no better neighboring basic feasible solution exists.

Optimality Test
Consider the BFS (x1, x2, x3, x4, x5) = (0, 0, 1, 1, 1.5). It is NOT optimal for the problem
minimize -x1 - 2x2
subject to x1 + x3 = 1
x2 + x4 = 1
x1 + x2 + x5 = 1.5
x1, x2, x3, x4, x5 >= 0;

but it IS optimal for the problem below:
minimize x1 + 2x2
subject to x1 + x3 = 1
x2 + x4 = 1
x1 + x2 + x5 = 1.5
x1, x2, x3, x4, x5 >= 0.
At a BFS, when the objective coefficients of all the basic variables are zero and those of all the nonbasic variables are nonnegative, the BFS is optimal.

LP Canonical Form
An LP in the standard form is said to be in canonical form at a BFS if:
the objective coefficients of all the basic variables are zero, that is, the objective function contains only nonbasic variables; and
the constraint matrix for the basic variables forms an identity matrix (after a permutation of columns if necessary).
If the LP is in canonical form, it is easy to see whether or not the current BFS is optimal. Is there a way to transform the original LP problem into an equivalent LP in canonical form? That is, is there a transformation under which the feasible set and the optimal solution set remain the same as in the original LP?

Transforming to Canonical Form
Suppose the basis of a BFS is A_B and the rest of A is A_N. One can transform the equality constraint to
A_B^{-1} A x = A_B^{-1} b   (so that x_B = A_B^{-1} b - A_B^{-1} A_N x_N).
That is, we express x_B in terms of x_N, the degree-of-freedom variables. Then the objective function becomes
c^T x = c_B^T x_B + c_N^T x_N = c_B^T A_B^{-1} b - c_B^T A_B^{-1} A_N x_N + c_N^T x_N = c_B^T A_B^{-1} b + (c_N^T - c_B^T A_B^{-1} A_N) x_N.
The transformed problem is then in canonical form with respect to the basis A_B, and it is equivalent to the original LP: its feasible set and optimal solution set do not change.

Equivalent LP
By ignoring the constant term in the objective function, we have an equivalent intermediate LP problem in canonical form:
minimize r^T x
subject to Ā x = b̄, x >= 0,
where r_B := 0, r_N := c_N - A_N^T (A_B^{-1})^T c_B, Ā := A_B^{-1} A, and b̄ := A_B^{-1} b.
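
A sketch of this transformation in numpy, reusing the example data (the function name canonical_form is mine):

import numpy as np

def canonical_form(c, A, b, B):
    """Return (r, A_bar, b_bar) for the basic index set B (0-based)."""
    A_B_inv = np.linalg.inv(A[:, B])
    A_bar = A_B_inv @ A                   # A_bar := A_B^{-1} A
    b_bar = A_B_inv @ b                   # b_bar := A_B^{-1} b
    y = np.linalg.solve(A[:, B].T, c[B])  # A_B^T y = c_B
    r = c - A.T @ y                       # reduced costs; r[B] == 0
    return r, A_bar, b_bar

c = np.array([-1., -2., 0., 0., 0.])
A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.]])
b = np.array([1., 1., 1.5])

r, A_bar, b_bar = canonical_form(c, A, b, [2, 3, 4])   # the initial basis {3, 4, 5}
print(r)   # [-1. -2.  0.  0.  0.]  -> negative entries, so this BFS is not optimal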

Optimality Test Revisited
The vector r in R^n defined by
r = c - Ā^T c_B = c - A^T (A_B^{-1})^T c_B
is called the reduced cost coefficient vector. Often we represent (A_B^{-1})^T c_B by the shadow price vector y. That is, A_B^T y = c_B (so that r = c - A^T y).
Theorem 3. If r_N >= 0 (equivalently, r >= 0) at a BFS with basic variable set B, then the BFS x is an optimal basic solution and A_B is an optimal basis.

In the LP example, let the basic variable set be B = {1, 2, 3}, so that
A_B =
1 0 1
0 1 0
1 1 0
and
A_B^{-1} =
 0 -1  1
 0  1  0
 1  1 -1
Then y^T = (0, -1, -1) and r^T = (0, 0, 0, 1, 1) >= 0.
The corresponding optimal corner solution is (x1, x2) = (0.5, 1) in the original problem.
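
These quantities can be checked numerically; a small sketch (variable names are mine):

import numpy as np

c = np.array([-1., -2., 0., 0., 0.])
A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.]])
B = [0, 1, 2]                              # the basis {1, 2, 3} in 0-based indexing

A_B_inv = np.linalg.inv(A[:, B])
y = np.linalg.solve(A[:, B].T, c[B])       # shadow prices: A_B^T y = c_B
r = c - A.T @ y                            # reduced cost vector

print(A_B_inv)   # rows [ 0. -1.  1.], [ 0.  1.  0.], [ 1.  1. -1.]
print(y)         # [ 0. -1. -1.]
print(r)         # [0. 0. 0. 1. 1.]  -> all >= 0, so this basis is optimal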

Canonical Tableau
The intermediate canonical-form data can be organized in a tableau:
B                r^T    -c_B^T b̄
Basis Indices    Ā      b̄
The upper-right corner quantity represents the negative of the objective value at the current basic solution.
With the initial basic variable set B = {3, 4, 5} for the example, the canonical tableau is
B    -1  -2   0   0   0  |   0
3     1   0   1   0   0  |   1
4     0   1   0   1   0  |   1
5     1   1   0   0   1  |  3/2

Moving to a Better Neighboring Corner
As we saw in the previous slides, the BFS given by the canonical form is optimal if the reduced-cost vector r >= 0. What to do if it is not optimal? An effort can be made to find a better neighboring basic feasible solution, that is, a new BFS that differs from the current one by exactly one new basic variable, as long as the reduced cost coefficient of the entering variable is negative.

Changing Basis for the LP Example
With the initial basic variable set B = {3, 4, 5}:
B    -1  -2   0   0   0  |   0
3     1   0   1   0   0  |   1
4     0   1   0   1   0  |   1
5     1   1   0   0   1  |  3/2
Select the entering basic variable x1 with r1 = -1 < 0 and consider
(x3, x4, x5) = (1, 1, 3/2) - (1, 0, 1) x1.
How much can we increase x1 while the current basic variables remain feasible (nonnegative)? This connects to the definition of the minimum ratio test (MRT).

Minimum Ratio Test (MRT)
Select the entering variable x_e with reduced cost r_e < 0.
If column Ā_.e <= 0, then the problem is unbounded.
Minimum Ratio Test (MRT): θ := min { b̄_i / Ā_ie : Ā_ie > 0 }.
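
A sketch of the MRT in numpy; the function name minimum_ratio_test and the convention of returning both theta and the leaving row are mine.

import numpy as np

def minimum_ratio_test(col, b_bar, tol=1e-12):
    """Return (theta, row index of the outgoing variable) for an entering column col,
    or (None, None) if col <= 0, i.e. the problem is unbounded."""
    rows = np.where(col > tol)[0]
    if rows.size == 0:
        return None, None                       # no positive entry: unbounded direction
    ratios = b_bar[rows] / col[rows]
    i = rows[np.argmin(ratios)]
    return float(ratios.min()), int(i)

# Entering x1 at the initial basis of the example: column (1, 0, 1), right-hand side (1, 1, 1.5)
print(minimum_ratio_test(np.array([1., 0., 1.]), np.array([1., 1., 1.5])))   # (1.0, 0): row of x3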

How Much Can the New Basic Variable Be Increased?
What is θ? It is the largest amount that x_e can be increased before one (or more) of the current basic variables x_i decreases to zero. For the moment, assume that the minimum ratio in the MRT is attained by exactly one basic variable index, o. Then x_o is called the outgoing basic variable:
x_o = b̄_o - ā_oe θ = 0,
x_i = b̄_i - ā_ie θ > 0 for all i != o.
That is, in the basic variable set, x_o is replaced by x_e.

Tie Breaking
If the MRT does not result in a single index, but rather in a set of two or more indices for which the minimum ratio is attained, we select one of these indices as the outgoing variable arbitrarily. Again, when x_e reaches θ we have generated a corner point of the feasible region that is adjacent to the one associated with the index set B. The new basic variable set associated with this new corner point is B - {o} + {e}. But the new BFS is degenerate: at least one of its basic variables has value 0. Pretend that this value is ε > 0 but arbitrarily small, and continue the transformation process to the new canonical form.

The Simplex Algorithm
Step 0. Initialize with a minimization problem in feasible canonical form with respect to a basic index set B. Let N denote the complementary index set.
Step 1. Test for termination: first find r_e = min_{j in N} {r_j}. If r_e >= 0, stop; the solution is optimal. Otherwise, determine whether the column Ā_.e contains a positive entry. If not, the objective function is unbounded below; terminate. Otherwise, let x_e be the entering basic variable.
Step 2. Determine the outgoing variable: execute the MRT to determine the outgoing variable x_o.
Step 3. Update the basis: update B and A_B, transform the problem to canonical form, and return to Step 1.
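
A compact sketch of these steps in Python with numpy (no anti-cycling rule, Dantzig's most-negative-reduced-cost entering rule, and a starting feasible basis supplied by the caller; all names are mine):

import numpy as np

def simplex(c, A, b, B, tol=1e-9):
    """Minimize c^T x s.t. A x = b, x >= 0, starting from the feasible basis B (0-based).
    Returns (x, B) at optimality; raises ValueError if the problem is unbounded."""
    n = A.shape[1]
    B = list(B)
    while True:
        A_B_inv = np.linalg.inv(A[:, B])
        A_bar, b_bar = A_B_inv @ A, A_B_inv @ b    # canonical-form data
        y = np.linalg.solve(A[:, B].T, c[B])       # shadow prices
        r = c - A.T @ y                            # reduced costs
        e = int(np.argmin(r))
        if r[e] >= -tol:                           # Step 1: optimality test
            x = np.zeros(n)
            x[B] = b_bar
            return x, B
        col = A_bar[:, e]
        if np.all(col <= tol):                     # Step 1: unboundedness test
            raise ValueError("objective unbounded below")
        rows = np.where(col > tol)[0]              # Step 2: minimum ratio test
        o = int(rows[np.argmin(b_bar[rows] / col[rows])])
        B[o] = e                                   # Step 3: basis update

c = np.array([-1., -2., 0., 0., 0.])
A = np.array([[1., 0., 1., 0., 0.],
              [0., 1., 0., 1., 0.],
              [1., 1., 0., 0., 1.]])
b = np.array([1., 1., 1.5])
x, basis = simplex(c, A, b, [2, 3, 4])             # start from the slack basis {3, 4, 5}
print(x, c @ x)                                    # [0.5 1.  0.5 0.  0. ] -2.5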

Transform to a New Canonical Form: Pivoting
The passage from one canonical form to the next can be carried out by an algebraic process called pivoting. In a nutshell, a pivot on the nonzero (in our case, positive) pivot element ā_oe amounts to (a) expressing x_e in terms of all the nonbasic variables, and (b) replacing x_e in all other formulas by this expression.
In terms of Ā, this has the effect of making the column of x_e look like a column of the identity matrix, with 1 in the o-th row and zeros elsewhere.

Pivoting: Gauss-Jordan Elimination Process
Here is what happens to the current tableau:
1. All the entries in row o are divided by the pivot element ā_oe. (This produces a 1 (one) in the column of x_e.)
2. For all i != o, all other entries are modified according to the rule
ā_ij := ā_ij - (ā_oj / ā_oe) ā_ie.
(When j = e, the new entry is just 0 (zero).)
The right-hand side and the objective function row are modified in the same way.
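
A sketch of this elimination step on a tableau stored as a numpy array, with row 0 the objective row and the last column the right-hand side (the function name pivot is mine):

import numpy as np

def pivot(T, o, e):
    """Gauss-Jordan pivot of tableau T on the element T[o, e], done in place."""
    T[o, :] = T[o, :] / T[o, e]            # step 1: scale the pivot row so the pivot becomes 1
    for i in range(T.shape[0]):
        if i != o:
            T[i, :] -= T[i, e] * T[o, :]   # step 2: zero out column e in every other row
    return T

# First pivot of the worked example that follows: entering x2, outgoing x4 (0-based: row 2, column 1)
T = np.array([[-1., -2., 0., 0., 0., 0. ],
              [ 1.,  0., 1., 0., 0., 1. ],
              [ 0.,  1., 0., 1., 0., 1. ],
              [ 1.,  1., 0., 0., 1., 1.5]])
print(pivot(T, 2, 1))   # objective row becomes [-1, 0, 0, 2, 0 | 2], as in the worked example below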

The Initial Canonical Form of the Example
Choose e = 2; the MRT then decides o = 4 with θ = 1:
B    -1  -2   0   0   0  |   0  |  MRT
3     1   0   1   0   0  |   1  |   -
4     0   1   0   1   0  |   1  |   1
5     1   1   0   0   1  |  3/2 |  3/2
If necessary, divide the pivot row by the pivot value to make the pivot element equal to 1.

Transform to a New Canonical Form by Pivoting
Objective row := Objective row + 2 * Pivot row:
B    -1   0   0   2   0  |   2
3     1   0   1   0   0  |   1
2     0   1   0   1   0  |   1
5     1   1   0   0   1  |  3/2
Bottom row := Bottom row - Pivot row. This is a new canonical form!
B    -1   0   0   2   0  |   2
3     1   0   1   0   0  |   1
2     0   1   0   1   0  |   1
5     1   0   0  -1   1  |  1/2

Let's Keep Going
Choose e = 1; the MRT then decides o = 5 with θ = 1/2:
B    -1   0   0   2   0  |   2  |  MRT
3     1   0   1   0   0  |   1  |   1
2     0   1   0   1   0  |   1  |   -
5     1   0   0  -1   1  |  1/2 |  1/2

Transform to a New Canonical Form by Pivoting
Objective row := Objective row + Pivot row:
B     0   0   0   1   1  |  5/2
3     1   0   1   0   0  |   1
2     0   1   0   1   0  |   1
1     1   0   0  -1   1  |  1/2
Second row := Second row - Pivot row. Stop! It's optimal :)
B     0   0   0   1   1  |  5/2
3     0   0   1   1  -1  |  1/2
2     0   1   0   1   0  |   1
1     1   0   0  -1   1  |  1/2

Detailed Canonical Tableau for Production
If the original LP is the production problem
maximize c^T x subject to Ax <= b (> 0), x >= 0,
the initial canonical tableau for the minimization form would be
B                -c^T        0         |   0
Basis Indices     A          I         |   b
and the intermediate canonical tableau would be
B                 r^T       -y^T       |  -c_B^T b̄
Basis Indices     Ā         A_B^{-1}   |   b̄
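
A minimal sketch of setting up this initial production tableau in numpy (assuming b > 0 so that the slack basis is feasible; the function name is mine):

import numpy as np

def initial_production_tableau(c, A, b):
    """Initial tableau for  max c^T x  s.t.  A x <= b (b > 0), x >= 0,
    written for minimization: top row [-c^T, 0 | 0], body [A, I | b]."""
    m = A.shape[0]
    top = np.concatenate([-c, np.zeros(m), [0.0]])
    body = np.hstack([A, np.eye(m), b.reshape(-1, 1)])
    return np.vstack([top, body])

# The example of these notes in production form: max x1 + 2 x2, x1 <= 1, x2 <= 1, x1 + x2 <= 1.5
c = np.array([1., 2.])
A = np.array([[1., 0.], [0., 1.], [1., 1.]])
b = np.array([1., 1., 1.5])
print(initial_production_tableau(c, A, b))   # matches the initial canonical tableau shown earlier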