3.1 Simplex Method for Problems in Feasible Canonical Form

Chapter 3   SIMPLEX METHOD

In this chapter, we put the theory developed in the last chapter into practice. We develop the simplex method algorithm for LP problems given in feasible canonical form and in standard form. We also discuss two methods, the M-Method and the Two-Phase Method, that deal with the situation in which the starting basic solution is infeasible.

3.1 Simplex Method for Problems in Feasible Canonical Form

The simplex method proceeds from one BFS, or extreme point, of the feasible region of an LP problem expressed in tableau form to another BFS, in such a way as to continually increase (or decrease) the value of the objective function until optimality is reached. In other words, the simplex method moves from one extreme point to one of its neighbouring extreme points.

Consider the following LP in feasible canonical form, i.e. with right-hand side vector b >= 0:

    max   x_0 = c^T x
    subject to   Ax <= b,   x >= 0.

Its initial tableau is

              x_1    ...  x_s    ...  x_n   | x_{n+1} ... x_{n+r} ... x_{n+m} |  b
    x_{n+1}   a_11   ...  a_1s   ...  a_1n  |    1                            |  b_1
       :        :          :          :    |        .                        |   :
    x_{n+r}   a_r1   ...  a_rs   ...  a_rn  |             1                   |  b_r
       :        :          :          :    |                  .              |   :
    x_{n+m}   a_m1   ...  a_ms   ...  a_mn  |                        1        |  b_m
    -------------------------------------------------------------------------------
    x_0      -c_1    ... -c_s    ... -c_n   |    0    ...    0    ...    0    |  0

Here x_{n+i}, i = 1, ..., m, are the slack variables, and the original variables x_i, i = 1, ..., n, are called the structural or decision variables. Since all b_i >= 0, we can read off directly from the tableau a starting BFS, namely [0, ..., 0, b_1, b_2, ..., b_m]^T, i.e. all structural variables x_j are set to zero. Note that this corresponds to the origin of the n-dimensional subspace R^n of R^{n+m}.
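As a concrete illustration of this layout, the following short Python sketch (our own illustration using NumPy; the function name build_initial_tableau is not from the notes) assembles the initial tableau [A I | b] with the x_0 row [-c 0 | 0] underneath.

    # A minimal sketch: build the initial simplex tableau for
    #   max c^T x  s.t.  Ax <= b, x >= 0  with b >= 0,
    # in the layout shown above: one row per slack variable, then the x_0 row.
    import numpy as np

    def build_initial_tableau(A, b, c):
        """Return the (m+1) x (n+m+1) array [[A, I, b], [-c, 0, 0]]."""
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        c = np.asarray(c, dtype=float)
        m, n = A.shape
        top = np.hstack([A, np.eye(m), b.reshape(m, 1)])   # constraint rows
        bottom = np.hstack([-c, np.zeros(m), [0.0]])       # x_0 row: -c_j under x_j
        return np.vstack([top, bottom])

    # Small made-up example: two constraints, two structural variables.
    T = build_initial_tableau(A=[[1, 2], [3, 1]], b=[4, 5], c=[2, 1])
    print(T)   # slack columns form an identity block; last column is the solution column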

In matrix form, the original constraint Ax <= b has been augmented to

    [A  I] [x; x_s] = Ax + I x_s = b,                                        (1)

where x_s is the vector of slack variables. Since the columns of the augmented matrix [A I] that correspond to the slack variables {x_{n+i}}_{i=1}^m form an identity matrix, which is clearly invertible, the slack variables {x_{n+i}}_{i=1}^m are basic. We denote by B the set of current basic variables, i.e. B = {x_{n+i}}_{i=1}^m. The set of non-basic variables, i.e. {x_i}_{i=1}^n, will be denoted by N.

Consider now the process of replacing an x_r in B by an x_s in N. We say that x_r is to leave the basis and x_s is to enter the basis. Consequently, after this operation, x_r becomes non-basic (x_r in N) and x_s becomes basic (x_s in B). This of course amounts to a different selection of columns of the matrix A to give a different basis B. We shall achieve this change of basis by a pivot operation (or simply a pivot). The pivot operation is designed to maintain an identity matrix as the basis in the tableau at all times.

3.1.1 Pivot Operation with Respect to the Element a_rs

Once we have decided to replace x_r in B by x_s in N, the entry a_rs in the tableau is called the pivot element. We will see later that the feasibility condition implies that a_rs > 0. The r-th row and the s-th column of the tableau are called the pivot row and the pivot column respectively. The rules to update the tableau are:

(a) In the pivot row, a_rj <- a_rj / a_rs for j = 1, ..., n + m.
(b) In the pivot column, a_rs <- 1 and a_is <- 0 for i = 1, ..., m, i != r.
(c) For all other elements, a_ij <- a_ij - a_rj a_is / a_rs.

Graphically, for a row i != r and a column j != s,

    row i:   a_ij   a_is                  a_ij - a_rj a_is / a_rs    0
                            becomes
    row r:   a_rj   a_rs                  a_rj / a_rs                1

or, simply, in the 2 x 2 case with pivot element d,

    a   b                a - bc/d   0
             becomes
    c   d                  c/d      1

Notice that this pivot operation is simply Gaussian elimination such that the variable x_s is eliminated from all m + 1 equations except the r-th, and in the r-th equation the coefficient of x_s becomes 1. In fact, Rule (a) amounts to a normalization of the pivot row so that the pivot element becomes 1; Rule (b) amounts to the elimination of all entries in the pivot column except the pivot element; and Rule (c) computes the Schur complement for the remaining entries of the tableau.
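The update rules (a)-(c) translate directly into a few lines of code. The sketch below is our own illustration (NumPy; the function name pivot is our choice, not part of the notes); it performs one pivot on a tableau whose last row is the x_0 row and whose last column is the solution column.

    # A minimal sketch of the pivot rules (a)-(c) above.
    import numpy as np

    def pivot(T, r, s):
        """Return a copy of tableau T pivoted on entry (r, s): rule (a) normalizes
        the pivot row, rules (b)-(c) eliminate x_s from every other row."""
        T = np.array(T, dtype=float)             # work on a copy
        T[r, :] = T[r, :] / T[r, s]              # (a) pivot row divided by a_rs
        for i in range(T.shape[0]):
            if i != r:
                T[i, :] -= T[i, s] * T[r, :]     # (b)+(c) zero column s, Schur complement
        return T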

Example 1. Consider a system of three equations in three structural variables x_1, x_2, x_3 and three slack variables x_4, x_5, x_6 (the numerical entries of the tableaus are omitted here). The initial tableau, Tableau 1, has the identity basis formed by the columns of x_4, x_5, x_6, i.e. B_1 = I. Its basic solution, with x_1 = x_2 = x_3 = 0 and the slack variables equal to the right-hand sides, is clearly feasible.

Suppose we choose an arbitrary entry of the constraint rows as our pivot element. Then after one pivot operation we obtain Tableau 2, and its basic solution turns out to be infeasible: one of the basic variables has become negative. Pivoting again on a suitably chosen entry gives Tableau 3, whose basic variables are x_1, x_2 and x_6 and whose basic solution is feasible. Finally, eliminating the last slack variable x_6 by replacing it by x_3 gives Tableau 4, whose basic solution is infeasible and degenerate. Thus we see that one cannot choose the pivot arbitrarily: it has to be chosen according to some feasibility criterion.

There are three important observations that we should note here. First, the pivot operations, which amount to elementary row operations on the tableaus, are recorded in the tableaus in the columns that correspond to the slack variables. In the example above, one can easily check that Tableau i is obtained from Tableau 1 by pre-multiplying Tableau 1 by the matrix formed by the columns of x_4, x_5 and x_6 in Tableau i. This matrix is the inverse of the basis matrix of Tableau i,

and we denote it by B_i^{-1}. Let the columns of Tableau 1 be denoted as usual by a_j and the columns of Tableau i by y_j. Then since

    [a_1, a_2, ..., a_{m+n}] = [A  I] = B_i [B_i^{-1} A   B_i^{-1}] = B_i [y_1, y_2, ..., y_{m+n}],

it is clear that B_i y_j = a_j. Comparing this with equation (1), we see that the B_i are the change-of-basis matrices from Tableau i back to Tableau 1.

Our second observation is the following. Since the last column of Tableau 1 is b, the last column of Tableau i, which we denote by y_0 = (y_10, ..., y_m0)^T, satisfies B_i y_0 = b. Since B_i is invertible, y_0 gives the values of the basic variables in the current basic solution, i.e. the basic solution x_B^i corresponding to B_i is given by

    x_B^i = y_0 = B_i^{-1} b.                                                (2)

For this reason the last column of the tableau, i.e. y_0, is called the solution column.

The third observation is that the columns of B_i are columns of the initial tableau. In Example 1, for instance, the columns of B_3 are the first, second and sixth columns of Tableau 1, and Tableau 3 is obtained by moving (via elementary row operations) the identity matrix of Tableau 1 to the first, second and sixth columns of Tableau 3. This indicates that in each iteration of the simplex method we are just choosing a different selection of columns of the augmented matrix to give a different basis matrix B. In particular, the solution obtained in each tableau is indeed a basic solution of our original augmented system (1): the vector whose basic components are x_B^i = B_i^{-1} b and whose non-basic components are zero satisfies [A  I] x = B_i x_B^i = b. For Tableau 4 of Example 1, where the basic variables are x_1, x_2, x_3, we have B_4 = A, so

    [A  I] [A^{-1} b; 0] = A (A^{-1} b) + I 0 = b,

i.e. the current solution in Tableau 4 is [A^{-1} b, 0]^T.
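These observations are easy to verify numerically. The following snippet (our own illustration with made-up data, not the data of Example 1) changes to a chosen basis by pre-multiplying [A I | b] with B^{-1} and confirms that the slack columns then hold B^{-1} while the last column holds B^{-1} b.

    # Numerical check of the observations above.
    import numpy as np

    A = np.array([[1.0, 2.0, 1.0],
                  [2.0, 1.0, 3.0]])
    b = np.array([4.0, 7.0])
    m, n = A.shape
    aug = np.hstack([A, np.eye(m)])            # [A I], columns a_1, ..., a_{n+m}

    basis = [0, 3]                             # e.g. x_1 and the first slack basic
    B = aug[:, basis]                          # basis matrix B_i from the original columns
    Binv = np.linalg.inv(B)

    rows = Binv @ np.hstack([aug, b.reshape(-1, 1)])   # constraint rows of Tableau i
    print(rows[:, n:n + m])                    # equals B^{-1}: the slack columns record the row operations
    print(rows[:, -1])                         # equals B^{-1} b: the solution column x_B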

In the following, we consider the criteria that guarantee the feasibility and optimality of the solutions.

3.1.2 Feasibility Condition

Suppose that the entering variable x_s has been chosen according to some optimality condition, i.e. the pivot column is the s-th column. Then the leaving basic variable x_r must be selected as the basic variable corresponding to the smallest positive ratio of the current right-hand side values to the current positive constraint coefficients of the entering non-basic variable x_s. To determine row r:

    basic        x_s column   solution column   ratio
    variable       y_1s           y_10          y_10 / y_1s
       :             :              :                :
      x_r            y_rs           y_r0          y_r0 / y_rs = min_i { y_i0 / y_is : y_is > 0 }
       :             :              :                :
                   y_ms           y_m0          y_m0 / y_ms

This follows directly from equation (2): the current basic solution x_B is given by the solution column y_0, and the minimum-ratio rule is exactly what keeps the updated solution column non-negative after the pivot.

3.1.3 Optimality Condition

For simplicity, we consider a maximization problem. We denote the entries of the row that corresponds to x_0 by y_0j, and the (m+1, m+n+1)-th entry of the tableau by y_00. We will show in the next section that

    y_0j = -(c_j - z_j),   j = 1, ..., n + m,                                (3)

the negation of the reduced cost coefficients that appeared in Theorem 6; here z_j is defined as in the previous chapter. Moreover, we will also show that

    y_00 = c_B^T x_B,                                                        (4)

i.e. y_00 is the current objective function value associated with the current BFS in the tableau. Thus, according to Theorem 6, the entering variable x_s in N can be selected as any non-basic variable having a negative coefficient y_0s. Usual choices are the first negative y_0s or the most negative y_0s. If all coefficients y_0j are non-negative, then by Theorem 7 an optimal solution has been reached.

3.1.4 Summary of Computation Procedure

Once the initial tableau has been constructed, the simplex procedure calls for the successive iteration of the following steps.

1. Test the coefficients of the objective function row to determine whether an optimal solution has been reached, i.e. whether the optimality condition that all coefficients in that row are non-negative is satisfied.
2. If not, select a currently non-basic variable x_s to enter the basis, for example the one with the first negative coefficient or with the most negative coefficient.
3. Determine the currently basic variable x_r to leave the basis using the feasibility condition, i.e. select x_r such that y_r0 / y_rs = min_i { y_i0 / y_is : y_is > 0 }.
4. Perform a pivot operation with pivot row corresponding to x_r and pivot column corresponding to x_s. Return to step 1.
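Putting the optimality test, the minimum-ratio test and the pivot together gives a compact implementation of the procedure above. The sketch below is our own (NumPy; it uses the most-negative-coefficient rule and no anti-cycling safeguard), for problems already in feasible canonical form with b >= 0.

    # A compact sketch of the procedure in 3.1.4 for  max c^T x,  Ax <= b,  x >= 0,  b >= 0.
    import numpy as np

    def simplex_canonical(A, b, c, max_iter=100):
        A, b, c = (np.asarray(M, dtype=float) for M in (A, b, c))
        m, n = A.shape
        T = np.vstack([np.hstack([A, np.eye(m), b.reshape(m, 1)]),
                       np.hstack([-c, np.zeros(m), [0.0]])])
        basis = list(range(n, n + m))                       # slack variables start basic
        for _ in range(max_iter):
            s = int(np.argmin(T[-1, :-1]))                  # entering: most negative x_0-row entry
            if T[-1, s] >= 0:                               # optimality condition satisfied
                break
            ratios = [T[i, -1] / T[i, s] if T[i, s] > 0 else np.inf for i in range(m)]
            r = int(np.argmin(ratios))                      # leaving: minimum-ratio (feasibility) test
            if ratios[r] == np.inf:
                raise ValueError("problem is unbounded")
            T[r, :] /= T[r, s]                              # pivot on (r, s)
            for i in range(m + 1):
                if i != r:
                    T[i, :] -= T[i, s] * T[r, :]
            basis[r] = s
        x = np.zeros(n + m)
        x[basis] = T[:m, -1]
        return x[:n], T[-1, -1]                             # structural variables, optimal value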

Example 2. Consider the LP problem:

    max   x_0 = 3x_1 + x_2 + 3x_3
    subject to   2x_1 +  x_2 +  x_3 <= 2
                  x_1 + 2x_2 + 3x_3 <= 5
                 2x_1 + 2x_2 +  x_3 <= 6
                  x_1, x_2, x_3 >= 0.

By adding slack variables x_4, x_5 and x_6, we have the following initial tableau.

Tableau 1: initial tableau, current BFS is x = [0, 0, 0, 2, 5, 6]^T and x_0 = 0

           x_1   x_2   x_3   x_4   x_5   x_6  |  b  |  ratio
    x_4     2     1     1     1     0     0   |  2  |  2/1 = 2
    x_5     1     2     3     0     1     0   |  5  |  5/2
    x_6     2     2     1     0     0     1   |  6  |  6/2 = 3
    x_0    -3    -1    -3     0     0     0   |  0  |

We choose x_2 as the entering variable, to illustrate that any non-basic variable with a negative coefficient in the x_0 row can be chosen as the entering variable. The smallest ratio is given by the x_4 row; thus x_4 is the leaving variable.

Tableau 2: current BFS is x = [0, 2, 0, 0, 1, 2]^T and x_0 = 2

           x_1   x_2   x_3   x_4   x_5   x_6  |  b  |  ratio
    x_2     2     1     1     1     0     0   |  2  |  2/1
    x_5    -3     0     1    -2     1     0   |  1  |  1/1
    x_6    -2     0    -1    -2     0     1   |  2  |
    x_0    -1     0    -2     1     0     0   |  2  |

Now x_3 has the most negative coefficient; the smallest ratio is in the x_5 row, so x_3 enters and x_5 leaves.

Tableau 3: current BFS is x = [0, 1, 1, 0, 0, 3]^T and x_0 = 4

           x_1   x_2   x_3   x_4   x_5   x_6  |  b  |  ratio
    x_2     5     1     0     3    -1     0   |  1  |  1/5
    x_3    -3     0     1    -2     1     0   |  1  |
    x_6    -5     0     0    -4     1     1   |  3  |
    x_0    -7     0     0    -3     2     0   |  4  |

Next x_1 enters (coefficient -7) and, by the ratio test, x_2 leaves.

Tableau 4: optimal tableau, optimal BFS x = [1/5, 0, 8/5, 0, 0, 4]^T, x_0 = 27/5

           x_1   x_2   x_3   x_4   x_5   x_6  |   b
    x_1     1    1/5    0    3/5  -1/5    0   |  1/5
    x_3     0    3/5    1   -1/5   2/5    0   |  8/5
    x_6     0     1     0    -1     0     1   |   4
    x_0     0    7/5    0    6/5   3/5    0   | 27/5
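The optimum of Example 2 can be cross-checked with an off-the-shelf solver, assuming SciPy is available. linprog minimizes, so the objective is negated; this is only a numerical check, not the tableau method used above.

    # Cross-check of Example 2 with SciPy.
    from scipy.optimize import linprog

    res = linprog(c=[-3, -1, -3],
                  A_ub=[[2, 1, 1], [1, 2, 3], [2, 2, 1]],
                  b_ub=[2, 5, 6],
                  bounds=[(0, None)] * 3,
                  method="highs")
    print(res.x)      # approximately [0.2, 0, 1.6], i.e. x = [1/5, 0, 8/5]
    print(-res.fun)   # approximately 5.4 = 27/5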

We note that the sequence of extreme points (basic-variable sets) that the simplex method passes through is

    {x_4, x_5, x_6} -> {x_2, x_5, x_6} -> {x_2, x_3, x_6} -> {x_1, x_3, x_6}.

3.2 Simplex Methods for Problems in Standard Form

Our previous method is based upon the existence of an initial BFS to the problem, and it is desirable to have an identity matrix as the initial basis matrix. For an LP in feasible canonical form, the initial basis matrix is the matrix associated with the slack variables, which is an identity matrix. Consider now an LP in standard form:

    max   x_0 = c^T x
    subject to   Ax = b,   x >= 0,

where we assume that b >= 0. There is no obvious initial basis B with B = I_m. For notational simplicity, assume that we pick B as the last m (linearly independent) columns of A, i.e. A is of the form A = [N  B]. We then have for the augmented system:

    N x_N + B x_B = b,
    x_0 - c_N^T x_N - c_B^T x_B = 0.

Multiplying the first equation by B^{-1} yields

    B^{-1} N x_N + x_B = B^{-1} b,    or    x_B = B^{-1} b - B^{-1} N x_N.

Hence the x_0 equation becomes

    x_0 - c_N^T x_N - c_B^T (B^{-1} b - B^{-1} N x_N) = 0.

Thus we have

    B^{-1} N x_N + x_B = B^{-1} b,
    x_0 - (c_N^T - c_B^T B^{-1} N) x_N = c_B^T B^{-1} b.

Denoting z_N^T = c_B^T B^{-1} N (an (n - m)-dimensional row vector) gives

    B^{-1} N x_N + x_B = B^{-1} b,
    x_0 - (c_N^T - z_N^T) x_N = c_B^T B^{-1} b,

which is called the general representation of an LP in standard form with respect to the basis B. Its initial simplex tableau is then

           x_N                  x_B  |  b
    x_B    B^{-1} N              I   |  B^{-1} b
    x_0   -(c_N^T - z_N^T)       0   |  c_B^T B^{-1} b
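Given a basis, the quantities B^{-1}N, B^{-1}b, c_N - z_N and c_B^T B^{-1} b that fill this tableau can be computed directly. The helper below is our own sketch (NumPy; the name representation is ours), shown on the data of Example 3 below with x_1, x_2 chosen as basic.

    # Compute the general representation of a standard-form LP w.r.t. a basis.
    import numpy as np

    def representation(A, b, c, basis):
        A, b, c = (np.asarray(M, dtype=float) for M in (A, b, c))
        nonbasis = [j for j in range(A.shape[1]) if j not in basis]
        B, N = A[:, basis], A[:, nonbasis]
        cB, cN = c[basis], c[nonbasis]
        Binv = np.linalg.inv(B)
        xB = Binv @ b                       # B^{-1} b : solution column (feasible iff >= 0)
        zN = cB @ Binv @ N                  # z_N^T = c_B^T B^{-1} N
        return {"B^{-1} N": Binv @ N,
                "B^{-1} b": xB,
                "c_N - z_N": cN - zN,       # reduced costs of the non-basic variables
                "c_B^T B^{-1} b": cB @ xB}  # current objective value

    # Data of Example 3 below, with columns 0 and 1 (x_1, x_2) basic:
    print(representation(A=[[2, 1, -1], [1, 2, 0]], b=[4, 6], c=[1, 1, 0], basis=[0, 1]))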

We note that the j-th entry of z_N is given by

    (c_B^T B^{-1} N)_j = c_B^T B^{-1} a_j = c_B^T y_j = z_j,

where z_j is defined as in the previous chapter. Thus in the tableau the entries of the x_0 row are -(c_j - z_j) for x_j in N and zero for x_j in B, i.e. they are the negation of the reduced cost coefficients. This verifies equation (3), which we assumed earlier. Moreover, by (2), we see that y_00 = c_B^T B^{-1} b = c_B^T x_B, which is the same as (4). We remark that x_0 is now expressed in terms of the non-basic variables,

    x_0 = c_B^T B^{-1} b + sum_{x_j in N} (c_j - z_j) x_j.                   (5)

Hence it is easy to see that for a maximization problem the current BFS is optimal when c_j - z_j <= 0 for all j, while for a minimization problem the current BFS is optimal when c_j - z_j >= 0 for all j.

Example 3. Consider the following LP:

    max   x_0 = x_1 + x_2
    subject to   2x_1 +  x_2 >= 4
                  x_1 + 2x_2  = 6
                  x_1, x_2 >= 0.

Putting it into standard form by adding the surplus variable x_3, the augmented system is:

    2x_1 +  x_2 - x_3 = 4
     x_1 + 2x_2       = 6
    x_0 - x_1 - x_2   = 0.

The simplex tableau for the problem is:

Tableau 1:

          x_1   x_2   x_3  |  b
           2     1    -1   |  4
           1     2     0   |  6
    x_0   -1    -1     0   |  0

Here we do not have a starting identity matrix. Suppose we let x_1 and x_2 be our starting basic variables. Then

    B = | 2  1 |,     N = | -1 |,     c_B = | 1 |.
        | 1  2 |          |  0 |            | 1 |

In this case

    B^{-1} = 1/3 |  2  -1 |,
                 | -1   2 |

    B^{-1} N = 1/3 |  2  -1 | | -1 |  =  | -2/3 |,
                   | -1   2 | |  0 |     |  1/3 |

    B^{-1} b = 1/3 |  2  -1 | | 4 |  =  | 2/3 |.
                   | -1   2 | | 6 |     | 8/3 |

It is also easily checked that

    z_3 = c_B^T B^{-1} a_3 = [1  1] (1/3) |  2  -1 | | -1 |  =  -1/3,
                                          | -1   2 | |  0 |

and the current value of the objective function is given by

    c_B^T B^{-1} b = [1  1] (1/3) |  2  -1 | | 4 |  =  10/3.
                                  | -1   2 | | 6 |

Hence the starting tableau is:

Tableau 2:

          x_1   x_2   x_3   |   b
    x_1    1     0   -2/3   |  2/3
    x_2    0     1    1/3   |  8/3
    x_0    0     0   -1/3   | 10/3

Thus x = [2/3, 8/3, 0]^T is an initial BFS. We can now apply the simplex method as discussed in Section 3.1 to find the optimal solution. The next iteration (x_3 enters, x_2 leaves) gives:

Tableau 3:

          x_1   x_2   x_3  |  b
    x_1    1     2     0   |  6
    x_3    0     3     1   |  8
    x_0    0     1     0   |  6

Thus the optimal solution is x = [6, 0, 8]^T with x_0 = 6.

We note that if we had chosen x_1 and x_3 as our starting basic variables, then we would get Tableau 3 immediately and no iteration would be required. However, if x_2 and x_3 are chosen as starting variables, then we have

Tableau 4:

          x_1   x_2   x_3  |  b
    x_2   1/2    1     0   |  3
    x_3  -3/2    0     1   | -1
    x_0  -1/2    0     0   |  3

Hence the starting basic solution is not feasible and we cannot use the simplex method to find our optimal solution from it.
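The remark about the choice of starting basis can be checked in one line per candidate: a basis B is usable as a starting basis only if B^{-1} b >= 0. A small sketch (our own) for the data of Example 3:

    # Feasibility of candidate starting bases for Example 3.
    import numpy as np

    A = np.array([[2.0, 1.0, -1.0], [1.0, 2.0, 0.0]])
    b = np.array([4.0, 6.0])

    for basis in ([0, 1], [0, 2], [1, 2]):          # {x1,x2}, {x1,x3}, {x2,x3}
        xB = np.linalg.solve(A[:, basis], b)        # B^{-1} b
        print(basis, xB, "feasible" if np.all(xB >= 0) else "infeasible")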

3.3 The M-Method

Example 3 illustrates that the starting basic solution may sometimes be infeasible. The M-method and the Two-Phase method, discussed in this and the next section, are methods that can find a starting basic feasible solution whenever one exists.

Consider again an LPP for which there is no desirable starting identity matrix:

    max   x_0 = c^T x
    subject to   Ax = b,   x >= 0,

where b >= 0. We may add a suitable number of artificial variables x_{a1}, x_{a2}, ..., x_{am} to it to get a starting identity matrix. The corresponding prices for the artificial variables are -M for a maximization problem, where M is sufficiently large. The effect of the constant M is to penalize any artificial variable that occurs with a positive value in the final optimal solution. Using this idea, the LPP becomes

    max   z = c^T x - M 1^T x_a
    subject to   Ax + I_m x_a = b,   x, x_a >= 0,

where x_a = (x_{a1}, x_{a2}, ..., x_{am})^T and 1 is the vector of all ones. We observe that x = 0 and x_a = b is a feasible starting BFS. Moreover, any solution of Ax + I_m x_a = b which is also a solution of Ax = b must have x_a = 0. Thus we have to drive x_a to 0 if possible.
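A sketch of this construction (our own, in NumPy): append one artificial variable per constraint, price it at -M, and clear the M entries under the artificial (basic) columns of the x_0 row, exactly as is done in Example 4 below. Note that M is a fixed large number here, which already hints at the numerical sensitivity discussed in Section 3.4.

    # Build the M-method starting tableau for  max c^T x,  Ax = b,  x >= 0,  b >= 0.
    import numpy as np

    def big_m_tableau(A, b, c, M=1e6):
        A, b, c = (np.asarray(v, dtype=float) for v in (A, b, c))
        m, n = A.shape
        top = np.hstack([A, np.eye(m), b.reshape(m, 1)])     # [A  I_m | b], artificials basic
        x0 = np.hstack([-c, M * np.ones(m), [0.0]])          # x_0 row before clearing basic columns
        x0 = x0 - M * top.sum(axis=0)                        # eliminate M under the basic artificials
        return np.vstack([top, x0])

    # With M = 1000 this reproduces the structure of the eliminated tableau of Example 4 below.
    print(big_m_tableau([[2, 1, -1], [1, 2, 0]], [4, 6], [1, 1, 0], M=1000.0))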

Example 4. Consider the LP in Example 3 again:

    max   x_0 = x_1 + x_2
    subject to   2x_1 +  x_2 >= 4
                  x_1 + 2x_2  = 6
                  x_1, x_2 >= 0.

Introducing the surplus variable x_3 and the artificial variables x_4 and x_5 yields

    2x_1 +  x_2 - x_3 + x_4         = 4
     x_1 + 2x_2               + x_5 = 6
    x_0 - x_1 - x_2 + M x_4 + M x_5 = 0.

Now the columns corresponding to x_4 and x_5 form an identity matrix. In tableau form, we have

          x_1   x_2   x_3   x_4   x_5  |  b
    x_4    2     1    -1     1     0   |  4
    x_5    1     2     0     0     1   |  6
    x_0   -1    -1     0     M     M   |  0

Notice that in the x_0 row the reduced cost coefficients that correspond to the basic variables x_4 and x_5 are not zero. These nonzero entries have to be eliminated before we have our starting tableau. After elimination of those M's, we have the initial tableau:

           x_1        x_2       x_3   x_4   x_5  |   b
    x_4      2          1        -1    1     0   |   4
    x_5      1          2         0    0     1   |   6
    x_0   -(1+3M)    -(1+3M)      M    0     0   | -10M

We note that once an artificial variable becomes non-basic, it can be dropped from consideration in subsequent calculations. Taking x_1 as the entering variable and x_4 as the leaving variable (ratio 4/2 = 2), pivoting and dropping the column of x_4 gives

          x_1       x_2         x_3     x_5  |   b
    x_1    1        1/2        -1/2      0   |   2
    x_5    0        3/2         1/2      1   |   4
    x_0    0    -(1+3M)/2   -(1+M)/2     0   | 2-4M

Next x_2 enters and the artificial variable x_5 leaves. After we eliminate this last artificial variable we have

          x_1   x_2   x_3   |   b
    x_1    1     0   -2/3   |  2/3
    x_2    0     1    1/3   |  8/3
    x_0    0     0   -1/3   | 10/3

At this point all artificial variables have been dropped from the problem, and x = [2/3, 8/3, 0]^T is an initial BFS. Notice that this is the same as Tableau 2 in Example 3. After one more iteration, we get the final optimal tableau

          x_1   x_2   x_3  |  b
    x_1    1     2     0   |  6
    x_3    0     3     1   |  8
    x_0    0     1     0   |  6

Thus the optimal solution is x = (6, 0, 8)^T with x_0 = 6.

3.4 The Two-Phase Method

The M-method is sensitive to round-off error when implemented on computers. The two-phase method is used to circumvent this difficulty.

Phase I (search for a starting BFS). Instead of considering the actual objective function of the M-method,

    z = sum_{i=1}^n c_i x_i - M sum_{i=1}^m x_{ai},

we maximize the function

    z_0 = - sum_{i=1}^m x_{ai}.

Since b >= 0, the initial BFS satisfies x_a >= 0. Notice that z_0 <= 0, so the largest possible value of z_0 is zero, and z_0 is zero only if every artificial variable is zero. If the maximum of z_0 is zero, we have driven all artificial variables to zero. If the maximum of z_0 is not zero, then the artificial variables cannot be driven to zero and the original problem has no feasible solution.

In Phase I we stop as soon as z_0 becomes zero, because we know that this is the maximum value of z_0; we need not continue until the optimality criterion is satisfied if z_0 becomes zero before this happens. During Phase I, the sequence of vectors entering and leaving the basis is the same as in the M-method, except when the candidates are tied. In fact, the reduced cost coefficients of the two methods are given by

    M-method:   z_j - c_j = M sum_r y_rj + β,
    Phase I:    z_j - c_j = sum_r y_rj,

where β is a quantity small compared with M.

At the end of Phase I, i.e. when the optimality condition is satisfied or z_0 = 0, we have one of the following three possibilities:

(i) max z_0 < 0: in this case no feasible solution exists for our original problem.

(ii) max z_0 = 0 and no artificial variable appears in the basis: we have found a BFS of the original problem.

(iii) max z_0 = 0 and one or more artificial variables appear in the basis at zero level: in this case we have also found a basic, though degenerate, feasible solution of the original problem. Degenerate solutions are discussed later.

Phase II (conclude with an optimal BFS). When Phase I ends in case (ii) or (iii), we go to Phase II to find an optimal solution. In Phase II, we assign the actual price c_j to each structural variable and a price of zero to any artificial variable which still appears in the basis at zero level. Thus the objective function to be optimized in Phase II is the actual objective function

    z = sum_{i=1}^n c_i x_i.

When Phase I ends in case (ii), we are back to the situation discussed in Section 3.1, and there should be no problem. When Phase I ends in case (iii), we must give special attention to the artificial variables that appear in the basis at zero level: we must make sure that they never become positive again in Phase II. We will return to this case in Section 4.6.
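Phase I itself is just the simplex method applied to the auxiliary objective. The following self-contained sketch (our own code, not from the notes; most-negative rule, no anti-cycling safeguard) maximizes z_0 = -(sum of the artificial variables) and reports whether the original problem has a BFS.

    # Phase I sketch: maximize z_0 = -(x_a1 + ... + x_am) over Ax + I x_a = b, x, x_a >= 0, b >= 0.
    # If the optimal z_0 is (numerically) zero, the final basis gives a starting BFS of Ax = b.
    import numpy as np

    def phase_one(A, b, tol=1e-9, max_iter=100):
        A = np.asarray(A, dtype=float)
        b = np.asarray(b, dtype=float)
        m, n = A.shape
        rows = np.hstack([A, np.eye(m), b.reshape(m, 1)])   # artificials x_{n+1..n+m} start basic
        z0 = np.hstack([np.zeros(n), np.ones(m), [0.0]])    # z_0 row for max -(sum of artificials) ...
        z0 -= rows.sum(axis=0)                              # ... with the basic columns cleared
        T = np.vstack([rows, z0])
        basis = list(range(n, n + m))
        for _ in range(max_iter):
            if T[-1, -1] >= -tol:          # z_0 has reached its maximum value 0: stop early
                break
            s = int(np.argmin(T[-1, :-1]))
            if T[-1, s] >= -tol:           # optimal with z_0 < 0: the original problem is infeasible
                break
            ratios = [T[i, -1] / T[i, s] if T[i, s] > tol else np.inf for i in range(m)]
            r = int(np.argmin(ratios))
            T[r, :] /= T[r, s]
            for i in range(m + 1):
                if i != r:
                    T[i, :] -= T[i, s] * T[r, :]
            basis[r] = s
        feasible = T[-1, -1] >= -tol
        return feasible, basis, T          # Phase II would now replace the z_0 row by the true costs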

Example 5. Consider an LP of the form

    min   x_0 = c_1 x_1 + c_2 x_2 + ... + c_5 x_5
    subject to three equality constraints in x_1, ..., x_5 with right-hand sides 7, 6 and 4,
    x_1 free,   x_2, x_3, x_4, x_5 >= 0

(the remaining numerical data of the original example are omitted here). Since x_1 is free, it can be eliminated by solving for x_1 in terms of the other variables from the first equation and substituting everywhere else. This can be done nicely using our pivot operation on the simplex tableau of the problem: we select any non-zero element in the first column as our pivot element, and the pivot eliminates x_1 from all other rows, including the x_0 row. The pivot row, which now expresses x_1 in terms of x_2, ..., x_5, is saved for future reference only (it is used to recover x_1 at the end), and we carry on with the sub-tableau obtained by deleting the first row and the first column.

The reduced problem has no obvious basic feasible solution, so we use the two-phase method. After making b >= 0, we introduce artificial variables y_1 and y_2 to obtain the artificial problem and its initial tableau for Phase I. The cost coefficients of the artificial variables are +1 because we are dealing with a minimization problem. Transforming the last row (by adding the first two rows to it) so that the reduced costs of the basic variables y_1 and y_2 become zero, we obtain the first tableau for Phase I,

which is in canonical form. Recall that this is a minimization problem, so the entering variable is chosen with a positive entry (rather than a negative one) in the x_0 row. We carry out the pivot operations with the pivot elements indicated by the optimality and feasibility conditions: after two Phase I pivots both artificial variables have left the basis, the Phase I objective has reached zero, and we arrive at the final tableau for Phase I.

At the end of Phase I we go back to the equivalent reduced problem, i.e. we discard the artificial variables y_1 and y_2 and restore the actual cost coefficients c_B of the current basic variables. Transforming the cost row so that the reduced costs of the basic variables are again zero gives the initial tableau for Phase II. Pivoting once more then gives the final (optimal) tableau for Phase II.

The values of the basic structural variables read off from the final Phase II tableau can now be inserted into the saved first row, i.e. the expression for x_1, to recover the value of the free variable x_1. The final solution of the original problem then consists of the recovered x_1 together with the values of x_2, ..., x_5 from the final Phase II tableau, and the optimal objective value x_0 is read from the bottom-right entry of that tableau.