Initialization: The Big-M Formulation


Consider the linear program:

    Minimize    4x_1 +  x_2
    subject to  3x_1 +  x_2  =  3    (1)
                4x_1 + 3x_2 >=  6    (2)
                 x_1 + 2x_2 <=  3    (3)
                 x_1,   x_2 >=  0.

Notice that there are several new features in this problem: (i) the objective is to minimize; (ii) the first constraint is an equality, but it does not contain a candidate basic variable; and (iii) the second constraint is of the >= type.

Two approaches are commonly adopted for handling a minimization objective. The first is to convert the given objective function into one that is to be maximized, by multiplying it by -1 and maximizing the resulting expression. For this example, we could replace the given objective function by: Maximize -4x_1 - x_2.

The second approach is to revise the optimality criterion in the Simplex algorithm. For maximization problems, recall that the criterion is: if the coefficients of all nonbasic variables in the zeroth row of a Simplex tableau are nonnegative, then the associated basic feasible solution is optimal. If, instead, we are minimizing, we simply replace the word "nonnegative" in that criterion with "nonpositive." Moreover, if the current solution is not optimal, we select the nonbasic variable with the most positive coefficient in the zeroth row as the entering variable. The remaining aspects of the Simplex algorithm, the ratio test in particular, require no revision. In our solution of this linear program, we will adopt the second approach; hence, no action on the objective is necessary at this point.

Suppose a given constraint is an equality with a nonnegative right-hand-side constant, as in equation (1) above. (If the right-hand-side constant of an equation is negative, the entire equation can be multiplied by -1 to make the constant positive.)
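As a quick numerical illustration of the first approach, here is a minimal sketch (the sample points are assumptions chosen to lie on constraint (1), not values from the text): minimizing an objective over any candidate set selects the same point as maximizing its negation.

```python
# Minimal sketch: minimizing f picks the same point as maximizing -f.
# The sample points below are assumed; each satisfies 3*x1 + x2 = 3.
def argmin(points, f):
    return min(points, key=f)

def argmax(points, f):
    return max(points, key=f)

f = lambda p: 4 * p[0] + p[1]                 # the objective 4*x1 + x2
points = [(1.0, 0.0), (0.0, 3.0), (0.6, 1.2)]

best_min = argmin(points, f)
best_max = argmax(points, lambda p: -f(p))
assert best_min == best_max == (0.0, 3.0)
```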
What we need to do is determine whether or not the constraint contains a candidate basic variable. If such a variable can be found, we simply declare it the basic variable associated with that equation and move on to the other constraints. If no such variable exists, the idea is to artificially introduce a new nonnegative variable to serve as the basic variable associated with that equation. Such a variable is called an artificial variable.

Since constraint (1) above does not contain a candidate basic variable, we revise it by introducing an artificial variable, denoted A_1; this results in

    3x_1 + x_2 + A_1 = 3.

We also add A_1 >= 0 to the set of nonnegativity constraints. It is important to realize that, with the introduction of an artificial variable, the resulting new equation is not equivalent to the original: if the artificial variable assumes a positive value, then any solution that satisfies the new equation will not satisfy the original equation. For example, if we let A_1 = 2 in the above equation, then 3x_1 + x_2 + A_1 = 3 forces 3x_1 + x_2 = 1, which contradicts the original equation 3x_1 + x_2 = 3. Now, as A_1 is one of the starting basic variables, it will (typically) assume a positive value at the beginning of the Simplex iterations. Hence, the starting basic feasible solution will not be a feasible solution to the original problem. Since our aim is to derive an optimal solution that satisfies the original equality constraint, not the revised one, we are, in the end, only interested in solutions that have A_1 = 0. Therefore, an important question is: how do we get rid of this artificial variable?

One answer (another will be given a bit later) is to introduce a new term MA_1, where M is a sufficiently large constant, into the objective function. The idea behind this approach, naturally called the big-M method, is that although the value of A_1 may be positive initially, with this added term in the objective function any solution that has a positive A_1 will have an exceedingly large objective-function value.
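The contradiction described above is easy to check numerically. The sketch below (assumed helper names, not part of the original text) verifies that a point satisfying the revised equation with A_1 = 2 cannot satisfy the original one, and that the two equations agree exactly when A_1 = 0.

```python
# Numeric check: with A1 = 2, the revised equation 3*x1 + x2 + A1 = 3
# forces 3*x1 + x2 = 1, which contradicts the original 3*x1 + x2 = 3.
def satisfies_revised(x1, x2, a1, tol=1e-9):
    return abs(3 * x1 + x2 + a1 - 3) < tol

def satisfies_original(x1, x2, tol=1e-9):
    return abs(3 * x1 + x2 - 3) < tol

# pick any (x1, x2) with 3*x1 + x2 = 1, e.g. (0, 1), and set A1 = 2
assert satisfies_revised(0, 1, 2)
assert not satisfies_original(0, 1)

# only A1 = 0 makes the two equations agree
assert satisfies_revised(0, 3, 0) and satisfies_original(0, 3)
```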
Hence, as the Simplex algorithm searches for a solution with the smallest objective-function value, it will systematically discard, or avoid, solutions that have a positive A_1. In other words, the algorithm will, by design, attempt to converge to solutions that have A_1 = 0, and hence are feasible to the original problem.

As a numerical example, consider the solutions (x_1, x_2, A_1) = (1/3, 2, 0) and (x_1, x_2, A_1) = (1/3, 1, 1), which satisfy the original and the revised equation (1), respectively. The corresponding objective-function values of these two solutions are:

    4*(1/3) + 1*2 + M*0 = 10/3    and    4*(1/3) + 1*1 + M*1 = (7/3) + M.

Notice that the first evaluation does not involve M. It follows that the second evaluation has the greater value, provided that M is sufficiently large (specifically, any M strictly greater than 1). Since a comparison between any solution with A_1 = 0 and any other solution with A_1 > 0 will always come out in this order (when M is sufficiently large), the Simplex algorithm will weed out any solution with a positive A_1.
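One convenient way to reproduce this comparison without committing to a number for M is to carry the objective value as a pair (constant part, coefficient of M). The sketch below is an illustration of that idea, not anything prescribed by the text.

```python
# Sketch: represent an objective value as (constant, coefficient of M),
# so "M sufficiently large" can stay symbolic.
from fractions import Fraction

def objective(x1, x2, a1):
    """Value of 4*x1 + x2 + M*a1, returned as (constant part, M part)."""
    return (4 * x1 + x2, a1)

feasible  = objective(Fraction(1, 3), 2, 0)  # satisfies the original eq. (1)
penalized = objective(Fraction(1, 3), 1, 1)  # satisfies only the revised eq. (1)

assert feasible == (Fraction(10, 3), 0)
assert penalized == (Fraction(7, 3), 1)

# penalized > feasible exactly when 7/3 + M > 10/3, i.e. when M > 1
M = 2  # any value strictly greater than 1 already suffices
assert penalized[0] + penalized[1] * M > feasible[0] + feasible[1] * M
```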

We now turn our attention to constraint (2), which is of the >= type. To create an equivalent equality, we reverse what we do for a <= constraint: we subtract a nonnegative surplus variable, denoted s_1, from the left-hand side and set the resulting expression equal to the right-hand-side constant. This yields

    4x_1 + 3x_2 - s_1 = 6.

Next, observe that none of the three variables on the left-hand side of this new equation can serve as a candidate basic variable. Therefore, as with equation (1), we further revise this equation by introducing another artificial variable, denoted A_2. This results in

    4x_1 + 3x_2 - s_1 + A_2 = 6.

In addition, we introduce a new term MA_2 into the objective function. Finally, since the last constraint is of the <= type, we simply add a slack variable, denoted s_2, to its left-hand side to convert it into an equality. In summary, we have converted the given linear program into the following form:

    Minimize    4x_1 +  x_2 + MA_1 + MA_2
    subject to  3x_1 +  x_2 + A_1          =  3    (1)
                4x_1 + 3x_2 - s_1 + A_2    =  6    (2)
                 x_1 + 2x_2 + s_2          =  3    (3)
                x_1, x_2, A_1, s_1, A_2, s_2 >= 0.

At this point, the objective function is still not in conformance with the standard form. Following what we did in our first example, we define a new variable z to serve as our objective function, which is to be minimized, and we introduce a zeroth constraint,

    z - 4x_1 - x_2 - MA_1 - MA_2 = 0,

into the constraint set. Furthermore, notice that the artificial variables A_1 and A_2, which are targeted to serve as basic variables in equations (1) and (2), also participate in this new constraint. Since this is not allowed in the standard form, we must eliminate them. This is done in two steps. First, we multiply equation (1) by M and add the outcome to equation (0); this eliminates MA_1. Next, we multiply equation (2) by M and add the outcome to the equation obtained in the first step.
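The conversion rules just described can be summarized in a small helper. This is a hedged sketch with assumed names (and it assumes the constraint has no candidate basic variable in the >= and = cases), not a routine from the text.

```python
# Hypothetical helper: convert one constraint to equality form by
# appending surplus/slack/artificial columns, per the rules above.
# Assumes no candidate basic variable exists in the ">=" and "=" cases.
def to_equality(coeffs, sense):
    """Return (new_coeffs, added) where added names the appended columns."""
    added = []
    if sense == ">=":
        coeffs = coeffs + [-1]      # subtract a surplus variable
        added.append("surplus")
        coeffs = coeffs + [1]       # still no candidate basic variable
        added.append("artificial")
    elif sense == "<=":
        coeffs = coeffs + [1]       # add a slack variable
        added.append("slack")
    else:                           # "=": introduce an artificial variable
        coeffs = coeffs + [1]
        added.append("artificial")
    return coeffs, added

assert to_equality([4, 3], ">=") == ([4, 3, -1, 1], ["surplus", "artificial"])
assert to_equality([1, 2], "<=") == ([1, 2, 1], ["slack"])
assert to_equality([3, 1], "=") == ([3, 1, 1], ["artificial"])
```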
These two steps result in the new equation (0) below:

    z + (7M - 4)x_1 + (4M - 1)x_2 - Ms_1 = 9M.
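The two elimination steps can be checked mechanically. In this sketch (assumed representation, not from the text), each row-0 entry is a pair (a, b) standing for a + bM, so M remains symbolic while the constraint rows stay numeric.

```python
# Check of the elimination: row 0 plus M*(eq. 1) plus M*(eq. 2).
def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def times_M(row):
    return [(0, c) for c in row]  # multiplying a numeric row by M

# columns: x1, x2, A1, s1, A2, RHS  (s2 is zero in all three rows)
row0 = [(-4, 0), (-1, 0), (0, -1), (0, 0), (0, -1), (0, 0)]  # z - 4x1 - x2 - M*A1 - M*A2 = 0
eq1  = [3, 1, 1,  0, 0, 3]                                   # 3x1 +  x2 + A1      = 3
eq2  = [4, 3, 0, -1, 1, 6]                                   # 4x1 + 3x2 - s1 + A2 = 6

for eq in (eq1, eq2):
    row0 = [add(c, m) for c, m in zip(row0, times_M(eq))]

# reads: z + (7M - 4)x1 + (4M - 1)x2 - M*s1 = 9M, with A1 and A2 eliminated
assert row0 == [(-4, 7), (-1, 4), (0, 0), (0, -1), (0, 0), (0, 9)]
```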

With these further revisions, we finally arrive at

    Minimize    z
    subject to  z + (7M - 4)x_1 + (4M - 1)x_2 - Ms_1 = 9M    (0)
                3x_1 +  x_2 + A_1          =  3              (1)
                4x_1 + 3x_2 - s_1 + A_2    =  6              (2)
                 x_1 + 2x_2 + s_2          =  3              (3)
                x_1, x_2, A_1, s_1, A_2, s_2 >= 0,

which is now ready for the Simplex algorithm. In tabular form, the above problem becomes:

    Basic    |  z |  x_1   |  x_2   | A_1 | s_1 | A_2 | s_2 |  RHS
    ---------+----+--------+--------+-----+-----+-----+-----+-----
             |  1 | 7M - 4 | 4M - 1 |  0  | -M  |  0  |  0  |  9M
    A_1      |  0 |   3    |   1    |  1  |  0  |  0  |  0  |   3
    A_2      |  0 |   4    |   3    |  0  | -1  |  1  |  0  |   6
    s_2      |  0 |   1    |   2    |  0  |  0  |  0  |  1  |   3

Notice that the introduction of the artificial variables allows us to conveniently declare the basis associated with this tableau as A_1, A_2, and s_2 (listed in the left margin). Therefore, the initial basic feasible solution is (x_1, x_2, A_1, s_1, A_2, s_2) = (0, 0, 3, 0, 6, 3), with a corresponding objective-function value of 9M.

Since M is big, the coefficients of x_1 and x_2 in R_0, namely 7M - 4 and 4M - 1, are both positive, implying that the current solution is not optimal. Moreover, a big M also implies that 7M - 4 is strictly larger than 4M - 1. Hence, x_1 is the entering variable, and the x_1-column is the pivot column. A comparison of the three ratios 3/3, 6/4, and 3/1 shows that R_1 is the pivot row, and hence A_1 is the leaving variable. This also identifies the entry 3, located at the intersection of the pivot column and the pivot row, as the pivot element.

We now execute a pivot. An examination of the x_1-column shows that we need to perform the following row operations: [-(7M - 4)/3]*R_1 + R_0, (1/3)*R_1, (-4/3)*R_1 + R_2, and (-1/3)*R_1 + R_3. These four operations produce new versions of equations (0), (1), (2), and (3), respectively, and these equations constitute the new tableau below.

    Basic    |  z | x_1 |    x_2     |     A_1     | s_1 | A_2 | s_2 |   RHS
    ---------+----+-----+------------+-------------+-----+-----+-----+-------
             |  1 |  0  | (5M + 1)/3 | -(7M - 4)/3 | -M  |  0  |  0  | 2M + 4
    x_1      |  0 |  1  |    1/3     |     1/3     |  0  |  0  |  0  |   1
    A_2      |  0 |  0  |    5/3     |    -4/3     | -1  |  1  |  0  |   2
    s_2      |  0 |  0  |    5/3     |    -1/3     |  0  |  0  |  1  |   2
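The pivot itself can be verified step by step. In this sketch (assumed representation, not from the text), each row-0 entry is again a pair (a, b) = a + bM; the constraint rows are numeric, so every update is a pair times a number.

```python
# Reproducing the pivot: x1 enters, A1 leaves, pivot element 3.
from fractions import Fraction as F

def sub(u, v):
    return (u[0] - v[0], u[1] - v[1])

def mul(u, r):
    return (u[0] * r, u[1] * r)

# columns: x1, x2, A1, s1, A2, s2, RHS
row0 = [(-4, 7), (-1, 4), (0, 0), (0, -1), (0, 0), (0, 0), (0, 9)]
r1 = [F(3), F(1), F(1), F(0), F(0), F(0), F(3)]    # pivot row
r2 = [F(4), F(3), F(0), F(-1), F(1), F(0), F(6)]
r3 = [F(1), F(2), F(0), F(0), F(0), F(1), F(3)]

c0 = row0[0]                                  # (7M - 4), the x1 entry of row 0
r1 = [c / F(3) for c in r1]                   # (1/3) * R_1
row0 = [sub(c, mul(c0, p)) for c, p in zip(row0, r1)]   # [-(7M - 4)/3]*R_1 + R_0
r2 = [c - 4 * p for c, p in zip(r2, r1)]      # (-4/3)*R_1 + R_2
r3 = [c - 1 * p for c, p in zip(r3, r1)]      # (-1/3)*R_1 + R_3

# new row 0: (5M + 1)/3 on x2, -(7M - 4)/3 on A1, -M on s1, RHS 2M + 4
assert row0 == [(0, 0), (F(1, 3), F(5, 3)), (F(4, 3), F(-7, 3)),
                (0, -1), (0, 0), (0, 0), (F(4), F(2))]
assert r2 == [0, F(5, 3), F(-4, 3), -1, 1, 0, 2]
assert r3 == [0, F(5, 3), F(-1, 3), 0, 0, 1, 2]
```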

Since the coefficient of x_2 in R_0 is positive, this tableau is not optimal, and more iterations are necessary. We will not carry out the remaining iterations, since they are now straightforward. Several comments are in order.

At the end of the above iteration, the artificial variable A_1 is no longer in the basis; that is, its value has been driven to zero by the algorithm. Since artificial variables are not part of the original problem, they can be discarded from further consideration as soon as they leave the basis. This reduces the amount of computation. In fact, we can do even better by discarding the leaving artificial variable (A_1, in this example) at the start (as opposed to the end) of the pivot. One should be careful not to discard any of the original variables, however.

Any solution that contains a positive value for any of the artificial variables is not feasible for the original problem. For example, the basic feasible solution associated with the above tableau is (x_1, x_2, s_1, A_2, s_2) = (1, 0, 0, 2, 2), where we have removed A_1. Since A_2 = 2 is positive, the current solution is not feasible for the original problem. This will continue to be the case until all artificial variables are driven out.

In general, it is possible for a given linear program not to have any feasible solution. In such a case, the Simplex algorithm will not succeed in driving out all of the artificial variables. Thus, if the algorithm terminates with an optimal solution in which at least one artificial variable is positive, then the original problem is infeasible.

Throughout our computation, we did not assign a specific value to M; we simply treated it, operationally, as a large number. That is, whenever M is compared against another number, we let M be the larger of the two. This is convenient on paper, but it can pose a challenge in a computer implementation of the algorithm.
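A computer implementation typically sidesteps the symbolic M by picking a concrete large value. The sketch below runs the whole big-M iteration on this example with the assumed value M = 10^6 (any sufficiently large number would do); it is an illustrative implementation, not the text's algorithm verbatim. It confirms that both artificial variables are driven to zero and that the optimum is (x_1, x_2) = (3/5, 6/5) with objective value 18/5.

```python
# Sketch of a full big-M run with a concrete numeric M (assumed: 10**6).
# Tableau layout follows the text: row 0 holds z + c*x = rhs; the entering
# column has the most positive row-0 coefficient; standard ratio test.
from fractions import Fraction as F

M = F(10**6)
cols = ["x1", "x2", "A1", "s1", "A2", "s2"]
row0 = [7*M - 4, 4*M - 1, F(0), -M, F(0), F(0), 9*M]
rows = [
    [F(3), F(1), F(1), F(0), F(0), F(0), F(3)],   # basic: A1
    [F(4), F(3), F(0), F(-1), F(1), F(0), F(6)],  # basic: A2
    [F(1), F(2), F(0), F(0), F(0), F(1), F(3)],   # basic: s2
]
basis = ["A1", "A2", "s2"]

while max(row0[:-1]) > 0:
    j = row0.index(max(row0[:-1]))                        # entering column
    ratios = [(r[-1] / r[j], i) for i, r in enumerate(rows) if r[j] > 0]
    _, i = min(ratios)                                    # leaving row
    piv = rows[i][j]
    rows[i] = [c / piv for c in rows[i]]                  # normalize pivot row
    for k in range(len(rows)):
        if k != i and rows[k][j] != 0:
            rows[k] = [c - rows[k][j] * p for c, p in zip(rows[k], rows[i])]
    row0 = [c - row0[j] * p for c, p in zip(row0, rows[i])]
    basis[i] = cols[j]

solution = dict.fromkeys(cols, F(0))
for name, r in zip(basis, rows):
    solution[name] = r[-1]

assert solution["A1"] == 0 and solution["A2"] == 0   # artificials driven out
assert (solution["x1"], solution["x2"]) == (F(3, 5), F(6, 5))
assert 4 * solution["x1"] + solution["x2"] == F(18, 5)
```

Note the trade-off mentioned above: with floating-point arithmetic, a very large M can swamp the original cost coefficients and cause round-off trouble, which is why exact fractions are used here and why the two-phase method is a common alternative.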
If our objective were instead to maximize 4x_1 + x_2, then, instead of introducing MA_1 and MA_2 into the objective function, we would introduce -MA_1 and -MA_2.