Chapter 6. Linear Programming: The Simplex Method. Introduction to the Big M Method. Section 4: Maximization and Minimization with Mixed Problem Constraints




In this section, we will present a generalized version of the simplex method that will solve both maximization and minimization problems with any combination of ≤, ≥, and = constraints.

Example:

Maximize P = 2x1 + x2
subject to   x1 + x2 ≤ 10
            -x1 + x2 ≥ 2
             x1, x2 ≥ 0

To form an equation out of the first inequality, we introduce a slack variable s1, as before, and write x1 + x2 + s1 = 10. To form an equation out of the second inequality, we introduce a second variable s2 and subtract it from the left side, so that we can write -x1 + x2 - s2 = 2. The variable s2 is called a surplus variable, because it is the amount (surplus) by which the left side of the inequality exceeds the right side.

We now express the linear programming problem as a system of equations:

 x1 + x2 + s1 = 10
-x1 + x2 - s2 = 2
-2x1 - x2 + P = 0
x1, x2, s1, s2 ≥ 0

It can be shown that a basic solution of a system is not feasible if any of the variables (excluding P) are negative. Thus a surplus variable is required to satisfy the nonnegativity constraint.

An initial basic solution is found by setting the nonbasic variables x1 and x2 equal to 0. That is, x1 = 0, x2 = 0, s1 = 10, s2 = -2, P = 0. This solution is not feasible because the surplus variable s2 is negative.

Artificial Variables

In order to use the simplex method on problems with mixed constraints, we turn to a device called an artificial variable. This variable has no physical meaning in the original problem; it is introduced solely for the purpose of obtaining a basic feasible solution so that we can apply the simplex method. An artificial variable is introduced into each equation that has a surplus variable. To ensure that we consider only basic feasible solutions, an artificial variable is required to satisfy the nonnegativity constraint.

Returning to our example, we introduce an artificial variable a1 into the equation involving the surplus variable s2:

-x1 + x2 - s2 + a1 = 2

To prevent an artificial variable from becoming part of an optimal solution to the original problem, a very large penalty is introduced into the objective function. This penalty is created by choosing a positive constant M so large that the artificial variable is forced to be 0 in any final optimal solution of the original problem.
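The infeasibility of this initial basic solution is easy to check directly. The short Python snippet below (not part of the original slides) plugs x1 = x2 = 0 into the two constraint equations:

```python
# Quick check of the initial basic solution for the system
# x1 + x2 + s1 = 10 and -x1 + x2 - s2 = 2 from the text.
x1, x2 = 0.0, 0.0          # nonbasic variables set to 0

s1 = 10 - x1 - x2          # solve the first equation for s1
s2 = -x1 + x2 - 2          # solve the second equation for s2
P = 2 * x1 + x2            # objective value

print(s1, s2, P)           # 10.0 -2.0 0.0

# The basic solution (0, 0, 10, -2) is not feasible: s2 < 0.
feasible = all(v >= 0 for v in (x1, x2, s1, s2))
print(feasible)            # False
```

This confirms the claim in the text: s2 = -2, so the basic solution is not feasible.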

We then add the term -Ma1 to the objective function: P = 2x1 + x2 - Ma1. We now have a new problem, called the modified problem:

Maximize P = 2x1 + x2 - Ma1
subject to   x1 + x2 + s1 = 10
            -x1 + x2 - s2 + a1 = 2
             x1, x2, s1, s2, a1 ≥ 0

Big M Method: Form the Modified Problem

1. If any problem constraints have negative constants on the right side, multiply both sides by -1 to obtain a constraint with a nonnegative constant. Remember to reverse the direction of the inequality if the constraint is an inequality.
2. Introduce a slack variable for each ≤ constraint.
3. Introduce a surplus variable and an artificial variable in each ≥ constraint.
4. Introduce an artificial variable in each = constraint.
5. For each artificial variable a, add -Ma to the objective function. Use the same constant M for all artificial variables.

Key Steps for Solving a Problem Using the Big M Method

Now that we have learned the steps for forming the modified problem, we turn our attention to the procedure for actually solving such problems. The procedure is called the Big M Method. The initial system for the modified problem is

 x1 + x2 + s1 = 10
-x1 + x2 - s2 + a1 = 2
-2x1 - x2 + Ma1 + P = 0
x1, x2, s1, s2, a1 ≥ 0

We next write the augmented coefficient matrix for this system, which we call the preliminary simplex tableau for the modified problem.
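The variable-introduction rules above can be applied mechanically. The sketch below (an illustration, not from the slides) counts the slack, surplus, and artificial variables needed for a list of constraint types, using this section's example:

```python
# Applying the modified-problem rules to a list of constraint types.
# The tuple representation here is illustrative, not from the text.
constraints = [("<=", 10), (">=", 2)]   # the example's two constraints

slack, surplus, artificial = 0, 0, 0
for kind, _rhs in constraints:
    if kind == "<=":
        slack += 1                      # rule 2: one slack variable per <=
    elif kind == ">=":
        surplus += 1                    # rule 3: surplus + artificial per >=
        artificial += 1
    else:                               # rule 4: artificial per = constraint
        artificial += 1

print(slack, surplus, artificial)       # 1 1 1
```

For the example, this yields one slack (s1), one surplus (s2), and one artificial variable (a1), matching the modified problem above.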

Definition: Initial Simplex Tableau

 x1   x2   s1   s2   a1   P | RHS
  1    1    1    0    0   0 |  10
 -1    1    0   -1    1   0 |   2
 -2   -1    0    0    M   1 |   0

To start the simplex process we require an initial simplex tableau. The preliminary simplex tableau should either meet these requirements, or it needs to be transformed into one that does. For a tableau to be considered an initial simplex tableau, it must satisfy the following two requirements:

1. The requisite number of basic variables must be selectable. Each basic variable must correspond to a column in the tableau with exactly one nonzero element, and different basic variables must have their nonzero entries in different rows. The remaining variables are then selected as nonbasic variables.
2. The basic solution found by setting the nonbasic variables equal to zero is feasible.

The preliminary simplex tableau from our example satisfies the first requirement, since s1, s2, and P can be selected as basic variables according to the criterion stated. However, it does not satisfy the second requirement, since the basic solution is not feasible (s2 = -2). To use the simplex method, we must first use row operations to transform the tableau into an equivalent matrix that satisfies all initial simplex tableau requirements. This transformation is not a pivot operation.

If you inspect the preliminary tableau, you see that the problem is that s2 has a negative coefficient in its column. We need to replace s2 as a basic variable by something else with a positive coefficient. We choose a1.
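Requirement 1 can be checked programmatically: a variable's column can host a basic variable exactly when it contains a single nonzero entry. The NumPy sketch below (not part of the original slides) builds the preliminary tableau, substituting a concrete large value for the symbolic M:

```python
import numpy as np

# Preliminary simplex tableau for the modified problem.
# Columns: x1, x2, s1, s2, a1, P | RHS.
M = 1_000.0  # concrete stand-in for the "very large" penalty constant
T = np.array([
    [ 1.0,  1.0, 1.0,  0.0, 0.0, 0.0, 10.0],
    [-1.0,  1.0, 0.0, -1.0, 1.0, 0.0,  2.0],
    [-2.0, -1.0, 0.0,  0.0,   M, 1.0,  0.0],
])

# Requirement 1: a column can host a basic variable if it has exactly
# one nonzero entry.
names = ["x1", "x2", "s1", "s2", "a1", "P"]
candidates = [n for n, col in zip(names, T[:, :-1].T)
              if np.count_nonzero(col) == 1]
print(candidates)  # ['s1', 's2', 'P']
```

The candidates are s1, s2, and P, just as the text states; the a1 column is disqualified by the M entry in the bottom row, which is why the row operation in the next step is needed.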

We want to use a1 as a basic variable instead of s2. We proceed to eliminate M from the a1 column using the row operation (-M)R2 + R3 -> R3:

 x1    x2   s1   s2   a1   P | RHS
  1     1    1    0    0   0 |  10
 -1     1    0   -1    1   0 |   2
M-2  -M-1    0    M    0   1 | -2M

From the last matrix we see that the basic variables are s1, a1, and P, because those are the columns that meet the requirements for basic variables. The basic solution found by setting the nonbasic variables x1, x2, and s2 equal to 0 is x1 = 0, x2 = 0, s1 = 10, s2 = 0, a1 = 2, P = -2M. The basic solution is feasible (P can be negative), and all requirements are met.

We now continue with the usual simplex process, using pivot operations. When selecting the pivot columns, keep in mind that M is unspecified, but we know it is a very large positive number. In this example, M-2 and M are positive, and -M-1 is negative, so the first pivot column is column 2. If we pivot on the second row, second column, and then on the first row, first column, we obtain:

      x1   x2   s1    s2     a1   P | RHS
x1     1    0  1/2   1/2   -1/2   0 |   4
x2     0    1  1/2  -1/2    1/2   0 |   6
P      0    0  3/2   1/2  M-1/2   1 |  14

Since all the indicators in the last row are nonnegative, we have the optimal solution: Max P = 14 at x1 = 4, x2 = 6, s1 = 0, s2 = 0, a1 = 0.
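The row operation and the two pivots above can be reproduced numerically. The NumPy sketch below (not part of the original slides) substitutes a concrete large value for the symbolic M and carries out exactly the operations described:

```python
import numpy as np

# Preliminary tableau. Columns: x1, x2, s1, s2, a1, P | RHS.
M = 1_000.0  # concrete stand-in for the symbolic penalty constant M
T = np.array([
    [ 1.0,  1.0, 1.0,  0.0, 0.0, 0.0, 10.0],
    [-1.0,  1.0, 0.0, -1.0, 1.0, 0.0,  2.0],
    [-2.0, -1.0, 0.0,  0.0,   M, 1.0,  0.0],
])

def pivot(T, r, c):
    """Scale row r so T[r, c] == 1, then clear column c in the other rows."""
    T = T.astype(float).copy()
    T[r] /= T[r, c]
    for i in range(len(T)):
        if i != r:
            T[i] -= T[i, c] * T[r]
    return T

T[2] -= M * T[1]      # (-M)R2 + R3 -> R3: eliminate M from the a1 column
T = pivot(T, 1, 1)    # pivot on the second row, second column (x2 enters)
T = pivot(T, 0, 0)    # pivot on the first row, first column (x1 enters)

print(T[0, -1], T[1, -1], T[2, -1])   # 4.0 6.0 14.0  (x1, x2, P)
```

The right-hand column of the final tableau reads 4, 6, 14, matching the optimal solution Max P = 14 at x1 = 4, x2 = 6.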

Big M Method: Summary

1. Form the preliminary simplex tableau for the modified problem.
2. Use row operations to eliminate the Ms in the bottom row of the preliminary simplex tableau in the columns corresponding to the artificial variables. The resulting tableau is the initial simplex tableau.
3. Solve the modified problem by applying the simplex method to the initial simplex tableau found in step 2.
4. Relate the optimal solution of the modified problem to the original problem:
   a) If the modified problem has no optimal solution, the original problem has no optimal solution.
   b) If all artificial variables are 0 in the optimal solution to the modified problem, delete the artificial variables to find an optimal solution to the original problem.
   c) If any artificial variables are nonzero in the optimal solution, the original problem has no optimal solution.
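The summary steps can be sketched end to end for this section's example. The loop below is a minimal simplex iteration (entering column by most negative indicator, leaving row by smallest nonnegative ratio), with a concrete stand-in value for M; it is an illustration for this one problem, not a general-purpose solver:

```python
import numpy as np

# Step 1: preliminary tableau for the modified problem.
# Columns: x1, x2, s1, s2, a1, P | RHS.
M = 1_000.0  # concrete stand-in for the symbolic penalty constant M
T = np.array([
    [ 1.0,  1.0, 1.0,  0.0, 0.0, 0.0, 10.0],
    [-1.0,  1.0, 0.0, -1.0, 1.0, 0.0,  2.0],
    [-2.0, -1.0, 0.0,  0.0,   M, 1.0,  0.0],
])

# Step 2: eliminate M from the artificial-variable column of the bottom row.
T[2] -= M * T[1]

# Step 3: standard simplex iterations on the initial tableau.
while T[-1, :-2].min() < -1e-9:           # a negative indicator remains
    c = int(np.argmin(T[-1, :-2]))        # entering column: most negative
    ratios = [T[i, -1] / T[i, c] if T[i, c] > 1e-9 else np.inf
              for i in range(len(T) - 1)]
    r = int(np.argmin(ratios))            # leaving row: smallest ratio
    T[r] /= T[r, c]
    for i in range(len(T)):
        if i != r:
            T[i] -= T[i, c] * T[r]

# Step 4: the artificial variable a1 ends up nonbasic (hence 0), so we may
# drop it and read off the optimal value of the original problem.
print(round(T[-1, -1], 6))   # 14.0, i.e. Max P = 14 at x1 = 4, x2 = 6
```

Because a1 is 0 in the optimal solution, case b) of step 4 applies, and the answer carries over to the original problem unchanged.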