Linear Programming Notes V Problem Transformations



1 Introduction

Any linear programming problem can be rewritten in either of two standard forms. In the first form, the objective is to maximize, the material constraints are all of the form linear expression <= constant (a_i . x <= b_i), and all variables are constrained to be non-negative. In symbols, this form is:

max c . x subject to Ax <= b, x >= 0.

In the second form, the objective is to maximize, the material constraints are all of the form linear expression = constant (a_i . x = b_i), and all variables are constrained to be non-negative. In symbols, this form is:

max c . x subject to Ax = b, x >= 0.

In this formulation (but not the first) we can take b >= 0. Note: the c, A, b in the first problem are not the same as the c, A, b in the second problem. In order to rewrite the problem, I need to introduce a small number of transformations. I'll explain them in these notes.

2 Equivalent Representations

When I say that I can rewrite a linear programming problem, I mean that I can find a representation that contains exactly the same information. For example, the expression 2x = 8 contains the same information as the expression x = 4. They both describe the same fact about x. In general, an equivalent representation of a linear programming problem will be one that contains exactly the same information as the original problem. Solving one will immediately give you the solution to the other. When I claim that I can write any linear programming problem in a standard form, I need to demonstrate that I can make several kinds of transformation: change a minimization problem to a maximization problem; replace a constraint of the form a_i . x <= b_i by an equation or equations; replace a constraint of the form a_i . x >= b_i by an equation or equations; replace a constraint of the form a_i . x = b_i by an inequality or inequalities; and replace a variable that is not explicitly constrained to be non-negative by a variable or variables so constrained.
If I can do all that, then I can write any problem in either of the desired forms.

3 Rewriting the Objective Function

The objective will be either to maximize or to minimize. If you start with a maximization problem, then there is nothing to change. If you start with a minimization problem, say min f(x) subject to x in S, then an equivalent maximization problem is max -f(x) subject to x in S. That is, minimizing f is the same as maximizing -f. This trick is completely general (that is, it is not limited to LPs). Any solution to the minimization problem will be a solution to the maximization problem, and conversely. (Note that the value of the maximization problem will be -1 times the value of the minimization problem.) In summary: to change a min problem into a max problem (or the reverse), just multiply the objective function by -1.

4 Rewriting a constraint of the form a_i . x <= b_i

To transform this constraint into an equation, add a non-negative slack variable:

a_i . x <= b_i if and only if a_i . x + s_i = b_i and s_i >= 0.

We have seen this trick before. If x satisfies the inequality, then s_i = b_i - a_i . x >= 0. Conversely, if x and s_i satisfy the expressions in the second line, then the first line must be true. Hence the two expressions are equivalent. Note that by multiplying both sides of the equation a_i . x + s_i = b_i by -1 if necessary, we can guarantee that the right-hand side is non-negative.

5 Rewriting a constraint of the form a_i . x >= b_i

To transform this constraint into an equation, subtract a non-negative surplus variable:

a_i . x >= b_i if and only if a_i . x - s_i = b_i and s_i >= 0.

The reasoning is exactly like the case of the slack variable. To transform this constraint into an inequality pointed in the other direction, multiply both sides by -1:

a_i . x >= b_i if and only if -a_i . x <= -b_i.
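The three tricks above are easy to sanity-check numerically. The sketch below uses my own toy coefficients and sample points (not from the notes) and assumes NumPy is available; it verifies each equivalence at a point:

```python
import numpy as np

# Trick 1 (section 3): minimizing f is the same as maximizing -f.
xs = np.linspace(-5.0, 5.0, 101)         # sample points
f = lambda x: 4 * x + 1                  # an arbitrary linear objective
x_min = xs[np.argmin(f(xs))]             # minimizer of f over the samples
x_max = xs[np.argmax(-f(xs))]            # maximizer of -f over the samples
assert x_min == x_max                    # same solution point
assert f(x_min) == -(-f(x_max))          # optimal values differ only in sign

# Trick 2 (section 4): a_i.x <= b_i becomes a_i.x + s_i = b_i, s_i >= 0.
a, b = np.array([2.0, 1.0]), 6.0
x = np.array([1.0, 3.0])                 # satisfies 2*1 + 3 <= 6
s = b - a @ x                            # the slack variable
assert s >= 0 and np.isclose(a @ x + s, b)

# Trick 3 (section 5): a_i.x >= b_i becomes a_i.x - s_i = b_i, s_i >= 0.
x2 = np.array([3.0, 2.0])                # satisfies 2*3 + 2 >= 6
s2 = a @ x2 - b                          # the surplus variable
assert s2 >= 0 and np.isclose(a @ x2 - s2, b)
```

Each assertion mirrors one direction of the equivalence argued in the text: a feasible point of the inequality yields a non-negative slack (or surplus) that makes the equation hold exactly.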

6 Rewriting a constraint of the form a_i . x = b_i

To transform an equation into inequalities, note that w = z is exactly the same as w <= z and w >= z. That is, the only way for two numbers to be equal is for each to be both less than or equal to and greater than or equal to the other. It follows that

a_i . x = b_i if and only if a_i . x <= b_i and a_i . x >= b_i.

By the last section, the second of these inequalities can be reversed, so the pair becomes: a_i . x <= b_i and -a_i . x <= -b_i.

7 Guaranteeing that All Variables are Explicitly Constrained to be Non-Negative

Most of the problems that we look at require the variables to be non-negative. The constraint arises naturally in many applications, but it is not essential. The standard way of writing linear programming problems imposes this condition. This section shows that there is no loss of generality in imposing the restriction. That is, if you are thinking about a linear programming problem, then I can think of a mathematically equivalent problem in which all of the variables must be non-negative.

The transformation uses a simple trick. You replace an unconstrained variable x_j by two variables u_j and v_j. Whenever you see x_j in the problem, you replace it with u_j - v_j. Furthermore, you impose the constraint that u_j, v_j >= 0. When you carry out the substitution, you replace x_j by non-negative variables. You don't change the problem. Any value that x_j can take can be expressed as a difference of non-negative numbers (in fact, there are infinitely many ways to express it). Specifically, if x_j >= 0, then you can let u_j = x_j and v_j = 0; if x_j < 0, then you can let u_j = 0 and v_j = -x_j.

8 What Is the Point?

The previous sections simply introduce accounting tricks. There is no substance to the transformations. If you put the tricks together, they support the claim that I made at the beginning of the notes. Who cares? The form of the problem with equality constraints and non-negative variables is the form that the simplex algorithm uses.
The inequality-constraint form (with non-negative variables) is the form used for the duality theorem.
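The variable-splitting trick of section 7 is mechanical enough to write as code. In the sketch below, the helper names split_free_variable and recover are mine, for illustration (assuming NumPy): column j of A, and entry j of c, are duplicated with flipped sign, and a solution of the split problem maps back via x_j = u_j - v_j.

```python
import numpy as np

def split_free_variable(c, A, j):
    """Replace free variable x_j by u_j - v_j with u_j, v_j >= 0.

    The new column (for v_j) is the old column with its sign flipped;
    in the resulting problem every variable is non-negative.
    """
    c2 = np.insert(c, j + 1, -c[j])              # objective entry for v_j
    A2 = np.insert(A, j + 1, -A[:, j], axis=1)   # constraint column for v_j
    return c2, A2

def recover(x2, j):
    """Map a solution of the split problem back: x_j = u_j - v_j."""
    x = np.delete(x2, j + 1)
    x[j] = x2[j] - x2[j + 1]
    return x

# Any value of x_j is representable: x_j = -3 -> (u_j, v_j) = (0, 3).
# Check that objective and constraint values are unchanged.
c = np.array([4.0, 1.0])
A = np.array([[2.0, 1.0]])
c2, A2 = split_free_variable(c, A, 0)
x = np.array([-3.0, 5.0])                  # x_0 is free (negative here)
x2 = np.array([0.0, 3.0, 5.0])             # (u_0, v_0, x_1), all >= 0
assert np.isclose(c @ x, c2 @ x2)          # same objective value
assert np.allclose(A @ x, A2 @ x2)         # same constraint values
assert np.allclose(recover(x2, 0), x)      # substitution inverts cleanly
```

The design point matches the text: the substitution is pure bookkeeping, so every feasible point of one problem corresponds to a feasible point of the other with the same objective value.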

Warnings: These transformations really are transformations. If you start with a problem in which x_1 is not constrained to be non-negative, but act as if it is so constrained, then you may not get the right answer (you'll be wrong if the solution requires that x_1 take a negative value). If you treat an equality constraint like an inequality constraint, then you'll get the wrong answer (unless the constraint binds at the solution). Similarly, you can't in general treat an inequality constraint as an equation. The transformations involve creating a new variable or constraint to compensate for changing inequalities to equations, equations to inequalities, or whatever it is you do.

9 Example

You know all the ideas. Let me show you how they work. Start with the problem:

min 4x_1 + x_2 subject to
2x_1 + x_2 >= 6
x_2 + x_3 = 4
x_1 <= 4
x_2, x_3 >= 0,

and let's write it in either of the two standard forms. First, to get it into the form max c . x subject to Ax <= b, x >= 0, change the objective to a maximization by multiplying by -1: max -4x_1 - x_2. Next, change the constraints. Multiply the first constraint by -1: -2x_1 - x_2 <= -6; replace the second constraint by two inequalities: x_2 + x_3 <= 4 and -x_2 - x_3 <= -4; the third constraint, x_1 <= 4, is already an inequality of the right form. Finally, replace the unconstrained variable x_1 everywhere by u_1 - v_1 and add the constraints u_1, v_1 >= 0. Putting these together leads to the reformulated problem

max -4u_1 + 4v_1 - x_2 subject to
-2u_1 + 2v_1 - x_2 <= -6
x_2 + x_3 <= 4
-x_2 - x_3 <= -4
u_1 - v_1 <= 4
u_1, v_1, x_2, x_3 >= 0.

In matrix notation, this problem is in the form max c . x subject to Ax <= b, x >= 0 with c = (-4, 4, -1, 0), b = (-6, 4, -4, 4), and

A = [ -2   2  -1   0
       0   0   1   1
       0   0  -1  -1
       1  -1   0   0 ].

Next, to put the problem into the form max c . x subject to Ax = b, x >= 0: change the objective function to a maximization as above; replace x_1 by u_1 - v_1 as above; replace the first constraint with 2u_1 - 2v_1 + x_2 - s_1 = 6 and s_1 >= 0; leave the second constraint alone; and replace the third constraint with u_1 - v_1 + s_3 = 4 and s_3 >= 0. The problem then becomes

max -4u_1 + 4v_1 - x_2 subject to
2u_1 - 2v_1 + x_2 - s_1 = 6
x_2 + x_3 = 4
u_1 - v_1 + s_3 = 4
u_1, v_1, x_2, x_3, s_1, s_3 >= 0.

In matrix notation, this problem is in the form max c . x subject to Ax = b, x >= 0 with c = (-4, 4, -1, 0, 0, 0), b = (6, 4, 4), and

A = [ 2  -2   1   0  -1   0
      0   0   1   1   0   0
      1  -1   0   0   0   1 ].

In the two different transformations, the A, b, and c used in the representation differ. Indeed, the two descriptions of the problem have different numbers of variables and different numbers of constraints. It does not matter. If you solve either problem, you can substitute back to find values for the original variables x_1, x_2, x_3 and the original objective function.
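As a numerical check on the example (the optimal point is my addition; the notes do not compute it), the original problem is solved by x_1 = 1, x_2 = 4, x_3 = 0 with value 8: x_2 + x_3 = 4 forces x_2 <= 4, so the first constraint forces x_1 >= 1. The corresponding points are feasible in both standard forms and give objective value -8, matching the sign flip of section 3 (assuming NumPy):

```python
import numpy as np

# Inequality form: variables (u1, v1, x2, x3).
c1 = np.array([-4.0, 4.0, -1.0, 0.0])
A1 = np.array([[-2.0,  2.0, -1.0,  0.0],
               [ 0.0,  0.0,  1.0,  1.0],
               [ 0.0,  0.0, -1.0, -1.0],
               [ 1.0, -1.0,  0.0,  0.0]])
b1 = np.array([-6.0, 4.0, -4.0, 4.0])
z1 = np.array([1.0, 0.0, 4.0, 0.0])        # u1 = 1, v1 = 0 encodes x1 = 1
assert np.all(z1 >= 0) and np.all(A1 @ z1 <= b1 + 1e-9)
assert np.isclose(c1 @ z1, -8.0)           # -(4*x1 + x2) at the optimum

# Equality form: variables (u1, v1, x2, x3, s1, s3).
c2 = np.array([-4.0, 4.0, -1.0, 0.0, 0.0, 0.0])
A2 = np.array([[2.0, -2.0, 1.0, 0.0, -1.0, 0.0],
               [0.0,  0.0, 1.0, 1.0,  0.0, 0.0],
               [1.0, -1.0, 0.0, 0.0,  0.0, 1.0]])
b2 = np.array([6.0, 4.0, 4.0])
z2 = np.array([1.0, 0.0, 4.0, 0.0, 0.0, 3.0])  # s1 = 0, s3 = 4 - x1 = 3
assert np.allclose(A2 @ z2, b2) and np.all(z2 >= 0)
assert np.isclose(c2 @ z2, -8.0)
```

Note how the slack values at the optimum carry information: s_1 = 0 says the first constraint binds there, while s_3 = 3 says x_1 <= 4 is slack.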

Computer programs that solve linear programming problems (like Excel's Solver) are smart enough to perform these transformations automatically. That is, you need not perform any transformations of this sort in order to enter an LP into Excel. The program asks you whether you are minimizing or maximizing, whether each constraint is an inequality or an equation, and whether the variables are constrained to be non-negative.
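Modern solver libraries behave the same way. As a sketch (assuming SciPy is available; the numbers come from the example of section 9), scipy.optimize.linprog accepts the problem in its natural mixed form: inequality and equality constraints are passed separately, and x_1 is marked free through its bounds, so no manual variable splitting is needed. The one transformation it does not hide is section 3's: linprog always minimizes, so a max problem must have its objective negated by hand.

```python
from scipy.optimize import linprog

# The section 9 example, entered directly in its original mixed form.
res = linprog(
    c=[4.0, 1.0, 0.0],                    # min 4*x1 + x2 (+ 0*x3)
    A_ub=[[-2.0, -1.0, 0.0],              # 2*x1 + x2 >= 6, flipped to <=
          [1.0, 0.0, 0.0]],               # x1 <= 4
    b_ub=[-6.0, 4.0],
    A_eq=[[0.0, 1.0, 1.0]],               # x2 + x3 = 4
    b_eq=[4.0],
    bounds=[(None, None),                 # x1 free: the solver handles it
            (0.0, None), (0.0, None)],    # x2, x3 >= 0
)
print(res.fun, res.x)                     # optimal value 8 at (1, 4, 0)
```

Solving either hand-built standard form from section 9 instead would return the same value, 8, after undoing the sign flip and the u - v substitution.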