What is Linear Programming?
Chapter 1: What is Linear Programming?

An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x to be optimized, and a set of constraints to be satisfied by x. A linear program is an optimization problem in which all of the involved functions are linear in x; in particular, all the constraints are linear inequalities and equalities. Linear programming is the subject of studying and solving linear programs.

Linear programming was born during the Second World War out of the necessity of solving military logistics problems. It remains one of the most widely used mathematical techniques in modern societies.

1.1 A Toy Problem

A local furniture shop makes chairs and tables. The projected profits for the two products are, respectively, $20 per chair and $30 per table. The projected demands for chairs and tables are 400 and 100, respectively. Each chair requires 2 cubic feet of wood while each table requires 4 cubic feet. The shop has a total of 1,000 cubic feet of wood in store. How many chairs and tables should the shop make in order to maximize its profit?

Let x1 be the number of chairs and x2 the number of tables to be made. These are the two variables, or unknowns, of this problem. The shop wants to maximize its total profit, 20x1 + 30x2, subject to the constraints that (a) the total amount of wood used to make the two products cannot exceed the 1,000 cubic feet available, and (b) the numbers of chairs and tables to be made

should not exceed the demands. In addition, we should not forget that the numbers of chairs and tables made need to be nonnegative. Putting all these together, we have an optimization problem:

    max  20x1 + 30x2
    s.t. 2x1 + 4x2 ≤ 1000
         0 ≤ x1 ≤ 400
         0 ≤ x2 ≤ 100                    (1.1)

where 20x1 + 30x2 is the objective function and "s.t." is shorthand for "subject to", which is followed by the constraints of the problem. This optimization problem is clearly a linear program: all the functions involved, both in the objective and in the constraints, are linear functions of x1 and x2.

1.2 From Concrete to Abstract

Let us look at an abstract production model. A company produces n products using m kinds of materials. For the next month, the unit profits for the n products are projected to be, respectively, c1, c2, ..., cn. The amounts of materials available to the company in the next month are, respectively, b1, b2, ..., bm. The amount of material i consumed by a unit of product j is given by aij ≥ 0 for i = 1, 2, ..., m and j = 1, 2, ..., n (some aij could be zero if product j does not consume material i). The question facing the company is: given the limited availability of materials, what quantity of each product should the company produce in the next month in order to achieve the maximum total profit?

The decision variables are obviously the amounts of production for the n products in the next month. Let us call them x1, x2, ..., xn. The optimization model is to maximize the total profit, subject to the material availability constraints for all m materials and the nonnegativity constraints on the n variables. In mathematical terms, the model is the following linear program:

    max  c1 x1 + c2 x2 + ... + cn xn
    s.t. a11 x1 + a12 x2 + ... + a1n xn ≤ b1
         a21 x1 + a22 x2 + ... + a2n xn ≤ b2
         ...
         am1 x1 + am2 x2 + ... + amn xn ≤ bm
         x1, x2, ..., xn ≥ 0             (1.2)

The nonnegativity constraints can be vital, but they are often forgotten by beginners. Why is nonnegativity important here? First, in the above context, it does not make sense to expect the company to produce a negative amount of a product. Moreover, if one product, say product k, is not profitable, corresponding to ck < 0, then without the nonnegativity constraints the model would produce a solution with xk < 0 and generate a profit ck xk > 0. In fact, since a negative amount of product k would not consume any materials but instead generate them, one could drive the profit to infinity by forcing xk to go to negative infinity. Hence, the model would be totally wrong had one forgotten nonnegativity.

The linear program (1.2) is tedious to write out. One can shorten the expressions using summation notation. For example, the total profit can be represented by the left-hand side instead of the right-hand side of the identity

    sum_{i=1}^{n} ci xi = c1 x1 + c2 x2 + ... + cn xn.

However, a much more concise way is to use matrices and vectors. If we let c = (c1, c2, ..., cn)^T and x = (x1, x2, ..., xn)^T, then the total profit becomes c^T x. In matrix-vector notation, the linear program (1.2) becomes

    max  c^T x
    s.t. Ax ≤ b
         x ≥ 0                           (1.3)

where A ∈ R^{m×n} and b ∈ R^m. The inequalities are always understood as component-wise comparisons.
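For a two-variable linear program such as the toy problem (1.1), an optimal solution can be found by brute force: every vertex of the feasible region is the intersection of two constraint boundaries, so we can enumerate those intersections, discard the infeasible ones, and evaluate the objective at the rest. The following is a minimal sketch in plain Python (no LP library); the function names are our own, not from any standard package.

```python
from itertools import combinations

# Constraints of the toy problem (1.1), each written as a*x1 + b*x2 <= r,
# including the nonnegativity constraints -x1 <= 0 and -x2 <= 0.
constraints = [
    (2.0, 4.0, 1000.0),   # wood:   2x1 + 4x2 <= 1000
    (1.0, 0.0, 400.0),    # demand: x1 <= 400
    (0.0, 1.0, 100.0),    # demand: x2 <= 100
    (-1.0, 0.0, 0.0),     # x1 >= 0
    (0.0, -1.0, 0.0),     # x2 >= 0
]

def intersect(c1, c2):
    """Intersection point of the two boundary lines, or None if parallel."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p, tol=1e-9):
    """Check that a point satisfies every constraint (up to a tolerance)."""
    return all(a * p[0] + b * p[1] <= r + tol for a, b, r in constraints)

# A vertex of the feasible polygon is a feasible intersection of two boundaries.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]

best = max(vertices, key=lambda p: 20 * p[0] + 30 * p[1])
print(best, 20 * best[0] + 30 * best[1])   # (400.0, 50.0) with profit 9500.0
```

This enumeration costs O(m^2) line intersections and only works in two dimensions; it is meant to make the geometry concrete, not to compete with the simplex method discussed later.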

1.3 A Standard Form

A linear program can have an objective of either minimization or maximization, while its constraints can be any combination of linear inequalities and equalities. It is impractical to study linear programs, and to design algorithms for them, without introducing a so-called standard form: a unifying framework encompassing most, if not all, individual forms of linear programs. Different standard forms exist in the literature, each offering different advantages under different circumstances. Here we will mostly use the following standard form:

    min  c^T x
    s.t. Ax = b
         x ≥ 0                           (1.4)

where the matrix A is assumed to be m×n with m < n, b ∈ R^m, and c, x ∈ R^n. The triple (A, b, c) represents problem data that need to be specified, and x is the variable to be determined. In fact, once the size of A is given, the sizes of all the other quantities follow from the rules of matrix multiplication.

In plain English, a standard linear program is a minimization problem with a set of equality constraints but no inequality constraints except nonnegativity on all variables. We will always assume that A has full rank; in other words, the rows of A are linearly independent, which ensures that the equations in Ax = b are consistent for any right-hand side b ∈ R^m. We make this assumption to simplify matters without loss of generality, because redundant or inconsistent linear equations can always be detected, and removed if so desired, through standard linear algebra techniques. Moreover, the requirement m < n ensures that in general there are infinitely many solutions to the equation Ax = b, leaving degrees of freedom for nonnegative and optimal solutions.

A linear program, say LP-A, is said to be equivalent to a linear program LP-B if an optimal solution of LP-A, when it exists, can be obtained from an optimal solution of LP-B through some simple algebraic operations. 
This equivalence allows one to solve LP-B and obtain a solution to LP-A. We claim that every linear program is equivalent to a standard-form linear program through a transformation, which usually requires adding extra variables and constraints. Obviously, maximizing a function is equivalent to minimizing the negative of the function. A common trick to transform an inequality a^T x ≤ β into an equivalent equality is to add a so-called slack

variable η so that

    a^T x ≤ β   ⟺   a^T x + η = β,  η ≥ 0.

Let us consider the toy problem (1.1). With the addition of slack variables x3, x4, x5 (we can name them any way we want), we transform the linear program on the left into the one on the right:

    max  20x1 + 30x2                  max  20x1 + 30x2
    s.t. 2x1 + 4x2 ≤ 1000      =>     s.t. 2x1 + 4x2 + x3 = 1000
         x1 ≤ 400                          x1 + x4 = 400
         x2 ≤ 100                          x2 + x5 = 100
         x1, x2 ≥ 0                        x1, x2, x3, x4, x5 ≥ 0

After switching to minimization, we turn the linear program on the right into an equivalent linear program of the standard form:

    min  -20x1 - 30x2 + 0x3 + 0x4 + 0x5
    s.t.   2x1 + 4x2 + x3 + 0x4 + 0x5 = 1000
            x1 + 0x2 + 0x3 + x4 + 0x5 = 400
           0x1 + x2 + 0x3 + 0x4 + x5 = 100
           x1, x2, x3, x4, x5 ≥ 0

where c^T = (-20 -30 0 0 0), b is unchanged, and

        [ 2 4 1 0 0 ]
    A = [ 1 0 0 1 0 ]                    (1.5)
        [ 0 1 0 0 1 ]

This new coefficient matrix is obtained by padding the 3-by-3 identity matrix to the right of the original coefficient matrix. Similarly, the general linear program (1.3) can be transformed into

    min  c^T x
    s.t. Ax + s = b
         x, s ≥ 0                        (1.6)

where s ∈ R^m is the vector of slack variables. This linear program is in the standard form because it is a minimization problem with only equality constraints and nonnegativity on all variables. The variable in (1.6) consists of x ∈ R^n and s ∈ R^m, and the new data triple is obtained by the construction

    A → [A I],   b → b,   c → (c; 0)

where I is the m-by-m identity matrix and 0 ∈ R^m.
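The slack-variable transformation is mechanical enough to automate. Here is a small sketch (using plain nested lists, with a helper name of our own choosing) that negates the objective and pads the identity block exactly as in (1.5):

```python
def to_standard_form(c, A, b):
    """Convert  max c^T x, Ax <= b, x >= 0  into the standard form
    min c'^T x', A'x' = b, x' >= 0  by negating c and appending one
    slack variable per inequality, i.e. padding A with an identity block."""
    m = len(A)
    A_std = [row[:] + [1.0 if i == j else 0.0 for j in range(m)]
             for i, row in enumerate(A)]
    c_std = [-ci for ci in c] + [0.0] * m   # slacks contribute nothing to profit
    return c_std, A_std, b[:]

# The toy problem (1.1) in inequality form:
c, A, b = [20, 30], [[2, 4], [1, 0], [0, 1]], [1000, 400, 100]
c_std, A_std, b_std = to_standard_form(c, A, b)
# A_std is now the matrix (1.5): the original columns padded with a 3x3 identity.
for row in A_std:
    print(row)
```

Note the design choice: the slack columns are appended on the right, so the first n components of a standard-form solution are exactly the original decision variables.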

1.4 Feasibility and Solution Sets

Let us call the set

    F = {x ∈ R^n : Ax = b, x ≥ 0} ⊆ R^n          (1.7)

the feasibility set of the linear program (1.4). Points in the feasibility set are called feasible points, among which we seek an optimal solution x* that minimizes the objective function c^T x. With the help of the feasibility-set notation, we can write our standard linear program in the concise form

    min {c^T x : x ∈ F}.                          (1.8)

A polyhedron in R^n is a subset of R^n defined by points satisfying a collection of linear inequalities and/or equalities. For example, a hyperplane {x ∈ R^n : a^T x = β} is a polyhedron, where a ∈ R^n and β ∈ R, and a half-space {x ∈ R^n : a^T x ≤ β} is a polyhedron as well. In geometric terms, a polyhedron is nothing but the intersection of a collection of hyperplanes and half-spaces. In particular, the empty set and a singleton set (one that contains only a single point) are polyhedra.

Clearly, the feasibility set F in (1.7) for the linear program (1.4) is a polyhedron in R^n. It may be empty if some constraints are contradictory (say, x1 ≤ -1 and x1 ≥ 1), or it may be an unbounded set, say,

    F = {x ∈ R^2 : x1 - x2 = 0, x ≥ 0} ⊆ R^2,    (1.9)

which is the half diagonal line emanating from the origin towards infinity. Most of the time, F will be a bounded, nonempty set. A bounded polyhedron is also called a polytope.

The optimal solution set, or just solution set for short, of a linear program consists of all feasible points that optimize its objective function. In particular, we denote the solution set of the standard linear program (1.4), or (1.8), by S. Since S ⊆ F, S is empty whenever F is empty. However, S can be empty even when F is not.
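Membership in F is easy to test numerically: check nonnegativity component by component and check each equation of Ax = b up to a tolerance. A minimal sketch (the function name and the tolerance are our own choices):

```python
def in_feasibility_set(x, A, b, tol=1e-9):
    """Test membership in F = {x : Ax = b, x >= 0} from (1.7)."""
    if any(xi < -tol for xi in x):          # nonnegativity
        return False
    for row, bi in zip(A, b):               # each equation of Ax = b
        if abs(sum(aij * xj for aij, xj in zip(row, x)) - bi) > tol:
            return False
    return True

# Feasibility set of the standard-form toy problem, with A as in (1.5):
A = [[2, 4, 1, 0, 0], [1, 0, 0, 1, 0], [0, 1, 0, 0, 1]]
b = [1000, 400, 100]
print(in_feasibility_set([400, 50, 0, 0, 50], A, b))    # True
print(in_feasibility_set([500, 0, 0, -100, 100], A, b)) # False: negative slack
```

The second point satisfies Ax = b but violates nonnegativity, which is exactly the distinction the nonnegativity constraints were added to enforce.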

The purpose of a linear programming algorithm is to determine whether S is empty or not and, in the latter case, to find a member x* of S. In case such an x* exists, we can write

    S = {x ∈ F : c^T x = c^T x*}   or   S = {x ∈ R^n : c^T x = c^T x*} ∩ F.          (1.10)

That is, S is the intersection of a hyperplane with the polyhedron F. Hence, S itself is a polyhedron. If the intersection is a singleton S = {x*}, then x* is the unique optimal solution; otherwise, there must exist infinitely many optimal solutions to the linear program.

1.5 Three Possibilities

A linear program is infeasible if its feasibility set is empty; otherwise, it is feasible. A linear program is unbounded if it is feasible but its objective function can be made arbitrarily good. For example, if a linear program is a minimization problem and unbounded, then its objective value can be made arbitrarily small while maintaining feasibility. In other words, we can drive the objective value to negative infinity within the feasibility set. The situation is similar for an unbounded maximization problem, where we can drive the objective value to positive infinity.

Clearly, a linear program is unbounded only if its feasibility set is an unbounded set. However, an unbounded feasibility set does not necessarily imply that the linear program itself is unbounded. To make this clear, let us formally define the term unbounded for a set and for a linear program. We say a set F is unbounded if there exists a sequence {x^k} ⊆ F such that ||x^k|| → ∞ as k → ∞. On the other hand, we say a minimization linear program min {c^T x : x ∈ F} is unbounded if there exists a sequence {x^k} ⊆ F such that c^T x^k → -∞ as k → ∞. Hence, for a linear program the term unbounded means objective unbounded.

When the feasibility set F is unbounded, whether or not the corresponding linear program is unbounded depends entirely on the objective function. For example, consider F given by (1.9). The linear program

    max {x1 + x2 : (x1, x2) ∈ F} = max {2x1 : x1 ≥ 0}

is unbounded. 
However, when the objective is changed to minimization instead, the resulting linear program has an optimal solution at the origin.
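The contrast is easy to see numerically. Points of the set F in (1.9) lie on the ray (t, t) with t ≥ 0, so we can sample the ray and watch the objective x1 + x2 grow without bound, while its minimum over the same samples is attained at the origin. A tiny illustration:

```python
# F = {x in R^2 : x1 - x2 = 0, x >= 0} is the ray (t, t), t >= 0.
ray = [(t, t) for t in [0, 1, 10, 100, 1000]]

values = [x1 + x2 for (x1, x2) in ray]   # objective x1 + x2 along the ray
print(values)       # grows without bound: the maximization problem is unbounded
print(min(values))  # 0, attained at the origin: the minimization problem is not
```

Sampling a ray is of course not a proof of unboundedness; the proof is the identity max {x1 + x2 : (x1, x2) ∈ F} = max {2t : t ≥ 0} noted above.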

If a linear program is feasible but not (objective) unbounded, then it must achieve a finite optimal value within its feasibility set; in other words, it has an optimal solution x* ∈ S ⊆ F. To sum up, for any given linear program there are exactly three possibilities:

1. The linear program is infeasible, i.e., F = ∅. In this case, S = ∅.

2. The linear program is feasible but (objective) unbounded. In this case, F must be an unbounded set and S = ∅.

3. The linear program is feasible and has an optimal solution x* ∈ S ⊆ F. In this case, the feasibility set F can be either bounded or unbounded.

These three possibilities imply that if F is both nonempty and bounded, then the corresponding linear program must have an optimal solution x* ∈ S ⊆ F, regardless of its objective.
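Off-the-shelf solvers report exactly these three outcomes. As a sketch, assuming SciPy is available, its `scipy.optimize.linprog` routine (which solves min c^T x subject to A_ub x ≤ b_ub with x ≥ 0 by default) distinguishes the three cases through its status code:

```python
from scipy.optimize import linprog  # assumed dependency: SciPy

# Three one-variable programs, min c^T x s.t. A_ub x <= b_ub
# (linprog's default bounds already enforce x >= 0).
cases = {
    "infeasible": dict(c=[1], A_ub=[[1], [-1]], b_ub=[1, -2]),  # x <= 1 and x >= 2
    "unbounded":  dict(c=[-1]),                                 # min -x over x >= 0
    "optimal":    dict(c=[1], A_ub=[[1]], b_ub=[1]),            # min x over 0 <= x <= 1
}
statuses = {name: linprog(**lp).status for name, lp in cases.items()}
print(statuses)  # status codes: 0 = optimal found, 2 = infeasible, 3 = unbounded
```

The "unbounded" case has the unbounded feasibility set {x : x ≥ 0}; replacing its objective by min x (the "optimal" case without the upper bound) would yield an optimal solution at 0, echoing the point that boundedness of the program depends on the objective, not just on F.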