SOLVING LINEAR SYSTEM OF INEQUALITIES WITH APPLICATION TO LINEAR PROGRAMS


Hossein Arsham, University of Baltimore, (410) 837-5268, harsham@ubalt.edu
Veena Adlakha, University of Baltimore, (410) 837-4969, vadlakha@ubalt.edu

ABSTRACT

In this paper we present an improved Algebraic Method for solving linear systems of inequalities, with applications to linear optimization. The proposed method eliminates the need to manipulate linear inequalities and to introduce additional variables. We also present an application of the proposed method to linear optimization with a varying objective function. Numerical examples are provided to illustrate the concepts presented in the paper.

INTRODUCTION

There are numerous algorithms in linear algebra for solving systems of simultaneous linear equations with unrestricted variables. However, the problem of solving a system of simultaneous linear inequalities in which some variables must be non-negative is much harder, and had to wait until the linear programming (LP) era for a resolution. The Graphical Method of solving LP problems provides a clear illustration of the feasible and infeasible regions, as well as the vertices of the feasible region. Having a visual understanding of the problem helps practitioners develop a more rational thought process; for example, it makes visible the fact that the optimal solution of a linear program with a non-empty bounded feasible region always occurs at a vertex of the feasible region. However, the Graphical Method is limited to LP problems with at most two decision variables, and its appeal to human vision fades when many constraints are present. The ordinary algebraic method is a complete enumeration algorithm for solving linear programs (LPs) with bounded solutions.
It converts all inequality constraints to equality constraints by introducing slack/surplus variables, converts all variables that are not restricted in sign to restricted ones by substituting the difference of two new variables, and finally solves all of the resulting square subsystems of equations. This conversion of an LP into a purely algebraic version loses sight of the original space of the decision variables and treats all variables alike throughout the process. In this paper, we propose an improved method to overcome these deficiencies. The Algebraic Method is designed to extend the results of the Graphical Method to multi-dimensional LP problems. The proposed method uses concepts from analytic geometry and overcomes the Graphical Method's dependence on human vision. The algorithm first locates basic solutions by solving selected square subsystems of equations, whose size depends on the number of decision variables and constraints. A feasibility test is then performed on each solution obtained to determine whether it is retained for further consideration.
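As a small illustration of the first conversion step, the sketch below appends one slack or surplus column per inequality to turn a system of inequalities into equalities. The three constraints are the ones used in Example 1 later in the paper; the list-based encoding is our own choice, not the paper's notation.

```python
# Step 1 of the ordinary algebraic method: convert each inequality
# a1*x1 + a2*x2 <= b (or >= b) into an equality by appending one
# slack (for "<=") or surplus (for ">=") variable per constraint.
rows = [
    ([-3, 1], 6, "<="),   # -3x1 +  x2 <= 6   ->  ... + s1 = 6
    ([1, 2], 4, "<="),    #   x1 + 2x2 <= 4   ->  ... + s2 = 4
    ([0, 1], -3, ">="),   #         x2 >= -3  ->  ... - s3 = -3
]

augmented = []
for i, (coeffs, rhs, sense) in enumerate(rows):
    extra = [0] * len(rows)
    extra[i] = 1 if sense == "<=" else -1   # +slack or -surplus column
    augmented.append((coeffs + extra, rhs))

for lhs, rhs in augmented:
    print(lhs, "=", rhs)
```

Each printed row is one equality constraint over the enlarged variable vector (x1, x2, s1, s2, s3).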

THE LINEAR PROGRAMMING PROBLEM

Linear programming is a problem-solving approach developed to help managers make decisions, and numerous applications of it can be found in today's competitive business environment. Consider the following standard LP formulation, Problem P:

Max (or Min) f(X) = CX
subject to:
AX ≤ a, BX ≥ b, DX = d,
Xi ≥ 0, i = 1, ..., j
Xi ≤ 0, i = j+1, ..., k
Xi unrestricted in sign, i = k+1, ..., n

where matrices A, B, and D have n columns with p, r, and q rows respectively, and vectors C, a, b, and d have the appropriate dimensions. Therefore, there are m = (p + r + q) constraints and n decision variables. It is assumed that m ≥ n in Problem P. Note that the main constraints have been separated into three subgroups. Without loss of generality we assume all RHS elements, a, b, and d, are non-negative. We do not deal with the trivial cases where A = B = D = 0 (no constraints), or a = b = d = 0 (all boundaries pass through the origin).

SOLVING A SYSTEM OF LINEAR INEQUALITIES

Generally speaking, the Simplex Method for linear programming is strictly based on the theory and solution of systems of linear inequalities. The basic solutions of a linear program are the solutions of the systems of equations formed by constraints at binding position. Not all basic solutions satisfy all the problem constraints; those that do are called the basic feasible solutions, and they correspond precisely to the vertices of the feasible region.

Definition 1: A solution to any such system of equations is called a Basic Solution (BS). Those Basic Solutions which are feasible are called Basic Feasible Solutions (BFS).

The optimal solution of a bounded LP always occurs at a BFS, i.e., at one of the vertices of the feasible region. The importance of this result is that it reduces the LP problem to a "combinatorial" problem: that of determining which constraints, out of many, should be binding at the optimal solution.
The algebraic method keeps trying different combinations and computes the objective function for each trial until the best value is found.

The Ordinary Algebraic Method

Assuming Problem P has a bounded solution, the usual algebraic method proceeds as follows:

1. Convert all inequality constraints into equalities by introducing slack/surplus variables.

2. Convert all variables not restricted in sign to restricted ones by substituting the difference of two new variables (this step is not strictly necessary; however, for uniformity in feasibility testing, the conversion is always performed).

3. Calculate the difference between the number of variables and the number of equations, and set that many variables to zero.

4. Determine the solution to each of the resulting systems of equations. Set up the basic solution (BS) table to find out which BSs are feasible (i.e., do not violate the non-negativity condition of the slack/surplus variables).

5. Evaluate the objective function at each BFS and find the optimal solution with the best value.

The algebraic method is hard to apply because of the number of systems of equations involved:

(# of variables + # of inequalities)! / [(# of constraints)! (# of variables - # of equality constraints)!] = (2n - j + p + r)! / [(p + r + q)! (2n - j - q)!]

Each system of equations contains (p + r + q) constraints in (p + r + q) variables, which include the slack/surplus variables and the additional variables introduced for the unrestricted and negative variables.

The Improved Algebraic Method

We now present an improved Algebraic Method for solving a system of linear inequalities (SLI) that requires neither the formulation of an auxiliary LP problem nor LP solution algorithms such as Simplex. We provide a simple methodology to extend the solution of an SLI of one or two dimensions to systems of higher dimensions. We are interested in finding the vertices of the feasible region of Problem P, expressed as a system of linear equalities and inequalities: AX ≤ a, BX ≥ b, DX = d, where some Xi ≥ 0, some Xi ≤ 0, and some Xi are unrestricted in sign. Matrices A, B, and D as well as vectors a, b, and d have the appropriate dimensions. The interior of the feasible region is defined by the full set of vertices obtained; other relevant domains, such as faces and edges of the feasible region, are defined by appropriate subsets of these vertices. This is the basis of the proposed Algebraic Method.
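To make the combinatorial burden of the ordinary method concrete, the count given above can be evaluated with a short script. This is a sketch: the function name is ours, as is the reading of Example 2's constraint types as p = 2, r = 1, q = 0 with both variables unrestricted (j = 0).

```python
from math import comb

def num_square_subsystems(n, j, p, r, q):
    """Number of square subsystems the ordinary algebraic method solves.

    After the conversions there are 2n - j + p + r variables in total
    (variables j+1, ..., n are each replaced by a difference of two new
    variables, and each of the p + r inequalities gains a slack/surplus
    variable) and p + r + q equations, so the method must consider every
    way of choosing which p + r + q variables are basic.
    """
    variables = 2 * n - j + p + r
    equations = p + r + q
    return comb(variables, equations)

# Example 2 later in the paper: n = 2 variables, both unrestricted
# (j = 0), p = 2 "<=" constraints, r = 1 ">=" constraint, q = 0.
print(num_square_subsystems(n=2, j=0, p=2, r=1, q=0))   # 35
```

Even this tiny two-variable problem requires 35 square subsystems under the ordinary method, which is the motivation for the improved method below.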
We present the following steps to identify all such subsets of vertices, using a constraint-vertex table.

Step 1. Convert all inequalities into equalities.
Step 2. Calculate the difference between the number of equations and the number of variables (assuming m ≥ n) and set that many variables to zero.
Step 3. Determine the BS of this system of equations. Go back to Step 2 if any BS remains.
Step 4. Check the feasibility of all solutions obtained in Step 3 to determine the BFSs.
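A minimal two-dimensional sketch of Steps 1-4 follows. The (a1, a2, b, sense) encoding of constraints is our own choice, and exact rational arithmetic is used so that the feasibility test in Step 4 is not blurred by rounding.

```python
from itertools import combinations
from fractions import Fraction

def vertices_2d(constraints):
    """Steps 1-4 in two dimensions. Each constraint is (a1, a2, b, sense)
    for a1*x1 + a2*x2 (sense) b. Every pair is taken at binding position
    (Steps 1-2), solved by Cramer's rule (Step 3), and the solution is
    tested against the full system (Step 4)."""
    found = []
    for (a1, a2, b, _), (c1, c2, d, _) in combinations(constraints, 2):
        det = a1 * c2 - a2 * c1
        if det == 0:                      # parallel binding pair: no BS
            continue
        x1 = Fraction(b * c2 - a2 * d, det)
        x2 = Fraction(a1 * d - b * c1, det)
        if all(_holds(x1, x2, row) for row in constraints):
            found.append((x1, x2))        # a BFS, i.e. a vertex
    return found

def _holds(x1, x2, row):
    a1, a2, b, sense = row
    v = a1 * x1 + a2 * x2
    return v <= b if sense == "<=" else v >= b if sense == ">=" else v == b

# Illustrative system: the triangle x1 >= 0, x2 >= 0, x1 + x2 <= 1
tri = [(1, 0, 0, ">="), (0, 1, 0, ">="), (1, 1, 1, "<=")]
print(sorted(vertices_2d(tri)))   # the three corners of the triangle
```

The same routine applies unchanged to any two-dimensional SLI; only the constraint list changes.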

The coordinates of the vertices are the BFSs of the systems of equations obtained by setting some of the constraints at binding (i.e., equality) position. For a bounded feasible region, the number of vertices is at most C(m, n) = m! / [n! (m - n)!], where m is the number of constraints and n is the number of variables. Therefore, a BS is obtained by taking any set of n equations and solving them simultaneously. By plugging this BS into the remaining constraints, one can check its feasibility. If it is feasible, then this solution is a BFS providing the coordinates of a vertex of the feasible region.

NUMERICAL EXAMPLES

We provide an example to explain the proposed Algebraic Method and to develop a parametric representation of the feasible region for a given system of linear inequalities (SLI).

Example 1: Suppose we wish to solve the following SLI to find the vertices of its feasible region:

-3X1 + X2 ≤ 6
X1 + 2X2 ≤ 4
X2 ≥ -3

Step 1: Consider all of the constraints at binding position, i.e., all with equality (=) signs. This produces the following 3 equations in 2 unknowns:

-3X1 + X2 = 6
X1 + 2X2 = 4
X2 = -3

Steps 2-4: Here we have m = 3 equations in n = 2 unknowns, so there are at most C(3, 2) = 3 basic solutions. Solving the three resulting systems of equations, we have:

X1      X2      Feasible?
-3      -3      Yes
10      -3      Yes
-8/7    18/7    Yes

The solutions in the above table are all BFSs. Therefore, the vertices of the feasible region are:

(X1, X2) = (-3, -3), (10, -3), and (-8/7, 18/7)
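The table of basic solutions above can be reproduced programmatically. The sketch below (with a constraint encoding of our own choosing) solves each pair of binding constraints by Cramer's rule, tests feasibility against the full system, and then evaluates the objective -X1 + 4X2 of Example 2 below at the feasible solutions:

```python
from itertools import combinations
from fractions import Fraction

# Example 1's constraints, (a1, a2, b, sense) for a1*x1 + a2*x2 (sense) b
system = [(-3, 1, 6, "<="), (1, 2, 4, "<="), (0, 1, -3, ">=")]

def solve_pair(row_i, row_j):
    """Cramer's rule on two binding constraints; None if parallel."""
    (a1, a2, b, _), (c1, c2, d, _) = row_i, row_j
    det = a1 * c2 - a2 * c1
    if det == 0:
        return None
    return (Fraction(b * c2 - a2 * d, det), Fraction(a1 * d - b * c1, det))

def feasible(x1, x2):
    return all((a1 * x1 + a2 * x2 <= b) if s == "<=" else (a1 * x1 + a2 * x2 >= b)
               for a1, a2, b, s in system)

table = []
for row_i, row_j in combinations(system, 2):
    bs = solve_pair(row_i, row_j)
    if bs is not None:
        table.append((bs, feasible(*bs)))

for (x1, x2), ok in table:
    print(f"X1 = {x1}, X2 = {x2}, feasible: {ok}")

# Evaluating f = -X1 + 4X2 at each BFS reproduces the coefficients of
# the parametric objective f(lambda) derived in the next section.
coeffs = [-x1 + 4 * x2 for (x1, x2), ok in table if ok]
print(max(coeffs))   # 80/7, the maximum of the LP
```

Because every coefficient of f(λ) is the objective value at a vertex, the maximum of the LP is simply the largest entry of `coeffs`.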

We extend this example to present a parametric representation of the feasible region of an SLI. Using the parameters λ1, λ2, and λ3 for the first, second, and third vertex, respectively, we get:

X1 = -3λ1 + 10λ2 - (8/7)λ3
X2 = -3λ1 - 3λ2 + (18/7)λ3

for all parameters λ1, λ2, λ3 ≥ 0 such that λ1 + λ2 + λ3 = 1. This representation expresses the feasible region as the convex hull of its vertices. By substituting suitable values for the λ's, one can generate any point of the feasible region.

APPLICATIONS TO LINEAR PROGRAMS

The proposed Algebraic Method can be used to solve linear programs, as illustrated below. The following example is a continuation of Example 1.

Example 2: The following LP is from Hillier and Lieberman [1, page 147]:

Max -X1 + 4X2
subject to:
-3X1 + X2 ≤ 6
X1 + 2X2 ≤ 4
X2 ≥ -3
X1, X2 unrestricted in sign

As determined earlier, the parametric representation of the feasible region is:

X1 = -3λ1 + 10λ2 - (8/7)λ3
X2 = -3λ1 - 3λ2 + (18/7)λ3

Substituting the parametric version of the feasible region into the objective function, we obtain:

f(λ) = -X1 + 4X2 = -9λ1 - 22λ2 + (80/7)λ3,   (1)

over the closed domain λ1, λ2, λ3 ≥ 0, λ1 + λ2 + λ3 = 1. This is a convex combination of three points on the real line R, namely the coefficients -9, -22, and 80/7. Clearly, the optimal solution occurs when λ3 = 1 and the other λ's are set to 0, giving the maximum value 80/7 at X1 = -8/7, X2 = 18/7. Note that the optimal solution is one of the vertices.

Proposition: The maximum (minimum) points of an LP with a bounded feasible region correspond to the maximization (minimization) of the parametric objective function f(λ). Let the terms with the largest and smallest coefficients in f(λ) be denoted by λL and λS,

respectively. Since f(λ) is a (linear) convex combination of its coefficients, the optimal solution of f(λ) is obtained by setting λL or λS equal to 1 and all other λi = 0. The maximum and minimum points of an LP with a bounded feasible region correspond to λL = 1 and λS = 1, respectively.

More on the Objective Function

The parametric representation of the objective function provides a wealth of information. It is easy to see that if the objective in the two examples presented above is changed to Min (instead of Max), a practitioner does not have to re-solve the problem from scratch. A look at the parametric representation f(λ) in equation (1) reveals that the minimum value of the objective function (-X1 + 4X2) is -22, attained when λ2 = 1, representing the feasible point (X1 = 10, X2 = -3). Therefore, one can deduce that the value of the function (-X1 + 4X2) ranges from a low of -22 (the minimum) to 80/7 (the maximum). The parametric representation of the feasible region of an SLI is thus useful in solving the corresponding LP with a varying objective: for any given objective function, one can easily obtain its parametric representation f(λ) to determine the optimal value, whether maximum or minimum, and to obtain bounds on the range of the objective function value.

CONCLUSIONS

In this paper we present a new direct method of solving a linear system of inequalities (SLI) that requires neither the formulation of an auxiliary LP problem nor LP solution algorithms such as Simplex. We provide a simple methodology to extend the solution of an SLI of two dimensions to systems of higher dimensions. The proposed method can be used to optimize LP problems with a varying objective function. Given a system of linear equalities and/or inequalities, the method provides all vertices of the feasible region, and a parametric representation of the feasible region as a convex combination of those vertices is developed.
This parametric representation of the feasible region enables a practitioner to solve linear optimization problems with varying objectives.

REFERENCES

[1] Hillier, F., and Lieberman, G., Introduction to Operations Research, New York, NY: McGraw-Hill, 1995.