The Graphical Method: An Example




Consider the following linear program:

Maximize 4x1 + 3x2

Subject to:
2x1 + 3x2 ≤ 6    (1)
−3x1 + 2x2 ≤ 3    (2)
2x2 ≤ 5    (3)
2x1 + x2 ≤ 4    (4)
x1, x2 ≥ 0,

where, for ease of reference, the four functional constraints have been labelled as (1), (2), (3), and (4). Our goal is to produce a pair of values for x1 and x2 that (i) satisfies all constraints and (ii) has the greatest objective-function value. A pair of specific values for (x1, x2) is said to be a feasible solution if it satisfies all constraints. For example, (x1, x2) = (0, 0) and (x1, x2) = (1, 1) are feasible solutions, while (x1, x2) = (−1, 1) and (x1, x2) = (1, 2) are not. Note that the objective-function value (OFV) associated with solution (0, 0) is equal to 0 (= 4·0 + 3·0) and that the OFV for solution (1, 1) is equal to 7 (= 4·1 + 3·1). Hence, (1, 1) is the better of these two feasible solutions. Since it is impossible to generate and compare all feasible solutions one by one, we must develop a systematic method to identify the best, or optimal, solution.

The basic idea behind the graphical method is that each pair of values (x1, x2) can be represented as a point in the two-dimensional coordinate system. With such a representation, we will be able to visualize the set of all feasible solutions as a graphical region, called the feasible region or the feasible set, and then identify the optimal solution (assuming it exists).

To construct the feasible region, we examine the constraints one at a time, starting with constraint (1). Suppose first that the inequality in constraint (1) is replaced by an equality. Then it is easily seen that the set of points satisfying 2x1 + 3x2 = 6 corresponds to a straight line in the two-dimensional plane. To plot this line, we need to identify the coordinates of two distinct points on it.
The standard approach for doing this is to set, alternately, one of the two variables to the value 0 and then solve the resulting single equation in one unknown to obtain the corresponding value of the other variable. Thus, if we let x1 = 0, then x2 must equal 2, since x2 = (6 − 2x1)/3. This yields the point (0, 2). Similarly, by setting x2 = 0, we obtain the point (3, 0) (since x1 = (6 − 3x2)/2). The resulting line that passes through these two points is shown in Figure LP-1. We shall refer to this line as line (1).

Observe that line (1) cuts the plane into two half-planes. All points on one side of this line (including the line itself) satisfy 2x1 + 3x2 ≤ 6, whereas on the other side, 2x1 + 3x2 ≥ 6. Note that (x1, x2) = (0, 0) satisfies 2x1 + 3x2 < 6, and that (0, 0) lies in the lower-left direction of line (1). Therefore, we are only interested in the lower-left portion of the plane. (What if (x1, x2) = (0, 0) happens to satisfy a given constraint as an equality?) This is also shown in Figure LP-1. In this figure, next to line (1), we have also placed two arrows that point toward the lower-left direction; these indicate what will be called the feasible direction. In summary, after discarding points that violate constraint (1), we are left with the half-plane that sits at the lower-left side of the line defined by 2x1 + 3x2 = 6.

Thus, each constraint makes a cut on the two-dimensional plane, and repeating this process (do this yourself) for the remaining five constraints (the lines x1 = 0 and x2 = 0 correspond to the x2-axis and the x1-axis, respectively) yields the feasible set. This is shown in Figure LP-2, where we have shaded the feasible region. Note that constraint (3) is redundant, in that its removal has no impact on the shape of the feasible set.

The next question is: Which point within the feasible set is optimal? To answer this, we now look at the objective function. Observe that for any given constant c, the set of points satisfying 4x1 + 3x2 = c is again a straight line; and, more importantly, that by varying c, we can generate a family of lines (one for each c) with the same slope (i.e., the members of this family are parallel). Hence, we can choose an arbitrary c, draw its corresponding line 4x1 + 3x2 = c, and then visualize the entire family by sliding this line in a parallel manner.
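The feasibility and OFV comparisons described earlier are easy to automate. The following sketch (constraint data transcribed from the linear program above; the helper names are our own, not from the original notes) checks whether a candidate point satisfies every constraint and computes its objective-function value:

```python
# Each functional constraint in the form a1*x1 + a2*x2 <= b.
CONSTRAINTS = [
    ((2, 3), 6),    # (1)  2x1 + 3x2 <= 6
    ((-3, 2), 3),   # (2) -3x1 + 2x2 <= 3
    ((0, 2), 5),    # (3)        2x2 <= 5
    ((2, 1), 4),    # (4)  2x1 +  x2 <= 4
]

def is_feasible(x1, x2):
    """A point is feasible if it satisfies (1)-(4) and x1, x2 >= 0."""
    if x1 < 0 or x2 < 0:
        return False
    return all(a1 * x1 + a2 * x2 <= b for (a1, a2), b in CONSTRAINTS)

def objective(x1, x2):
    """Objective-function value (OFV) of the point (x1, x2)."""
    return 4 * x1 + 3 * x2

# The four candidate points from the text.
for point in [(0, 0), (1, 1), (-1, 1), (1, 2)]:
    print(point, "feasible" if is_feasible(*point) else "infeasible",
          "OFV =", objective(*point))
```

Running this reproduces the discussion above: (0, 0) and (1, 1) are feasible with OFVs 0 and 7, while (−1, 1) and (1, 2) are not feasible.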
In Figure LP-3, we show two such lines: 4x1 + 3x2 = c1 and 4x1 + 3x2 = c2, where c1 = 12 and c2 = 16. Since neither line intersects the feasible region, the points on these two lines are not feasible. Note, however, that the line with the smaller c, namely c1, is closer to the feasible set. Therefore, if we decrease c further from c1, then at some point the resulting parallel lines will eventually touch the feasible region at exactly one point. More precisely, this takes place when the family of objective-function lines hits the intersection of lines (1) and (4), a corner point (or an extreme point) of the feasible set. It is now clear that this corner point, marked as D in Figure LP-3, must be optimal, since we would need to lower c still further to reach the other points within the feasible set. Finally, we note that point D has the coordinates (3/2, 1) (Why?) and, therefore, an objective-function value of 9 (= 4·(3/2) + 3·1). This completes the solution of the problem.

More generally, observe that regardless of the slope of the objective function, the above process will always lead to an optimal corner-point solution. (This is true even when the objective-function line first touches the feasible set at an entire edge; in such a case, we actually have two optimal corner-point solutions, at the end points of that edge.) This implies that an optimal solution (whenever it exists) can always be found at one of the corner points of the feasible set. This fact is remarkable, as it says that we have managed to reduce the task of finding an optimal solution within the entire feasible set, which is a set containing an uncountable number of points, to that of finding an optimal solution within the set of corner points, which is a set containing a finite (why?) number of points. In this particular example, there are only five corner-point solutions, namely points A, B, C, D, and E (see Figure LP-3). Therefore, the reduction in the search effort is truly significant.

The above observation naturally suggests the following simple two-step procedure for solving a linear program:

Procedure Search
Step 1: Identify the coordinates of all corner points of the feasible set.
Step 2: Evaluate the objective function at all of these corner points and pick the best solution.

In applying Step 1 to the particular problem above, the locations of the corner points can be identified via the graph. For problems with three decision variables, one can still attempt to draw three-dimensional graphs; with four or more variables, it becomes impossible to implement the procedure. Therefore, in order for this search procedure to work, we must develop a general method for determining the coordinates of the corner points without relying on a graphical representation of the feasible set; this will be our next task.

In Figure LP-3, the coordinates of all five corner points have been explicitly specified. For example, the coordinates of the optimal corner point, point D, are given as (3/2, 1). How does one determine these coordinates? Of course, we can attempt to determine the coordinates of a given point on a graph by visual inspection, but in general this cannot be expected to yield accurate answers. More constructively, recall that point D sits at the intersection of lines (1) and (4).
Thus, we can algebraically solve the system of two defining equations,

2x1 + 3x2 = 6    (1)
2x1 + x2 = 4,    (4)

to obtain the coordinates of this intersection. (In other words, the coordinates of this intersection must satisfy both equation (1) and equation (4).) To do this, subtract equation (4) from equation (1) to yield 2x2 = 2, implying that x2 = 1. Next, substituting x2 = 1 into equation (1) (or, alternatively, into equation (4)) shows that x1 = 3/2. This verifies that (x1, x2) = (3/2, 1) is indeed the precise location of point D. You should now verify the coordinates of points A, B, C, and E yourself.

It is important to realize that this analysis still depends on the availability of the graph. Specifically, we need to know the identity of the two defining equations for each corner point (i.e., we need to know that equations (1) and (4) together define point D, that equations (1) and (2) together define point C, that equation (2) and x1 = 0 together define point B, and so on). We also need to know that only these five (Out of how many? See below.) particular combinations of two defining equations give rise to corner points of the feasible region. Therefore, a fundamental question is: Can we free ourselves from this reliance on the graph? A little reflection now leads us to the following procedure:

Procedure Corner Points
Step 1: From the given set of six equations (including x1 = 0 and x2 = 0), choose an arbitrary combination of two equations. Solve this chosen pair of equations to obtain the coordinates of their intersection. (A unique solution may not exist in general; see the discussion below.)
Step 2: If the solution in Step 1 satisfies all other constraints, then accept it as a corner-point solution; otherwise, discard the combination.
Step 3: Repeatedly loop through Steps 1 and 2 until all possible combinations are exhausted.

Despite its brute-force flavor, it turns out that this procedure will indeed allow us to generate the coordinates of all corner-point solutions. To convince ourselves of the validity of this procedure, consider, for example, the choice of equations (1) and (3):

2x1 + 3x2 = 6    (1)
2x2 = 5.    (3)

From equation (3), we have x2 = 5/2, which implies that x1 = (6 − 3x2)/2 = −3/4. Since the resulting solution (x1, x2) = (−3/4, 5/2) has a negative value for x1, it is not feasible. Therefore, the combination of equations (1) and (3) does not lead to a corner-point solution.
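Procedure Corner Points, followed by Step 2 of Procedure Search, can be sketched in a few lines of Python. The line data below are transcribed from the example; exact rational arithmetic (the standard-library Fraction type) is used so that no corner point is lost to rounding. This is a minimal illustration, not an efficient LP solver:

```python
from itertools import combinations
from fractions import Fraction

# Each boundary line in the form a1*x1 + a2*x2 = b: the four functional
# constraints (1)-(4), plus the two axes x1 = 0 and x2 = 0.
LINES = [
    ((2, 3), 6),   # (1)
    ((-3, 2), 3),  # (2)
    ((0, 2), 5),   # (3)
    ((2, 1), 4),   # (4)
    ((1, 0), 0),   # x1 = 0
    ((0, 1), 0),   # x2 = 0
]

def intersect(line_a, line_b):
    """Solve the 2x2 system by Cramer's rule; None if no unique solution."""
    (a1, a2), b = line_a
    (c1, c2), d = line_b
    det = a1 * c2 - a2 * c1
    if det == 0:                      # parallel or coincident lines
        return None
    return (Fraction(b * c2 - a2 * d, det),
            Fraction(a1 * d - b * c1, det))

def is_feasible(x1, x2):
    """Check nonnegativity and the four functional constraints."""
    return x1 >= 0 and x2 >= 0 and all(
        a1 * x1 + a2 * x2 <= b for (a1, a2), b in LINES[:4])

# Procedure Corner Points: try all C(6, 2) = 15 pairs, keep feasible ones.
corners = set()
for line_a, line_b in combinations(LINES, 2):
    point = intersect(line_a, line_b)
    if point is not None and is_feasible(*point):
        corners.add(point)

# Procedure Search, Step 2: evaluate the objective at every corner point.
best = max(corners, key=lambda p: 4 * p[0] + 3 * p[1])
print(len(corners), "corner points; optimum at", best)
```

The loop recovers exactly the five corner points A through E, and the search step picks out D = (3/2, 1) with objective value 9, matching the graphical solution.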
Now, with six equations, it is easily seen that the total number of subsets of two equations is

C(6, 2) = 6! / (2!·4!) = 15.

After cycling through all 15 of these combinations and discarding those that do not yield a feasible solution, it can be shown that only five combinations remain; moreover, these five combinations correspond precisely to the pairs of defining equations for points A, B, C, D, and E. This establishes (empirically, at least) the validity of the above procedure.

Procedure Search and Procedure Corner Points form the backbone of the Simplex method, which is an algebraic procedure for solving linear programs with any (finite) number of variables and constraints. We will return to a full development of this method in a later section.

Discussion

In general, the feasible region of a linear program may be empty. Procedure Search is meaningful only if the feasible set is not empty. Discovery of an empty feasible set will be discussed later.

Consider the linear program: Maximize x1 + x2, subject to x1, x2 ≥ 0. Since the values of x1 and x2 are allowed to be arbitrarily large, this problem does not have an optimal solution. Note that the feasible region has exactly one corner point, at (0, 0), and that this corner point is not optimal. This clearly contradicts Procedure Search. A linear program of this type is said to be unbounded; we will refine the statement of Procedure Search later to deal with such examples.

In general, a given pair of straight lines in the plane may not have a unique intersection point. This occurs if the two lines are parallel, in which case there is no intersection, or if the two lines coincide, in which case every point on the line is an intersection point. Algebraically, this means that a given pair of equations may not have a unique solution. For example, the equation pair

x1 + 2x2 = 3
x1 + 2x2 = 6

has no solution; and the equation pair

x1 + 2x2 = 3
2x1 + 4x2 = 6

has an infinite number of solutions. This issue is relevant in Step 1 of Procedure Corner Points, whose statement will also be refined later.
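The determinant of the 2×2 coefficient matrix separates the three cases just described: a nonzero determinant means a unique intersection, while a zero determinant means parallel or coincident lines. A small sketch (the helper is hypothetical, not from the original notes):

```python
def classify(a1, a2, b, c1, c2, d):
    """Classify the system a1*x1 + a2*x2 = b, c1*x1 + c2*x2 = d."""
    det = a1 * c2 - a2 * c1
    if det != 0:
        # Unique intersection point, by Cramer's rule.
        return "unique", ((b * c2 - a2 * d) / det, (a1 * d - b * c1) / det)
    # det == 0: the lines have the same slope.  They coincide when one
    # equation is a multiple of the other, i.e. both "numerator"
    # determinants of Cramer's rule also vanish; otherwise they are parallel.
    if b * c2 - a2 * d == 0 and a1 * d - b * c1 == 0:
        return "coincident", None
    return "parallel", None

print(classify(2, 3, 6, 2, 1, 4))  # lines (1) and (4): unique intersection D
print(classify(1, 2, 3, 1, 2, 6))  # parallel lines: no solution
print(classify(1, 2, 3, 2, 4, 6))  # coincident lines: infinitely many
```

For the defining equations of point D, this returns the unique intersection (3/2, 1); for the two equation pairs above, it reports "parallel" and "coincident", respectively.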