SUPPLEMENT TO CHAPTER SIX

Linear Programming

SUPPLEMENT OUTLINE
Introduction
Linear Programming Models
Model Formulation
Graphical Linear Programming
  Outline of Graphical Procedure
  Plotting Constraints
  Identifying the Feasible Solution Space
  Plotting the Objective Function Line
  Redundant Constraints
  Solutions and Corner Points
  Minimization
  Slack and Surplus
The Simplex Method
Computer Solutions
  Solving LP Models Using MS Excel
Sensitivity Analysis
  Objective Function Coefficient Changes
  Changes in the Right-Hand Side Value of a Constraint
Key Terms
Solved Problems
Discussion and Review Questions
Problems
Case: Son, Ltd.
Selected Bibliography and Further Reading

LEARNING OBJECTIVES
After completing this supplement, you should be able to:
1. Describe the type of problem that would lend itself to solution using linear programming.
2. Formulate a linear programming model from a description of a problem.
3. Solve simple linear programming problems using the graphical method.
4. Interpret computer solutions of linear programming problems.
5. Do sensitivity analysis on the solution of a linear programming problem.

Linear programming is a powerful quantitative tool used by operations managers and other managers to obtain optimal solutions to problems that involve restrictions or limitations, such as the available materials, budgets, and labour and machine time. These problems are referred to as constrained optimization problems. There are numerous examples of linear programming applications to such problems, including:

- Establishing locations for emergency equipment and personnel that will minimize response time
- Determining optimal schedules for airlines for planes, pilots, and ground personnel
- Developing financial plans
- Determining optimal blends of animal feed mixes
- Determining optimal diet plans
- Identifying the best set of worker job assignments
- Developing optimal production schedules
- Developing shipping plans that will minimize shipping costs
- Identifying the optimal mix of products in a factory

Introduction

Linear programming (LP) techniques consist of a sequence of steps that will lead to an optimal solution to problems, in cases where an optimum exists. There are a number of different linear programming techniques; some are special-purpose (i.e., used to find solutions for specific types of problems) and others are more general in scope. This supplement covers the two general-purpose solution techniques: graphical linear programming and computer solutions. Graphical linear programming provides a visual portrayal of many of the important concepts of linear programming. However, it is limited to problems with only two variables. In practice, computers are used to obtain solutions for problems, some of which involve a large number of variables.

objective function: Mathematical statement of profit (or cost, etc.) for a given solution.
decision variables: Amounts of either inputs or outputs.

Linear Programming Models

Linear programming models are mathematical representations of constrained optimization problems. These models have certain characteristics in common. Knowledge of these characteristics enables us to recognize problems that can be solved using linear programming. In addition, it also can help us formulate LP models. The characteristics can be grouped into two categories: components and assumptions. First, let's consider the components.

Four components provide the structure of a linear programming model:
1. Objective.
2. Decision variables.
3. Constraints.
4. Parameters.

Linear programming algorithms require that a single goal or objective, such as the maximization of profits, be specified. The two general types of objectives are maximization and minimization. A maximization objective might involve profits, revenues, efficiency, or rate of return. Conversely, a minimization objective might involve cost, time, distance travelled, or scrap. The objective function is a mathematical expression that can be used to determine the total profit (or cost, etc., depending on the objective) for a given solution.

Decision variables represent choices available to the decision maker in terms of amounts of either inputs or outputs. For example, some problems require choosing a combination of inputs to minimize total costs, while others require selecting a combination of outputs to maximize profits or revenues.

Constraints are limitations that restrict the alternatives available to decision makers. The three types of constraints are less than or equal to (≤), greater than or equal to (≥), and simply equal to (=). A ≤ constraint implies an upper limit on the amount of some scarce resource (e.g., machine hours, labour hours, materials) available for use. A ≥ constraint specifies a minimum that must be achieved in the final solution (e.g., must contain at least 10 percent real fruit juice, must get at least 30 km/l on the highway). The = constraint is more restrictive in the sense that it specifies exactly what a decision variable should equal (e.g., make 200 units of product A). A linear programming model can consist of one or more constraints. The constraints of a given problem define the set of all feasible combinations of decision variables; this set is referred to as the feasible solution space. Linear programming algorithms are designed to search the feasible solution space for the combination of decision variables that will yield an optimum in terms of the objective function.

An LP model consists of a mathematical statement of the objective and a mathematical statement of each constraint. These statements consist of symbols (e.g., x1, x2) that represent the decision variables and numerical values, called parameters. The parameters are fixed values; the model is solved given those values. Example S-1 illustrates the components of an LP model.

constraints: Limitations that restrict the available alternatives.
feasible solution space: The set of all feasible combinations of decision variables as defined by the constraints.
parameters: Numerical constants.

Example S-1

Decision variables:
  x1 = Quantity of product 1 to produce
  x2 = Quantity of product 2 to produce
  x3 = Quantity of product 3 to produce

  Maximize   5x1 + 8x2 + 4x3 (profit)            (Objective function)
  Subject to
    Labour     2x1 + 4x2 + 8x3 ≤ 250 hours
    Material   7x1 + 6x2 + 5x3 ≤ 100 kg          (Constraints)
    Product 1  x1 ≥ 10 units
    x1, x2, x3 ≥ 0                               (Nonnegativity constraints)

First, the model lists and defines the decision variables. These typically represent quantities. In this case, they are quantities of three different products that might be produced. Next, the model states the objective function. It includes every decision variable in the model and the contribution (profit per unit) of each decision variable. Thus, product x1 has a profit of $5 per unit. The profit from product x1 for a given solution will be 5 times the value of x1 specified by the solution; the total profit from all products will be the sum of the individual product profits. Thus, if x1 = 10, x2 = 0, and x3 = 6, the value of the objective function would be:

  5(10) + 8(0) + 4(6) = 74

The objective function is followed by a list (in no particular order) of three constraints. Each constraint has a right-side numerical value (e.g., the labour constraint has a right-side value of 250) that indicates the amount of the constraint and a relation sign that indicates whether that amount is a maximum (≤), a minimum (≥), or an equality (=). The left side of each constraint consists of the variables subject to that particular constraint and a coefficient for each variable that indicates how much of the right-side quantity one unit of the decision variable represents. For instance, for the labour constraint, one unit of x1 will require two hours of labour. The sum of the values on the left side of each constraint represents the amount of that constraint used by a solution.
Thus, if x1 = 10, x2 = 0, and x3 = 6, the amount of labour used would be:

  2(10) + 4(0) + 8(6) = 68 hours
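For readers who want to check such computations quickly, the short Python sketch below (an added illustration, not part of the original supplement; the variable names are simply illustrative) re-does the same arithmetic for the candidate solution x1 = 10, x2 = 0, x3 = 6, confirming the profit of 74 and the 68 hours of labour used.

```python
# A minimal sketch that re-checks the Example S-1 arithmetic.
# Only the objective and the labour constraint are evaluated here,
# using the coefficients stated in Example S-1.

profit_per_unit = [5, 8, 4]     # objective coefficients for x1, x2, x3
labour_per_unit = [2, 4, 8]     # labour hours per unit of x1, x2, x3
labour_available = 250          # right-hand side of the labour constraint

candidate = [10, 0, 6]          # x1 = 10, x2 = 0, x3 = 6

profit = sum(c * x for c, x in zip(profit_per_unit, candidate))
labour_used = sum(a * x for a, x in zip(labour_per_unit, candidate))

print("profit =", profit)                                   # 74
print("labour used =", labour_used, "of", labour_available) # 68 of 250
print("labour constraint satisfied:", labour_used <= labour_available)
```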

Because this amount does not exceed the quantity on the right-hand side of the constraint, it is feasible.

Note that the third constraint refers to only a single variable; x1 must be at least 10 units. Its coefficient is, in effect, 1, although that is not shown. Finally, there are the nonnegativity constraints. These are listed on a single line; they reflect the condition that no decision variable is allowed to have a negative value.

In order for linear-programming models to be used effectively, certain assumptions must be satisfied. These are:
1. Linearity: the impact of decision variables is linear in constraints and the objective function.
2. Divisibility: noninteger values of decision variables are acceptable.
3. Certainty: values of parameters are known and constant.
4. Nonnegativity: negative values of decision variables are unacceptable.

MODEL FORMULATION

An understanding of the components of linear programming models is necessary for model formulation. This helps provide organization to the process of assembling information about a problem into a model. Naturally, it is important to obtain valid information on what constraints are appropriate, as well as on what values of the parameters are appropriate. If this is not done, the usefulness of the model will be questionable. Consequently, in some instances, considerable effort must be expended to obtain that information.

In formulating a model, use the format illustrated in Example S-1. Begin by identifying the decision variables. Very often, decision variables are the quantity of something, such as x1 = the quantity of product 1. Generally, decision variables have profits, costs, times, or a similar measure of value associated with them. Knowing this can help you identify the decision variables in a problem.

Constraints are restrictions or requirements on one or more decision variables, and they refer to available amounts of resources such as labour, material, or machine time, or to minimal requirements, such as make at least 10 units of product 1. It can be helpful to give a name to each constraint, such as "labour" or "material 1." Let's consider some of the different kinds of constraints you will encounter.

1. A constraint that refers to one or more decision variables. This is the most common kind of constraint. The constraints in Example S-1 are of this type.

2. A constraint that specifies a ratio. For example, the ratio of x1 to x2 must be at least 3 to 2. To formulate this, begin by setting up the ratio:

     x1/x2 ≥ 3/2

   Then, cross multiply, obtaining

     2x1 ≥ 3x2

   This is not yet in a suitable form because all variables in a constraint must be on the left side of the inequality (or equality) sign, leaving only a constant on the right side. To achieve this, we must subtract the variable amount that is on the right side from both sides. That yields:

     2x1 − 3x2 ≥ 0

   [Note that the direction of the inequality remains the same.]

3. A constraint that specifies a percentage for one or more variables relative to one or more other variables. For example, x1 cannot be more than 20 percent of the mix. Suppose that the mix consists of variables x1, x2, and x3. In mathematical terms, this would be:

     x1 ≤ .20(x1 + x2 + x3)

As always, all variables must appear on the left side of the relationship. To accomplish that, we can expand the right side, and then subtract the result from both sides. Thus,

  x1 ≤ .20x1 + .20x2 + .20x3

Subtracting yields

  .80x1 − .20x2 − .20x3 ≤ 0

Once you have formulated a model, the next task is to solve it. The following sections describe two approaches to problem solution: graphical solutions and computer solutions.

Graphical Linear Programming

Graphical linear programming is a method for finding optimal solutions to two-variable problems. This section describes that approach.

graphical linear programming: Graphical method for finding optimal solutions to two-variable problems.

OUTLINE OF GRAPHICAL PROCEDURE

The graphical method of linear programming plots the constraints on a graph and identifies an area that satisfies all of the constraints. The area is referred to as the feasible solution space. Next, the objective function is plotted and used to identify the optimal point in the feasible solution space. The coordinates of the point can sometimes be read directly from the graph, although generally an algebraic determination of the coordinates of the point is necessary.

The general procedure followed in the graphical approach is:
1. Set up the objective function and the constraints in mathematical format.
2. Plot the constraints.
3. Identify the feasible solution space.
4. Plot the objective function.
5. Determine the optimum solution.

The technique can best be illustrated through solution of a typical problem. Consider the problem described in Example S-2.

Example S-2

General description: A firm that assembles computers and computer equipment is about to start production of two new types of microcomputers. Each type will require assembly time, inspection time, and storage space. The amounts of each of these resources that can be devoted to the production of the microcomputers is limited. The manager of the firm would like to determine the quantity of each microcomputer to produce in order to maximize the profit generated by sales of these microcomputers.

Additional information: In order to develop a suitable model of the problem, the manager has met with design and manufacturing personnel. As a result of those meetings, the manager has obtained the following information:

                              Type 1         Type 2
  Profit per unit             $60            $50
  Assembly time per unit      4 hours        10 hours
  Inspection time per unit    2 hours        1 hour
  Storage space per unit      3 cubic feet   3 cubic feet

The manager also has acquired information on the availability of company resources. These (daily) amounts are:

  Resource           Amount Available
  Assembly time      100 hours
  Inspection time    22 hours
  Storage space      39 cubic feet

The manager met with the firm's marketing manager and learned that demand for the microcomputers was such that whatever combination of these two types of microcomputers is produced, all of the output can be sold.

In terms of meeting the assumptions, it would appear that the relationships are linear: The contribution to profit per unit of each type of computer and the time and storage space per unit of each type of computer is the same regardless of the quantity produced. Therefore, the total impact of each type of computer on the profit and each constraint is a linear function of the quantity of that variable. There may be a question of divisibility because, presumably, only whole units of computers will be sold. However, because this is a recurring process (i.e., the computers will be produced daily, so a noninteger solution such as 3.5 computers per day will result in 7 computers every other day), this does not seem to pose a problem. The question of certainty cannot be explored here; in practice, the manager could be questioned to determine if there are any other possible constraints and whether the values shown for assembly times, and so forth, are known with certainty. For the purposes of discussion, we will assume certainty. Last, the assumption of nonnegativity seems justified; negative values for production quantities would not make sense.

Because we have concluded that linear programming is appropriate, let us now turn our attention to constructing a model of the microcomputer problem. First, we must define the decision variables. Based on the statement, "The manager would like to determine the quantity of each microcomputer to produce," the decision variables are the quantities of each type of computer. Thus,

  x1 = quantity of type 1 to produce
  x2 = quantity of type 2 to produce

Next, we can formulate the objective function. The profit per unit of type 1 is listed as $60, and the profit per unit of type 2 is listed as $50, so the appropriate objective function is

  Maximize Z = 60x1 + 50x2

where Z is the value of the objective function, given values of x1 and x2. Theoretically, a mathematical function requires such a variable for completeness. However, in practice, the objective function often is written without the Z, as sort of a shorthand version. That approach is underscored by the fact that computer input does not call for Z: it is understood. The output of a computerized model does include a Z, though.

Now for the constraints. There are three resources with limited availability: assembly time, inspection time, and storage space. The fact that availability is limited means that these constraints will all be ≤ constraints. Suppose we begin with the assembly constraint. The type 1 microcomputer requires 4 hours of assembly time per unit, whereas the type 2 microcomputer requires 10 hours of assembly time per unit. Therefore, with a limit of 100 hours available, the assembly constraint is

  4x1 + 10x2 ≤ 100 hours

Similarly, each unit of type 1 requires 2 hours of inspection time, and each unit of type 2 requires 1 hour of inspection time. With 22 hours available, the inspection constraint is

  2x1 + 1x2 ≤ 22

(Note: The coefficient of 1 for x2 need not be shown. Thus, an alternative form for this constraint is: 2x1 + x2 ≤ 22.) The storage constraint is determined in a similar manner:

  3x1 + 3x2 ≤ 39

There are no other system or individual constraints. The nonnegativity constraints are

  x1, x2 ≥ 0

In summary, the mathematical model of the microcomputer problem is

  x1 = quantity of type 1 to produce
  x2 = quantity of type 2 to produce

  Maximize   60x1 + 50x2
  Subject to
    Assembly    4x1 + 10x2 ≤ 100 hours
    Inspection  2x1 + 1x2 ≤ 22 hours
    Storage     3x1 + 3x2 ≤ 39 cubic feet
    x1, x2 ≥ 0

The next step is to plot the constraints.

PLOTTING CONSTRAINTS

Begin by placing the nonnegativity constraints on a graph, as in Figure 6S-1. The procedure for plotting the other constraints is simple:

1. Replace the inequality sign with an equal sign. This transforms the constraint into an equation of a straight line.
2. Determine where the line intersects each axis.
   a. To find where it crosses the x2 axis, set x1 equal to zero and solve the equation for the value of x2.
   b. To find where it crosses the x1 axis, set x2 equal to zero and solve the equation for the value of x1.
3. Mark these intersections on the axes, and connect them with a straight line. (Note: If a constraint has only one variable, it will be a vertical line on a graph if the variable is x1, or a horizontal line if the variable is x2.)
4. Indicate by shading (or by arrows at the ends of the constraint line) whether the inequality is greater than or less than. (A general rule to determine which side of the line satisfies the inequality is to pick a point that is not on the line, such as (0,0), and see whether it is greater than or less than the constraint amount.)
5. Repeat steps 1 to 4 for each constraint.

[Figure 6S-1: Graph showing the nonnegativity constraints; x1 ≥ 0 and x2 ≥ 0 define the area of feasibility in the first quadrant, with quantity of type 1 on the x1 axis and quantity of type 2 on the x2 axis.]
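A rough Python sketch of the plotting procedure just described follows. It is an added illustration rather than part of the original supplement, and it assumes the numpy and matplotlib libraries; the calls and variable names are simply one reasonable way to draw the three constraint lines and shade the feasible solution space.

```python
# Sketch: plot the three <= constraints of the microcomputer problem and
# shade the region that satisfies all of them (plus nonnegativity).
import numpy as np
import matplotlib.pyplot as plt

x1 = np.linspace(0, 14, 400)

assembly   = (100 - 4 * x1) / 10   # 4x1 + 10x2 = 100  ->  x2 = (100 - 4x1)/10
inspection = 22 - 2 * x1           # 2x1 +  1x2 = 22   ->  x2 = 22 - 2x1
storage    = (39 - 3 * x1) / 3     # 3x1 +  3x2 = 39   ->  x2 = (39 - 3x1)/3

plt.plot(x1, assembly, label="Assembly: 4x1 + 10x2 = 100")
plt.plot(x1, inspection, label="Inspection: 2x1 + x2 = 22")
plt.plot(x1, storage, label="Storage: 3x1 + 3x2 = 39")

# Feasible x2 values lie below all three lines and above zero.
upper = np.minimum(np.minimum(assembly, inspection), storage)
plt.fill_between(x1, 0, upper, where=upper >= 0, alpha=0.3,
                 label="Feasible solution space")

plt.xlim(0, 14)
plt.ylim(0, 14)
plt.xlabel("x1 (quantity of type 1)")
plt.ylabel("x2 (quantity of type 2)")
plt.legend()
plt.show()
```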

Consider the assembly time constraint:

  4x1 + 10x2 ≤ 100

Removing the inequality portion of the constraint produces this straight line:

  4x1 + 10x2 = 100

Next, identify the points where the line intersects each axis, as step 2 describes. Thus with x2 = 0, we find

  4x1 + 10(0) = 100

Solving, we find that 4x1 = 100, so x1 = 25 when x2 = 0. Similarly, we can solve the equation for x2 when x1 = 0:

  4(0) + 10x2 = 100

Solving for x2, we find x2 = 10 when x1 = 0. Thus, we have two points: x1 = 0, x2 = 10, and x1 = 25, x2 = 0. We can now add this line to our graph of the nonnegativity constraints by connecting these two points (see Figure 6S-2).

[Figure 6S-2: Plot of the first constraint (assembly time), the line 4x1 + 10x2 = 100 with intercepts x2 = 10 and x1 = 25.]

Next we must determine which side of the line represents points that are less than 100. To do this, we can select a test point that is not on the line, and we can substitute the x1 and x2 values of that point into the left side of the equation of the line. If the result is less than 100, this tells us that all points on that side of the line are less than the value of the line (e.g., 100). Conversely, if the result is greater than 100, this indicates that the other side of the line represents the set of points that will yield values that are less than 100.

A relatively simple test point to use is the origin (i.e., x1 = 0, x2 = 0). Substituting these values into the equation yields

  4(0) + 10(0) = 0

Obviously this is less than 100. Hence, the side of the line closest to the origin represents the "less than" area (i.e., the feasible region). The feasible region for this constraint and the nonnegativity constraints then becomes the shaded portion shown in Figure 6S-3.

[Figure 6S-3: The feasible region, given the first constraint (assembly time) and the nonnegativity constraints.]

For the sake of illustration, suppose we try one other point, say x1 = 10, x2 = 10. Substituting these values into the assembly constraint yields

  4(10) + 10(10) = 140

Clearly this is greater than 100. Therefore, all points on this side of the line are greater than 100 (see Figure 6S-4).

[Figure 6S-4: The point (10, 10) is above the assembly constraint line 4x1 + 10x2 = 100.]

Continuing with the problem, we can add the two remaining constraints to the graph. For the inspection constraint:

1. Convert the constraint into the equation of a straight line by replacing the inequality sign with an equality sign:

     2x1 + x2 ≤ 22 becomes 2x1 + x2 = 22

2. Set x1 equal to zero and solve for x2:

     2(0) + x2 = 22

   Solving, we find x2 = 22. Thus, the line will intersect the x2 axis at 22.

3. Next, set x2 equal to zero and solve for x1:

     2x1 + 1(0) = 22

   Solving, we find x1 = 11. Thus, the other end of the line will intersect the x1 axis at 11.

4. Add the line to the graph (see Figure 6S-5).

[Figure 6S-5: Partially completed graph, showing the assembly, inspection, and nonnegativity constraints; the regions feasible for inspection but not assembly, feasible for assembly but not inspection, and feasible for both are identified.]

Note that the area of feasibility for this constraint is below the line (Figure 6S-5). Again the area of feasibility at this point is shaded in for illustration, although when graphing problems it is more practical to refrain from shading in the feasible region until all constraint lines have been drawn. However, because constraints are plotted one at a time, using a small arrow at the end of each constraint to indicate the direction of feasibility can be helpful.

The storage constraint is handled in the same manner:

1. Convert it into an equality:

     3x1 + 3x2 = 39

2. Set x1 equal to zero and solve for x2:

     3(0) + 3x2 = 39

   Solving, x2 = 13. Thus, x2 = 13 when x1 = 0.

3. Set x2 equal to zero and solve for x1:

     3x1 + 3(0) = 39

   Solving, x1 = 13. Thus, x1 = 13 when x2 = 0.

4. Add the line to the graph (see Figure 6S-6).

[Figure 6S-6: Completed graph of the microcomputer problem showing all constraints (assembly, inspection, and storage) and the feasible solution space.]

IDENTIFYING THE FEASIBLE SOLUTION SPACE

The feasible solution space is the set of all points that satisfies all constraints. (Recall that the x1 and x2 axes form the nonnegativity constraints.) The shaded area shown in Figure 6S-6 is the feasible solution space for our problem.

The next step is to determine which point in the feasible solution space will produce the optimal value of the objective function. This determination is made using the objective function.

PLOTTING THE OBJECTIVE FUNCTION LINE

Plotting an objective function line involves the same logic as plotting a constraint line: Determine where the line intersects each axis. Recall that the objective function for the microcomputer problem is

  60x1 + 50x2

This is not an equation because it does not include an equal sign. We can get around this by simply setting it equal to some quantity. Any quantity will do, although one that is evenly divisible by both coefficients is desirable.

Suppose we decide to set the objective function equal to 300. That is,

  60x1 + 50x2 = 300

We can now plot the line on our graph. As before, we can determine the x1 and x2 intercepts of the line by setting one of the two variables equal to zero, solving for the other, and then reversing the process. Thus, with x1 = 0, we have

  60(0) + 50x2 = 300

Solving, we find x2 = 6. Similarly, with x2 = 0, we have

  60x1 + 50(0) = 300

Solving, we find x1 = 5. This line is plotted in Figure 6S-7.

[Figure 6S-7: Microcomputer problem with the $300 profit line added to the assembly, inspection, and storage constraints.]

The profit line can be interpreted in the following way. It is an isoprofit line; every point on the line (i.e., every combination of x1 and x2 that lies on the line) will provide a profit of $300. We can see from the graph many combinations that are both on the $300 profit line and within the feasible solution space. In fact, considering noninteger as well as integer solutions, the possibilities are infinite.

Suppose we now consider another line, say the $600 line. To do this, we set the objective function equal to this amount. Thus,

  60x1 + 50x2 = 600

Solving for the x1 and x2 intercepts yields these two points:

  x1 intercept: x1 = 10        x2 intercept: x2 = 12

This line is plotted in Figure 6S-8, along with the previous $300 line for purposes of comparison.

Two things are evident in Figure 6S-8 regarding the profit lines. One is that the $600 line is farther from the origin than the $300 line; the other is that the two lines are parallel. The lines are parallel because they both have the same slope. The slope is not affected by the right side of the equation. Rather, it is determined solely by the coefficients 60 and 50. It would be correct to conclude that regardless of the quantity we select for the value of the objective function, the resulting line will be parallel to these two lines. Moreover, if the amount is greater than 600, the line will be even farther away from the origin than the $600 line. If the value is less than 300, the line will be closer to the origin than the $300 line. And if the value is between 300 and 600, the line will fall between the $300 and $600 lines. This knowledge will help in determining the optimal solution.
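The claim that all profit lines share the same slope can be checked with a few lines of Python. The sketch below is an added illustration, not part of the original text; it simply computes the intercepts and slope for several values of Z.

```python
# For any profit level Z, the line 60x1 + 50x2 = Z has x1 intercept Z/60,
# x2 intercept Z/50, and slope -60/50 regardless of Z.
c1, c2 = 60, 50   # objective function coefficients

for Z in (300, 600, 900):
    x1_intercept = Z / c1
    x2_intercept = Z / c2
    slope = -c1 / c2
    print(f"Z = {Z}: x1 intercept = {x1_intercept:.1f}, "
          f"x2 intercept = {x2_intercept:.1f}, slope = {slope}")
# Every line has slope -1.2, so all isoprofit lines are parallel;
# larger Z values simply push the line farther from the origin.
```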

[Figure 6S-8: Microcomputer problem with profit lines of $300 and $600.]

[Figure 6S-9: Microcomputer problem with profit lines of $300, $600, and $900.]

Consider a third line, one with the profit equal to $900. Figure 6S-9 shows that line along with the previous two profit lines. As expected, it is parallel to the other two, and even farther away from the origin. However, the line does not touch the feasible solution space at all. Consequently, there is no feasible combination of x1 and x2 that will yield that amount of profit. Evidently, the maximum possible profit is an amount between $600 and $900, which we can see by referring to Figure 6S-9.

We could continue to select profit lines in this manner, and eventually, we could determine an amount that would yield the greatest profit. However, there is a much simpler alternative. We can plot just one line, say the $300 line. We know that all other lines will be parallel to it. Consequently, by moving this one line parallel to itself we can represent other profit lines. We also know that as we move away from the origin, the profits get larger. What we want to know is how far the line can be moved out from the origin and still be touching the feasible solution space, and the values of the decision variables at that point of greatest profit (i.e., the optimal solution). Locate this point on the graph by placing a straight edge along the $300 line (or any other convenient line) and sliding it away from the origin, being careful to keep it parallel to the line. This approach is illustrated in Figure 6S-10.

Once we have determined where the optimal solution is in the feasible solution space, we must determine the values of the decision variables at that point. Then, we can use that information to compute the profit for that combination.

Note that the optimal solution is at the intersection of the inspection boundary and the storage boundary (see Figure 6S-10). In other words, the optimal combination of x1 and x2 must satisfy both boundary (equality) conditions. We can determine those values by

solving the two equations simultaneously. The equations are

  Inspection  2x1 + x2 = 22
  Storage     3x1 + 3x2 = 39

[Figure 6S-10: Finding the optimal solution to the microcomputer problem by sliding the $300 profit line away from the origin; the last line touches the feasible solution space at the intersection of the inspection and storage constraints.]

The idea behind solving two simultaneous equations is to algebraically eliminate one of the unknown variables (i.e., to obtain an equation with a single unknown). This can be accomplished by multiplying the constants of one of the equations by a fixed amount and then adding (or subtracting) the modified equation from the other. (Occasionally, it is easier to multiply each equation by a fixed quantity.) For example, we can eliminate x2 by multiplying the inspection equation by 3 and then subtracting the storage equation from the modified inspection equation. Thus,

  3(2x1 + x2 = 22) becomes 6x1 + 3x2 = 66

Subtracting the storage equation from this produces

     6x1 + 3x2 = 66
  −( 3x1 + 3x2 = 39 )
     3x1       = 27

Solving the resulting equation yields x1 = 9. The value of x2 can be found by substituting x1 = 9 into either of the original equations or the modified inspection equation. Suppose we use the original inspection equation. We have

  2(9) + x2 = 22

Solving, we find x2 = 4.

Hence, the optimal solution to the microcomputer problem is to produce nine type 1 computers and four type 2 computers per day. We can substitute these values into the objective function to find the optimal profit:

  $60(9) + $50(4) = $740

Hence, the last line, the one that would last touch the feasible solution space as we moved away from the origin parallel to the $300 profit line, would be the line where profit equalled $740.

In this problem, the optimal values for both decision variables are integers. This will not always be the case; one or both of the decision variables may turn out to be noninteger. In some situations noninteger values would be of little consequence. This would be true if the decision variables were measured on a continuous scale, such as the amount of water, sand, sugar, fuel oil, time, or distance needed for optimality, or if the contribution per unit (profit, cost, etc.) were small, as with the number of nails or ball bearings to

make. In some cases, the answer would simply be rounded down (maximization problems) or up (minimization problems) with very little impact on the objective function. Here, we assume that noninteger answers are acceptable as such.

Let's review the procedure for finding the optimal solution using the objective function approach:
1. Graph the constraints.
2. Identify the feasible solution space.
3. Set the objective function equal to some amount that is divisible by each of the objective function coefficients. This will yield integer values for the x1 and x2 intercepts and simplify plotting the line. Often, the product of the two objective function coefficients provides a satisfactory line. Ideally, the line will cross the feasible solution space close to the optimal point, and it will not be necessary to slide a straight edge because the optimal solution can be readily identified visually.
4. After identifying the optimal point, determine which two constraints intersect there. Solve their equations simultaneously to obtain the values of the decision variables at the optimum.
5. Substitute the values obtained in the previous step into the objective function to determine the value of the objective function at the optimum.

redundant constraint: A constraint that does not form a unique boundary of the feasible solution space.

REDUNDANT CONSTRAINTS

In some cases, a constraint does not form a unique boundary of the feasible solution space. Such a constraint is called a redundant constraint. Two such constraints are illustrated in Figure 6S-11. Note that a constraint is redundant if it meets the following test: Its removal would not alter the feasible solution space. When a problem has a redundant constraint, at least one of the other constraints in the problem is more restrictive than the redundant constraint.

[Figure 6S-11: Examples of redundant constraints; two constraint lines lie entirely beyond the feasible solution space formed by the more restrictive constraints.]

SOLUTIONS AND CORNER POINTS

The feasible solution space in graphical linear programming is a polygon. Moreover, the solution to any problem will be at one of the corner points (intersections of constraints) of the polygon. It is possible to determine the coordinates of each corner point of the feasible solution space, and use those values to compute the value of the objective function at those points. Because the solution is always at a corner point, comparing the values of the objective function at the corner points and identifying the best one (e.g., the maximum

value) is another way to identify the optimal corner point. Using the graphical approach, it is much easier to plot the objective function and use that to identify the optimal corner point. However, for problems that have more than two decision variables, where the graphical method isn't appropriate, this alternate approach is used to find the optimal solution.

In some instances, the objective function will be parallel to one of the constraint lines that forms a boundary of the feasible solution space. When this happens, every combination of x1 and x2 on the segment of the constraint that touches the feasible solution space represents an optimal solution. Hence, there are multiple optimal solutions to the problem. Even in such a case, the solution will also be a corner point; in fact, the solution will be at two corner points: those at the ends of the segment that touches the feasible solution space. Figure 6S-12 illustrates an objective function line that is parallel to a constraint line.

[Figure 6S-12: Some LP problems have multiple optimal solutions; the objective function is parallel to a constraint, so an optimal line segment of the boundary, rather than a single point, is optimal.]

MINIMIZATION

Graphical minimization problems are quite similar to maximization problems. There are, however, two important differences. One is that at least one of the constraints must be of the = or ≥ variety. This causes the feasible solution space to be away from the origin. The other difference is that the optimal point is the one closest to the origin. We find the optimal corner point by sliding the objective function (which is an isocost line) toward the origin instead of away from it.

Example S-3

Solve the following problem using graphical linear programming.

  Minimize Z = 8x1 + 12x2
  Subject to
    5x1 + 2x2 ≥ 20
    4x1 + 3x2 ≥ 24
    x2 ≥ 2
    x1, x2 ≥ 0

Solution

1. Plot the constraints (shown in Figure 6S-13).
   a. Change constraints to equalities.
   b. For each constraint, set x1 = 0 and solve for x2, then set x2 = 0 and solve for x1.

   c. Graph each constraint. Note that x2 = 2 is a horizontal line parallel to the x1 axis and 2 units above it.

2. Shade the feasible solution space (see Figure 6S-13).

[Figure 6S-13: The constraints define the feasible solution space for the minimization problem.]

3. Plot the objective function.
   a. Select a value for the objective function that causes it to cross the feasible solution space. Try 96: 8x1 + 12x2 = 96 (acceptable).
   b. Graph the line (see Figure 6S-14).

4. Slide the objective function toward the origin, being careful to keep it parallel to the original line.

5. The optimum (last feasible point) is shown in Figure 6S-14. The x2 coordinate (x2 = 2) can be determined by inspection of the graph. Note that the optimum point is at the intersection of the line x2 = 2 and the line 4x1 + 3x2 = 24. Substituting the value of x2 = 2 into the latter equation will yield the value of x1 at the intersection:

     4x1 + 3(2) = 24, so x1 = 4.5

   Thus, the optimum is x1 = 4.5 units and x2 = 2 units.

6. Compute the minimum cost:

     8x1 + 12x2 = 8(4.5) + 12(2) = 60

[Figure 6S-14: The optimum is the last point the objective function (8x1 + 12x2 = $96) touches as it is moved toward the origin.]
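If a solver is available, the graphical answer can be verified numerically. The following sketch is an added illustration, not part of the original supplement; it assumes the SciPy library and feeds Example S-3 to scipy.optimize.linprog, which handles only ≤ constraints, so each ≥ constraint is multiplied by −1 and the x2 ≥ 2 requirement is expressed through the variable bounds.

```python
# Sketch: check Example S-3 with scipy's linprog.
from scipy.optimize import linprog

c = [8, 12]                      # minimize 8x1 + 12x2
A_ub = [[-5, -2],                # 5x1 + 2x2 >= 20  ->  -5x1 - 2x2 <= -20
        [-4, -3]]                # 4x1 + 3x2 >= 24  ->  -4x1 - 3x2 <= -24
b_ub = [-20, -24]
bounds = [(0, None), (2, None)]  # x1 >= 0, x2 >= 2

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)     # approximately [4.5, 2.0]
print(res.fun)   # approximately 60.0
```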

SLACK AND SURPLUS

If a constraint forms the optimal corner point of the feasible solution space, it is called a binding constraint. In effect, it limits the value of the objective function; if the constraint could be relaxed (made less restrictive), an improved solution would be possible. For constraints that are not binding, making them less restrictive will have no impact on the solution.

If the optimal values of the decision variables are substituted into the left side of a binding constraint, the resulting value will exactly equal the right-hand value of the constraint. However, there will be a difference with a nonbinding constraint. If the left side is greater than the right side, we say that there is surplus; if the left side is less than the right side, we say that there is slack. Slack can only occur in a ≤ constraint; it is the amount by which the left side is less than the right side when the optimal values of the decision variables are substituted into the left side. And surplus can only occur in a ≥ constraint; it is the amount by which the left side exceeds the right side of the constraint when the optimal values of the decision variables are substituted into the left side.

binding constraint: A constraint that forms the optimal corner point of the feasible solution space.
surplus: When the values of decision variables are substituted into a ≥ constraint and the resulting value exceeds the right-side value.
slack: When the values of decision variables are substituted into a ≤ constraint and the resulting value is less than the right-side value.

For example, suppose the optimal values for a problem are x1 = 10 and x2 = 20. If one of the constraints is

  3x1 + 2x2 ≤ 100

substituting the optimal values into the left side yields

  3(10) + 2(20) = 70

Because the constraint is ≤, the difference between the values of 100 and 70 (i.e., 30) is slack. Suppose the optimal values had been x1 = 20 and x2 = 20. Substituting these values into the left side of the constraint would yield 3(20) + 2(20) = 100. Because the left side equals the right side, this is a binding constraint; slack is equal to zero.

Now consider this constraint:

  4x1 + x2 ≥ 50

Suppose the optimal values are x1 = 10 and x2 = 15; substituting into the left side yields

  4(10) + 1(15) = 55

Because this is a ≥ constraint, the difference between the left- and right-side values is surplus. If the optimal values had been x1 = 12 and x2 = 2, substitution would result in the left side being equal to 50. Hence, the constraint would be a binding constraint, and there would be no surplus (i.e., surplus would be zero).

The Simplex Method

The simplex method is a general-purpose linear programming algorithm widely used to solve large-scale problems. Although it lacks the intuitive appeal of the graphical approach, its ability to handle problems with more than two decision variables makes it extremely valuable for solving problems often encountered in operations management. Although manual solution of linear programming problems using simplex can yield a number of insights on how solutions are derived, space limitations preclude describing it here. However, it is available on the CD that accompanies this book. The discussion here will focus on computer solutions.

simplex: A linear programming algorithm that can solve problems having more than two decision variables.

Computer Solutions

The microcomputer problem will be used to illustrate computer solutions. We repeat it here for ease of reference.

  Maximize   60x1 + 50x2
  where
    x1 = the number of type 1 computers
    x2 = the number of type 2 computers
  Subject to
    Assembly    4x1 + 10x2 ≤ 100 hours
    Inspection  2x1 + 1x2 ≤ 22 hours
    Storage     3x1 + 3x2 ≤ 39 cubic feet
    x1, x2 ≥ 0

SOLVING LP MODELS USING MS EXCEL

Solutions to linear programming models can be obtained from spreadsheet software such as Microsoft's Excel. Excel has a routine called Solver that performs the necessary calculations. To use Solver:

1. First, enter the problem in a worksheet, as shown in Figure 6S-15. What is not obvious from the figure is the need to enter a formula for each cell where there is a zero (Solver automatically inserts the zero after you input the formula). The formulas are for the value of the objective function and the constraints, in the appropriate cells. Before you enter the formulas, designate the cells where you want the optimal values of x1 and x2. Here, cells D4 and E4 are used. To enter a formula, click on the cell that the formula will pertain to, and then enter the formula, starting with an equals sign. We want the optimal value of the objective function to appear in cell G4. For G4, enter the formula

     =60*D4+50*E4

   The constraint formulas, in cells C7, C8, and C9, are

     for C7: =4*D4+10*E4
     for C8: =2*D4+1*E4
     for C9: =3*D4+3*E4

[Figure 6S-15: MS Excel worksheet for the microcomputer problem.]

[Figure 6S-16: MS Excel Solver parameters for the microcomputer problem.]

2. Now, click on Tools on the top of the worksheet, and in that menu, click on Solver. The Solver menu will appear as illustrated in Figure 6S-16. Begin by setting the Target Cell (i.e., indicating the cell where you want the optimal value of the objective function to appear). Note, if the activated cell is the cell designated for the value of Z when you click on the Tools menu, Solver will automatically set that cell as the target cell. Highlight Max if it isn't already highlighted. The Changing Cells are the cells where you want the optimal values of the decision variables to appear. Here, they are cells D4 and E4. We indicate this by the range D4:E4 (Solver will add the $ signs).

   Finally, add the constraints by clicking on Add. When that menu appears, for each constraint, enter the cell that contains the formula for the left side of the constraint, then select the appropriate inequality sign, and then enter either the right-side amount or the cell that has the right-side amount. Here, the right-side amounts are used. After you have entered each constraint, click on Add, and then enter the next constraint. (Note, constraints can be entered in any order.) For the nonnegativity constraints, enter the range of cells designated for the optimal values of the decision variables, choose the >= sign, and enter 0 for the right-hand side. Then, click on OK rather than Add, and you will return to the Solver menu. Click on Options, and in the Options menu, click on Assume Linear Model, and then click on OK. This will return you to the Solver Parameters menu. Click on Solve.

3. The Solver Results menu will then appear, indicating that a solution has been found, or that an error has occurred. If there has been an error, go back to the Solver Parameters menu and check to see that your constraints refer to the correct changing cells, and that the inequality directions are correct. Make the corrections and click on Solve. Assuming everything is correct, in the Solver Results menu, in the Reports box, highlight both Answer and Sensitivity, and then click on OK.

4. Solver will incorporate the optimal values of the decision variables and the objective function into your original layout on your worksheet (see Figure 6S-17). We can see that the optimal values are type 1 = 9 units and type 2 = 4 units, and the total profit is 740. The answer report will also show the optimal values of the decision variables (upper part of Figure 6S-18), and some information on the constraints (lower part of Figure 6S-18). Of particular interest here is the indication of which constraints have slack and how much slack. We can see that the constraint entered in cell C7 (assembly) has a slack of 24, and

that the constraints entered in cells C8 (inspection) and C9 (storage) have slack equal to zero, indicating that they are binding constraints.

[Figure 6S-17: MS Excel worksheet solution to the microcomputer problem.]

[Figure 6S-18: MS Excel Answer Report for the microcomputer problem.]

sensitivity analysis: Assessing the impact of potential changes to the numerical values of an LP model.

Sensitivity Analysis

Sensitivity analysis is a means of assessing the impact of potential changes to the parameters (the numerical values) of an LP model. Such changes may occur due to forces beyond a manager's control; or a manager may be contemplating making the changes, say, to increase profits or reduce costs.
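Before looking at these changes, it is convenient to have the base solution at hand. For readers without Excel, the short sketch below is an added illustration (not part of the original supplement) that assumes the SciPy library; it reproduces the same information as the Answer Report for the microcomputer model: the optimal quantities, the total profit, and the slack in each constraint.

```python
# Sketch: solve the microcomputer model and report slack for each constraint.
from scipy.optimize import linprog

c = [-60, -50]        # linprog minimizes, so maximize 60x1 + 50x2 by negating
A_ub = [[4, 10],      # assembly hours
        [2, 1],       # inspection hours
        [3, 3]]       # storage (cubic feet)
b_ub = [100, 22, 39]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])

print("x1, x2 =", res.x)        # approximately [9, 4]
print("profit =", -res.fun)     # approximately 740
print("slack  =", res.slack)    # approximately [24, 0, 0]: assembly has slack,
                                # inspection and storage are binding
```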

There are three types of potential changes:
1. Objective function coefficients.
2. Right-hand values of constraints.
3. Constraint coefficients.

We will consider the first two of these here. We begin with changes to objective function coefficients.

OBJECTIVE FUNCTION COEFFICIENT CHANGES

A change in the value of an objective function coefficient can cause a change in the optimal solution of a problem. In a graphical solution, this would mean a change to another corner point of the feasible solution space. However, not every change in the value of an objective function coefficient will lead to a changed solution; generally there is a range of values for which the optimal values of the decision variables will not change. For example, in the microcomputer problem, if the profit on type 1 computers increased from $60 per unit to, say, $65 per unit, the optimal solution would still be to produce nine units of type 1 and four units of type 2 computers. Similarly, if the profit per unit on type 1 computers decreased from $60 to, say, $58, producing nine of type 1 and four of type 2 would still be optimal. These sorts of changes are not uncommon; they may be the result of such things as price changes in raw materials, price discounts, cost reductions in production, and so on. Obviously, when a change does occur in the value of an objective function coefficient, it can be helpful for a manager to know if that change will affect the optimal values of the decision variables. The manager can quickly determine this by referring to that coefficient's range of optimality, which is the range in possible values of that objective function coefficient over which the optimal values of the decision variables will not change.

range of optimality: Range of values over which the solution quantities of all the decision variables remain the same.

Before we see how to determine the range, consider the implication of the range. The range of optimality for the type 1 coefficient in the microcomputer problem is 50 to 100. That means that as long as the coefficient's value is in that range, the optimal values will be 9 units of type 1 and 4 units of type 2. Conversely, if a change extends beyond the range of optimality, the solution will change. Similarly, suppose instead the coefficient of type 2 computers were to change. Its range of optimality is 30 to 60. As long as the value of the change doesn't take it outside of this range, nine and four will still be the optimal values. Note, however, even for changes that are within the range of optimality, the optimal value of the objective function will change. If the type 1 coefficient increased from $60 to $61, and nine units of type 1 is still optimum, profit would increase by $9: nine units times $1 per unit. Thus, for a change that is within the range of optimality, a revised value of the objective function must be determined.

Now let's see how we can determine the range of optimality using computer output.

Using MS Excel. There is a table for the Changing Cells (see Figure 6S-19). It shows the objective function coefficient that was used in the problem for each type of computer (i.e., 60 and 50), and the allowable increase and allowable decrease for each coefficient. By subtracting the allowable decrease from the original value of the coefficient, and adding the allowable increase to the original value of the coefficient, we obtain the range of optimality for each coefficient.
Thus, we find for type 1: 60 + 40 = 100 and 60 − 10 = 50. Hence, the range for the type 1 coefficient is 50 to 100. For type 2: 50 + 10 = 60 and 50 − 20 = 30. Hence, the range for the type 2 coefficient is 30 to 60.

In this example, both of the decision variables are basic (i.e., nonzero). However, in other problems, one or more decision variables may be nonbasic (i.e., have an optimal

value of zero). In such instances, unless the value of that variable's objective function coefficient increases by more than a certain amount, called its reduced cost, it won't come into solution (i.e., become a basic variable). Hence, the range of optimality (sometimes referred to as the range of insignificance) for a nonbasic variable is from negative infinity to the sum of its current value and its reduced cost.

[Figure 6S-19: MS Excel sensitivity report for the microcomputer problem.]

Now let's see how we can handle multiple changes to objective function coefficients; that is, a change in more than one coefficient. To do this, divide each coefficient's change by the allowable change in the same direction. Thus, if the change is a decrease, divide that amount by the allowable decrease. Treat all resulting fractions as positive. Sum the fractions. If the sum does not exceed 1.00, then the multiple changes are within the range of optimality and will not result in any change to the optimal values of the decision variables.

shadow price: Amount by which the value of the objective function would change with a one-unit change in the RHS value of a constraint.

CHANGES IN THE RIGHT-HAND-SIDE (RHS) VALUE OF A CONSTRAINT

In considering right-hand-side changes, it is important to know if a particular constraint is binding on a solution. A constraint is binding if substituting the values of the decision variables of that solution into the left side of the constraint results in a value that is equal to the RHS value. In other words, that constraint stops the objective function from achieving a better value (e.g., a greater profit or a lower cost).

Each constraint has a corresponding shadow price, which is a marginal value that indicates the amount by which the value of the objective function would change if there were a one-unit change in the RHS value of that constraint. If a constraint is nonbinding, its shadow price is zero, meaning that increasing or decreasing its RHS value by one unit will have no impact on the value of the objective function. Nonbinding constraints have either slack (if the constraint is ≤) or surplus (if the constraint is ≥). Suppose a constraint has 10 units of slack in the optimal solution, which means 10 units that are unused. If we were to increase or decrease the constraint's RHS value by one unit, the only effect would be to increase or decrease its slack by one unit. But there is no profit associated with slack, so the value of the objective function wouldn't change. On the other hand, if the change is to the RHS value of a binding constraint, then the optimal value of the objective function will change.
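The shadow-price idea can also be illustrated numerically: solve the model once with the original RHS values and once with a binding constraint's RHS increased by one unit, and compare the two optimal profits. The sketch below is an added illustration (not part of the original supplement) that assumes the SciPy library; it does this for the inspection constraint of the microcomputer problem.

```python
# Sketch: estimate the inspection constraint's shadow price by re-solving
# the microcomputer model with one extra inspection hour.
from scipy.optimize import linprog

c = [-60, -50]                       # maximize 60x1 + 50x2 (negated for linprog)
A_ub = [[4, 10], [2, 1], [3, 3]]     # assembly, inspection, storage
base_rhs = [100, 22, 39]
bumped_rhs = [100, 23, 39]           # one more inspection hour

base = linprog(c, A_ub=A_ub, b_ub=base_rhs, bounds=[(0, None)] * 2)
bumped = linprog(c, A_ub=A_ub, b_ub=bumped_rhs, bounds=[(0, None)] * 2)

shadow_price = -bumped.fun - (-base.fun)
print(shadow_price)   # approximately 10: one extra inspection hour adds $10 of profit
```

Repeating the experiment for the assembly constraint, which has 24 hours of slack at the optimum, would show a change of zero, consistent with the discussion above.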


More information

3.3. Solving Polynomial Equations. Introduction. Prerequisites. Learning Outcomes

3.3. Solving Polynomial Equations. Introduction. Prerequisites. Learning Outcomes Solving Polynomial Equations 3.3 Introduction Linear and quadratic equations, dealt within Sections 3.1 and 3.2, are members of a class of equations, called polynomial equations. These have the general

More information

Solving Linear Programs

Solving Linear Programs Solving Linear Programs 2 In this chapter, we present a systematic procedure for solving linear programs. This procedure, called the simplex method, proceeds by moving from one feasible solution to another,

More information

Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities

Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities Algebra 1, Quarter 2, Unit 2.1 Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities Overview Number of instructional days: 15 (1 day = 45 60 minutes) Content to be learned

More information

Let s explore the content and skills assessed by Heart of Algebra questions.

Let s explore the content and skills assessed by Heart of Algebra questions. Chapter 9 Heart of Algebra Heart of Algebra focuses on the mastery of linear equations, systems of linear equations, and linear functions. The ability to analyze and create linear equations, inequalities,

More information

LINEAR INEQUALITIES. Mathematics is the art of saying many things in many different ways. MAXWELL

LINEAR INEQUALITIES. Mathematics is the art of saying many things in many different ways. MAXWELL Chapter 6 LINEAR INEQUALITIES 6.1 Introduction Mathematics is the art of saying many things in many different ways. MAXWELL In earlier classes, we have studied equations in one variable and two variables

More information

What does the number m in y = mx + b measure? To find out, suppose (x 1, y 1 ) and (x 2, y 2 ) are two points on the graph of y = mx + b.

What does the number m in y = mx + b measure? To find out, suppose (x 1, y 1 ) and (x 2, y 2 ) are two points on the graph of y = mx + b. PRIMARY CONTENT MODULE Algebra - Linear Equations & Inequalities T-37/H-37 What does the number m in y = mx + b measure? To find out, suppose (x 1, y 1 ) and (x 2, y 2 ) are two points on the graph of

More information

Chapter 9. Systems of Linear Equations

Chapter 9. Systems of Linear Equations Chapter 9. Systems of Linear Equations 9.1. Solve Systems of Linear Equations by Graphing KYOTE Standards: CR 21; CA 13 In this section we discuss how to solve systems of two linear equations in two variables

More information

Solving Systems of Two Equations Algebraically

Solving Systems of Two Equations Algebraically 8 MODULE 3. EQUATIONS 3b Solving Systems of Two Equations Algebraically Solving Systems by Substitution In this section we introduce an algebraic technique for solving systems of two equations in two unknowns

More information

Linear Programming. Before studying this supplement you should know or, if necessary, review

Linear Programming. Before studying this supplement you should know or, if necessary, review S U P P L E M E N T Linear Programming B Before studying this supplement you should know or, if necessary, review 1. Competitive priorities, Chapter 2 2. Capacity management concepts, Chapter 9 3. Aggregate

More information

Linear Equations and Inequalities

Linear Equations and Inequalities Linear Equations and Inequalities Section 1.1 Prof. Wodarz Math 109 - Fall 2008 Contents 1 Linear Equations 2 1.1 Standard Form of a Linear Equation................ 2 1.2 Solving Linear Equations......................

More information

Chapter 2 Solving Linear Programs

Chapter 2 Solving Linear Programs Chapter 2 Solving Linear Programs Companion slides of Applied Mathematical Programming by Bradley, Hax, and Magnanti (Addison-Wesley, 1977) prepared by José Fernando Oliveira Maria Antónia Carravilla A

More information

Linear Programming I

Linear Programming I Linear Programming I November 30, 2003 1 Introduction In the VCR/guns/nuclear bombs/napkins/star wars/professors/butter/mice problem, the benevolent dictator, Bigus Piguinus, of south Antarctica penguins

More information

Year 9 set 1 Mathematics notes, to accompany the 9H book.

Year 9 set 1 Mathematics notes, to accompany the 9H book. Part 1: Year 9 set 1 Mathematics notes, to accompany the 9H book. equations 1. (p.1), 1.6 (p. 44), 4.6 (p.196) sequences 3. (p.115) Pupils use the Elmwood Press Essential Maths book by David Raymer (9H

More information

Chapter 4 Online Appendix: The Mathematics of Utility Functions

Chapter 4 Online Appendix: The Mathematics of Utility Functions Chapter 4 Online Appendix: The Mathematics of Utility Functions We saw in the text that utility functions and indifference curves are different ways to represent a consumer s preferences. Calculus can

More information

Standard Form of a Linear Programming Problem

Standard Form of a Linear Programming Problem 494 CHAPTER 9 LINEAR PROGRAMMING 9. THE SIMPLEX METHOD: MAXIMIZATION For linear programming problems involving two variables, the graphical solution method introduced in Section 9. is convenient. However,

More information

EXCEL SOLVER TUTORIAL

EXCEL SOLVER TUTORIAL ENGR62/MS&E111 Autumn 2003 2004 Prof. Ben Van Roy October 1, 2003 EXCEL SOLVER TUTORIAL This tutorial will introduce you to some essential features of Excel and its plug-in, Solver, that we will be using

More information

3.1 Solving Systems Using Tables and Graphs

3.1 Solving Systems Using Tables and Graphs Algebra 2 Chapter 3 3.1 Solve Systems Using Tables & Graphs 3.1 Solving Systems Using Tables and Graphs A solution to a system of linear equations is an that makes all of the equations. To solve a system

More information

ALGEBRA. sequence, term, nth term, consecutive, rule, relationship, generate, predict, continue increase, decrease finite, infinite

ALGEBRA. sequence, term, nth term, consecutive, rule, relationship, generate, predict, continue increase, decrease finite, infinite ALGEBRA Pupils should be taught to: Generate and describe sequences As outcomes, Year 7 pupils should, for example: Use, read and write, spelling correctly: sequence, term, nth term, consecutive, rule,

More information

Section 1. Inequalities -5-4 -3-2 -1 0 1 2 3 4 5

Section 1. Inequalities -5-4 -3-2 -1 0 1 2 3 4 5 Worksheet 2.4 Introduction to Inequalities Section 1 Inequalities The sign < stands for less than. It was introduced so that we could write in shorthand things like 3 is less than 5. This becomes 3 < 5.

More information

Chapter 5. Linear Inequalities and Linear Programming. Linear Programming in Two Dimensions: A Geometric Approach

Chapter 5. Linear Inequalities and Linear Programming. Linear Programming in Two Dimensions: A Geometric Approach Chapter 5 Linear Programming in Two Dimensions: A Geometric Approach Linear Inequalities and Linear Programming Section 3 Linear Programming gin Two Dimensions: A Geometric Approach In this section, we

More information

Correlation key concepts:

Correlation key concepts: CORRELATION Correlation key concepts: Types of correlation Methods of studying correlation a) Scatter diagram b) Karl pearson s coefficient of correlation c) Spearman s Rank correlation coefficient d)

More information

Answer Key for California State Standards: Algebra I

Answer Key for California State Standards: Algebra I Algebra I: Symbolic reasoning and calculations with symbols are central in algebra. Through the study of algebra, a student develops an understanding of the symbolic language of mathematics and the sciences.

More information

3.2. Solving quadratic equations. Introduction. Prerequisites. Learning Outcomes. Learning Style

3.2. Solving quadratic equations. Introduction. Prerequisites. Learning Outcomes. Learning Style Solving quadratic equations 3.2 Introduction A quadratic equation is one which can be written in the form ax 2 + bx + c = 0 where a, b and c are numbers and x is the unknown whose value(s) we wish to find.

More information

Using EXCEL Solver October, 2000

Using EXCEL Solver October, 2000 Using EXCEL Solver October, 2000 2 The Solver option in EXCEL may be used to solve linear and nonlinear optimization problems. Integer restrictions may be placed on the decision variables. Solver may be

More information

LINEAR EQUATIONS IN TWO VARIABLES

LINEAR EQUATIONS IN TWO VARIABLES 66 MATHEMATICS CHAPTER 4 LINEAR EQUATIONS IN TWO VARIABLES The principal use of the Analytic Art is to bring Mathematical Problems to Equations and to exhibit those Equations in the most simple terms that

More information

IV. ALGEBRAIC CONCEPTS

IV. ALGEBRAIC CONCEPTS IV. ALGEBRAIC CONCEPTS Algebra is the language of mathematics. Much of the observable world can be characterized as having patterned regularity where a change in one quantity results in changes in other

More information

Solving Linear Programs using Microsoft EXCEL Solver

Solving Linear Programs using Microsoft EXCEL Solver Solving Linear Programs using Microsoft EXCEL Solver By Andrew J. Mason, University of Auckland To illustrate how we can use Microsoft EXCEL to solve linear programming problems, consider the following

More information

3. Evaluate the objective function at each vertex. Put the vertices into a table: Vertex P=3x+2y (0, 0) 0 min (0, 5) 10 (15, 0) 45 (12, 2) 40 Max

3. Evaluate the objective function at each vertex. Put the vertices into a table: Vertex P=3x+2y (0, 0) 0 min (0, 5) 10 (15, 0) 45 (12, 2) 40 Max SOLUTION OF LINEAR PROGRAMMING PROBLEMS THEOREM 1 If a linear programming problem has a solution, then it must occur at a vertex, or corner point, of the feasible set, S, associated with the problem. Furthermore,

More information

Solving Quadratic Equations

Solving Quadratic Equations 9.3 Solving Quadratic Equations by Using the Quadratic Formula 9.3 OBJECTIVES 1. Solve a quadratic equation by using the quadratic formula 2. Determine the nature of the solutions of a quadratic equation

More information

Equations, Inequalities & Partial Fractions

Equations, Inequalities & Partial Fractions Contents Equations, Inequalities & Partial Fractions.1 Solving Linear Equations 2.2 Solving Quadratic Equations 1. Solving Polynomial Equations 1.4 Solving Simultaneous Linear Equations 42.5 Solving Inequalities

More information

Performance. 13. Climbing Flight

Performance. 13. Climbing Flight Performance 13. Climbing Flight In order to increase altitude, we must add energy to the aircraft. We can do this by increasing the thrust or power available. If we do that, one of three things can happen:

More information

Years after 2000. US Student to Teacher Ratio 0 16.048 1 15.893 2 15.900 3 15.900 4 15.800 5 15.657 6 15.540

Years after 2000. US Student to Teacher Ratio 0 16.048 1 15.893 2 15.900 3 15.900 4 15.800 5 15.657 6 15.540 To complete this technology assignment, you should already have created a scatter plot for your data on your calculator and/or in Excel. You could do this with any two columns of data, but for demonstration

More information

4 UNIT FOUR: Transportation and Assignment problems

4 UNIT FOUR: Transportation and Assignment problems 4 UNIT FOUR: Transportation and Assignment problems 4.1 Objectives By the end of this unit you will be able to: formulate special linear programming problems using the transportation model. define a balanced

More information

Temperature Scales. The metric system that we are now using includes a unit that is specific for the representation of measured temperatures.

Temperature Scales. The metric system that we are now using includes a unit that is specific for the representation of measured temperatures. Temperature Scales INTRODUCTION The metric system that we are now using includes a unit that is specific for the representation of measured temperatures. The unit of temperature in the metric system is

More information

Solving Quadratic & Higher Degree Inequalities

Solving Quadratic & Higher Degree Inequalities Ch. 8 Solving Quadratic & Higher Degree Inequalities We solve quadratic and higher degree inequalities very much like we solve quadratic and higher degree equations. One method we often use to solve quadratic

More information

with functions, expressions and equations which follow in units 3 and 4.

with functions, expressions and equations which follow in units 3 and 4. Grade 8 Overview View unit yearlong overview here The unit design was created in line with the areas of focus for grade 8 Mathematics as identified by the Common Core State Standards and the PARCC Model

More information

1 Introduction. Linear Programming. Questions. A general optimization problem is of the form: choose x to. max f(x) subject to x S. where.

1 Introduction. Linear Programming. Questions. A general optimization problem is of the form: choose x to. max f(x) subject to x S. where. Introduction Linear Programming Neil Laws TT 00 A general optimization problem is of the form: choose x to maximise f(x) subject to x S where x = (x,..., x n ) T, f : R n R is the objective function, S

More information

1.3 LINEAR EQUATIONS IN TWO VARIABLES. Copyright Cengage Learning. All rights reserved.

1.3 LINEAR EQUATIONS IN TWO VARIABLES. Copyright Cengage Learning. All rights reserved. 1.3 LINEAR EQUATIONS IN TWO VARIABLES Copyright Cengage Learning. All rights reserved. What You Should Learn Use slope to graph linear equations in two variables. Find the slope of a line given two points

More information

Elasticity. I. What is Elasticity?

Elasticity. I. What is Elasticity? Elasticity I. What is Elasticity? The purpose of this section is to develop some general rules about elasticity, which may them be applied to the four different specific types of elasticity discussed in

More information

Chapter 10: Network Flow Programming

Chapter 10: Network Flow Programming Chapter 10: Network Flow Programming Linear programming, that amazingly useful technique, is about to resurface: many network problems are actually just special forms of linear programs! This includes,

More information

x x y y Then, my slope is =. Notice, if we use the slope formula, we ll get the same thing: m =

x x y y Then, my slope is =. Notice, if we use the slope formula, we ll get the same thing: m = Slope and Lines The slope of a line is a ratio that measures the incline of the line. As a result, the smaller the incline, the closer the slope is to zero and the steeper the incline, the farther the

More information

Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem.

Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem. Solve addition and subtraction word problems, and add and subtract within 10, e.g., by using objects or drawings to represent the problem. Solve word problems that call for addition of three whole numbers

More information

Systems of Linear Equations: Two Variables

Systems of Linear Equations: Two Variables OpenStax-CNX module: m49420 1 Systems of Linear Equations: Two Variables OpenStax College This work is produced by OpenStax-CNX and licensed under the Creative Commons Attribution License 4.0 In this section,

More information

CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA

CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA We Can Early Learning Curriculum PreK Grades 8 12 INSIDE ALGEBRA, GRADES 8 12 CORRELATED TO THE SOUTH CAROLINA COLLEGE AND CAREER-READY FOUNDATIONS IN ALGEBRA April 2016 www.voyagersopris.com Mathematical

More information

Linear programming. Learning objectives. Theory in action

Linear programming. Learning objectives. Theory in action 2 Linear programming Learning objectives After finishing this chapter, you should be able to: formulate a linear programming model for a given problem; solve a linear programming model with two decision

More information

5 Systems of Equations

5 Systems of Equations Systems of Equations Concepts: Solutions to Systems of Equations-Graphically and Algebraically Solving Systems - Substitution Method Solving Systems - Elimination Method Using -Dimensional Graphs to Approximate

More information

Answer: The relationship cannot be determined.

Answer: The relationship cannot be determined. Question 1 Test 2, Second QR Section (version 3) In City X, the range of the daily low temperatures during... QA: The range of the daily low temperatures in City X... QB: 30 Fahrenheit Arithmetic: Ranges

More information

Click on the links below to jump directly to the relevant section

Click on the links below to jump directly to the relevant section Click on the links below to jump directly to the relevant section What is algebra? Operations with algebraic terms Mathematical properties of real numbers Order of operations What is Algebra? Algebra is

More information

Elements of a graph. Click on the links below to jump directly to the relevant section

Elements of a graph. Click on the links below to jump directly to the relevant section Click on the links below to jump directly to the relevant section Elements of a graph Linear equations and their graphs What is slope? Slope and y-intercept in the equation of a line Comparing lines on

More information

MATH 60 NOTEBOOK CERTIFICATIONS

MATH 60 NOTEBOOK CERTIFICATIONS MATH 60 NOTEBOOK CERTIFICATIONS Chapter #1: Integers and Real Numbers 1.1a 1.1b 1.2 1.3 1.4 1.8 Chapter #2: Algebraic Expressions, Linear Equations, and Applications 2.1a 2.1b 2.1c 2.2 2.3a 2.3b 2.4 2.5

More information

Tutorial: Using Excel for Linear Optimization Problems

Tutorial: Using Excel for Linear Optimization Problems Tutorial: Using Excel for Linear Optimization Problems Part 1: Organize Your Information There are three categories of information needed for solving an optimization problem in Excel: an Objective Function,

More information

Simplex method summary

Simplex method summary Simplex method summary Problem: optimize a linear objective, subject to linear constraints 1. Step 1: Convert to standard form: variables on right-hand side, positive constant on left slack variables for

More information

The Point-Slope Form

The Point-Slope Form 7. The Point-Slope Form 7. OBJECTIVES 1. Given a point and a slope, find the graph of a line. Given a point and the slope, find the equation of a line. Given two points, find the equation of a line y Slope

More information

Operation Research. Module 1. Module 2. Unit 1. Unit 2. Unit 3. Unit 1

Operation Research. Module 1. Module 2. Unit 1. Unit 2. Unit 3. Unit 1 Operation Research Module 1 Unit 1 1.1 Origin of Operations Research 1.2 Concept and Definition of OR 1.3 Characteristics of OR 1.4 Applications of OR 1.5 Phases of OR Unit 2 2.1 Introduction to Linear

More information

Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization

Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization 2.1. Introduction Suppose that an economic relationship can be described by a real-valued

More information

Grade 6 Mathematics Performance Level Descriptors

Grade 6 Mathematics Performance Level Descriptors Limited Grade 6 Mathematics Performance Level Descriptors A student performing at the Limited Level demonstrates a minimal command of Ohio s Learning Standards for Grade 6 Mathematics. A student at this

More information

A synonym is a word that has the same or almost the same definition of

A synonym is a word that has the same or almost the same definition of Slope-Intercept Form Determining the Rate of Change and y-intercept Learning Goals In this lesson, you will: Graph lines using the slope and y-intercept. Calculate the y-intercept of a line when given

More information

Practical Guide to the Simplex Method of Linear Programming

Practical Guide to the Simplex Method of Linear Programming Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear

More information

Algebra I. In this technological age, mathematics is more important than ever. When students

Algebra I. In this technological age, mathematics is more important than ever. When students In this technological age, mathematics is more important than ever. When students leave school, they are more and more likely to use mathematics in their work and everyday lives operating computer equipment,

More information

Linear Programming Refining Transportation

Linear Programming Refining Transportation Solving a Refinery Problem with Excel Solver Type of Crude or Process Product A B C 1 C 2 D Demand Profits on Crudes 10 20 15 25 7 Products Product Slate for Crude or Process G 0.6 0.5 0.4 0.4 0.3 170

More information

01 In any business, or, indeed, in life in general, hindsight is a beautiful thing. If only we could look into a

01 In any business, or, indeed, in life in general, hindsight is a beautiful thing. If only we could look into a 01 technical cost-volumeprofit relevant to acca qualification paper F5 In any business, or, indeed, in life in general, hindsight is a beautiful thing. If only we could look into a crystal ball and find

More information

Mathematics. Mathematical Practices

Mathematics. Mathematical Practices Mathematical Practices 1. Make sense of problems and persevere in solving them. 2. Reason abstractly and quantitatively. 3. Construct viable arguments and critique the reasoning of others. 4. Model with

More information

Sensitivity Report in Excel

Sensitivity Report in Excel The Answer Report contains the original guess for the solution and the final value of the solution as well as the objective function values for the original guess and final value. The report also indicates

More information

LINEAR PROGRAMMING WITH THE EXCEL SOLVER

LINEAR PROGRAMMING WITH THE EXCEL SOLVER cha06369_supa.qxd 2/28/03 10:18 AM Page 702 702 S U P P L E M E N T A LINEAR PROGRAMMING WITH THE EXCEL SOLVER Linear programming (or simply LP) refers to several related mathematical techniques that are

More information

Intermediate PowerPoint

Intermediate PowerPoint Intermediate PowerPoint Charts and Templates By: Jim Waddell Last modified: January 2002 Topics to be covered: Creating Charts 2 Creating the chart. 2 Line Charts and Scatter Plots 4 Making a Line Chart.

More information

2013 MBA Jump Start Program

2013 MBA Jump Start Program 2013 MBA Jump Start Program Module 2: Mathematics Thomas Gilbert Mathematics Module Algebra Review Calculus Permutations and Combinations [Online Appendix: Basic Mathematical Concepts] 2 1 Equation of

More information

Linear Programming Problems

Linear Programming Problems Linear Programming Problems Linear programming problems come up in many applications. In a linear programming problem, we have a function, called the objective function, which depends linearly on a number

More information

Linear Programming: Theory and Applications

Linear Programming: Theory and Applications Linear Programming: Theory and Applications Catherine Lewis May 11, 2008 1 Contents 1 Introduction to Linear Programming 3 1.1 What is a linear program?...................... 3 1.2 Assumptions.............................

More information

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions.

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions. Chapter 1 Vocabulary identity - A statement that equates two equivalent expressions. verbal model- A word equation that represents a real-life problem. algebraic expression - An expression with variables.

More information

Solving Simultaneous Equations and Matrices

Solving Simultaneous Equations and Matrices Solving Simultaneous Equations and Matrices The following represents a systematic investigation for the steps used to solve two simultaneous linear equations in two unknowns. The motivation for considering

More information

Question 2: How will changes in the objective function s coefficients change the optimal solution?

Question 2: How will changes in the objective function s coefficients change the optimal solution? Question 2: How will changes in the objective function s coefficients change the optimal solution? In the previous question, we examined how changing the constants in the constraints changed the optimal

More information

Chapter 27: Taxation. 27.1: Introduction. 27.2: The Two Prices with a Tax. 27.2: The Pre-Tax Position

Chapter 27: Taxation. 27.1: Introduction. 27.2: The Two Prices with a Tax. 27.2: The Pre-Tax Position Chapter 27: Taxation 27.1: Introduction We consider the effect of taxation on some good on the market for that good. We ask the questions: who pays the tax? what effect does it have on the equilibrium

More information