LINEAR PROGRAMMING. A Concise Introduction. Thomas S. Ferguson


Contents

1. Introduction
   The Standard Maximum and Minimum Problems
   The Diet Problem
   The Transportation Problem
   The Activity Analysis Problem
   The Optimal Assignment Problem
   Terminology
2. Duality
   Dual Linear Programming Problems
   The Duality Theorem
   The Equilibrium Theorem
   Interpretation of the Dual
3. The Pivot Operation
4. The Simplex Method
   The Simplex Tableau
   The Pivot Madly Method
   Pivot Rules for the Simplex Method
   The Dual Simplex Method
5. Generalized Duality
   The General Maximum and Minimum Problems
   Solving General Problems by the Simplex Method
   Solving Matrix Games by the Simplex Method

6. Cycling
   A Modification of the Simplex Method That Avoids Cycling
7. Four Problems with Nonlinear Objective Function
   Constrained Games
   The General Production Planning Problem
   Minimizing the Sum of Absolute Values
   Minimizing the Maximum of Absolute Values
   Chebyshev Approximation
   Linear Fractional Programming
   Activity Analysis to Maximize the Rate of Return
8. The Transportation Problem
   Finding a Basic Feasible Shipping Schedule
   Checking for Optimality
   The Improvement Routine
Related Texts

1. Introduction

A linear programming problem may be defined as the problem of maximizing or minimizing a linear function subject to linear constraints. The constraints may be equalities or inequalities. Here is a simple example.

Find numbers x_1 and x_2 that maximize the sum x_1 + x_2 subject to the constraints x_1 ≥ 0, x_2 ≥ 0, and

    x_1 + 2x_2 ≤ 4
    4x_1 + 2x_2 ≤ 12
    −x_1 + x_2 ≤ 1

In this problem there are two unknowns, and five constraints. All the constraints are inequalities and they are all linear in the sense that each involves an inequality in some linear function of the variables. The first two constraints, x_1 ≥ 0 and x_2 ≥ 0, are special. These are called nonnegativity constraints and are often found in linear programming problems. The other constraints are then called the main constraints. The function to be maximized (or minimized) is called the objective function. Here, the objective function is x_1 + x_2.

Since there are only two variables, we can solve this problem by graphing the set of points in the plane that satisfies all the constraints (called the constraint set) and then finding which point of this set maximizes the value of the objective function. Each inequality constraint is satisfied by a half-plane of points, and the constraint set is the intersection of all the half-planes. In the present example, the constraint set is the five-sided figure shaded in Figure 1.

We seek the point (x_1, x_2) that achieves the maximum of x_1 + x_2 as (x_1, x_2) ranges over this constraint set. The function x_1 + x_2 is constant on lines with slope −1, for example the line x_1 + x_2 = 1, and as we move this line further from the origin, up and to the right, the value of x_1 + x_2 increases. Therefore, we seek the line of slope −1 that is farthest from the origin and still touches the constraint set. This occurs at the intersection of the lines x_1 + 2x_2 = 4 and 4x_1 + 2x_2 = 12, namely, (x_1, x_2) = (8/3, 2/3). The value of the objective function there is (8/3) + (2/3) = 10/3.

Exercises 1 and 2 can be solved as above by graphing the feasible set.

It is easy to see in general that the objective function, being linear, always takes on its maximum (or minimum) value at a corner point of the constraint set, provided the

[Figure 1: the constraint set, bounded by the lines x_1 + 2x_2 = 4, 4x_1 + 2x_2 = 12 and −x_1 + x_2 = 1, with the optimal point marked.]

constraint set is bounded. Occasionally, the maximum occurs along an entire edge or face of the constraint set, but then the maximum occurs at a corner point as well.

Not all linear programming problems are so easily solved. There may be many variables and many constraints. Some variables may be constrained to be nonnegative and others unconstrained. Some of the main constraints may be equalities and others inequalities. However, two classes of problems, called here the standard maximum problem and the standard minimum problem, play a special role. In these problems, all variables are constrained to be nonnegative, and all main constraints are inequalities.

We are given an m-vector, b = (b_1, …, b_m)^T, an n-vector, c = (c_1, …, c_n)^T, and an m × n matrix,

        ( a_11  a_12  ⋯  a_1n )
    A = ( a_21  a_22  ⋯  a_2n )
        (  ⋮     ⋮          ⋮  )
        ( a_m1  a_m2  ⋯  a_mn )

of real numbers.

The Standard Maximum Problem: Find an n-vector, x = (x_1, …, x_n)^T, to maximize

    c^T x = c_1 x_1 + ⋯ + c_n x_n

subject to the constraints

    a_11 x_1 + a_12 x_2 + ⋯ + a_1n x_n ≤ b_1
    a_21 x_1 + a_22 x_2 + ⋯ + a_2n x_n ≤ b_2      (or Ax ≤ b)
        ⋮
    a_m1 x_1 + a_m2 x_2 + ⋯ + a_mn x_n ≤ b_m

and

    x_1 ≥ 0, x_2 ≥ 0, …, x_n ≥ 0    (or x ≥ 0).
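Since the maximum of a linear objective over a bounded constraint set is attained at a corner point, the introductory example can be checked numerically by intersecting constraint boundaries and testing each vertex. The following is a minimal sketch in Python (the helper names are our own, not from the text), using exact rational arithmetic.

```python
from fractions import Fraction as F
from itertools import combinations

# The five constraints of the introductory example, each written as
# a1*x1 + a2*x2 <= rhs (nonnegativity becomes -x1 <= 0 and -x2 <= 0).
constraints = [
    (F(-1), F(0), F(0)),   # x1 >= 0
    (F(0), F(-1), F(0)),   # x2 >= 0
    (F(1), F(2), F(4)),    # x1 + 2*x2 <= 4
    (F(4), F(2), F(12)),   # 4*x1 + 2*x2 <= 12
    (F(-1), F(1), F(1)),   # -x1 + x2 <= 1
]

def solve_2x2(c1, c2):
    """Intersection of two constraint boundary lines, or None if parallel."""
    (a, b, e), (c, d, f) = c1, c2
    det = a * d - b * c
    if det == 0:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= rhs for a, b, rhs in constraints)

# Collect all feasible vertices and take the one maximizing x1 + x2.
vertices = [p for pair in combinations(constraints, 2)
            if (p := solve_2x2(*pair)) is not None and feasible(p)]
best = max(vertices, key=lambda p: p[0] + p[1])
```

Running this recovers the corner (8/3, 2/3) found graphically, with objective value 10/3.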

The Standard Minimum Problem: Find an m-vector, y = (y_1, …, y_m)^T, to minimize

    y^T b = y_1 b_1 + ⋯ + y_m b_m

subject to the constraints

    y_1 a_11 + y_2 a_21 + ⋯ + y_m a_m1 ≥ c_1
    y_1 a_12 + y_2 a_22 + ⋯ + y_m a_m2 ≥ c_2      (or y^T A ≥ c^T)
        ⋮
    y_1 a_1n + y_2 a_2n + ⋯ + y_m a_mn ≥ c_n

and

    y_1 ≥ 0, y_2 ≥ 0, …, y_m ≥ 0    (or y ≥ 0).

Note that the main constraints are written as ≤ for the standard maximum problem and ≥ for the standard minimum problem. The introductory example is a standard maximum problem.

We now present examples of four general linear programming problems. Each of these problems has been extensively studied.

Example 1. The Diet Problem. There are m different types of food, F_1, …, F_m, that supply varying quantities of the n nutrients, N_1, …, N_n, that are essential to good health. Let c_j be the minimum daily requirement of nutrient N_j. Let b_i be the price per unit of food F_i. Let a_ij be the amount of nutrient N_j contained in one unit of food F_i. The problem is to supply the required nutrients at minimum cost.

Let y_i be the number of units of food F_i to be purchased per day. The cost per day of such a diet is

    b_1 y_1 + b_2 y_2 + ⋯ + b_m y_m.                                  (1)

The amount of nutrient N_j contained in this diet is

    a_1j y_1 + a_2j y_2 + ⋯ + a_mj y_m

for j = 1, …, n. We do not consider such a diet unless all the minimum daily requirements are met, that is, unless

    a_1j y_1 + a_2j y_2 + ⋯ + a_mj y_m ≥ c_j    for j = 1, …, n.      (2)

Of course, we cannot purchase a negative amount of food, so we automatically have the constraints

    y_1 ≥ 0, y_2 ≥ 0, …, y_m ≥ 0.                                     (3)

Our problem is: minimize (1) subject to (2) and (3). This is exactly the standard minimum problem.
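A tiny numerical instance makes the diet problem concrete. The sketch below (in Python, with made-up illustrative numbers — none of them come from the text) encodes the cost (1) and the feasibility conditions (2) and (3) for m = 2 foods and n = 2 nutrients.

```python
from fractions import Fraction as F

# Illustrative data, not from the text.
b = [F(3), F(2)]            # b_i: price per unit of food F_i
a = [[F(2), F(1)],          # a_ij: amount of nutrient N_j in one unit of F_i
     [F(1), F(3)]]
c = [F(4), F(6)]            # c_j: minimum daily requirement of N_j

def cost(y):
    # (1): b_1 y_1 + ... + b_m y_m
    return sum(bi * yi for bi, yi in zip(b, y))

def is_feasible(y):
    # (3): no negative purchases; (2): all daily requirements met
    if any(yi < 0 for yi in y):
        return False
    return all(sum(a[i][j] * y[i] for i in range(2)) >= c[j] for j in range(2))

y = [F(6, 5), F(8, 5)]      # a candidate diet meeting both requirements exactly
```

The candidate diet is feasible and costs 34/5; whether it is optimal is exactly the standard minimum problem.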

Example 2. The Transportation Problem. There are I ports, or production plants, P_1, …, P_I, that supply a certain commodity, and there are J markets, M_1, …, M_J, to which this commodity must be shipped. Port P_i possesses an amount s_i of the commodity (i = 1, 2, …, I), and market M_j must receive the amount r_j of the commodity (j = 1, …, J). Let b_ij be the cost of transporting one unit of the commodity from port P_i to market M_j. The problem is to meet the market requirements at minimum transportation cost.

Let y_ij be the quantity of the commodity shipped from port P_i to market M_j. The total transportation cost is

    Σ_{i=1}^I Σ_{j=1}^J y_ij b_ij.                                    (4)

The amount sent from port P_i is Σ_{j=1}^J y_ij, and since the amount available at port P_i is s_i, we must have

    Σ_{j=1}^J y_ij ≤ s_i    for i = 1, …, I.                          (5)

The amount sent to market M_j is Σ_{i=1}^I y_ij, and since the amount required there is r_j, we must have

    Σ_{i=1}^I y_ij ≥ r_j    for j = 1, …, J.                          (6)

Since we cannot send a negative amount from P_i to M_j, we have

    y_ij ≥ 0    for i = 1, …, I and j = 1, …, J.                      (7)

Our problem is: minimize (4) subject to (5), (6) and (7).

Let us put this problem in the form of a standard minimum problem. The number of y variables is IJ, so m = IJ. But what is n? It is the total number of main constraints. There are n = I + J of them, but some of the constraints are ≤ constraints and some of them are ≥ constraints. In the standard minimum problem, all main constraints are ≥. This can be obtained by multiplying the constraints (5) by −1:

    Σ_{j=1}^J (−1) y_ij ≥ −s_i    for i = 1, …, I.                    (5′)

The problem, minimize (4) subject to (5′), (6) and (7), is now in standard form. In Exercise 3, you are asked to write out the matrix A for this problem.

Example 3. The Activity Analysis Problem. There are n activities, A_1, …, A_n, that a company may employ, using the available supply of m resources, R_1, …, R_m (labor hours, steel, etc.). Let b_i be the available supply of resource R_i. Let a_ij be the amount

of resource R_i used in operating activity A_j at unit intensity. Let c_j be the net value to the company of operating activity A_j at unit intensity. The problem is to choose the intensities at which the various activities are to be operated, so as to maximize the value of the output to the company subject to the given resources.

Let x_j be the intensity at which A_j is to be operated. The value of such an activity allocation is

    Σ_{j=1}^n c_j x_j.                                                (8)

The amount of resource R_i used in this activity allocation must be no greater than the supply, b_i; that is,

    Σ_{j=1}^n a_ij x_j ≤ b_i    for i = 1, …, m.                      (9)

It is assumed that we cannot operate an activity at negative intensity; that is,

    x_1 ≥ 0, x_2 ≥ 0, …, x_n ≥ 0.                                     (10)

Our problem is: maximize (8) subject to (9) and (10). This is exactly the standard maximum problem.

Example 4. The Optimal Assignment Problem. There are I persons available for J jobs. The value of person i working 1 day at job j is a_ij, for i = 1, …, I, and j = 1, …, J. The problem is to choose an assignment of persons to jobs to maximize the total value.

An assignment is a choice of numbers, x_ij, for i = 1, …, I and j = 1, …, J, where x_ij represents the proportion of person i's time that is to be spent on job j. Thus,

    Σ_{j=1}^J x_ij ≤ 1    for i = 1, …, I,                            (11)

    Σ_{i=1}^I x_ij ≤ 1    for j = 1, …, J,                            (12)

and

    x_ij ≥ 0    for i = 1, …, I and j = 1, …, J.                      (13)

Equation (11) reflects the fact that a person cannot spend more than 100% of his time working, (12) means that only one person is allowed on a job at a time, and (13) says that no one can work a negative amount of time on any job. Subject to (11), (12) and (13), we wish to maximize the total value,

    Σ_{i=1}^I Σ_{j=1}^J a_ij x_ij.                                    (14)
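The standard-form conversion of Example 2 — reversing (5) into (5′) by multiplying through by −1 — can be mechanized. The sketch below (Python; sizes and data are illustrative, not from the text) builds the m × n matrix A and the right-hand side c of the standard minimum problem for I = 2 ports and J = 2 markets, and checks a candidate shipping schedule against y^T A ≥ c^T.

```python
from fractions import Fraction as F

# Illustrative data: I = 2 ports with supplies s, J = 2 markets with requirements r.
I, J = 2, 2
s = [F(3), F(4)]
r = [F(2), F(5)]

# The variable vector y has m = I*J entries, y_ij stored at position i*J + j;
# there are n = I + J main constraints, all written as >= in standard form.
m, n = I * J, I + J
A = [[F(0)] * n for _ in range(m)]
c = [F(0)] * n
for i in range(I):
    c[i] = -s[i]                    # (5'): -sum_j y_ij >= -s_i
    for j in range(J):
        A[i * J + j][i] = F(-1)
for j in range(J):
    c[I + j] = r[j]                 # (6): sum_i y_ij >= r_j
    for i in range(I):
        A[i * J + j][I + j] = F(1)

def satisfies_main_constraints(y):
    # y^T A >= c^T, coordinatewise
    return all(sum(y[k] * A[k][t] for k in range(m)) >= c[t] for t in range(n))

shipment = [F(2), F(1), F(0), F(4)]   # y_11, y_12, y_21, y_22
```

The candidate schedule ships within each supply and meets each requirement, so it satisfies the standard-form constraints; shipping nothing does not.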

This is a standard maximum problem with m = I + J and n = IJ.

Terminology. The function to be maximized or minimized is called the objective function. A vector, x for the standard maximum problem or y for the standard minimum problem, is said to be feasible if it satisfies the corresponding constraints.

The set of feasible vectors is called the constraint set. A linear programming problem is said to be feasible if the constraint set is not empty; otherwise it is said to be infeasible.

A feasible maximum (resp. minimum) problem is said to be unbounded if the objective function can assume arbitrarily large positive (resp. negative) values at feasible vectors; otherwise, it is said to be bounded. Thus there are three possibilities for a linear programming problem. It may be bounded feasible, it may be unbounded feasible, and it may be infeasible.

The value of a bounded feasible maximum (resp. minimum) problem is the maximum (resp. minimum) value of the objective function as the variables range over the constraint set. A feasible vector at which the objective function achieves the value is called optimal.

All Linear Programming Problems Can be Converted to Standard Form. A linear programming problem was defined as maximizing or minimizing a linear function subject to linear constraints. All such problems can be converted into the form of a standard maximum problem by the following techniques.

A minimum problem can be changed to a maximum problem by multiplying the objective function by −1. Similarly, constraints of the form Σ_{j=1}^n a_ij x_j ≥ b_i can be changed into the form Σ_{j=1}^n (−a_ij) x_j ≤ −b_i. Two other problems arise.

(1) Some constraints may be equalities. An equality constraint Σ_{j=1}^n a_ij x_j = b_i may be removed by solving this constraint for some x_j for which a_ij ≠ 0 and substituting this solution into the other constraints and into the objective function wherever x_j appears. This removes one constraint and one variable from the problem.

(2) Some variables may not be restricted to be nonnegative. An unrestricted variable, x_j, may be replaced by the difference of two nonnegative variables, x_j = u_j − v_j, where u_j ≥ 0 and v_j ≥ 0. This adds one variable and two nonnegativity constraints to the problem.

Any theory derived for problems in standard form is therefore applicable to general problems. However, from a computational point of view, the enlargement of the number of variables and constraints in (2) is undesirable and, as will be seen later, can be avoided.
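Technique (2) can be illustrated directly. The sketch below (Python; the function names and numbers are our own illustration) splits one unrestricted variable into the difference u_j − v_j by duplicating its column with the sign reversed, and checks that a linear expression is unchanged when a negative value x_j = −5 is represented with nonnegative u_j = 0, v_j = 5.

```python
from fractions import Fraction as F

def split_free_variable(row, j):
    # Duplicate coefficient j with reversed sign: x_j is replaced by u_j - v_j.
    return row[:j + 1] + [-row[j]] + row[j + 1:]

def evaluate(row, x):
    return sum(a * v for a, v in zip(row, x))

row = [F(4), F(3), F(2)]          # coefficients of x_1, x_2, x_3
x = [F(1), F(-5), F(2)]           # x_2 = -5 is negative (unrestricted)

# Represent x_2 = u_2 - v_2 with u_2 = 0, v_2 = 5, both nonnegative.
split_row = split_free_variable(row, 1)
split_x = [F(1), F(0), F(5), F(2)]
```

Every constraint row and the objective are split the same way, so the two formulations describe the same problem, at the cost of one extra variable per unrestricted x_j.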

Exercises

1. Consider the linear programming problem: Find y_1 and y_2 to minimize y_1 + y_2 subject to the constraints

    y_1 + 2y_2 ≥ 3
    2y_1 + y_2 ≥ 5
    y_2 ≥ 0.

Graph the constraint set and solve.

2. Find x_1 and x_2 to maximize ax_1 + x_2 subject to the constraints in the numerical example of Figure 1. Find the value as a function of a.

3. Write out the matrix A for the transportation problem in standard form.

4. Put the following linear programming problem into standard form: Find x_1, x_2, x_3, x_4 to maximize x_1 + 2x_2 + 3x_3 + 4x_4 subject to the constraints

    4x_1 + 3x_2 + 2x_3 + x_4 ≤ 10
    x_1 − x_3 + 2x_4 = 2
    x_1 + x_2 + x_3 + x_4 ≥ 1,

and x_1 ≥ 0, x_3 ≥ 0, x_4 ≥ 0.

2. Duality

To every linear program there is a dual linear program with which it is intimately connected. We first state this duality for the standard programs. As in Section 1, c and x are n-vectors, b and y are m-vectors, and A is an m × n matrix. We assume m ≥ 1 and n ≥ 1.

Definition. The dual of the standard maximum problem

    maximize c^T x
    subject to the constraints Ax ≤ b and x ≥ 0                        (1)

is defined to be the standard minimum problem

    minimize y^T b
    subject to the constraints y^T A ≥ c^T and y ≥ 0.                  (2)

Let us reconsider the numerical example of the previous section: Find x_1 and x_2 to maximize x_1 + x_2 subject to the constraints x_1 ≥ 0, x_2 ≥ 0, and

    x_1 + 2x_2 ≤ 4
    4x_1 + 2x_2 ≤ 12                                                   (3)
    −x_1 + x_2 ≤ 1.

The dual of this standard maximum problem is therefore the standard minimum problem: Find y_1, y_2 and y_3 to minimize 4y_1 + 12y_2 + y_3 subject to the constraints y_1 ≥ 0, y_2 ≥ 0, y_3 ≥ 0, and

    y_1 + 4y_2 − y_3 ≥ 1                                               (4)
    2y_1 + 2y_2 + y_3 ≥ 1.

If the standard minimum problem (2) is transformed into a standard maximum problem (by multiplying A, b, and c by −1), its dual by the definition above is a standard minimum problem which, when transformed to a standard maximum problem (again by changing the signs of all coefficients), becomes exactly (1). Therefore, the dual of the standard minimum problem (2) is the standard maximum problem (1). The problems (1) and (2) are said to be duals.

The general standard maximum problem and the dual standard minimum problem may be simultaneously exhibited in the display:

           x_1   x_2   ⋯   x_n
    y_1   a_11  a_12  ⋯  a_1n   b_1
    y_2   a_21  a_22  ⋯  a_2n   b_2
     ⋮      ⋮     ⋮         ⋮     ⋮                                     (5)
    y_m   a_m1  a_m2  ⋯  a_mn   b_m
           c_1   c_2  ⋯   c_n

Our numerical example in this notation becomes

           x_1  x_2
    y_1     1    2     4
    y_2     4    2    12                                               (6)
    y_3    −1    1     1
            1    1

The relation between a standard problem and its dual is seen in the following theorem and its corollaries.

Theorem 1. If x is feasible for the standard maximum problem (1) and if y is feasible for its dual (2), then

    c^T x ≤ y^T b.                                                     (7)

Proof. c^T x ≤ y^T A x ≤ y^T b. The first inequality follows from x ≥ 0 and c^T ≤ y^T A. The second inequality follows from y ≥ 0 and Ax ≤ b.

Corollary 1. If a standard problem and its dual are both feasible, then both are bounded feasible.

Proof. If y is feasible for the minimum problem, then (7) shows that y^T b is an upper bound for the values of c^T x for x feasible for the maximum problem. Similarly for the converse.

Corollary 2. If there exist feasible x* and y* for a standard maximum problem (1) and its dual (2) such that c^T x* = y*^T b, then both are optimal for their respective problems.

Proof. If x is any feasible vector for (1), then c^T x ≤ y*^T b = c^T x*, which shows that x* is optimal. A symmetric argument works for y*.

The following fundamental theorem completes the relationship between a standard problem and its dual. It states that the hypothesis of Corollary 2 is always satisfied if one of the problems is bounded feasible. The proof of this theorem is not as easy as that of the previous theorem and its corollaries. We postpone the proof until later, when we give a constructive proof via the simplex method. (The simplex method is an algorithmic method for solving linear programming problems.) We shall also see later that this theorem contains the Minimax Theorem for finite games of Game Theory.

The Duality Theorem. If a standard linear programming problem is bounded feasible, then so is its dual, their values are equal, and there exist optimal vectors for both problems.
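Theorem 1 is easy to check numerically on the dual pair (3)/(4). The sketch below (Python; the particular feasible vectors are our own choices, not from the text) verifies that an arbitrary feasible x for the maximum problem and an arbitrary feasible y for the minimum problem sandwich the common value: c^T x ≤ y^T b.

```python
from fractions import Fraction as F

# Data of the dual pair (3)/(4): maximize c^T x s.t. Ax <= b, x >= 0,
# and minimize y^T b s.t. y^T A >= c^T, y >= 0.
A = [[1, 2], [4, 2], [-1, 1]]
b = [4, 12, 1]
c = [1, 1]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

x = [F(1), F(1)]                 # a feasible vector for the maximum problem
y = [F(1, 2), F(1, 4), F(0)]     # a feasible vector for the minimum problem

max_feasible = all(dot(A[i], x) <= b[i] for i in range(3)) and all(v >= 0 for v in x)
min_feasible = (all(sum(y[i] * A[i][j] for i in range(3)) >= c[j] for j in range(2))
                and all(v >= 0 for v in y))
```

Here c^T x = 2 and y^T b = 5, so (7) holds strictly; by Corollary 2, only when the two values coincide can both vectors be optimal.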

There are three possibilities for a linear program. It may be feasible bounded (f.b.), feasible unbounded (f.u.), or infeasible (i). For a program and its dual, there are therefore nine possibilities. Corollary 1 states that three of these cannot occur: if a problem and its dual are both feasible, then both must be bounded feasible. The first conclusion of the Duality Theorem states that two other possibilities cannot occur: if a program is feasible bounded, its dual cannot be infeasible. The x's in the accompanying diagram show the impossibilities. The remaining four possibilities can occur.

                  Standard Maximum Problem
                   f.b.   f.u.    i
           f.b.            x      x
    Dual   f.u.     x      x                                           (8)
            i       x

As an example of the use of Corollary 2, consider the following maximum problem. Find x_1, x_2, x_3, x_4 to maximize 2x_1 + 4x_2 + x_3 + x_4 subject to the constraints x_j ≥ 0 for all j, and

    x_1 + 3x_2 + x_4 ≤ 4
    2x_1 + x_2 ≤ 3                                                     (9)
    x_2 + 4x_3 + x_4 ≤ 3.

The dual problem is found to be: Find y_1, y_2, y_3 to minimize 4y_1 + 3y_2 + 3y_3 subject to the constraints y_i ≥ 0 for all i, and

    y_1 + 2y_2 ≥ 2
    3y_1 + y_2 + y_3 ≥ 4
    4y_3 ≥ 1                                                           (10)
    y_1 + y_3 ≥ 1.

The vector (x_1, x_2, x_3, x_4) = (1, 1, 1/2, 0) satisfies the constraints of the maximum problem, and the value of the objective function there is 13/2. The vector (y_1, y_2, y_3) = (11/10, 9/20, 1/4) satisfies the constraints of the minimum problem and has value 13/2 there also. Hence, both vectors are optimal for their respective problems.

As a corollary of the Duality Theorem we have

The Equilibrium Theorem. Let x* and y* be feasible vectors for a standard maximum problem (1) and its dual (2) respectively. Then x* and y* are optimal if, and only if,

    y*_i = 0    for all i for which Σ_{j=1}^n a_ij x*_j < b_i          (11)

and

    x*_j = 0    for all j for which Σ_{i=1}^m y*_i a_ij > c_j.         (12)

Proof. If: Equation (11) implies that y*_i = 0 unless there is equality in Σ_j a_ij x*_j ≤ b_i. Hence

    Σ_{i=1}^m y*_i b_i = Σ_{i=1}^m y*_i ( Σ_{j=1}^n a_ij x*_j ) = Σ_{i=1}^m Σ_{j=1}^n y*_i a_ij x*_j.    (13)

Similarly, Equation (12) implies

    Σ_{i=1}^m Σ_{j=1}^n y*_i a_ij x*_j = Σ_{j=1}^n c_j x*_j.           (14)

Corollary 2 now implies that x* and y* are optimal.

Only if: As in the first line of the proof of Theorem 1,

    Σ_{j=1}^n c_j x*_j ≤ Σ_{i=1}^m Σ_{j=1}^n y*_i a_ij x*_j ≤ Σ_{i=1}^m y*_i b_i.    (15)

By the Duality Theorem, if x* and y* are optimal, the left side is equal to the right side, so we get equality throughout. The equality of the first and second terms may be written as

    Σ_{j=1}^n ( Σ_{i=1}^m y*_i a_ij − c_j ) x*_j = 0.                  (16)

Since x* and y* are feasible, each term in this sum is nonnegative. The sum can be zero only if each term is zero. Hence if Σ_i y*_i a_ij > c_j, then x*_j = 0. A symmetric argument shows that if Σ_j a_ij x*_j < b_i, then y*_i = 0.

Equations (11) and (12) are sometimes called the complementary slackness conditions. They require that a strict inequality (a slackness) in a constraint in a standard problem implies that the complementary constraint in the dual be satisfied with equality.

As an example of the use of the Equilibrium Theorem, let us solve the dual to the introductory numerical example: Find y_1, y_2, y_3 to minimize 4y_1 + 12y_2 + y_3 subject to y_1 ≥ 0, y_2 ≥ 0, y_3 ≥ 0, and

    y_1 + 4y_2 − y_3 ≥ 1                                               (17)
    2y_1 + 2y_2 + y_3 ≥ 1.

We have already solved this problem's dual, the introductory maximum problem, and found that x*_1 > 0 and x*_2 > 0. Hence, from (12), we know that the optimal y* gives equality in both inequalities in (17) (2 equations in 3 unknowns). If we check the optimal x* in the three main constraints of the maximum problem, we find equality in the first two constraints, but a strict inequality in the third. From condition (11), we conclude that y*_3 = 0. Solving the two equations,

    y_1 + 4y_2 = 1
    2y_1 + 2y_2 = 1,

we find (y*_1, y*_2) = (1/3, 1/6). Since this vector is feasible, the "if" part of the Equilibrium Theorem implies it is optimal. As a check we may find the value, 4(1/3) + 12(1/6) = 10/3, and see that it is the same as for the maximum problem.

In summary, if you conjecture a solution to one problem, you may solve for a solution to the dual using the complementary slackness conditions, and then see if your conjecture is correct.
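The 2 × 2 system left over after applying complementary slackness can be solved exactly. The short sketch below (Python; Cramer's rule is our choice of method, not the text's) reproduces the computation above.

```python
from fractions import Fraction as F

# Complementary slackness left two equations in (y1, y2), with y3 = 0:
#   y1 + 4*y2 = 1
#   2*y1 + 2*y2 = 1
a11, a12, rhs1 = F(1), F(4), F(1)
a21, a22, rhs2 = F(2), F(2), F(1)

det = a11 * a22 - a12 * a21
y1 = (rhs1 * a22 - a12 * rhs2) / det
y2 = (a11 * rhs2 - rhs1 * a21) / det

# Objective 4*y1 + 12*y2 + y3, with y3 = 0.
value = 4 * y1 + 12 * y2
```

This yields (y1, y2) = (1/3, 1/6) with value 10/3, agreeing with the maximum problem.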

Interpretation of the Dual. In addition to the help it provides in finding a solution, the dual problem offers advantages in the interpretation of the original, primal problem. In practical cases, the dual problem may be analyzed in terms of the primal problem.

As an example, consider the diet problem, a standard minimum problem of the form (2). Its dual is the standard maximum problem (1). First, let us find an interpretation of the dual variables, x_1, x_2, …, x_n. In the dual constraint,

    Σ_{j=1}^n a_ij x_j ≤ b_i,                                          (18)

the variable b_i is measured in units of price per unit of food F_i, and a_ij is measured in units of nutrient N_j per unit of food F_i. To make the two sides of the constraint comparable, x_j must be measured in units of price per unit of nutrient N_j. (This is known as a dimensional analysis.) Since c_j is the amount of N_j required per day, the objective function, Σ_{j=1}^n c_j x_j, represents the total price of the nutrients required each day. Someone is evidently trying to choose the vector x of prices for the nutrients to maximize the total worth of the required nutrients per day, subject to the constraints that x ≥ 0, and that the total value of the nutrients in food F_i, namely Σ_{j=1}^n a_ij x_j, is not greater than the actual cost, b_i, of that food.

We may imagine that an entrepreneur is offering to sell us the nutrients without the food, say in the form of vitamin or mineral pills. He offers to sell us the nutrient N_j at a price x_j per unit of N_j. If he wants to do business with us, he would choose the x_j so that the price he charges for a nutrient mixture substituting for food F_i would be no greater than the original cost to us of food F_i. This is the constraint (18). If this is true for all i, we may do business with him. So he will choose x to maximize his total income, Σ_{j=1}^n c_j x_j, subject to these constraints. (Actually, we will not save money by dealing with him, since the Duality Theorem says that our minimum, Σ_{i=1}^m y_i b_i, is equal to his maximum, Σ_{j=1}^n c_j x_j.) The optimal price, x_j, is referred to as the shadow price of nutrient N_j. Although no such entrepreneur exists, the shadow prices reflect the actual values of the nutrients as shaped by the market prices of the foods and our requirements of the nutrients.

Exercises

1. Find the dual to the following standard minimum problem. Find y_1, y_2 and y_3 to minimize y_1 + 2y_2 + y_3, subject to the constraints y_i ≥ 0 for all i, and

    y_1 − 2y_2 + y_3 ≥ 2
    −y_1 + y_2 + y_3 ≥ 4
    2y_1 + y_3 ≥ 6
    y_1 + y_2 + y_3 ≥ 2.

2. Consider the problem of Exercise 1. Show that (y_1, y_2, y_3) = (2/3, 0, 14/3) is optimal for this problem, and that (x_1, x_2, x_3, x_4) = (0, 1/3, 2/3, 0) is optimal for the dual.

3. Consider the problem: Maximize 3x_1 + 2x_2 + x_3 subject to x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0, and

    x_1 − x_2 + x_3 ≤ 4
    2x_1 + x_2 + 3x_3 ≤ 6
    −x_1 + 2x_3 ≤ 3
    x_1 + x_2 + x_3 ≤ 8.

(a) State the dual minimum problem.
(b) Suppose you suspect that the vector (x_1, x_2, x_3) = (0, 6, 0) is optimal for the maximum problem. Use the Equilibrium Theorem to solve the dual problem, and then show that your suspicion is correct.

4. (a) State the dual to the transportation problem.
(b) Give an interpretation to the dual of the transportation problem.

3. The Pivot Operation

Consider the following system of equations:

    3y_1 + 2y_2 = s_1
    y_1 − 3y_2 + 3y_3 = s_2                                            (1)
    5y_1 + y_2 + y_3 = s_3

This expresses the dependent variables, s_1, s_2 and s_3, in terms of the independent variables, y_1, y_2 and y_3.

Suppose we wish to obtain y_2, s_2 and s_3 in terms of y_1, s_1 and y_3. We solve the first equation for y_2,

    y_2 = (1/2)s_1 − (3/2)y_1,

and substitute this value of y_2 into the other equations:

    y_1 − 3((1/2)s_1 − (3/2)y_1) + 3y_3 = s_2
    5y_1 + ((1/2)s_1 − (3/2)y_1) + y_3 = s_3.

These three equations, simplified, become

    −(3/2)y_1 + (1/2)s_1 = y_2
    (11/2)y_1 − (3/2)s_1 + 3y_3 = s_2                                  (2)
    (7/2)y_1 + (1/2)s_1 + y_3 = s_3

This example is typical of the following class of problems. We are given a system of n linear functions in m unknowns, written in matrix form as

    y^T A = s^T                                                        (3)

where y^T = (y_1, …, y_m), s^T = (s_1, …, s_n), and

        ( a_11  a_12  ⋯  a_1n )
    A = ( a_21  a_22  ⋯  a_2n )
        (  ⋮     ⋮          ⋮  )
        ( a_m1  a_m2  ⋯  a_mn )

Equation (3) therefore represents the system

    y_1 a_11 + ⋯ + y_i a_i1 + ⋯ + y_m a_m1 = s_1
        ⋮
    y_1 a_1j + ⋯ + y_i a_ij + ⋯ + y_m a_mj = s_j                       (4)
        ⋮
    y_1 a_1n + ⋯ + y_i a_in + ⋯ + y_m a_mn = s_n

In this form, s_1, …, s_n are the dependent variables, and y_1, …, y_m are the independent variables.
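The passage from (1) to (2) is a purely mechanical substitution, and can be confirmed numerically. The sketch below (Python; the sample values of y_1, y_2, y_3 are arbitrary) computes s_1, s_2, s_3 from (1) and then checks that the interchanged system (2) returns the original values.

```python
from fractions import Fraction as F

def original(y1, y2, y3):
    # Equations (1): s's in terms of y's.
    s1 = 3 * y1 + 2 * y2
    s2 = y1 - 3 * y2 + 3 * y3
    s3 = 5 * y1 + y2 + y3
    return s1, s2, s3

def interchanged(y1, s1, y3):
    # Equations (2): y2, s2, s3 in terms of y1, s1, y3.
    y2 = -F(3, 2) * y1 + F(1, 2) * s1
    s2 = F(11, 2) * y1 - F(3, 2) * s1 + 3 * y3
    s3 = F(7, 2) * y1 + F(1, 2) * s1 + y3
    return y2, s2, s3

y1, y2, y3 = F(2), F(-1), F(3)       # arbitrary sample values
s1, s2, s3 = original(y1, y2, y3)
```

For these sample values, (2) reproduces y_2, s_2 and s_3 exactly, as it must for every choice of y_1, y_2, y_3.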

Suppose that we want to interchange one of the dependent variables for one of the independent variables. For example, we might like to have s_1, …, s_{j−1}, y_i, s_{j+1}, …, s_n in terms of y_1, …, y_{i−1}, s_j, y_{i+1}, …, y_m, with y_i and s_j interchanged. This may be done if and only if a_ij ≠ 0. If a_ij ≠ 0, we may take the jth equation and solve for y_i, to find

    y_i = (1/a_ij)( −y_1 a_1j − ⋯ − y_{i−1} a_{(i−1)j} + s_j − y_{i+1} a_{(i+1)j} − ⋯ − y_m a_mj ).    (5)

Then we may substitute this expression for y_i into the other equations. For example, the kth equation becomes

    y_1 ( a_1k − (a_ik a_1j)/a_ij ) + ⋯ + s_j ( a_ik/a_ij ) + ⋯ + y_m ( a_mk − (a_ik a_mj)/a_ij ) = s_k.    (6)

We arrive at a system of equations of the form

    y_1 â_11 + ⋯ + s_j â_i1 + ⋯ + y_m â_m1 = s_1
        ⋮
    y_1 â_1j + ⋯ + s_j â_ij + ⋯ + y_m â_mj = y_i
        ⋮
    y_1 â_1n + ⋯ + s_j â_in + ⋯ + y_m â_mn = s_n

The relation between the â_ij's and the a_ij's may be found from (5) and (6):

    â_ij = 1/a_ij
    â_hj = −a_hj/a_ij             for h ≠ i
    â_ik = a_ik/a_ij              for k ≠ j                            (7)
    â_hk = a_hk − (a_ik a_hj)/a_ij    for k ≠ j and h ≠ i

Let us mechanize this procedure. We write the original m × n matrix A in a display with y_1 to y_m down the left side and s_1 to s_n across the top. This display is taken to represent the original system of equations, (3). To indicate that we are going to interchange y_i and s_j, we circle the entry a_ij, and call this entry the pivot. We draw an arrow to the new display with y_i and s_j interchanged, and the new entries, the â_ij's. The new display, of course, represents the equations (7), which are equivalent to the equations (4).

            s_1  ⋯   s_j   ⋯  s_n                   s_1  ⋯   y_i   ⋯  s_n
    y_1    a_11  ⋯  a_1j   ⋯  a_1n           y_1   â_11  ⋯  â_1j   ⋯  â_1n
     ⋮                                  →     ⋮
    y_i    a_i1  ⋯  (a_ij) ⋯  a_in           s_j   â_i1  ⋯  â_ij   ⋯  â_in
     ⋮                                        ⋮
    y_m    a_m1  ⋯  a_mj   ⋯  a_mn           y_m   â_m1  ⋯  â_mj   ⋯  â_mn

In the introductory example, this becomes

           s_1  s_2  s_3                      y_2    s_2   s_3
    y_1     3    1    5                y_1   −3/2   11/2   7/2
    y_2    (2)  −3    1         →      s_1    1/2   −3/2   1/2
    y_3     0    3    1                y_3     0      3     1

(Note that the matrix A appearing on the left is the transpose of the numbers as they appear in equations (1).) We say that we have pivoted around the circled entry from the first matrix to the second.

The pivoting rules may be summarized in symbolic notation:

    p  r          1/p      r/p
    c  q    →    −c/p   q − (rc/p)

This signifies: The pivot quantity goes into its reciprocal. Entries in the same row as the pivot are divided by the pivot. Entries in the same column as the pivot are divided by the pivot and changed in sign. The remaining entries are reduced in value by the product of the corresponding entries in the same row and column as themselves and the pivot, divided by the pivot.

We pivot the introductory example twice more:

           y_2    s_2   s_3                y_2    s_2   y_3                  y_2    y_1     y_3
    y_1   −3/2   11/2   7/2         y_1   −3/2   (−5)  −7/2          s_2   3/10   −1/5    7/10
    s_1    1/2   −3/2   1/2    →    s_1    1/2    −3   −1/2    →     s_1    7/5   −3/5     8/5
    y_3     0     3     (1)         s_3   −7/2     3     1           s_3  −9/10    3/5   −11/10

The last display expresses y_1, y_2 and y_3 in terms of s_1, s_2 and s_3. Rearranging rows and columns, we can find A^(−1):

           y_1    y_2     y_3
    s_1   −3/5    7/5     8/5
    s_2   −1/5   3/10    7/10
    s_3    3/5  −9/10  −11/10

so

             ( −3/5    7/5     8/5  )
    A^(−1) = ( −1/5   3/10    7/10  )
             (  3/5  −9/10  −11/10  )

The arithmetic may be checked by checking that A A^(−1) = I.

Exercise. Solve the system of equations y^T A = s^T,

    [Display: a 4 × 4 matrix A with rows y_1, …, y_4 and columns s_1, …, s_4; the entry in the first row, first column is circled.]

for y_1, y_2, y_3 and y_4, by pivoting, in order, about the entries

(1) first row, first column (circled)
(2) second row, third column (the pivot is 1)
(3) fourth row, second column (the pivot turns out to be 1)
(4) third row, fourth column (the pivot turns out to be 3)

Rearrange rows and columns to find A^(−1), and check that the product with A gives the identity matrix.
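The pivot rules (7) are easily mechanized. As a check on the arithmetic of this section, the following sketch (Python, exact rational arithmetic; the function and label bookkeeping are our own) replays the three pivots of the worked 3 × 3 example and verifies that the rearranged result really is A^(−1).

```python
from fractions import Fraction as F

def pivot(M, i, j):
    """Apply the pivot rules of this section at entry (i, j) of M."""
    p = M[i][j]
    out = [row[:] for row in M]
    for h in range(len(M)):
        for k in range(len(M[0])):
            if h == i and k == j:
                out[h][k] = 1 / p          # pivot goes into its reciprocal
            elif h == i:
                out[h][k] = M[i][k] / p    # pivot row: divide by the pivot
            elif k == j:
                out[h][k] = -M[h][k] / p   # pivot column: divide, change sign
            else:
                out[h][k] = M[h][k] - M[i][k] * M[h][j] / p
    return out

# The display of this section: rows y1, y2, y3, columns s1, s2, s3.
A = [[F(3), F(1), F(5)],
     [F(2), F(-3), F(1)],
     [F(0), F(3), F(1)]]

# The three pivots used in the text: (y2, s1), (y3, s3), then (y1, s2).
M = pivot(pivot(pivot(A, 1, 0), 2, 2), 0, 1)

# After the pivots the left labels are s2, s1, s3 and the top labels are
# y2, y1, y3; rearranging rows and columns gives A^(-1).
row_of = [1, 0, 2]     # which row of M holds s1, s2, s3
col_of = [1, 0, 2]     # which column of M holds y1, y2, y3
A_inv = [[M[row_of[i]][col_of[j]] for j in range(3)] for i in range(3)]

product = [[sum(A[i][k] * A_inv[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
```

The product comes out to the identity matrix, confirming the hand computation.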

4. The Simplex Method

The Simplex Tableau. Consider the standard minimum problem: Find y to minimize y^T b subject to y ≥ 0 and y^T A ≥ c^T. It is useful conceptually to transform this last set of inequalities into equalities. For this reason, we add slack variables, s^T = y^T A − c^T ≥ 0. The problem can be restated: Find y and s to minimize y^T b subject to y ≥ 0, s ≥ 0 and s^T = y^T A − c^T.

We write this problem in a tableau representing the linear equations s^T = y^T A − c^T, with the negatives of the c_j's in the last row:

           s_1   s_2   ⋯   s_n
    y_1   a_11  a_12  ⋯  a_1n   b_1
    y_2   a_21  a_22  ⋯  a_2n   b_2
     ⋮      ⋮     ⋮         ⋮     ⋮
    y_m   a_m1  a_m2  ⋯  a_mn   b_m
          −c_1  −c_2  ⋯  −c_n    0

                The Simplex Tableau

The last column represents the vector b, whose inner product with y we are trying to minimize. If c ≤ 0 and b ≥ 0 — that is, if the last row and last column of the tableau are nonnegative — there is an obvious solution to the problem; namely, the minimum occurs at y = 0 and s = −c, and the minimum value is y^T b = 0. This is feasible since y ≥ 0, s ≥ 0, and s^T = y^T A − c^T; and yet Σ y_i b_i cannot be made any smaller than 0, since y ≥ 0 and b ≥ 0.

Suppose then that we cannot solve this problem so easily because there is at least one negative entry in the last column or last row (exclusive of the corner). Let us pivot about a_11 (suppose a_11 ≠ 0), including the last column and last row in our pivot operations. We obtain this tableau:

           y_1   s_2   ⋯   s_n
    s_1   â_11  â_12  ⋯  â_1n   b̂_1
    y_2   â_21  â_22  ⋯  â_2n   b̂_2
     ⋮      ⋮     ⋮         ⋮     ⋮
    y_m   â_m1  â_m2  ⋯  â_mn   b̂_m
          −ĉ_1  −ĉ_2  ⋯  −ĉ_n    v̂

Let r = (r_1, …, r_n) = (y_1, s_2, …, s_n) denote the variables on top, and let t = (t_1, …, t_m) = (s_1, y_2, …, y_m) denote the variables on the left. The set of equations s^T = y^T A − c^T is then equivalent to the set r^T = t^T Â − ĉ^T, represented by the new tableau. Moreover, the objective function y^T b may be written (replacing y_1 by its value in terms of s_1)

    Σ_{i=1}^m y_i b_i = (b_1/a_11)s_1 + (b_2 − a_21 b_1/a_11)y_2 + ⋯ + (b_m − a_m1 b_1/a_11)y_m + c_1 b_1/a_11
                      = t^T b̂ + v̂.

This is represented by the last column and corner of the new tableau. We have transformed our problem into the following: Find vectors, y and s, to minimize t^T b̂ subject to y ≥ 0, s ≥ 0 and r^T = t^T Â − ĉ^T (where t^T represents the vector (s_1, y_2, …, y_m), and r^T represents (y_1, s_2, …, s_n)). This is just a restatement of the original problem. Again, if ĉ ≤ 0 and b̂ ≥ 0, we have the obvious solution: t = 0 and r = −ĉ, with value v̂.

It is easy to see that this process may be continued. This gives us a method, admittedly not very systematic, for finding the solution.

The Pivot Madly Method. Pivot madly until you suddenly find that all entries in the last column and last row (exclusive of the corner) are nonnegative. Then, setting the variables on the left to zero and the variables on top to the corresponding entry in the last row gives the solution. The value is the lower right corner.

This same method may be used to solve the dual problem: Maximize c^T x subject to x ≥ 0 and Ax ≤ b. This time, we add the slack variables u = b − Ax ≥ 0. The problem becomes: Find x and u to maximize c^T x subject to x ≥ 0, u ≥ 0, and u = b − Ax. We may use the same tableau to solve this problem if we write the constraint u = b − Ax as −u = Ax − b:

           x_1   x_2   ⋯   x_n
    u_1   a_11  a_12  ⋯  a_1n   b_1
    u_2   a_21  a_22  ⋯  a_2n   b_2
     ⋮      ⋮     ⋮         ⋮     ⋮
    u_m   a_m1  a_m2  ⋯  a_mn   b_m
          −c_1  −c_2  ⋯  −c_n    0

We note as before that if c ≤ 0 and b ≥ 0, then the solution is obvious: x = 0, u = b, and value equal to zero (since the problem is equivalent to minimizing −c^T x).

Suppose we want to pivot to interchange u_1 and x_1, and suppose a_11 ≠ 0. The equations

    −u_1 = a_11 x_1 + a_12 x_2 + ⋯ + a_1n x_n − b_1
    −u_2 = a_21 x_1 + a_22 x_2 + ⋯ + a_2n x_n − b_2
     ⋮

become

    x_1 = −(1/a_11)u_1 − (a_12/a_11)x_2 − ⋯ − (a_1n/a_11)x_n + b_1/a_11
    −u_2 = −(a_21/a_11)u_1 + (a_22 − a_21 a_12/a_11)x_2 + ⋯ etc.

In other words, the same pivot rules apply!

    p  r          1/p      r/p
    c  q    →    −c/p   q − (rc/p)

If you pivot until the last row and column (exclusive of the corner) are nonnegative, you can find the solution to the dual problem and the primal problem at the same time. In summary, you may write the simplex tableau as

           x_1   x_2   ⋯   x_n
    y_1   a_11  a_12  ⋯  a_1n   b_1
    y_2   a_21  a_22  ⋯  a_2n   b_2
     ⋮      ⋮     ⋮         ⋮     ⋮
    y_m   a_m1  a_m2  ⋯  a_mn   b_m
          −c_1  −c_2  ⋯  −c_n    0

If we pivot until all entries in the last row and column (exclusive of the corner) are nonnegative, then the value of the program and its dual is found in the lower right corner. The solution of the minimum problem is obtained by letting the y_i's on the left be zero and the y_i's on top be equal to the corresponding entry in the last row. The solution of the maximum problem is obtained by letting the x_j's on top be zero, and the x_j's on the left be equal to the corresponding entry in the last column.

Example. Consider the problem: Maximize 5x_1 + 2x_2 + x_3 subject to all x_j ≥ 0 and

    x_1 + 3x_2 − x_3 ≤ 6
    x_2 + x_3 ≤ 4
    3x_1 + x_2 ≤ 7.

The dual problem is: Minimize 6y_1 + 4y_2 + 7y_3 subject to all y_i ≥ 0 and

    y_1 + 3y_3 ≥ 5
    3y_1 + y_2 + y_3 ≥ 2
    −y_1 + y_2 ≥ 1.

The simplex tableau is

           x_1   x_2   x_3
    y_1     1     3    −1     6
    y_2     0     1    (1)    4
    y_3    (3)    1     0     7
           −5    −2    −1     0

If we pivot once about each of the circled entries, interchanging y_2 with x_3, and y_3 with x_1, we arrive at

           y_3   x_2   y_2
    y_1   −1/3  11/3    1    23/3
    x_3     0     1     1     4
    x_1    1/3   1/3    0     7/3
           5/3   2/3    1    47/3

From this we can read off the solution to both problems. The value of both problems is 47/3. The optimal vector for the maximum problem is x1 = 7/3, x2 = 0 and x3 = 4. The optimal vector for the minimum problem is y1 = 0, y2 = 1 and y3 = 5/3.

The Simplex Method is just the Pivot Madly Method with the crucial added ingredient that tells you which points to choose as pivots to approach the solution systematically. Suppose, after pivoting for a while, one obtains the tableau

          r
   t      Â    b̂
         -ĉ    v̂

where b̂ ≥ 0. Then one immediately obtains a feasible point for the maximum problem (in fact an extreme point of the constraint set) by letting r = 0 and t = b̂, the value of the program at this point being v̂. Similarly, if one had -ĉ ≥ 0, one would have a feasible point for the minimum problem, by setting the dual variables on the left to zero and those on top equal to the corresponding entries of -ĉ.

Pivot Rules for the Simplex Method. We first consider the case where we have already found a feasible point for the maximum problem.

Case 1: b ≥ 0. Take any column with last entry negative, say column j0 with -c(j0) < 0. Among those i for which a(i,j0) > 0, choose that i0 for which the ratio b(i)/a(i,j0) is smallest. (If there are ties, choose any such i0.) Pivot around a(i0,j0).

Here are some examples. In the tableaux, the possible pivots are circled.

[Two example tableaux, one with columns r1, ..., r4 and one with columns r1, ..., r3, each with rows t1, t2, t3, appeared here; their numerical entries did not survive this transcription.]

What happens if you cannot apply the pivot rule for Case 1? First of all, it might happen that the last row is already nonnegative, -c(j) ≥ 0 for all j. Then you are finished: you have found the solution. The only other thing that can go wrong is that, after finding some -c(j0) < 0, you find a(i,j0) ≤ 0 for all i. If so, then the maximum problem is unbounded feasible. To see this, consider any vector r such that r(j0) > 0 and r(j) = 0 for j ≠ j0. Then r is feasible for the maximum problem because

  t(i) = Σj (-a(i,j)) r(j) + b(i) = -a(i,j0) r(j0) + b(i) ≥ 0  for all i,

and this feasible vector has value Σj c(j) r(j) = c(j0) r(j0), which can be made as large as desired by making r(j0) sufficiently large.
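The Case 1 selection rule can also be sketched in code (an illustrative implementation, not from the text; `case1_pivot` is my name). The tableau is a list of rows, with b in the last column and the -c row at the bottom:

```python
# Case 1 pivot selection (b >= 0): pick a column whose bottom entry is
# negative, then among rows with a positive entry in that column pick the
# one minimizing b_i / a_ij.  Returns (i, j), "optimal", or "unbounded".
def case1_pivot(T):
    m, n = len(T) - 1, len(T[0]) - 1
    neg = [j for j in range(n) if T[m][j] < 0]
    if not neg:
        return "optimal"            # bottom row nonnegative: solution found
    j0 = neg[0]                     # the rule allows any such column
    rows = [i for i in range(m) if T[i][j0] > 0]
    if not rows:
        return "unbounded"          # the maximum problem is unbounded feasible
    i0 = min(rows, key=lambda i: T[i][-1] / T[i][j0])
    return (i0, j0)

# Tableau of the Case 1 example on the next page (maximize x1 + x2 + 2x3):
T = [[0, 1, 2, 3], [-1, 0, 3, 2], [2, 1, 1, 1], [-1, -1, -2, 0]]
print(case1_pivot(T))   # (2, 0)
```

Note the rule leaves the column free: this sketch takes the first eligible column, while the text's example happens to pivot in column two first; either choice is legitimate.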

Such pivoting rules are good to use because:

1. After pivoting, the b column stays nonnegative, so you still have a feasible point for the maximum problem.
2. The value of the new tableau is never less than that of the old (generally it is greater).

Proof of 1. Let the new tableau be denoted with hats on the variables. We are to show b̂(i) ≥ 0 for all i. For i = i0 we have b̂(i0) = b(i0)/a(i0,j0), still nonnegative, since a(i0,j0) > 0. For i ≠ i0, we have

  b̂(i) = b(i) - a(i,j0) b(i0)/a(i0,j0).

If a(i,j0) ≤ 0, then b̂(i) ≥ b(i) ≥ 0, since a(i,j0) b(i0)/a(i0,j0) ≤ 0. If a(i,j0) > 0, then by the pivot rule, b(i)/a(i,j0) ≥ b(i0)/a(i0,j0), so that b̂(i) ≥ b(i) - a(i,j0)(b(i)/a(i,j0)) = b(i) - b(i) = 0.

Proof of 2. v̂ = v - (-c(j0))(b(i0)/a(i0,j0)) ≥ v, because -c(j0) < 0, a(i0,j0) > 0, and b(i0) ≥ 0.

These two properties imply that if you keep pivoting according to the rules of the simplex method and the value keeps getting greater, then, because there are only a finite number of tableaux, you will eventually terminate, either by finding the solution or by finding that the problem is unbounded feasible. In the proof of 2, note that v increases unless the pivot is in a row with b(i0) = 0. So the simplex method will eventually terminate unless we keep pivoting in rows with a zero in the last column.

Example. Maximize x1 + x2 + 2x3 subject to all xi ≥ 0 and

  x2 + 2x3 ≤ 3
  -x1 + 3x3 ≤ 2
  2x1 + x2 + x3 ≤ 1

We pivot according to the rules of the simplex method, first pivoting about row three, column two:

         x1   x2   x3
   y1     0    1    2 |  3
   y2    -1    0    3 |  2
   y3     2    1    1 |  1
         -1   -1   -2 |  0

         x1   y3   x3
   y1    -2   -1    1 |  2
   y2    -1    0    3 |  2
   x2     2    1    1 |  1
          1    1   -1 |  1

          x1   y3    y2
   y1   -5/3   -1  -1/3 | 4/3
   x3   -1/3    0   1/3 | 2/3
   x2    7/3    1  -1/3 | 1/3
         2/3    1   1/3 | 5/3

The value for the program and its dual is 5/3. The optimal vector for the maximum problem is given by x1 = 0, x2 = 1/3 and x3 = 2/3. The optimal vector for the minimum problem is given by y1 = 0, y2 = 1/3 and y3 = 1.

Case 2: Some b(i) are negative. Take the first negative b(i), say b(k) < 0 (where b(1) ≥ 0, ..., b(k-1) ≥ 0). Find any negative entry in row k, say a(k,j0) < 0. (The pivot will be in column j0.) Compare b(k)/a(k,j0) and the b(i)/a(i,j0) for which b(i) ≥ 0 and a(i,j0) > 0, and choose i0 for which this ratio is smallest (i0 may be equal to k). You may choose any such i0 if there are ties. Pivot on a(i0,j0).
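The Case 2 selection can likewise be sketched in code (illustrative, not from the text; `case2_pivot` is my name, and the example tableau's signs are reconstructed from the garbled transcription of the minimization example that follows):

```python
# Case 2 pivot selection (some b_i < 0): take the first negative b_k, pick a
# negative entry a_kj in that row, then among the ratio b_k/a_kj and the
# ratios b_i/a_ij with b_i >= 0 and a_ij > 0, choose the row with the
# smallest ratio.  Returns (i, j) or "infeasible".
def case2_pivot(T):
    m, n = len(T) - 1, len(T[0]) - 1
    k = next(i for i in range(m) if T[i][-1] < 0)
    neg = [j for j in range(n) if T[k][j] < 0]
    if not neg:
        return "infeasible"         # row k can never be made nonnegative
    j0 = neg[0]                     # the rule allows any negative entry
    ratios = [(T[k][-1] / T[k][j0], k)]
    ratios += [(T[i][-1] / T[i][j0], i) for i in range(m)
               if T[i][-1] >= 0 and T[i][j0] > 0]
    return (min(ratios)[1], j0)

# Tableau of the minimization example that follows; here b_2 = -2 < 0.
T = [[0, 1, 2, 3], [-1, 0, -3, -2], [2, 1, 7, 5], [-1, -1, -5, 0]]
print(case2_pivot(T))   # (1, 0): pivot about row two, column one
```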

Here are some examples. In the tableaux, the possible pivots according to the rules for Case 2 are circled.

[Two example tableaux, one with columns r1, ..., r4 and one with columns r1, ..., r3, each with rows t1, t2, t3, appeared here; their numerical entries did not survive this transcription.]

What if the rules of Case 2 cannot be applied? The only thing that can go wrong is that, for b(k) < 0, we find a(k,j) ≥ 0 for all j. If so, then the maximum problem is infeasible, since the equation for row k reads

  t(k) = Σj (-a(k,j)) r(j) + b(k).

For all feasible vectors (t ≥ 0, r ≥ 0), the left side is nonnegative, while the right side is negative.

The objective in Case 2 is to get to Case 1, since we know what to do from there. The rule for Case 2 is likely to be good because:

1. The nonnegative b(i) stay nonnegative after pivoting, and
2. b(k) has become no smaller (generally it gets larger).

Proof of 1. Suppose b(i) ≥ 0, so i ≠ k. If i = i0, then b̂(i0) = b(i0)/a(i0,j0) ≥ 0. If i ≠ i0, then

  b̂(i) = b(i) - a(i,j0) b(i0)/a(i0,j0).

Now b(i0)/a(i0,j0) ≥ 0. Hence, if a(i,j0) ≤ 0, then b̂(i) ≥ b(i) ≥ 0, and if a(i,j0) > 0, then b(i0)/a(i0,j0) ≤ b(i)/a(i,j0), so that b̂(i) ≥ b(i) - b(i) = 0.

Proof of 2. If k = i0, then b̂(k) = b(k)/a(k,j0) > 0 (both b(k) < 0 and a(k,j0) < 0). If k ≠ i0, then

  b̂(k) = b(k) - a(k,j0) b(i0)/a(i0,j0) ≥ b(k),

since b(i0)/a(i0,j0) ≥ 0 and a(k,j0) < 0.

These two properties imply that if you keep pivoting according to the rule for Case 2, and b(k) keeps getting larger, you will eventually get b(k) ≥ 0, and be one coefficient closer to having all b(i) ≥ 0. Note, in the proof of 2, that if b(i0) > 0, then b̂(k) > b(k).

Example. Minimize 3y1 - 2y2 + 5y3 subject to all yi ≥ 0 and

  -y2 + 2y3 ≥ 1
  y1 + y3 ≥ 1
  2y1 - 3y2 + 7y3 ≥ 5

We pivot according to the rules for Case 2 of the simplex method, first pivoting about row two, column one:

         x1   x2   x3
   y1     0    1    2 |  3
   y2    -1    0   -3 | -2
   y3     2    1    7 |  5
         -1   -1   -5 |  0

         y2   x2   x3
   y1     0    1    2 |  3
   x1    -1    0    3 |  2
   y3     2    1    1 |  1
         -1   -1   -2 |  2

Note that after this one pivot, we are now in Case 1. The tableau is identical to the example for Case 1 except for the lower right corner and the labels of the variables. Further pivoting as is done there leads to

          y2   y3    x1
   y1   -5/3   -1  -1/3 |  4/3
   x3   -1/3    0   1/3 |  2/3
   x2    7/3    1  -1/3 |  1/3
         2/3    1   1/3 | 11/3

The value for both programs is 11/3. The solution to the minimum problem is y1 = 0, y2 = 2/3 and y3 = 1. The solution to the maximum problem is x1 = 0, x2 = 1/3 and x3 = 2/3.

The Dual Simplex Method. The simplex method has been stated from the point of view of the maximum problem. Clearly, it may be stated with regard to the minimum problem as well. In particular, if a feasible vector has already been found for the minimum problem (i.e. the bottom row is already nonnegative), it is more efficient to improve this vector with another feasible vector than it is to apply the rule for Case 2 of the maximum problem. We state the simplex method for the minimum problem.

Case 1: -c ≥ 0. Take any row with the last entry negative, say b(i0) < 0. Among those j for which a(i0,j) < 0, choose that j0 for which the ratio -c(j)/a(i0,j) is closest to zero. Pivot on a(i0,j0).

Case 2: Some -c(j) are negative. Take the first negative one, say -c(k) < 0 (where -c(1) ≥ 0, ..., -c(k-1) ≥ 0). Find any positive entry in column k, say a(i0,k) > 0. Compare -c(k)/a(i0,k) and those -c(j)/a(i0,j) for which -c(j) ≥ 0 and a(i0,j) < 0, and choose j0 for which this ratio is closest to zero (j0 may be k). Pivot on a(i0,j0).

Example.

[The tableaux of this example did not survive transcription: a tableau with rows y1, y2, y3 and columns x1, x2, x3 is pivoted once, interchanging y1 and x1; in the resulting tableau the last column reads 1/2, 1, 5/2 and the last row reads 1/2, 3/2, 2, with 1/2 in the corner.]
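Putting the Case 1 and Case 2 rules of the maximum problem together gives a complete solver, cycling aside. The sketch below is illustrative, not from the text; it assumes the problem is feasible and bounded, and it reruns the minimization example above (minimize 3y1 - 2y2 + 5y3, with signs as reconstructed there), reaching the same value 11/3, though possibly by a different sequence of pivots, since the rules leave the choice of column free:

```python
from fractions import Fraction

def pivot(T, i, j):
    p = T[i][j]
    return [[1 / p if (r, c) == (i, j)
             else T[r][c] / p if r == i
             else -T[r][c] / p if c == j
             else T[r][c] - T[r][j] * T[i][c] / p
             for c in range(len(T[0]))]
            for r in range(len(T))]

def simplex(T):
    """Pivot by the Case 1 / Case 2 rules until the last row and
    column are nonnegative; assumes feasibility and boundedness."""
    while True:
        m, n = len(T) - 1, len(T[0]) - 1
        bad = [i for i in range(m) if T[i][-1] < 0]
        if bad:                                   # Case 2
            k = bad[0]
            j0 = next(j for j in range(n) if T[k][j] < 0)
            cand = [(T[k][-1] / T[k][j0], k)]
            cand += [(T[i][-1] / T[i][j0], i) for i in range(m)
                     if T[i][-1] >= 0 and T[i][j0] > 0]
            i0 = min(cand)[1]
        else:                                     # Case 1
            neg = [j for j in range(n) if T[-1][j] < 0]
            if not neg:
                return T                          # optimal
            j0 = neg[0]
            i0 = min((i for i in range(m) if T[i][j0] > 0),
                     key=lambda i: T[i][-1] / T[i][j0])
        T = pivot(T, i0, j0)

T = [[Fraction(v) for v in row] for row in
     [[0, 1, 2, 3], [-1, 0, -3, -2], [2, 1, 7, 5], [-1, -1, -5, 0]]]
print(simplex(T)[-1][-1])   # 11/3
```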

If this example were treated as Case 2 for the maximum problem, we might pivot about the 4 or the 5. Since we have a feasible vector for the minimum problem, we apply Case 1 above, find a unique pivot, and arrive at the solution in one step.

Exercises. For the linear programming problems below, state the dual problem, solve by the simplex (or dual simplex) method, and state the solutions to both problems.

[Minus signs lost between adjacent terms in this transcription have been restored below; a few subscripts and right-hand sides, marked ?, did not survive.]

1. Maximize x1 - 2x2 - 3x3 - x4 subject to the constraints xj ≥ 0 for all j and

  x1 - x2 - 2x3 - x4 ≤ 4
  2x1 + x3 - 4x4 ≤ 2
  2x1 + x2 + x? ≤ 1

2. Minimize 3y1 - y2 + 2y3 subject to the constraints yi ≥ 0 for all i and

  2y1 - y2 + y3 ≥ 1
  y1 + 2y3 ≥ 2
  7y1 + 4y2 - 6y3 ≥ 1

3. Maximize x1 - x2 + 2x3 subject to the constraints xj ≥ 0 for all j and

  3x1 + 3x2 + x3 ≤ 3
  2x1 - x2 - 2x3 ≤ 1
  x1 + x? ≤ 1

4. Minimize 5y1 - 2y2 - y3 subject to the constraints yi ≥ 0 for all i and

  2y1 + 3y3 ≥ 1
  2y1 - y2 + y3 ≥ 1
  3y1 + 2y2 - y3 ≥ 0

5. Minimize 2y2 + y3 subject to the constraints yi ≥ 0 for all i and

  y1 - 2y2 ≥ 3
  4y1 + y2 + 7y3 ≥ 1
  2y1 - 3y2 + y3 ≥ ?

6. Maximize 3x1 + 4x2 + 5x3 subject to the constraints xj ≥ 0 for all j and

  x1 + 2x2 + 2x3 ≤ 1
  3x1 + x3 ≤ 1
  2x1 - x? ≤ ?

5 Generalized Duality

We consider the general form of the linear programming problem, allowing some constraints to be equalities, and some variables to be unrestricted (-∞ < xj < ∞).

The General Maximum Problem. Find xj, j = 1, ..., n, to maximize xᵀc subject to

  Σ(j=1..n) aij xj ≤ bi   for i = 1, ..., k
  Σ(j=1..n) aij xj = bi   for i = k+1, ..., m

and

  xj ≥ 0            for j = 1, ..., l
  xj unrestricted   for j = l+1, ..., n

The dual to this problem is

The General Minimum Problem. Find yi, i = 1, ..., m, to minimize yᵀb subject to

  Σ(i=1..m) yi aij ≥ cj   for j = 1, ..., l
  Σ(i=1..m) yi aij = cj   for j = l+1, ..., n

and

  yi ≥ 0            for i = 1, ..., k
  yi unrestricted   for i = k+1, ..., m

In other words, a strict equality in the constraints of one program corresponds to an unrestricted variable in the dual.

If the general maximum problem is transformed into a standard maximum problem by

1. replacing each equality constraint, Σj aij xj = bi, by two inequality constraints, Σj aij xj ≤ bi and Σj (-aij) xj ≤ -bi, and
2. replacing each unrestricted variable, xj, by the difference of two nonnegative variables, xj = xj′ - xj″ with xj′ ≥ 0 and xj″ ≥ 0,

and if the dual general minimum problem is transformed into a standard minimum problem by the same techniques, the transformed standard problems are easily seen to be duals by the definition of duality for standard problems (Exercise 1). Therefore the theorems on duality for the standard programs in Section 2 are valid for the general programs as well. The Equilibrium Theorem seems as if it might require

special attention for the general programming problems, since some of the yi and xj may be negative. However, it is also valid (Exercise 2).

Solving General Problems by the Simplex Method. The simplex tableau and pivot rules may be used to find the solution to the general problem if the following modifications are introduced.

1. Consider the general minimum problem. We add the slack variables, sᵀ = yᵀA - cᵀ, but the constraints now demand that s1 ≥ 0, ..., sl ≥ 0, s(l+1) = 0, ..., sn = 0. If we pivot in the simplex tableau so that s(l+1), say, goes from the top to the left, it becomes an independent variable and may be set equal to zero, as required. Once s(l+1) is on the left, it is immaterial whether the corresponding b̂i in the last column is negative or positive, since this b̂i is the coefficient multiplying s(l+1) in the objective function, and s(l+1) is zero anyway. In other words, once s(l+1) is pivoted to the left, we may delete that row (we will never pivot in that row) and we ignore the sign of the last coefficient in that row. This analysis holds for all equality constraints: pivot s(l+1), ..., sn to the left and delete. This is equivalent to solving each equality constraint for one of the variables and substituting the result into the other linear forms.

2. Similarly, the unconstrained yi, i = k+1, ..., m, must be pivoted to the top, where they represent dependent variables. Once there, we do not care whether the corresponding ĉj is positive or not, since the unconstrained variable yi may be positive or negative. In other words, once y(k+1), say, is pivoted to the top, we may delete that column. (If you want the value of y(k+1) in the solution, you may leave that column in, but do not pivot in that column and ignore the sign of the last coefficient in that column.)

After all equality constraints and unrestricted variables are taken care of, we may pivot according to the rules of the simplex method to find the solution. Similar arguments apply to the general maximum problem. The unrestricted xi may be pivoted to the left and deleted, and the slack variables corresponding to the equality constraints may be pivoted to the top and deleted.

3. What happens if the above method for taking care of equality constraints cannot be carried out? It may happen, in attempting to pivot one of the sj for j ≥ l+1 to the left, that one of them, say sα, cannot be so moved without simultaneously pivoting some sj for j < α back to the top, because all the possible pivot numbers in column α are zero, except for those in rows labelled sj for j ≥ l+1. If so, column α represents the equation

  sα = Σ(j ≥ l+1) âjα sj - ĉα.

This time there are two possibilities. If ĉα ≠ 0, the minimum problem is infeasible, since all sj for j ≥ l+1 must be zero. The original equality constraints of the minimum problem were inconsistent. If ĉα = 0, equality is automatically obtained in the above equation, and column α may be removed. The original equality constraints contained a redundancy.

A dual argument may be given to show that if it is impossible to pivot one of the unrestricted yi, say yβ, to the top (without moving some unrestricted yi from the top

back to the left), then the maximum problem is infeasible, unless perhaps the corresponding last entry in that row, b̂β, is zero. If b̂β is zero, we may delete that row as being redundant.

4. In summary, the general simplex method consists of three stages. In the first stage, all equality constraints and unconstrained variables are pivoted (and removed if desired). In the second stage, one uses the simplex pivoting rules to obtain a feasible solution for the problem or its dual. In the third stage, one improves the feasible solution according to the simplex pivoting rules until the optimal solution is found.

Example 1. Maximize 5x2 + x3 + 4x4 subject to the constraints x1 ≥ 0, x2 ≥ 0, x4 ≥ 0, x3 unrestricted, and

  -x1 + 5x2 + 2x3 + 5x4 ≤ 5
  3x2 + x4 = 2
  -x1 + x3 + 2x4 = 1

In the tableau, we put arrows pointing up next to the variables that must be pivoted to the top, and arrows pointing left above the variables that are to end up on the left.

          x1   x2  x3←  x4
   y1     -1    5    2    5 |  5
   y2↑     0    3    0    1 |  2
   y3↑    -1    0    1    2 |  1
           0   -5   -1   -4 |  0

          x1   x2   y3   x4
   y1      1    5   -2    1 |  3
   y2↑     0    3    0    1 |  2
   x3     -1    0    1    2 |  1
          -1   -5    1   -2 |  1

After deleting the third row and the third column, we pivot y2 to the top, and we have found a feasible solution to the maximum problem. We then delete y2 and pivot according to the simplex rule for Case 1.

          x1   x2   y2
   y1      1    2   -1 |  1
   x4      0    3    1 |  2
          -1    1    2 |  5

          y1   x2   y2
   x1      1    2   -1 |  1
   x4      0    3    1 |  2
           1    3    1 |  6

After one pivot, we have already arrived at the solution. The value of the program is 6, and the solution to the maximum problem is x1 = 1, x2 = 0, x4 = 2 and (from the equality constraint of the original problem) x3 = -2.

Solving Matrix Games by the Simplex Method. Consider a matrix game with n × m matrix A. If Player I chooses a mixed strategy, x = (x1, ..., xn) with Σ(1..n) xi = 1 and xi ≥ 0 for all i, he wins at least λ on the average, where λ ≤ Σi xi aij for j = 1, ..., m. He wants to choose x1, ..., xn to maximize this minimum amount he wins. This becomes a linear programming problem if we add λ to the list of variables chosen by I. The problem becomes: Choose x1, ..., xn and λ to maximize λ subject to x1 ≥ 0, ..., xn ≥ 0, λ unrestricted, and

  λ - Σ(i=1..n) xi aij ≤ 0   for j = 1, ..., m,

and

  Σ(i=1..n) xi = 1.
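The three pivots of Example 1 can be checked mechanically. An illustrative sketch, not from the text: the coefficients are reconstructed from the stated solution (x1 = 1, x4 = 2, x3 = -2, value 6), and rows and columns are kept rather than deleted, which is harmless as long as we never pivot in them:

```python
from fractions import Fraction

def pivot(T, i, j):
    p = T[i][j]
    return [[1 / p if (r, c) == (i, j)
             else T[r][c] / p if r == i
             else -T[r][c] / p if c == j
             else T[r][c] - T[r][j] * T[i][c] / p
             for c in range(len(T[0]))]
            for r in range(len(T))]

# Tableau of Example 1 (rows y1, y2, y3; columns x1..x4; b and -c borders).
T = [[Fraction(v) for v in row] for row in
     [[-1, 5, 2, 5, 5], [0, 3, 0, 1, 2], [-1, 0, 1, 2, 1], [0, -5, -1, -4, 0]]]
T = pivot(T, 2, 2)   # x3 (unrestricted) to the left, y3 to the top
T = pivot(T, 1, 3)   # y2 (equality slack) to the top, x4 to the left
T = pivot(T, 0, 0)   # simplex Case 1 pivot: interchange y1 and x1
print(T[-1][-1], T[2][-1])   # 6 -2: value 6, and the x3 row shows x3 = -2
```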

This is clearly a general maximum problem. Player II chooses y1, ..., ym with yj ≥ 0 for all j and Σ(1..m) yj = 1, and loses at most µ, where µ ≥ Σj aij yj for all i. The problem is therefore: Choose y1, ..., ym and µ to minimize µ subject to y1 ≥ 0, ..., ym ≥ 0, µ unrestricted, and

  µ - Σ(j=1..m) aij yj ≥ 0   for i = 1, ..., n,

and

  Σ(j=1..m) yj = 1.

This is exactly the dual of Player I's problem. These problems may be solved simultaneously by the simplex method for general programs. Note, however, that if A is the game matrix, it is the negative of the transpose of A that is placed in the simplex tableau:

          x1    x2   ...   xn     λ
   y1   -a11  -a21  ...  -an1    1 | 0
   ...
   ym   -a1m  -a2m  ...  -anm    1 | 0
   µ      1     1   ...    1     0 | 1
          0     0   ...    0    -1 | 0

Example 2. Solve the matrix game with matrix

  A = [the entries of A did not survive this transcription]

The tableau is a tableau of the above form with rows y1, ..., y4, µ and columns x1, x2, x3, λ; its entries are likewise lost in this transcription.

We would like to pivot to interchange λ and µ. Unfortunately, as is always the case for solving games by this method, the pivot point is zero. So we must pivot twice, once to move µ to the top and once to move λ to the left. First we interchange y3 and λ and delete the λ row. Then we interchange x3 and µ and delete the µ column.

[The two resulting tableaux, the first with columns x1, x2, x3, y3 and the second with columns x1, x2, y3 and rows including x3, appeared here; their entries did not survive this transcription.]

Now, we use the pivot rules for the simplex method until finally we obtain the tableau, say

          y4     y2     y1
   y3      ·      ·      ·  |  3/7
   x2      ·      ·      ·  | 16/35
   x1      ·      ·      ·  |  2/7
   x3      ·      ·      ·  |  9/35
        13/35   8/35  14/35 | 33/35

(the interior entries did not survive this transcription). Therefore, the value of the game is 33/35. The unique optimal strategy for Player I is (2/7, 16/35, 9/35). The unique optimal strategy for Player II is (14/35, 8/35, 0, 13/35).

Exercises

1. (a) Transform the general maximum problem to standard form by (1) replacing each equality constraint, Σj aij xj = bi, by the two inequality constraints, Σj aij xj ≤ bi and Σj (-aij) xj ≤ -bi, and (2) replacing each unrestricted xj by xj′ - xj″, where xj′ and xj″ are restricted to be nonnegative.

(b) Transform the general minimum problem to standard form by (1) replacing each equality constraint, Σi yi aij = cj, by the two inequality constraints, Σi yi aij ≥ cj and Σi yi (-aij) ≥ -cj, and (2) replacing each unrestricted yi by yi′ - yi″, where yi′ and yi″ are restricted to be nonnegative.

(c) Show that the standard programs (a) and (b) are duals.

2. Let x and y be feasible vectors for a general maximum problem and its dual, respectively. Assuming the Duality Theorem for the general problems, show that x and y are optimal if, and only if,

  yi = 0  for all i for which  Σ(j=1..n) aij xj < bi

and

  xj = 0  for all j for which  Σ(i=1..m) yi aij > cj.

3. (a) State the dual for the problem of Example 1.

(b) Find the solution of the dual of the problem of Example 1.

4. Solve the game with matrix

  A = [the entries of A did not survive this transcription]

First pivot to interchange λ and y1. Then pivot to interchange µ and x2, and continue.
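For experimenting with games such as those in Exercise 4, the tableau of this section (-Aᵀ bordered by the λ column, the normalization row for µ, and the objective row) can be built mechanically. An illustrative sketch, not from the text; the function name and the tiny example matrix are my own:

```python
from fractions import Fraction

# Build the simplex tableau for a matrix game A, where a_ij is the payoff
# to Player I when I uses strategy i and II uses strategy j.  Rows are
# y_1..y_m (holding -A transposed) plus the normalization row for mu; the
# next-to-last column is lambda and the last column is b.  The bottom row
# encodes "maximize lambda".
def game_tableau(A):
    n, m = len(A), len(A[0])
    T = [[-A[i][j] for i in range(n)] + [1, 0] for j in range(m)]
    T.append([1] * n + [0, 1])      # x_1 + ... + x_n = 1
    T.append([0] * n + [-1, 0])     # objective row for lambda
    return [[Fraction(v) for v in row] for row in T]

# A hypothetical 2x2 game (matching pennies), not from the text:
print(game_tableau([[1, -1], [-1, 1]]))
```

From here, the general simplex method of this section applies: pivot λ to the left and µ to the top (deleting as described), then pivot by the ordinary rules.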


More information

Increasing for all. Convex for all. ( ) Increasing for all (remember that the log function is only defined for ). ( ) Concave for all.

Increasing for all. Convex for all. ( ) Increasing for all (remember that the log function is only defined for ). ( ) Concave for all. 1. Differentiation The first derivative of a function measures by how much changes in reaction to an infinitesimal shift in its argument. The largest the derivative (in absolute value), the faster is evolving.

More information

Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization

Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization Lecture 2. Marginal Functions, Average Functions, Elasticity, the Marginal Principle, and Constrained Optimization 2.1. Introduction Suppose that an economic relationship can be described by a real-valued

More information

Chapter 5. Linear Inequalities and Linear Programming. Linear Programming in Two Dimensions: A Geometric Approach

Chapter 5. Linear Inequalities and Linear Programming. Linear Programming in Two Dimensions: A Geometric Approach Chapter 5 Linear Programming in Two Dimensions: A Geometric Approach Linear Inequalities and Linear Programming Section 3 Linear Programming gin Two Dimensions: A Geometric Approach In this section, we

More information

Zeros of a Polynomial Function

Zeros of a Polynomial Function Zeros of a Polynomial Function An important consequence of the Factor Theorem is that finding the zeros of a polynomial is really the same thing as factoring it into linear factors. In this section we

More information

Direct Methods for Solving Linear Systems. Matrix Factorization

Direct Methods for Solving Linear Systems. Matrix Factorization Direct Methods for Solving Linear Systems Matrix Factorization Numerical Analysis (9th Edition) R L Burden & J D Faires Beamer Presentation Slides prepared by John Carroll Dublin City University c 2011

More information

2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system

2x + y = 3. Since the second equation is precisely the same as the first equation, it is enough to find x and y satisfying the system 1. Systems of linear equations We are interested in the solutions to systems of linear equations. A linear equation is of the form 3x 5y + 2z + w = 3. The key thing is that we don t multiply the variables

More information

Linear Programming I

Linear Programming I Linear Programming I November 30, 2003 1 Introduction In the VCR/guns/nuclear bombs/napkins/star wars/professors/butter/mice problem, the benevolent dictator, Bigus Piguinus, of south Antarctica penguins

More information

Chapter 6: Sensitivity Analysis

Chapter 6: Sensitivity Analysis Chapter 6: Sensitivity Analysis Suppose that you have just completed a linear programming solution which will have a major impact on your company, such as determining how much to increase the overall production

More information

Linear Programming Supplement E

Linear Programming Supplement E Linear Programming Supplement E Linear Programming Linear programming: A technique that is useful for allocating scarce resources among competing demands. Objective function: An expression in linear programming

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

Arrangements And Duality

Arrangements And Duality Arrangements And Duality 3.1 Introduction 3 Point configurations are tbe most basic structure we study in computational geometry. But what about configurations of more complicated shapes? For example,

More information

SOLVING LINEAR SYSTEMS

SOLVING LINEAR SYSTEMS SOLVING LINEAR SYSTEMS Linear systems Ax = b occur widely in applied mathematics They occur as direct formulations of real world problems; but more often, they occur as a part of the numerical analysis

More information

Factorization Theorems

Factorization Theorems Chapter 7 Factorization Theorems This chapter highlights a few of the many factorization theorems for matrices While some factorization results are relatively direct, others are iterative While some factorization

More information

Solution of Linear Systems

Solution of Linear Systems Chapter 3 Solution of Linear Systems In this chapter we study algorithms for possibly the most commonly occurring problem in scientific computing, the solution of linear systems of equations. We start

More information

Review of Fundamental Mathematics

Review of Fundamental Mathematics Review of Fundamental Mathematics As explained in the Preface and in Chapter 1 of your textbook, managerial economics applies microeconomic theory to business decision making. The decision-making tools

More information

Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities

Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities Algebra 1, Quarter 2, Unit 2.1 Creating, Solving, and Graphing Systems of Linear Equations and Linear Inequalities Overview Number of instructional days: 15 (1 day = 45 60 minutes) Content to be learned

More information

Module1. x 1000. y 800.

Module1. x 1000. y 800. Module1 1 Welcome to the first module of the course. It is indeed an exciting event to share with you the subject that has lot to offer both from theoretical side and practical aspects. To begin with,

More information

3.3 Real Zeros of Polynomials

3.3 Real Zeros of Polynomials 3.3 Real Zeros of Polynomials 69 3.3 Real Zeros of Polynomials In Section 3., we found that we can use synthetic division to determine if a given real number is a zero of a polynomial function. This section

More information

9.2 Summation Notation

9.2 Summation Notation 9. Summation Notation 66 9. Summation Notation In the previous section, we introduced sequences and now we shall present notation and theorems concerning the sum of terms of a sequence. We begin with a

More information

Lecture 3. Linear Programming. 3B1B Optimization Michaelmas 2015 A. Zisserman. Extreme solutions. Simplex method. Interior point method

Lecture 3. Linear Programming. 3B1B Optimization Michaelmas 2015 A. Zisserman. Extreme solutions. Simplex method. Interior point method Lecture 3 3B1B Optimization Michaelmas 2015 A. Zisserman Linear Programming Extreme solutions Simplex method Interior point method Integer programming and relaxation The Optimization Tree Linear Programming

More information

26 Linear Programming

26 Linear Programming The greatest flood has the soonest ebb; the sorest tempest the most sudden calm; the hottest love the coldest end; and from the deepest desire oftentimes ensues the deadliest hate. Th extremes of glory

More information

Chapter 11 Number Theory

Chapter 11 Number Theory Chapter 11 Number Theory Number theory is one of the oldest branches of mathematics. For many years people who studied number theory delighted in its pure nature because there were few practical applications

More information

Notes on Determinant

Notes on Determinant ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without

More information

7 Gaussian Elimination and LU Factorization

7 Gaussian Elimination and LU Factorization 7 Gaussian Elimination and LU Factorization In this final section on matrix factorization methods for solving Ax = b we want to take a closer look at Gaussian elimination (probably the best known method

More information

Lecture 2: August 29. Linear Programming (part I)

Lecture 2: August 29. Linear Programming (part I) 10-725: Convex Optimization Fall 2013 Lecture 2: August 29 Lecturer: Barnabás Póczos Scribes: Samrachana Adhikari, Mattia Ciollaro, Fabrizio Lecci Note: LaTeX template courtesy of UC Berkeley EECS dept.

More information

24. The Branch and Bound Method

24. The Branch and Bound Method 24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no

More information

1 Determinants and the Solvability of Linear Systems

1 Determinants and the Solvability of Linear Systems 1 Determinants and the Solvability of Linear Systems In the last section we learned how to use Gaussian elimination to solve linear systems of n equations in n unknowns The section completely side-stepped

More information

3.1 Solving Systems Using Tables and Graphs

3.1 Solving Systems Using Tables and Graphs Algebra 2 Chapter 3 3.1 Solve Systems Using Tables & Graphs 3.1 Solving Systems Using Tables and Graphs A solution to a system of linear equations is an that makes all of the equations. To solve a system

More information

EXCEL SOLVER TUTORIAL

EXCEL SOLVER TUTORIAL ENGR62/MS&E111 Autumn 2003 2004 Prof. Ben Van Roy October 1, 2003 EXCEL SOLVER TUTORIAL This tutorial will introduce you to some essential features of Excel and its plug-in, Solver, that we will be using

More information

1. Graphing Linear Inequalities

1. Graphing Linear Inequalities Notation. CHAPTER 4 Linear Programming 1. Graphing Linear Inequalities x apple y means x is less than or equal to y. x y means x is greater than or equal to y. x < y means x is less than y. x > y means

More information

Lecture 2 Matrix Operations

Lecture 2 Matrix Operations Lecture 2 Matrix Operations transpose, sum & difference, scalar multiplication matrix multiplication, matrix-vector product matrix inverse 2 1 Matrix transpose transpose of m n matrix A, denoted A T or

More information

Lecture 1: Systems of Linear Equations

Lecture 1: Systems of Linear Equations MTH Elementary Matrix Algebra Professor Chao Huang Department of Mathematics and Statistics Wright State University Lecture 1 Systems of Linear Equations ² Systems of two linear equations with two variables

More information

4 UNIT FOUR: Transportation and Assignment problems

4 UNIT FOUR: Transportation and Assignment problems 4 UNIT FOUR: Transportation and Assignment problems 4.1 Objectives By the end of this unit you will be able to: formulate special linear programming problems using the transportation model. define a balanced

More information

An Introduction to Linear Programming

An Introduction to Linear Programming An Introduction to Linear Programming Steven J. Miller March 31, 2007 Mathematics Department Brown University 151 Thayer Street Providence, RI 02912 Abstract We describe Linear Programming, an important

More information

CONTENTS. CASE STUDY W-3 Cost Minimization Model for Warehouse Distribution Systems and Supply Chain Management 22

CONTENTS. CASE STUDY W-3 Cost Minimization Model for Warehouse Distribution Systems and Supply Chain Management 22 CONTENTS CHAPTER W Linear Programming 1 W-1 Meaning, Assumptions, and Applications of Linear Programming 2 The Meaning and Assumptions of Linear Programming 2 Applications of Linear Programming 3 W-2 Some

More information

Linear Programming: Chapter 11 Game Theory

Linear Programming: Chapter 11 Game Theory Linear Programming: Chapter 11 Game Theory Robert J. Vanderbei October 17, 2007 Operations Research and Financial Engineering Princeton University Princeton, NJ 08544 http://www.princeton.edu/ rvdb Rock-Paper-Scissors

More information

Solving Linear Systems, Continued and The Inverse of a Matrix

Solving Linear Systems, Continued and The Inverse of a Matrix , Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary

More information

CHAPTER 11: BASIC LINEAR PROGRAMMING CONCEPTS

CHAPTER 11: BASIC LINEAR PROGRAMMING CONCEPTS Linear programming is a mathematical technique for finding optimal solutions to problems that can be expressed using linear equations and inequalities. If a real-world problem can be represented accurately

More information

Optimization Modeling for Mining Engineers

Optimization Modeling for Mining Engineers Optimization Modeling for Mining Engineers Alexandra M. Newman Division of Economics and Business Slide 1 Colorado School of Mines Seminar Outline Linear Programming Integer Linear Programming Slide 2

More information

Linear Algebra Notes for Marsden and Tromba Vector Calculus

Linear Algebra Notes for Marsden and Tromba Vector Calculus Linear Algebra Notes for Marsden and Tromba Vector Calculus n-dimensional Euclidean Space and Matrices Definition of n space As was learned in Math b, a point in Euclidean three space can be thought of

More information

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions.

Algebra 2 Chapter 1 Vocabulary. identity - A statement that equates two equivalent expressions. Chapter 1 Vocabulary identity - A statement that equates two equivalent expressions. verbal model- A word equation that represents a real-life problem. algebraic expression - An expression with variables.

More information

Linear Algebra Notes

Linear Algebra Notes Linear Algebra Notes Chapter 19 KERNEL AND IMAGE OF A MATRIX Take an n m matrix a 11 a 12 a 1m a 21 a 22 a 2m a n1 a n2 a nm and think of it as a function A : R m R n The kernel of A is defined as Note

More information

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form

A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form Section 1.3 Matrix Products A linear combination is a sum of scalars times quantities. Such expressions arise quite frequently and have the form (scalar #1)(quantity #1) + (scalar #2)(quantity #2) +...

More information

Question 2: How do you solve a linear programming problem with a graph?

Question 2: How do you solve a linear programming problem with a graph? Question 2: How do you solve a linear programming problem with a graph? Now that we have several linear programming problems, let s look at how we can solve them using the graph of the system of inequalities.

More information

10 Evolutionarily Stable Strategies

10 Evolutionarily Stable Strategies 10 Evolutionarily Stable Strategies There is but a step between the sublime and the ridiculous. Leo Tolstoy In 1973 the biologist John Maynard Smith and the mathematician G. R. Price wrote an article in

More information

8 Primes and Modular Arithmetic

8 Primes and Modular Arithmetic 8 Primes and Modular Arithmetic 8.1 Primes and Factors Over two millennia ago already, people all over the world were considering the properties of numbers. One of the simplest concepts is prime numbers.

More information

1 Introduction to Matrices

1 Introduction to Matrices 1 Introduction to Matrices In this section, important definitions and results from matrix algebra that are useful in regression analysis are introduced. While all statements below regarding the columns

More information

CURVE FITTING LEAST SQUARES APPROXIMATION

CURVE FITTING LEAST SQUARES APPROXIMATION CURVE FITTING LEAST SQUARES APPROXIMATION Data analysis and curve fitting: Imagine that we are studying a physical system involving two quantities: x and y Also suppose that we expect a linear relationship

More information

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year.

Algebra Unpacked Content For the new Common Core standards that will be effective in all North Carolina schools in the 2012-13 school year. This document is designed to help North Carolina educators teach the Common Core (Standard Course of Study). NCDPI staff are continually updating and improving these tools to better serve teachers. Algebra

More information

LINEAR INEQUALITIES. Mathematics is the art of saying many things in many different ways. MAXWELL

LINEAR INEQUALITIES. Mathematics is the art of saying many things in many different ways. MAXWELL Chapter 6 LINEAR INEQUALITIES 6.1 Introduction Mathematics is the art of saying many things in many different ways. MAXWELL In earlier classes, we have studied equations in one variable and two variables

More information