
1 Introduction

Linear Programming
Neil Laws, TT 00

A general optimization problem is of the form: choose x to

 maximise f(x) subject to x ∈ S

where x = (x_1, ..., x_n)^T, f : R^n → R is the objective function, and S ⊆ R^n is the feasible set. We might write this problem:

 max_x f(x) subject to x ∈ S.

For example, f(x) = c^T x for some vector c ∈ R^n, and S = {x : Ax ≤ b} for some m × n matrix A and some vector b ∈ R^m.

If f is linear and S ⊆ R^n can be described by linear equalities/inequalities, then we have a linear programming (LP) problem. If x ∈ S then x is called a feasible solution. If the maximum of f(x) over x ∈ S occurs at x = x*, then x* is an optimal solution and f(x*) is the optimal value.

Questions

In general:
- does a feasible solution x ∈ S exist?
- if so, does an optimal solution exist?
- if so, is it unique?

Example

A company produces drugs A and B using machines M1 and M2.
- 1 ton of drug A requires 1 hour of processing on M1 and 2 hours on M2.
- 1 ton of drug B requires 3 hours of processing on M1 and 1 hour on M2.
- 9 hours of processing on M1 and 8 hours on M2 are available each day.
- Each ton of drug produced (of either type) yields 1 million in profit.

To maximise its profit, how much of each drug should the company make per day?

Solution. Let x_1 = number of tons of A produced, x_2 = number of tons of B produced.

P: maximise x_1 + x_2 (profit in millions)
 subject to x_1 + 3x_2 ≤ 9 (M1 processing)
            2x_1 + x_2 ≤ 8 (M2 processing)
            x_1, x_2 ≥ 0

[Figure: the shaded region is the feasible set for P, bounded by the lines x_1 + 3x_2 = 9, 2x_1 + x_2 = 8 and the axes. The maximum occurs at x = (3, 2), with value 5.]

Diet problem

A pig-farmer can choose between four different varieties of food, providing different quantities of various nutrients.

[Table: the (i, j) entry is the amount of nutrient i (rows A, B, C) per kg of food j = 1, ..., 4, together with the required amount of each nutrient per week and the cost per kg of each food.]
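The production problem P can be checked numerically. A minimal sketch using scipy.optimize.linprog (which minimises by default, so we negate the objective); the data below is exactly the problem above:

```python
import numpy as np
from scipy.optimize import linprog

# P: maximise x1 + x2 subject to x1 + 3x2 <= 9, 2x1 + x2 <= 8, x >= 0.
# linprog minimises, so we pass -c and negate the optimal value afterwards.
c = [-1, -1]
A_ub = [[1, 3], [2, 1]]
b_ub = [9, 8]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
assert res.success
print(res.x)      # optimal point (3, 2)
print(-res.fun)   # optimal value 5
```

The solver confirms the graphical solution: the optimum is at x = (3, 2) with value 5.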

General form of the diet problem

Foods j = 1, ..., n, nutrients i = 1, ..., m.

Data:
 a_ij = amount of nutrient i in one unit of food j
 b_i = required amount of nutrient i
 c_j = cost per unit of food j

Let x_j = number of units of food j in the diet. The diet problem is

 minimise c_1 x_1 + ... + c_n x_n
 subject to a_i1 x_1 + ... + a_in x_n ≥ b_i for i = 1, ..., m
            x_1, ..., x_n ≥ 0.

For the pig-farmer's data this reads

 P_1: minimise 5x_1 + 7x_2 + 7x_3 + 9x_4
 subject to the three nutrient constraints from the table, and x_1, x_2, x_3, x_4 ≥ 0.

In matrix notation the diet problem is

 min_x c^T x subject to Ax ≥ b, x ≥ 0.

Note that our vectors are always column vectors. We write x ≥ 0 to mean x_i ≥ 0 for all i (0 is a vector of zeros). Similarly, Ax ≥ b means (Ax)_i ≥ b_i for all i.

Real applications

"Programming" = planning. There may be many thousands of variables or constraints.
- Production management: realistic versions of P, large manufacturing plants, farms, etc.
- Scheduling, e.g. airline crews: need all flights covered; restrictions on working hours and patterns; minimise costs (wages, accommodation, use of seats by non-working staff). Also shift workers (call centres, factories, etc.).
- Yield management (airline ticket pricing: multihops, business/economy mix, discounts, etc.).
- Network problems: transportation, capacity planning in telecoms networks.
- Game theory: economics, evolution, animal behaviour.

Free variables

Some variables may be positive or negative, e.g. if we omit the constraint x_1 ≥ 0. Such a free variable can be replaced by

 x_1 = u - v where u, v ≥ 0.

Slack variables

In P we had
 maximise x_1 + x_2
 subject to x_1 + 3x_2 ≤ 9
            2x_1 + x_2 ≤ 8
            x_1, x_2 ≥ 0.

We can rewrite this as
 maximise x_1 + x_2
 subject to x_1 + 3x_2 + x_3 = 9
            2x_1 + x_2 + x_4 = 8
            x_1, ..., x_4 ≥ 0,

where
 x_3 = unused time on machine M1,
 x_4 = unused time on machine M2.
x_3 and x_4 are called slack variables.

Two standard forms

With the slack variables included, the problem has the form

 max_x c^T x subject to Ax = b, x ≥ 0.

In fact any LP (with equality constraints, weak inequality constraints, or a mixture) can be converted to the form

 max_x c^T x subject to Ax = b, x ≥ 0

since:
- minimising c^T x is equivalent to maximising -c^T x,
- inequalities can be converted to equalities by adding slack variables,
- free variables can be replaced as above.
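The conversion to the equality form can be sketched numerically: appending an identity block to A turns Ax ≤ b into [A | I](x, z)^T = b, and setting x = 0, z = b gives an immediate feasible point. A small sketch for P, assuming numpy:

```python
import numpy as np

# Standard-form conversion for P: add slack variables x3, x4 so that
# Ax <= b becomes [A | I] (x1, x2, x3, x4)^T = b with all variables >= 0.
A = np.array([[1.0, 3.0], [2.0, 1.0]])
b = np.array([9.0, 8.0])

A_eq = np.hstack([A, np.eye(2)])        # [A | I], a 2 x 4 matrix
x_bfs = np.array([0.0, 0.0, 9.0, 8.0])  # initial BFS: the slacks absorb b

assert np.allclose(A_eq @ x_bfs, b)
print(A_eq @ x_bfs)   # [9. 8.]
```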

Remark

Similarly, any LP can be put into the form

 max_x c^T x subject to Ax ≤ b,

since e.g. Ax = b can be written as the pair Ax ≤ b, -Ax ≤ -b, and x ≥ 0 as -x ≤ 0 (more efficient rewriting may be possible!). So it is OK for us to concentrate on LPs in these forms.

We always assume that the underlying space is R^n. In particular, x_1, ..., x_n need not be integers. If we restrict to x ∈ Z^n we have an integer linear program (ILP). ILPs are in some sense harder than LPs. Note that the optimal value of an LP gives a bound on the optimal value of the associated ILP.

2 Geometry of linear programming

Definition 2.1 A set S ⊆ R^n is called convex if for all u, v ∈ S and all λ ∈ (0, 1), we have λu + (1 - λ)v ∈ S. That is, a set is convex if all points on the line segment joining u and v are in S, for all possible line segments.

[Figure: three example sets with segments joining points u and v; the first two are convex, the third is not convex.]

For now we will consider LPs in the form

 max c^T x subject to Ax = b, x ≥ 0.

Theorem 2.2 The feasible set S = {x ∈ R^n : Ax = b, x ≥ 0} is convex.

Proof. Suppose u, v ∈ S and λ ∈ (0, 1). Let w = λu + (1 - λ)v. Then

 Aw = A[λu + (1 - λ)v] = λAu + (1 - λ)Av = [λ + (1 - λ)]b = b

and w ≥ λ0 + (1 - λ)0 = 0. So w ∈ S.

Extreme points

Definition 2.3 A point x in a convex set S is called an extreme point of S if there are no two distinct points u, v ∈ S and λ ∈ (0, 1) such that x = λu + (1 - λ)v. That is, an extreme point x is not in the interior of any line segment lying in S.

Theorem 2.4 If an LP has an optimal solution, then it has an optimal solution at an extreme point of the feasible set.

Proof. Idea: if the optimum is not extremal, it's on some line within S all of which is optimal; go to the end of that line, and repeat if necessary.

Since there exists an optimal solution, there exists an optimal solution x with a minimal number of non-zero components. Suppose x is not extremal, so that

 x = λu + (1 - λ)v for some u ≠ v ∈ S, λ ∈ (0, 1).

Since x is optimal, c^T u ≤ c^T x and c^T v ≤ c^T x. But also c^T x = λ c^T u + (1 - λ) c^T v, so in fact c^T u = c^T v = c^T x.

Now consider the line defined by x(ε) = x + ε(u - v), ε ∈ R. Then
(a) Ax = Au = Av = b, so Ax(ε) = b for all ε,
(b) c^T x(ε) = c^T x for all ε,
(c) if x_i = 0 then u_i = v_i = 0, which implies x(ε)_i = 0 for all ε,
(d) if x_i > 0 then x(0)_i > 0, and x(ε)_i is continuous in ε.

So we can increase ε from zero, in a positive or a negative direction as appropriate, until at least one extra component of x(ε) becomes zero. This gives an optimal solution with fewer non-zero components than x. So x must be extreme.

Basic solutions

Let a_i be the ith column of A, so that

 Ax = b ⇔ Σ_{i=1}^n x_i a_i = b.

Definition 2.5
(1) A solution x of Ax = b is called a basic solution if the vectors {a_i : x_i ≠ 0} are linearly independent. (That is, the columns of A corresponding to non-zero variables x_i are linearly independent.)
(2) A basic solution satisfying x ≥ 0 is called a basic feasible solution (BFS).

Note: if A has m rows, then at most m columns can be linearly independent. So any basic solution x has at least n - m zero components. More later.

Theorem 2.6 x is an extreme point of

 S = {x : Ax = b, x ≥ 0}

if and only if x is a BFS.

Proof.
(1) Let x be a BFS. Suppose x = λu + (1 - λ)v for u, v ∈ S, λ ∈ (0, 1). To show x is extreme we need to show u = v. Let I = {i : x_i > 0}. Then
(a) if i ∉ I then x_i = 0, which implies u_i = v_i = 0;
(b) Au = Av = b, so A(u - v) = 0,
 ⇒ Σ_{i=1}^n (u_i - v_i) a_i = 0
 ⇒ Σ_{i ∈ I} (u_i - v_i) a_i = 0, since u_i = v_i = 0 for i ∉ I,
which implies u_i = v_i for i ∈ I, since {a_i : i ∈ I} are linearly independent. Hence u = v, so x is an extreme point.

(2) Suppose x is not a BFS, i.e. {a_i : i ∈ I} are linearly dependent. Then there exists u ≠ 0 with u_i = 0 for i ∉ I such that Au = 0. For small enough ε, x ± εu are feasible, and

 x = (1/2)(x + εu) + (1/2)(x - εu),

so x is not extreme.

Corollary 2.7 If there is an optimal solution, then there is an optimal BFS.

Proof. This is immediate from Theorems 2.4 and 2.6.

Discussion

Typically we may assume:
- n > m (more variables than constraints),
- A has rank m (its rows are linearly independent; if not, either we have a contradiction, or redundancy).

Then: x is a basic solution ⇔ there is a set B ⊆ {1, ..., n} of size m such that
- x_i = 0 if i ∉ B,
- {a_i : i ∈ B} are linearly independent.

Proof. Simple exercise. Take I = {i : x_i ≠ 0} and augment it to a larger linearly independent set B if necessary.

Then, to look for basic solutions:
- choose n - m of the n variables to be 0 (x_i = 0 for i ∉ B),
- look at the remaining m columns {a_i : i ∈ B}. Are they linearly independent? If so we have an invertible m × m matrix: solve Σ_{i ∈ B} x_i a_i = b for {x_i : i ∈ B}. Then also

 Ax = Σ_{i=1}^n x_i a_i = Σ_{i ∉ B} 0 a_i + Σ_{i ∈ B} x_i a_i = b

as required. This way we obtain all basic solutions (at most (n choose m) of them).

Bad algorithm: look through all basic solutions; which are feasible? what is the value of the objective function? We can do much better!

Simplex algorithm: will move from one BFS to another, improving the value of the objective function at each step.
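The "bad algorithm" can be sketched directly; for P with slack variables (n = 4, m = 2) it enumerates at most (4 choose 2) = 6 bases. A sketch assuming numpy:

```python
import itertools
import numpy as np

# Brute force: enumerate all basis sets B of size m, solve for the basic
# variables, keep the feasible solutions, and compare objective values.
A = np.array([[1.0, 3.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])   # P with slack variables
b = np.array([9.0, 8.0])
c = np.array([1.0, 1.0, 0.0, 0.0])
n, m = 4, 2

best_val, best_x = -np.inf, None
for B in itertools.combinations(range(n), m):
    A_B = A[:, B]
    if abs(np.linalg.det(A_B)) < 1e-12:
        continue                        # columns not linearly independent
    x = np.zeros(n)
    x[list(B)] = np.linalg.solve(A_B, b)
    if np.all(x >= -1e-9) and c @ x > best_val:   # basic feasible solution
        best_val, best_x = c @ x, x

print(best_x[:2], best_val)             # optimum (3, 2) with value 5
```

This is exponentially slow in general, which is exactly why the simplex algorithm is preferred.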

3 The simplex algorithm (1)

The simplex algorithm works as follows.
1. Start with an initial BFS.
2. Is the current BFS optimal?
3. If YES, stop. If NO, move to a new and improved BFS, then return to 2.

From Corollary 2.7, it is sufficient to consider only BFSs when searching for an optimal solution.

Recall P, expressed without slack variables:
 maximise x_1 + x_2
 subject to x_1 + 3x_2 ≤ 9
            2x_1 + x_2 ≤ 8
            x_1, x_2 ≥ 0

[Figure: the feasible region for P with vertices A, B, C, D, the lines x_1 + 3x_2 = 9 and 2x_1 + x_2 = 8, and basic but infeasible points E, F.]

Rewrite:
 x_1 + 3x_2 + x_3 = 9    (1)
 2x_1 + x_2 + x_4 = 8    (2)
 x_1 + x_2 = f(x)        (3)

Put x_1 = x_2 = 0, giving x_3 = 9, x_4 = 8, f = 0 (we're at the BFS x = (0, 0, 9, 8)).

Note: in writing the three equations as (1)-(3) we are effectively expressing x_3, x_4, f in terms of x_1, x_2.

1. Start at the initial BFS x = (0, 0, 9, 8), vertex A, where f = 0.
2. From (3), increasing x_1 or x_2 will increase f(x). Let's increase x_1.
 (a) From (1): we can increase x_1 to 9, if we decrease x_3 to 0.
 (b) From (2): we can increase x_1 to 4, if we decrease x_4 to 0.
 The stricter restriction on x_1 is (b).
3. So (keeping x_2 = 0),
 (a) increase x_1 to 4 and decrease x_4 to 0: using (2), this maintains equality in (2),
 (b) and, using (1), decreasing x_3 to 5 maintains equality in (1).
With these changes we move to a new and improved BFS x = (4, 0, 5, 0), f(x) = 4, vertex D.

To see if this new BFS is optimal, rewrite (1)-(3) so that
- each non-zero variable appears in exactly one constraint,
- f is in terms of variables which are zero at vertex D.
Alternatively: we want to express x_1, x_3, f in terms of x_2, x_4. How? Add multiples of (2) to the other equations:

 (1) - (1/2)(2):  (5/2)x_2 + x_3 - (1/2)x_4 = 5     (4)
 (1/2)(2):        x_1 + (1/2)x_2 + (1/2)x_4 = 4     (5)
 (3) - (1/2)(2):  (1/2)x_2 - (1/2)x_4 = f - 4       (6)

We are at vertex D, x = (4, 0, 5, 0) and f = 4. Now f = 4 + (1/2)x_2 - (1/2)x_4, so we should increase x_2 to increase f.

2. From (6), increasing x_2 will increase f (increasing x_4 would decrease f).
 (a) From (4): we can increase x_2 to 2, if we decrease x_3 to 0.
 (b) From (5): we can increase x_2 to 8, if we decrease x_1 to 0.
 The stricter restriction on x_2 is (a).
3. So increase x_2 to 2 and decrease x_3 to 0 (x_4 stays at 0, and from (5) x_1 decreases to 3). With these changes we move to the BFS x = (3, 2, 0, 0), vertex C.

Rewrite (4)-(6) so that they correspond to vertex C:

 (2/5)(4):        x_2 + (2/5)x_3 - (1/5)x_4 = 2     (7)
 (5) - (1/5)(4):  x_1 - (1/5)x_3 + (3/5)x_4 = 3     (8)
 (6) - (1/5)(4):  -(1/5)x_3 - (2/5)x_4 = f - 5      (9)

1. We are at vertex C, x = (3, 2, 0, 0) and f = 5.
2. We have deduced f = 5 - (1/5)x_3 - (2/5)x_4 ≤ 5. So x_3 = x_4 = 0 is the best we can do! In that case we can read off x_1 = 3 and x_2 = 2. So x = (3, 2, 0, 0), which has f = 5, is optimal.

Summary

At each stage:
- B = set of basic variables; we express x_i, i ∈ B, and f in terms of x_i, i ∉ B;
- setting x_i = 0 for i ∉ B, we can read off f and x_i, i ∈ B (this gives a BFS!).

At each update:
- look at f in terms of x_i, i ∉ B;
- which x_i, i ∉ B, would we like to increase? if none, STOP!
- otherwise, choose one and increase it as much as possible, i.e. until one variable x_i, i ∈ B, becomes 0.

Summary continued

So one new variable enters B (becomes non-zero, i.e. basic), and another one leaves B (becomes 0, i.e. non-basic). This gives a new BFS. We update our expressions to correspond to the new B.

Simplex algorithm: tableau form

We can write the equations
 x_1 + 3x_2 + x_3 = 9    (1)
 2x_1 + x_2 + x_4 = 8    (2)
 x_1 + x_2 = f - 0       (3)
as a tableau

       x_1  x_2  x_3  x_4
 ρ_1    1    3    1    0  |  9
 ρ_2    2    1    0    1  |  8
 ρ_3    1    1    0    0  |  0

This initial tableau represents the BFS x = (0, 0, 9, 8), at which f = 0. Note the identity matrix in the x_3, x_4 columns (first two rows), and the zeros in the bottom row below it.

The tableau

At a given stage, the tableau has the form

 (ā_ij) |  b̄
  c̄^T   | -f̄

which means Āx = b̄ and f(x) = c̄^T x + f̄. We start from Ā = A, b̄ = b, c̄ = c and f̄ = 0.

To update ("pivot"):

1. Choose a pivot column: choose j such that c̄_j > 0 (this corresponds to a variable x_j = 0 that we want to increase). Here we can take j = 1.
2. Choose a pivot row: among the i's with ā_ij > 0, choose i* to minimise b̄_i / ā_ij (the strictest limit on how much we can increase x_j). Here take i* = 2, since 8/2 < 9/1.
3. Do row operations so that column j gets a 1 in row i* and 0s elsewhere:
 - multiply row i* by 1/ā_{i*j};
 - for i ≠ i*, add -(ā_ij / ā_{i*j}) × (row i*) to row i;
 - add -(c̄_j / ā_{i*j}) × (row i*) to the objective function row.

Updating the tableau is called pivoting. In our example we pivot on j = 1, i* = 2. The updated tableau is

                        x_1  x_2   x_3   x_4
 ρ_1' = ρ_1 - (1/2)ρ_2   0   5/2    1   -1/2 |  5
 ρ_2' = (1/2)ρ_2         1   1/2    0    1/2 |  4
 ρ_3' = ρ_3 - (1/2)ρ_2   0   1/2    0   -1/2 | -4

which means
 (5/2)x_2 + x_3 - (1/2)x_4 = 5
 x_1 + (1/2)x_2 + (1/2)x_4 = 4
 (1/2)x_2 - (1/2)x_4 = f - 4.

The non-basic variables x_2, x_4 are 0, so x = (4, 0, 5, 0). Note the identity matrix inside (ā_ij) telling us this.
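The pivoting rules above can be collected into a minimal tableau implementation. This is only a sketch for maximisation problems with b ≥ 0, with no handling of degeneracy; function and variable names are illustrative:

```python
import numpy as np

# Minimal tableau simplex for: max c^T x s.t. Ax <= b, x >= 0, with b >= 0.
def simplex(A, b, c):
    m, n = A.shape
    # Tableau: [A | I | b] on top, [c | 0 | 0] as the objective row.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[m, :n] = c
    while True:
        j = int(np.argmax(T[m, :-1]))       # pivot column: c_bar_j > 0
        if T[m, j] <= 1e-9:
            break                           # no positive entry: optimal
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-9 else np.inf
                  for i in range(m)]
        if not np.isfinite(min(ratios)):
            raise ValueError("objective function is unbounded")
        i = int(np.argmin(ratios))          # pivot row: strictest limit
        T[i] /= T[i, j]
        for k in range(m + 1):
            if k != i:
                T[k] -= T[k, j] * T[i]
    # Read off the BFS from the identity columns.
    x = np.zeros(n + m)
    for j in range(n + m):
        col = T[:m, j]
        if np.isclose(col.sum(), 1) and np.count_nonzero(np.isclose(col, 0)) == m - 1:
            x[j] = T[int(np.argmax(col)), -1]
    return x[:n], -T[m, -1]

x, f = simplex(np.array([[1., 3.], [2., 1.]]),
               np.array([9., 8.]), np.array([1., 1.]))
print(x, f)    # [3. 2.] 5.0
```

Run on P, it performs exactly the two pivots described above (on j = 1, i = 2, then j = 2, i = 1) and stops at x = (3, 2) with f = 5.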

Next pivot: column 2, row 1, since 5/(5/2) < 4/(1/2).

                          x_1  x_2   x_3   x_4
 ρ_1'' = (2/5)ρ_1'         0    1    2/5  -1/5 |  2
 ρ_2'' = ρ_2' - (1/5)ρ_1'  1    0   -1/5   3/5 |  3
 ρ_3'' = ρ_3' - (1/5)ρ_1'  0    0   -1/5  -2/5 | -5

Now we have only non-positive entries in the bottom row: STOP. x = (3, 2, 0, 0), f(x) = 5, is optimal.

Geometric picture for P

[Figure: the feasible region for P with the lines x_1 + 3x_2 = 9 and 2x_1 + x_2 = 8; vertices A, B, C, D, and basic but infeasible solutions E, F.]

In (x_1, x_2, x_3, x_4) coordinates:
 A = (0, 0, 9, 8)
 B = (0, 3, 0, 5)
 C = (3, 2, 0, 0)
 D = (4, 0, 5, 0)

A, B, C, D are BFSs. E, F are basic solutions but not feasible. The simplex algorithm moves A → D → C (or A → B → C if we choose a different column for the first pivot). Higher-dimensional problems are less trivial!

Comments on simplex tableaux

We always find an m × m identity matrix embedded in (ā_ij), in the columns corresponding to the basic variables x_i, i ∈ B; in the objective function row (bottom row) we find zeros in these columns. Hence f and x_i, i ∈ B, are all written in terms of x_i, i ∉ B. Since we set x_i = 0 for i ∉ B, it's then trivial to read off f and x_i, i ∈ B.

Comments on the simplex algorithm

Choosing a pivot column: we may choose any j such that c̄_j > 0. In general, there is no way to tell which such j results in the fewest pivot steps.

Choosing a pivot row: having chosen pivot column j (i.e. which variable x_j to increase), we look for rows with ā_ij > 0. If ā_ij ≤ 0, constraint i places no restriction on the increase of x_j. If ā_ij ≤ 0 for all i, then x_j can be increased without limit: the objective function is unbounded. Otherwise, the most stringent limit comes from the i that minimises b̄_i / ā_ij.
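The unbounded case can be observed in practice. A sketch using scipy.optimize.linprog on a deliberately unbounded problem (the constraint data here is made up for illustration):

```python
from scipy.optimize import linprog

# Maximise x1 + x2 with the single constraint x1 - x2 <= 1 and x >= 0:
# x2 can grow without limit, so there is no finite optimum.
res = linprog([-1, -1], A_ub=[[1, -1]], b_ub=[1])
print(res.status)   # 3, scipy's status code for "problem is unbounded"
```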

4 The simplex algorithm (2)

Initialisation

Two issues to consider:
- can we always find a BFS from which to start the simplex algorithm?
- does the simplex algorithm always terminate, i.e. find an optimal BFS or prove that the problem is unbounded?

To start the simplex algorithm we need a BFS, with the basic variables x_i, i ∈ B, written in terms of the non-basic variables x_i, i ∉ B. If A already contains I_m as an m × m submatrix, this is usually easy! This always happens if b ≥ 0 and A is created by adding slack variables to turn inequalities into equalities.

Suppose the constraints are Ax ≤ b, x ≥ 0, where b ≥ 0. Then an initial BFS is immediate: introducing slack variables z = (z_1, ..., z_m),

 Ax + z = b, x, z ≥ 0,

so the initial tableau is

   x_1 ... x_n   z_1 ... z_m
 [      A       |    I_m     |  b ]
 [     c^T      |    0^T     |  0 ]

and an initial BFS is (x, z) = (0, b).

Example. Add slack variables:
 max 6x_1 + x_2 + x_3
 s.t. 9x_1 + x_2 + x_3 ≤ 8
      4x_1 + x_2 + 4x_3 ≤ 4
      x_1 + 3x_2 + 4x_3 ≤ 96
      x_1, x_2, x_3 ≥ 0
becomes
 9x_1 + x_2 + x_3 + w_1 = 8
 4x_1 + x_2 + 4x_3 + w_2 = 4
 x_1 + 3x_2 + 4x_3 + w_3 = 96
with an initial BFS (x, w) = (0, b).

Solve by simplex: this initial tableau is already in the form we need.

[Simplex tableaux omitted: pivoting leads to the optimal solution.]

What if A doesn't have this form? In general we can write the constraints as

 Ax = b, x ≥ 0   (4.1)

where b ≥ 0 (if necessary, multiply rows by -1 to get b ≥ 0). If there is no obvious initial BFS and we need to find one, we can introduce artificial variables w_1, ..., w_m and solve the LP problem

 min_{x,w} Σ_{i=1}^m w_i subject to Ax + w = b, x, w ≥ 0.   (4.2)

Two cases arise:
(1) if (4.1) has a feasible solution, then (4.2) has optimal value 0, attained with w = 0;
(2) if (4.1) has no feasible solution, then the optimal value of (4.2) is > 0.

We can apply the simplex algorithm to (4.2) to determine whether it's case (1) or (2). In case (2), give up. In case (1), the optimal BFS for (4.2), which has all w_i = 0, gives a BFS for (4.1)! This leads us to the two-phase simplex algorithm.

Two-phase simplex algorithm

Example:
 maximise 3x_1 + x_3
 subject to x_1 + x_2 + x_3 = 30
            x_1 - x_2 + x_3 = 8
            x ≥ 0.
With artificial variables:
 minimise w_1 + w_2
 subject to x_1 + x_2 + x_3 + w_1 = 30
            x_1 - x_2 + x_3 + w_2 = 8
            x, w ≥ 0.
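Phase I is itself just an LP, so any LP solver can run it. A sketch with made-up data (not the lecture's example), assuming numpy and scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Phase I: minimise sum(w) subject to Ax + w = b, x, w >= 0.
# The optimal value is 0 exactly when Ax = b, x >= 0 is feasible.
A = np.array([[1.0, 1.0, 1.0],
              [1.0, -1.0, 0.0]])
b = np.array([4.0, 1.0])          # b >= 0, as required
m, n = A.shape

c = np.concatenate([np.zeros(n), np.ones(m)])   # cost only on artificials
A_eq = np.hstack([A, np.eye(m)])
res = linprog(c, A_eq=A_eq, b_eq=b)

assert res.success
print(abs(res.fun) < 1e-9)        # True: the original system is feasible
x = res.x[:n]
print(np.allclose(A @ x, b))      # True: x is feasible for Ax = b, x >= 0
```

In case (1), the x-part of the Phase I optimum is then handed to Phase II as the starting BFS.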

Note: to minimise w_1 + w_2, we can maximise -w_1 - w_2. So start from the simple tableau in the variables x_1, x_2, x_3, w_1, w_2, with an identity matrix in the w_1, w_2 columns.

The objective function row should be expressed in terms of non-basic variables (the entries under the identity matrix should be 0). So start by adding row 1 + row 2 to the objective row. Now start with the simplex pivots.

[Phase I tableaux omitted: pivot until the artificial variables leave the basis.]

So we have found a point with -w_1 - w_2 = 0, i.e. w = 0. Phase I is finished. A BFS of the original problem is x = (0, 7, 6).

Now delete the w columns and replace the objective row by the original objective function 3x_1 + x_3. Again we want zeros below the identity matrix, so subtract the appropriate multiples of the constraint rows from the objective row. Now do simplex (Phase II).

[Phase II tableau omitted.] Pivot once more: done! Maximum at x_1 = 4, x_2 = 3, x_3 = 0, f = 7.

Shadow prices

Recall P:
 max x_1 + x_2
 s.t. x_1 + 3x_2 ≤ 9
      2x_1 + x_2 ≤ 8
      x_1, x_2 ≥ 0.

We had initial tableau

       x_1  x_2  x_3  x_4
 ρ_1    1    3    1    0  |  9
 ρ_2    2    1    0    1  |  8
 ρ_3    1    1    0    0  |  0

and final tableau

        x_1  x_2   x_3   x_4
 ρ_1''   0    1    2/5  -1/5 |  2
 ρ_2''   1    0   -1/5   3/5 |  3
 ρ_3''   0    0   -1/5  -2/5 | -5

Directly from the tableaux (look at columns 3 and 4):
 ρ_1'' = (2/5)ρ_1 - (1/5)ρ_2
 ρ_2'' = -(1/5)ρ_1 + (3/5)ρ_2
 ρ_3'' = ρ_3 - (1/5)ρ_1 - (2/5)ρ_2.

From the mechanics of the simplex algorithm: ρ_1'', ρ_2'' are created by taking linear combinations of ρ_1, ρ_2, while ρ_3'' is ρ_3 minus (a linear combination of ρ_1, ρ_2).

Suppose we change the constraints to
 x_1 + 3x_2 ≤ 9 + ε_1
 2x_1 + x_2 ≤ 8 + ε_2.
Then the final tableau will change to

  x_1  x_2   x_3   x_4
   0    1    2/5  -1/5 |  2 + (2/5)ε_1 - (1/5)ε_2
   1    0   -1/5   3/5 |  3 - (1/5)ε_1 + (3/5)ε_2
   0    0   -1/5  -2/5 | -5 - (1/5)ε_1 - (2/5)ε_2

This is still a valid tableau as long as
 2 + (2/5)ε_1 - (1/5)ε_2 ≥ 0
 3 - (1/5)ε_1 + (3/5)ε_2 ≥ 0.
In that case we still get an optimal BFS from it, with optimal value 5 + (1/5)ε_1 + (2/5)ε_2. The objective function increases by 1/5 per extra hour on M1 and by 2/5 per extra hour on M2 (if the changes are small enough). These shadow prices can always be read off from the initial and final tableaux.

Digression: termination of the simplex algorithm

Typical situation: each BFS has exactly m non-zero and n - m zero variables. Then each pivoting operation (moving from one BFS to another) strictly increases the new variable entering the basis, and strictly increases the objective function. Since there are only finitely many BFSs, we have the following theorem.

Theorem 4.1 If each BFS has exactly m non-zero variables, then the simplex algorithm terminates (i.e. finds an optimal solution or proves that the objective function is unbounded).

What if some BFSs have extra zero variables? Then the problem is degenerate. Almost always this is no problem. In rare cases, some choices of pivot columns/rows may cause the algorithm to cycle (repeat itself). There are various ways to avoid this (e.g. always choosing the leftmost available column, and then the highest row, can be proved to work). See e.g. Chvátal's book for a nice discussion.

5 Duality: introduction

Recall P:
 maximise x_1 + x_2
 subject to x_1 + 3x_2 ≤ 9    (5.1)
            2x_1 + x_2 ≤ 8    (5.2)
            x_1, x_2 ≥ 0.

Obvious bounds on f(x) = x_1 + x_2:
 x_1 + x_2 ≤ x_1 + 3x_2 ≤ 9 from (5.1),
 x_1 + x_2 ≤ 2x_1 + x_2 ≤ 8 from (5.2).
By combining the constraints we can improve the bound, e.g. (1/3)[(5.1) + (5.2)]:
 x_1 + x_2 ≤ x_1 + (4/3)x_2 ≤ 17/3.

More systematically? For y_1, y_2 ≥ 0, consider y_1 (5.1) + y_2 (5.2). We obtain
 (y_1 + 2y_2)x_1 + (3y_1 + y_2)x_2 ≤ 9y_1 + 8y_2.
Since we want an upper bound for x_1 + x_2, we need the coefficients to be ≥ 1:
 y_1 + 2y_2 ≥ 1,  3y_1 + y_2 ≥ 1.
How do we get the best bound by this method?

D: minimise 9y_1 + 8y_2
 subject to y_1 + 2y_2 ≥ 1
            3y_1 + y_2 ≥ 1
            y_1, y_2 ≥ 0.

P = primal problem, D = dual of P.

Duality: general

In general, given a primal problem
 P: maximise c^T x subject to Ax ≤ b, x ≥ 0,
the dual of P is defined by
 D: minimise b^T y subject to A^T y ≥ c, y ≥ 0.

Exercise: the dual of the dual is the primal.

Weak duality

Theorem 5.1 (Weak duality theorem) If x is feasible for P, and y is feasible for D, then c^T x ≤ b^T y.

Proof. Since x ≥ 0 and A^T y ≥ c:
 c^T x ≤ (A^T y)^T x = y^T Ax.
Since y ≥ 0 and Ax ≤ b:
 y^T Ax ≤ y^T b = b^T y.
Hence c^T x ≤ y^T Ax ≤ b^T y.
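The primal-dual pair P, D above can be checked numerically. A sketch using scipy.optimize.linprog (the dual's ≥-constraints are passed as negated ≤-constraints):

```python
from scipy.optimize import linprog

# Primal: max x1 + x2  s.t. x1 + 3x2 <= 9, 2x1 + x2 <= 8, x >= 0.
# Dual:   min 9y1 + 8y2 s.t. y1 + 2y2 >= 1, 3y1 + y2 >= 1, y >= 0.
primal = linprog([-1, -1], A_ub=[[1, 3], [2, 1]], b_ub=[9, 8])
dual = linprog([9, 8], A_ub=[[-1, -2], [-3, -1]], b_ub=[-1, -1])

print(-primal.fun, dual.fun)   # both 5: the optimal values coincide
print(dual.x)                  # y = (1/5, 2/5)
```

The equal optimal values illustrate strong duality; weak duality says that any feasible y would already give an upper bound on the primal value.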

Comments

Suppose y is a feasible solution to D. Then any feasible solution x to P has value bounded above by b^T y. So: D feasible ⇒ P bounded. Similarly: P feasible ⇒ D bounded.

Corollary 5.2 If x* is feasible for P, y* is feasible for D, and c^T x* = b^T y*, then x* is optimal for P and y* is optimal for D.

Proof. For all x feasible for P,
 c^T x ≤ b^T y* = c^T x*
by Theorem 5.1, and so x* is optimal for P. Similarly, for all y feasible for D,
 b^T y ≥ c^T x* = b^T y*,
and so y* is optimal for D.

Strong duality

As an example of applying this result, look at x = (3, 2) and y = (1/5, 2/5) for P and D above. Both are feasible, and both have value 5. So both are optimal. Does this nice situation always occur?

Theorem 5.3 (Strong duality theorem) If P has an optimal solution x*, then D has an optimal solution y* such that c^T x* = b^T y*.

Proof. Write the constraints of P as Ax + z = b, x, z ≥ 0. Consider the bottom row in the final tableau of the simplex algorithm applied to P.

The bottom row of the final tableau has the form

  x-columns:  c̄_1 ... c̄_n  |  z-columns:  -y*_1 ... -y*_m  |  -f*

Here f* is the optimal value of c^T x,
 c̄_j ≤ 0 for all j    (5.3)
 y*_i ≥ 0 for all i    (5.4)
and the bottom row tells us that c^T x - f* = c̄^T x - y*^T z. Thus
 c^T x = f* + c̄^T x - y*^T z
       = f* + c̄^T x - y*^T (b - Ax)
       = f* - b^T y* + (c̄^T + y*^T A)x.
This is true for all x, so
 f* = b^T y*    (5.5)
 c^T = c̄^T + y*^T A, i.e. c = A^T y* + c̄.
From (5.3), c̄ ≤ 0, hence A^T y* ≥ c.    (5.6)

Comments

So (5.4) and (5.6) show that y* is feasible for D. And (5.5) shows that the objective function of D at y* is b^T y* = f* = optimal value of P. So from the weak duality theorem (Theorem 5.1), y* is optimal for D.

Note: the coefficients y* from the bottom row, in the columns corresponding to the slack variables, give us the optimal solution to D. Comparing with the shadow prices discussion: these optimal dual variables are the shadow prices!

Example

It is possible that neither P nor D has a feasible solution: consider the problem
 maximise 2x_1 - x_2
 subject to x_1 - x_2 ≤ 1
            -x_1 + x_2 ≤ -2
            x_1, x_2 ≥ 0.
(Adding the two constraints gives 0 ≤ -1, so P is infeasible; the dual constraints force y_1 - y_2 ≥ 2 and y_1 - y_2 ≤ 1, so D is infeasible too.)

Example

Consider the problem
 maximise x_1 + 4x_2 + x_3 + x_4
 subject to x_1 + 3x_2 + x_4 ≤ 4
            x_1 + x_3 ≤ 3
            x_2 + 4x_3 + x_4 ≤ 3
            x_1, ..., x_4 ≥ 0.

[Final simplex tableau omitted; in the slack-variable columns the bottom row has entries -5/4, -1/4, -1/4.]

(1) By the proof of Theorem 5.3, y* = (5/4, 1/4, 1/4) is optimal for the dual.
(2) Suppose the RHSs of the original constraints become 4 + ε_1, 3 + ε_2, 3 + ε_3. Then the optimal value becomes f* + (5/4)ε_1 + (1/4)ε_2 + (1/4)ε_3. If the original RHSs of 4, 3, 3 correspond to the amounts of raw material i available, then the most you'd be prepared to pay per additional unit of raw material i is y*_i (with y* as in (1)).

(3) Suppose raw material 1 is available at a price below 5/4 per unit. How much should you buy? With ε_1 > 0 and ε_2 = ε_3 = 0, the entries in the final column of the tableau change linearly in ε_1. For the tableau still to represent a BFS, the three entries in the final column must remain ≥ 0, which gives ε_1 ≤ 5. So we should buy 5 additional units of raw material 1.

(4) The optimal solution x* (which has x*_4 = 0) is unique, as the entries in the bottom row corresponding to the non-basic variables (the -5/4, -1/4, -1/4, together with the entry for x_4) are all < 0.

(5) If, say, the entry -5/4 were instead zero, we could pivot in that column (observe that there would be somewhere to pivot) to get a second optimal BFS x**. Then λx* + (1 - λ)x** would be optimal for all λ ∈ [0, 1].

6 Duality: complementary slackness

Recall:
 P: maximise c^T x subject to Ax ≤ b, x ≥ 0,
 D: minimise b^T y subject to A^T y ≥ c, y ≥ 0.

The optimal solutions to P and D satisfy complementary slackness conditions, which we can use to solve one problem when we know a solution of the other.

Theorem 6.1 (Complementary slackness theorem) Suppose x is feasible for P and y is feasible for D. Then x and y are optimal (for P and D respectively) if and only if
 (A^T y - c)_j x_j = 0 for all j    (6.1)
and
 (Ax - b)_i y_i = 0 for all i.      (6.2)

Conditions (6.1) and (6.2) are called the complementary slackness conditions for P and D.

Interpretation:
(1) If dual constraint j is slack, then primal variable j is zero. If primal variable j is > 0, then dual constraint j is tight.
(2) The same with primal ↔ dual.

Proof. As in the proof of the weak duality theorem,
 c^T x ≤ (A^T y)^T x = y^T Ax ≤ y^T b = b^T y.   (6.3)
From the strong duality theorem,
 x, y both optimal
 ⇔ c^T x = b^T y
 ⇔ c^T x = y^T Ax = b^T y   (from (6.3))
 ⇔ (y^T A - c^T)x = 0 and y^T (Ax - b) = 0
 ⇔ Σ_{j=1}^n (A^T y - c)_j x_j = 0 and Σ_{i=1}^m (Ax - b)_i y_i = 0.
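Conditions (6.1) and (6.2) can be verified numerically for the earlier problem P at x = (3, 2) with dual solution y = (1/5, 2/5). A sketch assuming numpy:

```python
import numpy as np

# Complementary slackness check for P at its optimum (a numerical
# verification, not a proof).
A = np.array([[1.0, 3.0], [2.0, 1.0]])
b = np.array([9.0, 8.0])
c = np.array([1.0, 1.0])
x = np.array([3.0, 2.0])    # optimal primal solution
y = np.array([0.2, 0.4])    # optimal dual solution (1/5, 2/5)

print(np.allclose((A.T @ y - c) * x, 0))   # True: (A^T y - c)_j x_j = 0
print(np.allclose((A @ x - b) * y, 0))     # True: (Ax - b)_i y_i = 0
```

Here both primal constraints are tight and both dual constraints are tight, which is consistent with x > 0 and y > 0 componentwise.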

Comments

But A^T y ≥ c and x ≥ 0, so Σ_{j=1}^n (A^T y - c)_j x_j is a sum of non-negative terms. Also, Ax ≤ b and y ≥ 0, so Σ_{i=1}^m (Ax - b)_i y_i is a sum of non-positive terms. Hence
 Σ_j (A^T y - c)_j x_j = 0 and Σ_i (Ax - b)_i y_i = 0
is equivalent to (6.1) and (6.2), i.e. to every term in each sum being zero.

What's the use of complementary slackness? Among other things, given an optimal solution of P (or D), it makes finding an optimal solution of D (or P) very easy, because we know which variables can be non-zero and which constraints must be tight. Sometimes one of P and D is much easier to solve than the other: e.g. a problem with 2 variables and 5 constraints can be solved graphically, but one with 5 variables and 2 constraints is not so easy.

Example

Consider P and D with

 A = ( 1  4  0 )    b = ( 1 )      c = ( 4 )
     ( 3 -1  1 ),       ( 1/2 ),       ( 1 )
                                       ( 3 ).

Is x = (0, 1/4, 3/4) optimal? It is feasible. If it is optimal, then by complementary slackness
 x_2 > 0 ⇒ (A^T y)_2 = c_2, that is 4y_1 - y_2 = 1
 x_3 > 0 ⇒ (A^T y)_3 = c_3, that is 0y_1 + y_2 = 3
which gives y = (y_1, y_2) = (1, 3). The remaining dual constraint y_1 + 3y_2 ≥ 4 is also satisfied, so y = (1, 3) is feasible for D.

So x = (0, 1/4, 3/4) and y = (1, 3) are feasible and satisfy complementary slackness, and therefore they are optimal by Theorem 6.1. Alternatively, we could note that this x and y are feasible and c^T x = 5/2 = b^T y, so they are optimal by Corollary 5.2.

Example continued

If we don't know the solution to P, we can first solve D graphically.

[Figure: the dual feasible set in the (y_1, y_2)-plane, bounded by the lines 4y_1 - y_2 = 1, y_2 = 3 and y_1 + 3y_2 = 4; the optimum of D is at y = (1, 3).]

The optimal solution is at y = (1, 3), and we can use this to solve P:
 y_1 > 0 ⇒ (Ax)_1 = b_1, that is x_1 + 4x_2 = 1
 y_2 > 0 ⇒ (Ax)_2 = b_2, that is 3x_1 - x_2 + x_3 = 1/2
 y_1 + 3y_2 > 4, that is (A^T y)_1 > c_1 ⇒ x_1 = 0
and so x = (0, 1/4, 3/4).

Example

Consider a primal problem in four variables x_1, ..., x_4 with two ≤-constraints, so that the dual D has two variables y_1, y_2 and four constraints (1)-(4), and can be solved graphically.

[Figure: the dual feasible set in the (y_1, y_2)-plane, bounded by constraints (1)-(4); the dual optimum is where (1) and (3) intersect.]

Example continued

Since the second and fourth dual constraints are slack at the optimum, the optimal x has x_2 = x_4 = 0. Also, since y_1, y_2 > 0 at the optimum, both primal constraints are tight; solving the two resulting equations in the remaining variables gives x_1 = 0, x_3 = 5. Hence the optimal x is (0, 0, 5, 0).

Suppose the second right-hand side of the primal is changed. The new dual optimum is where (3) and (4) intersect, at which point the first two dual constraints are slack, so x_1 = x_2 = 0. Also, since y_1, y_2 > 0 at the new optimum, both primal constraints are again tight; solving them for x_3 and x_4 gives the new optimal x.

7 Two-player zero-sum games (1)

We consider games that are zero-sum in the sense that one player wins what the other loses. Each player has a choice of actions (the choices may be different for each player). Players move simultaneously.

Payoff matrix

There is a payoff matrix A = (a_ij): if Player I plays i and Player II plays j, then Player I wins a_ij from Player II. The game is defined by the payoff matrix. Note that our convention is that I wins a_ij from II, so a_ij > 0 is good for Player I = the row player.

[Example payoff matrix, with rows indexed by Player I's actions and columns by Player II's actions; entry a_23 = 4.]

What's the worst that can happen to Player I if Player I chooses row 1? row 2? row 3? (We look at the smallest entry in the appropriate row.) What's the worst that can happen to Player II if Player II chooses a particular column? (We look at the largest entry in that column.)

The matrix above has a special property. Entry a_23 = 4 is both
- the smallest entry in row 2,
- the largest entry in column 3:
(2, 3) is a saddle point of A. Thus:
- Player I can guarantee to win at least 4 by choosing row 2.
- Player II can guarantee to lose at most 4 by choosing column 3.
The above is still true if either player announces their strategy in advance. Hence the game is solved, and it has value 4.

Mixed strategies

Consider the game of Scissors-Paper-Stone: Scissors beats Paper, Paper beats Stone, Stone beats Scissors. The payoff matrix (for the row player) is

           Scissors  Paper  Stone
 Scissors      0       1     -1
 Paper        -1       0      1
 Stone         1      -1      0

There is no saddle point. If either player announces a fixed action in advance (e.g. "play Paper"), the other player can take advantage. So we consider a mixed strategy: each action is played with a certain probability. (This is in contrast with a pure strategy, which is to select a single action with probability 1.)

Suppose Player I plays i with probability p_i, i = 1, ..., m. Then Player I's expected payoff if Player II plays j is

 Σ_{i=1}^m a_ij p_i.

Suppose Player I wishes to maximise (over p) his minimal expected payoff

 min_j Σ_{i=1}^m a_ij p_i.

Similarly, Player II plays j with probability q_j, j = 1, ..., n, and looks to minimise (over q)

 max_i Σ_{j=1}^n a_ij q_j.

This aim for Player II may seem like only one of several sensible aims (and similarly for the earlier aim for Player I). Soon we will see that they lead to a solution in a very appropriate way, corresponding to the solution for the case of the saddle point above.

LP formulation

Consider Player II's problem "minimise the maximal expected payout":

 min_q max_i Σ_{j=1}^n a_ij q_j subject to Σ_{j=1}^n q_j = 1, q ≥ 0.

This is not exactly an LP: look at the objective function.

Equivalent formulation

An equivalent formulation is:

 min_{q,v} v subject to Σ_{j=1}^n a_ij q_j ≤ v for i = 1, ..., m,
                        Σ_{j=1}^n q_j = 1, q ≥ 0,

since v, on being minimised, will decrease until it takes the value of max_i Σ_{j=1}^n a_ij q_j.

This is an LP, but not exactly in a useful form for our methods. We will transform it! First add a constant k to each a_ij so that a_ij > 0 for all i, j. This doesn't change the nature of the game, but guarantees v > 0. So WLOG assume a_ij > 0 for all i, j. Now change variables to x_j = q_j / v. The problem becomes:

 min_{x,v} v subject to Σ_{j=1}^n a_ij x_j ≤ 1 for i = 1, ..., m,
                        Σ_{j=1}^n x_j = 1/v, x ≥ 0.

Since minimising v is the same as maximising 1/v = Σ_j x_j, this transformed problem for Player II is equivalent to

 P: max_x Σ_{j=1}^n x_j subject to Ax ≤ 1, x ≥ 0,

which is now in our standard form. (1 denotes a vector of 1s.)

Doing the same transformations for Player I's problem

 max_p { min_j Σ_{i=1}^m a_ij p_i } subject to Σ_{i=1}^m p_i = 1, p ≥ 0

turns it into

 D: min_y Σ_{i=1}^m y_i subject to A^T y ≥ 1, y ≥ 0.

(Check: on problem sheet.) P and D are dual, and hence have the same optimal value.

Conclusion

Let x*, y* be optimal for P, D. Then:
- Player I can guarantee an expected gain of at least v = 1 / Σ_{i=1}^m y*_i, by following strategy p = v y*.
- Player II can guarantee an expected loss of at most v = 1 / Σ_{j=1}^n x*_j, by following strategy q = v x*.

The above is still true if a player announces his strategy in advance. So the game is solved, as in the saddle point case (that was just a special case where the strategies were pure). v is the value of the game (the amount that Player I should fairly pay to Player II for the chance to play the game).
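The whole transformation can be carried out numerically for Scissors-Paper-Stone. A sketch using scipy, with the shift k = 2 (any k making all entries positive works):

```python
import numpy as np
from scipy.optimize import linprog

# Payoff matrix for Scissors-Paper-Stone (payoff to Player I).
A = np.array([[0, 1, -1],
              [-1, 0, 1],
              [1, -1, 0]], dtype=float)
k = 2.0
A_tilde = A + k                 # all entries now positive

# Player II's problem P: max sum(x) s.t. A_tilde x <= 1, x >= 0
# (linprog minimises, so we pass -1s as the objective).
res = linprog(-np.ones(3), A_ub=A_tilde, b_ub=np.ones(3))
v = 1.0 / res.x.sum()           # value of the shifted game
q = v * res.x                   # Player II's optimal mixed strategy

print(abs(v - k) < 1e-8)        # True: the original game has value 0
print(np.allclose(q, 1/3))      # True: the uniform strategy is optimal
```

As expected by symmetry, the game is fair (value 0 after undoing the shift) and the optimal mixed strategy is uniform.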

8 Two-player zero-sum games (2)

Some games are easy to solve without the LP formulation, e.g.

 A = ( -2   2 )
     (  4  -3 ).

Suppose Player I chooses row 1 with probability p and row 2 with probability 1 - p. Then he should maximise

 min( -2p + 4(1 - p), 2p - 3(1 - p) ) = min( 4 - 6p, 5p - 3 ).

[Figure: the lines 4 - 6p and 5p - 3 plotted against p; the lower envelope is maximised where they cross.]

So the min is maximised when 4 - 6p = 5p - 3, which occurs when p = 7/11. And then v = 35/11 - 3 = 2/11. (We could go on to find Player II's optimal strategy too.)

A useful trick: dominated actions

Consider [a 3 × 3 payoff matrix]. Player II should never play column 3, since one of the other columns is always at least as good for Player II (it dominates column 3). So we reduce to the corresponding 3 × 2 matrix. Now Player I will never play row 3, since another row is always at least as good, so the resulting 2 × 2 matrix, which is the matrix A above, has the same value (and optimal strategies) as the original 3 × 3 game.

Final example

Consider a 3 × 3 game with payoff matrix A; add 1 to each entry to obtain Ã, whose entries are all positive.

Solve the LP for Player II's optimal strategy:

 max x_1 + x_2 + x_3 subject to Ãx ≤ 1, x ≥ 0.

The shifted game Ã has value > 0 (e.g. the strategy (1/3, 1/3, 1/3) for Player I guarantees this).

[Initial and final simplex tableaux omitted.]

At the optimum, x_1 + x_2 + x_3 = 15/16, so the value of Ã is v = 1/(x_1 + x_2 + x_3) = 16/15, and Player II's optimal strategy is

 q = vx = (1/3, 4/15, 2/5).

The dual problem for Player I's strategy has solution y (read off from the bottom row of the final tableau), giving Player I's optimal strategy

 p = vy = (4/15, 8/15, 3/15).

Ã has value 16/15, so the original game A has value 16/15 - 1 = 1/15.
