International Doctoral School Algorithmic Decision Theory: MCDA and MOO


International Doctoral School Algorithmic Decision Theory: MCDA and MOO. Lecture 2: Multiobjective Linear Programming. Department of Engineering Science, The University of Auckland, New Zealand; Laboratoire d'Informatique de Nantes Atlantique, CNRS, Université de Nantes, France. MCDA and MOO, Han-sur-Lesse, September
Overview
1. Multiobjective Linear Programming: Formulation and Example; Solving MOLPs by Weighted Sums
2. The Parametric Simplex Algorithm: Biobjective Linear Programmes: Example
3. A Multiobjective Simplex Algorithm: Multiobjective Simplex: Examples
Formulation and Example
Variables x ∈ R^n; objective function Cx, where C ∈ R^(p×n); constraints Ax = b, where A ∈ R^(m×n) and b ∈ R^m:
min {Cx : Ax = b, x ≥ 0} (1)
X = {x ∈ R^n : Ax = b, x ≥ 0} is the feasible set in decision space; Y = {Cx : x ∈ X} is the feasible set in objective space.
Example:
min (3x1 + x2, −x1 − 2x2)
subject to x2 ≤ 3, 3x1 − x2 ≤ 6, x ≥ 0,
with C = [3 1; −1 −2], A = [0 1; 3 −1], b = (3, 6).
[Figure: Feasible set X in decision space.]
[Figure: Feasible set Y in objective space.]
Definition. Let x̂ ∈ X be a feasible solution of the MOLP (1) and let ŷ = Cx̂.
- x̂ is called weakly efficient if there is no x ∈ X such that Cx < Cx̂ (componentwise); ŷ = Cx̂ is called weakly nondominated.
- x̂ is called efficient if there is no x ∈ X such that Cx ≤ Cx̂ and Cx ≠ Cx̂; ŷ = Cx̂ is called nondominated.
- x̂ is called properly efficient if it is efficient and there exists a real number M > 0 such that for all i and all x ∈ X with c_i^T x < c_i^T x̂ there is an index j with c_j^T x > c_j^T x̂ and (c_i^T x̂ − c_i^T x) / (c_j^T x − c_j^T x̂) ≤ M.
[Figure: Nondominated set in objective space.]
[Figure: Efficient set in decision space.]
Solving MOLPs by Weighted Sums
Let λ1, …, λp ≥ 0, not all zero, and consider LP(λ):
min Σ_{k=1}^p λk c_k^T x = min λ^T Cx subject to Ax = b, x ≥ 0.
(Why not λ = 0 or λ ≤ 0?) LP(λ) is a linear programme that can be solved by the Simplex method. If λ > 0 then an optimal solution of LP(λ) is properly efficient; if λ ≥ 0 with λ ≠ 0, then an optimal solution of LP(λ) is weakly efficient. The converse statements also hold, because Y is convex.
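The weighted-sum scalarisation can be tried out directly with an off-the-shelf LP solver. The sketch below, assuming SciPy's `linprog` is available, solves LP(λ) for the example MOLP above for a few weight vectors; each run returns an efficient extreme point.

```python
# Weighted-sum scalarisation LP(lambda) for the example MOLP
# min (3x1 + x2, -x1 - 2x2)  s.t.  x2 <= 3, 3x1 - x2 <= 6, x >= 0.
import numpy as np
from scipy.optimize import linprog

C = np.array([[3.0, 1.0], [-1.0, -2.0]])     # objective matrix
A_ub = np.array([[0.0, 1.0], [3.0, -1.0]])   # inequality constraints
b_ub = np.array([3.0, 6.0])

for lam in ([0.9, 0.1], [0.5, 0.5], [0.1, 0.9]):
    # LP(lambda): minimise lambda^T C x over the feasible set
    res = linprog(np.array(lam) @ C, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    print(lam, "->", res.x, "Cx =", C @ res.x)
```

Each weight vector λ > 0 yields a (properly) efficient solution; the three runs recover the efficient vertices (0, 0), (0, 3) and (3, 3) of the example.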
[Figure: Illustration in objective space Y = CX. Level lines {(λ^1)^T Cx = α1}, {(λ^2)^T Cx = α2}, {(λ^3)^T Cx = α3} with λ^1 = (2, 1), λ^2 = (1, 3), λ^3 = (1, 1) touch Y at the points Cx^1, Cx^2, Cx^3, Cx^4.]
The points y ∈ R^p satisfying λ^T y = α define a straight line (hyperplane). Since y = Cx and λ^T Cx is minimised, we push the line towards the origin (left and down). When the line only touches Y, nondominated points are found. Nondominated points Y_N are on the boundary of Y. Y is a convex polyhedron and has a finite number of facets; Y_N consists of finitely many facets of Y. The normal of such a facet can serve as weight vector λ.
Question: Can all efficient solutions be found using weighted sums? If x̂ ∈ X is efficient, does there exist λ > 0 such that x̂ is an optimal solution of min{λ^T Cx : Ax = b, x ≥ 0}?
Lemma. A feasible solution x0 ∈ X is efficient if and only if the linear programme
max e^T z subject to Ax = b, Cx + Iz = Cx0, x, z ≥ 0, (2)
where e^T = (1, …, 1) ∈ R^p and I is the p × p identity matrix, has an optimal solution (x̂, ẑ) with ẑ = 0.
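The Lemma gives a practical efficiency test: feed LP (2) to any LP solver and check whether the optimal value is zero. A sketch, assuming SciPy's `linprog` and data in standard form (the function name `is_efficient` is ours):

```python
import numpy as np
from scipy.optimize import linprog

def is_efficient(A, b, C, x0, tol=1e-9):
    """Test efficiency of a feasible x0 via the LP
    max e^T z  s.t.  Ax = b, Cx + Iz = C x0, x >= 0, z >= 0."""
    m, n = A.shape
    p = C.shape[0]
    cost = np.concatenate([np.zeros(n), -np.ones(p)])   # linprog minimises
    A_eq = np.block([[A, np.zeros((m, p))], [C, np.eye(p)]])
    b_eq = np.concatenate([b, C @ x0])
    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.status == 0 and -res.fun <= tol          # optimal value e^T z = 0?

# Example MOLP in standard form (slack variables x3, x4)
A = np.array([[0.0, 1.0, 1.0, 0.0], [3.0, -1.0, 0.0, 1.0]])
b = np.array([3.0, 6.0])
C = np.array([[3.0, 1.0, 0.0, 0.0], [-1.0, -2.0, 0.0, 0.0]])
print(is_efficient(A, b, C, np.array([0.0, 3.0, 0.0, 9.0])))  # efficient vertex
print(is_efficient(A, b, C, np.array([2.0, 0.0, 3.0, 0.0])))  # dominated vertex
```

The second point, (2, 0) with slacks, is feasible but dominated by (0, 3) in objective space, so the auxiliary LP has a strictly positive value and the test reports it as not efficient.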
Proof. The LP (2) is always feasible with x = x0, z = 0 (and value 0). Let (x̂, ẑ) be an optimal solution. If ẑ = 0 then ẑ = Cx0 − Cx̂ = 0, so Cx0 = Cx̂, and there is no x ∈ X such that Cx ≤ Cx0 with Cx ≠ Cx0, because then (x, Cx0 − Cx) would be a better solution; hence x0 is efficient. Conversely, if x0 is efficient, there is no x ∈ X with Cx ≤ Cx0 and Cx ≠ Cx0, hence no z ≥ 0, z ≠ 0, with z = Cx0 − Cx; therefore max e^T z ≤ 0 and, since 0 is attained, max e^T z = 0. □
Lemma. A feasible solution x0 ∈ X is efficient if and only if the linear programme
min u^T b + w^T Cx0 subject to u^T A + w^T C ≥ 0, w ≥ e, u ∈ R^m, (3)
has an optimal solution (û, ŵ) with û^T b + ŵ^T Cx0 = 0.
Proof. The LP (3) is the dual of the LP (2). □
Theorem. A feasible solution x0 ∈ X is an efficient solution of the MOLP (1) if and only if there exists a λ ∈ R^p with λ > 0 such that
λ^T Cx0 ≤ λ^T Cx for all x ∈ X. (4)
Note: We already know that optimal solutions of weighted sum problems are efficient.
Proof. Let x0 ∈ X_E. By the preceding Lemma, LP (3) has an optimal solution (û, ŵ) such that
û^T b = −ŵ^T Cx0. (5)
û is also an optimal solution of the LP
min {u^T b : u^T A ≥ −ŵ^T C}, (6)
which is (3) with w = ŵ fixed. There is an optimal solution of the dual of (6),
max {−ŵ^T Cx : Ax = b, x ≥ 0}. (7)
Proof (continued). By weak duality, u^T b ≥ −ŵ^T Cx for all feasible solutions u of (6) and all feasible solutions x of (7). We already know û^T b = −ŵ^T Cx0 from (5), so x0 is an optimal solution of (7). Note that (7) is equivalent to min {ŵ^T Cx : Ax = b, x ≥ 0} with ŵ ≥ e > 0 from the constraints in (3). □
The Parametric Simplex Algorithm
Modification of the Simplex algorithm for LPs with two objectives:
min ((c^1)^T x, (c^2)^T x) subject to Ax = b, x ≥ 0. (8)
We can find all efficient solutions by solving the parametric LP
min {λ1 (c^1)^T x + λ2 (c^2)^T x : Ax = b, x ≥ 0}
for all λ = (λ1, λ2) > 0.
We can divide the objective by λ1 + λ2 without changing the optima, i.e. λ1' = λ1/(λ1 + λ2), λ2' = λ2/(λ1 + λ2), so that λ1' + λ2' = 1, or λ2' = 1 − λ1'. This gives LPs with one parameter 0 ≤ λ ≤ 1 and parametric objective c(λ) := λ c^1 + (1 − λ) c^2:
min {c(λ)^T x : Ax = b, x ≥ 0}. (9)
Let B be a feasible basis and recall the reduced cost c̄_N^T = c_N^T − c_B^T B^(−1) N. The reduced cost for the parametric LP is
c̄(λ) = λ c̄^1 + (1 − λ) c̄^2. (10)
Suppose B̂ is an optimal basis of (9) for some λ̂, i.e. c̄(λ̂) ≥ 0.
Case 1: c̄^2 ≥ 0. From (10), c̄(λ) ≥ 0 for all λ < λ̂, so B̂ is an optimal basis for all 0 ≤ λ ≤ λ̂.
Case 2: There is at least one i ∈ N with c̄_i^2 < 0. Then there is λ < λ̂ such that c̄(λ)_i = 0:
λ c̄_i^1 + (1 − λ) c̄_i^2 = 0 ⟺ λ (c̄_i^1 − c̄_i^2) + c̄_i^2 = 0 ⟺ λ = c̄_i^2 / (c̄_i^2 − c̄_i^1).
Below this value B̂ is not optimal.
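The critical value from Case 2 is a one-line computation. A sketch using the iteration-1 reduced costs of the worked example further below (variable names are ours):

```python
# Reduced costs of the two objectives for the nonbasic variables x1, x2
# at the first basis of the example: c-bar^1 = (3, 1), c-bar^2 = (-1, -2).
r1 = [3.0, 1.0]
r2 = [-1.0, -2.0]
I = [i for i in range(len(r2)) if r2[i] < 0 and r1[i] >= 0]
crit = {i: r2[i] / (r2[i] - r1[i]) for i in I}   # candidate breakpoints
lam_prime = max(crit.values())                   # basis stays optimal on [lam', lam-hat]
s = max(crit, key=crit.get)                      # entering variable attains the maximum
print(lam_prime, s)   # 2/3, and index 1, i.e. x2 enters
```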
Let I = {i ∈ N : c̄_i^2 < 0, c̄_i^1 ≥ 0} and
λ' := max_{i ∈ I} c̄_i^2 / (c̄_i^2 − c̄_i^1). (11)
B̂ is optimal for all λ ∈ [λ', λ̂]. As soon as λ < λ', new bases become optimal. The entering variable x_s has to be chosen such that the maximum in (11) is attained for i = s.
Algorithm (Parametric Simplex for biobjective LPs)
Input: Data A, b, C for a biobjective LP.
Phase I: Solve the auxiliary LP for Phase I using the Simplex algorithm. If the optimal value is positive, STOP: X = ∅. Otherwise let B be an optimal basis.
Phase II: Solve the LP (9) for λ = 1, starting from the basis B found in Phase I, yielding an optimal basis B̂. Compute Ã and b̃.
Phase III: While I = {i ∈ N : c̄_i^2 < 0, c̄_i^1 ≥ 0} ≠ ∅:
  λ' := max_{i ∈ I} c̄_i^2 / (c̄_i^2 − c̄_i^1)
  s ∈ argmax_{i ∈ I} c̄_i^2 / (c̄_i^2 − c̄_i^1)
  r ∈ argmin_{j ∈ B} {b̃_j / Ã_js : Ã_js > 0}
  Let B := (B \ {r}) ∪ {s} and update Ã and b̃.
End while
Output: Sequence of λ values and sequence of optimal basic feasible solutions.
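Phase III of the algorithm above can be sketched in a few dozen lines of NumPy. This is our own compact illustration (dense basis inverses, no anti-cycling, bounded problems only), not the lecture's code; applied to the example on the following slides it reproduces the breakpoints λ = 2/3 and λ = 1/4 and the three basic feasible solutions.

```python
import numpy as np

def parametric_simplex(A, b, c1, c2, B):
    """Phase III of the parametric Simplex for min{lam*c1 + (1-lam)*c2 : Ax=b, x>=0}.
    B is a basis (list of column indices) that is optimal for lam = 1.
    Returns [(lam_k, x_k)]: x_k is optimal from lam_k down to the next breakpoint."""
    m, n = A.shape
    lam, out = 1.0, []
    while True:
        Binv = np.linalg.inv(A[:, B])
        xB = Binv @ b
        x = np.zeros(n)
        x[B] = xB
        out.append((lam, x))
        N = [j for j in range(n) if j not in B]
        # reduced costs of both objectives with respect to basis B
        r1 = c1[N] - c1[B] @ Binv @ A[:, N]
        r2 = c2[N] - c2[B] @ Binv @ A[:, N]
        I = [k for k in range(len(N)) if r2[k] < 0 and r1[k] >= 0]
        if not I:                      # current basis stays optimal down to lam = 0
            return out
        crit = [r2[k] / (r2[k] - r1[k]) for k in I]
        k = I[int(np.argmax(crit))]    # entering variable attains the maximum
        lam, s = max(crit), N[k]
        d = Binv @ A[:, s]             # minimum-ratio test for the leaving variable
        r = min((xB[i] / d[i], i) for i in range(m) if d[i] > 1e-12)[1]
        B[r] = s

# Example: min(3x1 + x2, -x1 - 2x2) s.t. x2 <= 3, 3x1 - x2 <= 6 (slacks x3, x4)
A = np.array([[0.0, 1.0, 1.0, 0.0], [3.0, -1.0, 0.0, 1.0]])
b = np.array([3.0, 6.0])
c1 = np.array([3.0, 1.0, 0.0, 0.0])
c2 = np.array([-1.0, -2.0, 0.0, 0.0])
for lam, x in parametric_simplex(A, b, c1, c2, [2, 3]):
    print(lam, x)
```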
Example:
min (3x1 + x2, −x1 − 2x2) subject to x2 ≤ 3, 3x1 − x2 ≤ 6, x ≥ 0.
LP(λ): min (4λ − 1)x1 + (3λ − 2)x2 subject to x2 + x3 = 3, 3x1 − x2 + x4 = 6, x ≥ 0.
Use Simplex tableaus showing the reduced cost vectors c̄^1 and c̄^2. The optimal basis for λ = 1 is B = {3, 4}, with optimal basic feasible solution x = (0, 0, 3, 6). Start with Phase III.
Iteration 1: λ = 1, c̄(λ) = (3, 1, 0, 0), B^1 = {3, 4}, x^1 = (0, 0, 3, 6).
c̄^1 |  3   1  0  0
c̄^2 | −1  −2  0  0
x3  |  0   1  1  0 | 3
x4  |  3  −1  0  1 | 6
I = {1, 2}, λ' = max{1/(3 + 1), 2/(1 + 2)} = 2/3, s = 2, r = 3.
Iteration 2: λ = 2/3, c̄(λ) = (5/3, 0, 0, 0), B^2 = {2, 4}, x^2 = (0, 3, 0, 9).
c̄^1 |  3  0  −1  0
c̄^2 | −1  0   2  0
x2  |  0  1   1  0 | 3
x4  |  3  0   1  1 | 9
I = {1}, λ' = max{1/(3 + 1)} = 1/4, s = 1, r = 4.
Iteration 3: λ = 1/4, c̄(λ) = (0, 0, 5/4, 0), B^3 = {1, 2}, x^3 = (3, 3, 0, 0).
c̄^1 | 0  0  −2   −1
c̄^2 | 0  0  7/3  1/3
x2  | 0  1   1    0  | 3
x1  | 1  0  1/3  1/3 | 3
I = ∅, so the algorithm stops.
Weight values λ^1 = 1, λ^2 = 2/3, λ^3 = 1/4, λ^4 = 0; basic feasible solutions x^1, x^2, x^3. In each iteration, c̄(λ) can be calculated from the previous and current c̄^1 and c̄^2.
Basis B^1 = {3, 4} and BFS x^1 = (0, 0, 3, 6) are optimal for λ ∈ [2/3, 1];
basis B^2 = {2, 4} and BFS x^2 = (0, 3, 0, 9) are optimal for λ ∈ [1/4, 2/3]; and
basis B^3 = {1, 2} and BFS x^3 = (3, 3, 0, 0) are optimal for λ ∈ [0, 1/4].
Objective vectors of the basic feasible solutions: Cx^1 = (0, 0), Cx^2 = (3, −6), Cx^3 = (12, −9).
87 The Parametric Simplex Algorithm Biobjective Linear Programmes: Example Weight values λ 1 = 1, λ 2 = 2/3, λ 3 = 1/4, λ 4 = 0 Basic feasible solutions x 1, x 2, x 3 In each iteration c(λ) can be calculated with the previous and current c 1 and c 2 Basis B 1 = (3, 4) and BFS x 1 = (0, 0, 3, 6) are optimal for λ [2/3, 1] Basis B 2 = (2, 4) and BFS x 2 = (0, 3, 0, 9) are optimal for λ [1/4, 2/3], and Basis B 3 = (1, 2) and BFS x 3 = (3, 3, 0, 0) are optimal for λ [0, 1/4] Objective vectors for basic feasible solutions: Cx 1 = (0, 0), Cx 2 = (3, 6), and Cx 3 = (12, 9)
88 The Parametric Simplex Algorithm Biobjective Linear Programmes: Example Weight values λ 1 = 1, λ 2 = 2/3, λ 3 = 1/4, λ 4 = 0 Basic feasible solutions x 1, x 2, x 3 In each iteration c(λ) can be calculated with the previous and current c 1 and c 2 Basis B 1 = (3, 4) and BFS x 1 = (0, 0, 3, 6) are optimal for λ [2/3, 1] Basis B 2 = (2, 4) and BFS x 2 = (0, 3, 0, 9) are optimal for λ [1/4, 2/3], and Basis B 3 = (1, 2) and BFS x 3 = (3, 3, 0, 0) are optimal for λ [0, 1/4] Objective vectors for basic feasible solutions: Cx 1 = (0, 0), Cx 2 = (3, 6), and Cx 3 = (12, 9)
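The optimality intervals above can be checked numerically. A minimal sketch in pure Python, assuming the underlying biobjective LP is min(3x₁ + x₂, −x₁ − 2x₂); this objective pair is not stated explicitly on this slide but is consistent with the contour-line formulas and the listed objective vectors:

```python
# Weighted-sum check for the biobjective example (objectives reconstructed
# from the slides: f1 = 3*x1 + x2, f2 = -x1 - 2*x2).
vertices = {"x1": (0, 0), "x2": (0, 3), "x3": (3, 3)}  # first two coordinates of each BFS

def objectives(x):
    x1, x2 = x
    return (3 * x1 + x2, -x1 - 2 * x2)

def weighted_sum_argmin(lam):
    """Return the vertex minimising lam*f1 + (1 - lam)*f2."""
    return min(vertices, key=lambda v: lam * objectives(vertices[v])[0]
                                       + (1 - lam) * objectives(vertices[v])[1])

# One sample weight from the interior of each optimality interval:
print(weighted_sum_argmin(0.8))   # lam in [2/3, 1]   -> x1
print(weighted_sum_argmin(0.5))   # lam in [1/4, 2/3] -> x2
print(weighted_sum_argmin(0.1))   # lam in [0, 1/4]   -> x3
```

Evaluating the weighted objective only at the basic feasible solutions suffices here because an optimum of LP(λ) is always attained at a vertex.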
93 The Parametric Simplex Algorithm Biobjective Linear Programmes: Example
Values λ = 2/3 and λ = 1/4 correspond to weight vectors (2/3, 1/3) and (1/4, 3/4). Contour lines for the weighted sum objectives in decision space are parallel to the efficient edges:
(2/3)(3x₁ + x₂) + (1/3)(−x₁ − 2x₂) = (5/3)x₁
(1/4)(3x₁ + x₂) + (3/4)(−x₁ − 2x₂) = −(5/4)x₂
[Figure: feasible set X in decision space and the efficient set]
96 The Parametric Simplex Algorithm Biobjective Linear Programmes: Example
Weight vectors (2/3, 1/3) and (1/4, 3/4) are normal to the nondominated edges.
[Figure: objective space Y and the nondominated set, with extreme points Cx¹, Cx², Cx³]
97 The Parametric Simplex Algorithm Biobjective Linear Programmes: Example
The algorithm finds all nondominated extreme points in objective space, and one efficient BFS for each of them. It does not find all efficient solutions, just as the simplex algorithm does not find all optimal solutions of an LP.
Example: min (x₁, x₂)^T subject to 0 ≤ x_i ≤ 1, i = 1, 2, 3
Efficient set: {x ∈ R³ : x₁ = x₂ = 0, 0 ≤ x₃ ≤ 1}
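The gap between "one efficient BFS per nondominated extreme point" and "all efficient solutions" shows up even if we only look at vertices. A minimal sketch in pure Python, using the standard componentwise dominance test (the test itself is not spelled out on the slide):

```python
from itertools import product

# Vertices of the feasible box 0 <= x_i <= 1, i = 1, 2, 3.
vertices = list(product((0, 1), repeat=3))

def dominates(y, z):
    """y dominates z (minimisation): componentwise <=, at least one strict <."""
    return all(a <= b for a, b in zip(y, z)) and any(a < b for a, b in zip(y, z))

def image(x):
    return (x[0], x[1])          # objective vector (x1, x2)

efficient = [x for x in vertices
             if not any(dominates(image(z), image(x)) for z in vertices)]
print(efficient)                 # only the vertices with x1 = x2 = 0
```

Only the two endpoints (0, 0, 0) and (0, 0, 1) of the efficient edge are vertices; every point in between is efficient too, but no vertex-based enumeration returns it.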
100 Overview Multiobjective Linear Programming A Multiobjective Simplex Algorithm Multiobjective Simplex: Examples 1 Multiobjective Linear Programming Formulation and Example Solving MOLPs by Weighted Sums 2 The Parametric Simplex Algorithm Biobjective Linear Programmes: Example 3 A Multiobjective Simplex Algorithm Multiobjective Simplex: Examples
101 A Multiobjective Simplex Algorithm Multiobjective Simplex: Examples
min{Cx : Ax = b, x ≥ 0}. Let B be a basis, C̄ = C − C_B A_B⁻¹ A, and R = C̄_N.
How to calculate the critical λ if p > 2?
At B¹: C̄_N = ( 3 1 ; −1 −2 ), λ̂ = 2/3, λ = (2/3, 1/3)^T and λ^T C̄_N = (5/3, 0)
At B²: C̄_N = ( 3 −1 ; −1 2 ), λ̂ = 1/4, λ = (1/4, 3/4)^T and λ^T C̄_N = (0, 5/4)
Find λ ∈ R^p, λ > 0, such that λ^T R ≥ 0 (optimality) and λ^T r^j = 0 (alternative optimum) for some column r^j of R
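For p = 2 the critical weight can be read off the reduced-cost matrix directly: a nonbasic column j with reduced costs (c̄¹_j, c̄²_j) and c̄²_j < 0 < c̄¹_j has weighted reduced cost zero at λ_j = c̄²_j / (c̄²_j − c̄¹_j), and the current basis stays optimal down to the largest such λ_j. A minimal sketch in pure Python; the signs in the reduced-cost matrices below are reconstructed so that the slides' values λ̂ = 2/3 and λ̂ = 1/4 come out:

```python
def next_critical_lambda(CN):
    """CN = [row of c1 reduced costs, row of c2 reduced costs] over the
    nonbasic columns.  Return (lambda, column index) at which the first
    weighted reduced cost hits zero as lambda decreases from its current value."""
    candidates = []
    for j, (c1, c2) in enumerate(zip(CN[0], CN[1])):
        if c2 < 0 < c1:                      # sign change: column can enter
            candidates.append((c2 / (c2 - c1), j))
    return max(candidates)                   # largest critical value comes first

# Reduced costs at B1 = (3, 4) and B2 = (2, 4) (signs reconstructed):
print(next_critical_lambda([[3, 1], [-1, -2]]))   # lambda = 2/3, column 1 enters
print(next_critical_lambda([[3, -1], [-1, 2]]))   # lambda = 1/4, column 0 enters
```

The two calls reproduce the breakpoints 2/3 and 1/4 of the example, together with the entering columns that lead from B¹ to B² and from B² to B³.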
107 A Multiobjective Simplex Algorithm Multiobjective Simplex: Examples
Lemma: If X_E ≠ ∅ then X has an efficient basic feasible solution.
Proof: There is some λ > 0 such that min_{x ∈ X} λ^T Cx has an optimal solution. Thus LP(λ) has an optimal basic feasible solution, which is an efficient solution of the MOLP.
110 A Multiobjective Simplex Algorithm Multiobjective Simplex: Examples
Definition:
1. A feasible basis B is called an efficient basis if B is an optimal basis of LP(λ) for some λ ∈ R^p_>.
2. Two bases B and B̂ are called adjacent if one can be obtained from the other by a single pivot step.
3. Let B be an efficient basis. Variable x_j, j ∈ N, is called an efficient nonbasic variable at B if there exists a λ ∈ R^p_> such that λ^T R ≥ 0 and λ^T r^j = 0, where r^j is the column of R corresponding to variable x_j.
4. Let B be an efficient basis and let x_j be an efficient nonbasic variable. Then a feasible pivot from B with x_j entering the basis (even with a negative pivot element) is called an efficient pivot with respect to B and x_j.
114 A Multiobjective Simplex Algorithm Multiobjective Simplex: Examples
No efficient basis is optimal for all p objectives at the same time. Therefore R always contains positive and negative entries.
Proposition: Let B be an efficient basis. There exists an efficient nonbasic variable at B.
117 A Multiobjective Simplex Algorithm Multiobjective Simplex: Examples
It is not possible to define efficient nonbasic variables by the existence of a column in R with positive and negative entries.
Example (one matrix consistent with the computations below): R = ( 1 −2 ; −1 1 )
λ^T r² = 0 requires λ₂ = 2λ₁; then λ^T r¹ ≥ 0 requires λ₁ ≤ 0, an impossibility for λ > 0.
118 A Multiobjective Simplex Algorithm Multiobjective Simplex: Examples
Lemma: Let B be an efficient basis and x_j be an efficient nonbasic variable. Then any efficient pivot from B leads to an adjacent efficient basis B̂.
Proof: x_j is an efficient entering variable at basis B, so there is λ ∈ R^p_> with λ^T R ≥ 0 and λ^T r^j = 0, i.e. x_j is a nonbasic variable with reduced cost 0 in LP(λ). The reduced costs of LP(λ) do not change after a pivot with x_j entering. Let B̂ be the basis resulting from a feasible pivot with x_j entering. Because λ^T R ≥ 0 and λ^T r^j = 0 still hold at B̂, B̂ is an optimal basis of LP(λ) and therefore an adjacent efficient basis.
125 A Multiobjective Simplex Algorithm Multiobjective Simplex: Examples
How to identify efficient nonbasic variables?
Theorem: Let B be an efficient basis and let x_j be a nonbasic variable. Variable x_j is an efficient nonbasic variable if and only if the LP
max e^T v subject to Rz − r^j δ + Iv = 0, z, δ, v ≥ 0 (12)
has an optimal value of 0.
(12) is always feasible with (z, δ, v) = 0.
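For the biobjective case the test behind (12) can be carried out without solving an LP: λ^T r^j = 0 with λ > 0 forces λ to be proportional to (|r^j₂|, |r^j₁|), which is possible only when the two entries of r^j have opposite signs, and then λ^T R ≥ 0 is checked directly. A minimal sketch in pure Python; the reduced-cost matrix used below is the reconstruction of C̄_N at B¹ used earlier in the lecture, not data taken verbatim from this slide:

```python
def is_efficient_nonbasic(R, j):
    """Biobjective (p = 2) special case of the efficiency test:
    does some lambda > 0 satisfy lambda^T R >= 0 and lambda^T r^j = 0?"""
    a, b = R[0][j], R[1][j]
    if not (a > 0 > b or b > 0 > a):
        return False                          # no lambda > 0 with lambda^T r^j = 0
    lam = (abs(b), abs(a))                    # positive vector orthogonal to r^j
    return all(lam[0] * R[0][k] + lam[1] * R[1][k] >= 0
               for k in range(len(R[0])))

R_B1 = [[3, 1], [-1, -2]]                     # reduced costs at basis B1 (reconstructed)
print(is_efficient_nonbasic(R_B1, 0))         # False: no lambda > 0 works for column 0
print(is_efficient_nonbasic(R_B1, 1))         # True: lambda = (2/3, 1/3) works
```

For p > 2 the orthogonal complement of r^j is no longer one-dimensional, which is exactly why the general test requires the auxiliary LP (12).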
127 A Multiobjective Simplex Algorithm Multiobjective Simplex: Examples
Proof: By definition, x_j is an efficient nonbasic variable if the LP
min 0^T λ subject to R^T λ ≥ 0, (r^j)^T λ = 0, Iλ ≥ e (13)
has an optimal objective value of 0, i.e. if it is feasible. (13) is equivalent to
min 0^T λ subject to R^T λ ≥ 0, (r^j)^T λ ≤ 0, Iλ ≥ e (14)
129 A Multiobjective Simplex Algorithm Multiobjective Simplex: Examples
Proof (continued): The dual of (14) is
max e^T v subject to Rz − r^j δ + Iv = 0, z, δ, v ≥ 0 (15)
130 A Multiobjective Simplex Algorithm Multiobjective Simplex: Examples
Need to show: ALL efficient bases can be reached by efficient pivots.
Definition: Two efficient bases B and B̂ are called connected if one can be obtained from the other by performing only efficient pivots.
Theorem: All efficient bases are connected.
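Connectedness is what makes a pivot-based enumeration complete: starting from one efficient basis and exploring only efficient pivots visits every efficient basis. An illustrative sketch in pure Python; the adjacency below is the efficient-pivot graph of the earlier biobjective example (B¹, B², B³ along the efficient edge chain), written down by hand rather than computed by pivoting:

```python
from collections import deque

# Efficient-pivot adjacency for the earlier example (hand-coded illustration):
efficient_pivots = {"B1": ["B2"], "B2": ["B1", "B3"], "B3": ["B2"]}

def reachable(start, graph):
    """Breadth-first search along efficient pivots from a starting basis."""
    seen, queue = {start}, deque([start])
    while queue:
        basis = queue.popleft()
        for nxt in graph[basis]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Connectedness: every efficient basis is reached from any starting basis.
print(all(reachable(b, efficient_pivots) == {"B1", "B2", "B3"}
          for b in efficient_pivots))
```

In the full multiobjective simplex algorithm the neighbour lists are not given in advance; they are generated at each basis by the efficiency test of (12), and the same graph search then guarantees that no efficient basis is missed.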
More informationCyberSecurity Analysis of State Estimators in Power Systems
CyberSecurity Analysis of State Estimators in Electric Power Systems André Teixeira 1, Saurabh Amin 2, Henrik Sandberg 1, Karl H. Johansson 1, and Shankar Sastry 2 ACCESS Linnaeus Centre, KTHRoyal Institute
More informationSolutions to Math 51 First Exam January 29, 2015
Solutions to Math 5 First Exam January 29, 25. ( points) (a) Complete the following sentence: A set of vectors {v,..., v k } is defined to be linearly dependent if (2 points) there exist c,... c k R, not
More informationLargest FixedAspect, AxisAligned Rectangle
Largest FixedAspect, AxisAligned Rectangle David Eberly Geometric Tools, LLC http://www.geometrictools.com/ Copyright c 19982016. All Rights Reserved. Created: February 21, 2004 Last Modified: February
More informationOn Minimal Valid Inequalities for Mixed Integer Conic Programs
On Minimal Valid Inequalities for Mixed Integer Conic Programs Fatma Kılınç Karzan June 27, 2013 Abstract We study mixed integer conic sets involving a general regular (closed, convex, full dimensional,
More informationLECTURE: INTRO TO LINEAR PROGRAMMING AND THE SIMPLEX METHOD, KEVIN ROSS MARCH 31, 2005
LECTURE: INTRO TO LINEAR PROGRAMMING AND THE SIMPLEX METHOD, KEVIN ROSS MARCH 31, 2005 DAVID L. BERNICK dbernick@soe.ucsc.edu 1. Overview Typical Linear Programming problems Standard form and converting
More informationLecture 2: Homogeneous Coordinates, Lines and Conics
Lecture 2: Homogeneous Coordinates, Lines and Conics 1 Homogeneous Coordinates In Lecture 1 we derived the camera equations λx = P X, (1) where x = (x 1, x 2, 1), X = (X 1, X 2, X 3, 1) and P is a 3 4
More informationGI01/M055 Supervised Learning Proximal Methods
GI01/M055 Supervised Learning Proximal Methods Massimiliano Pontil (based on notes by Luca Baldassarre) (UCL) Proximal Methods 1 / 20 Today s Plan Problem setting Convex analysis concepts Proximal operators
More informationNo: 10 04. Bilkent University. Monotonic Extension. Farhad Husseinov. Discussion Papers. Department of Economics
No: 10 04 Bilkent University Monotonic Extension Farhad Husseinov Discussion Papers Department of Economics The Discussion Papers of the Department of Economics are intended to make the initial results
More informationNPHardness Results Related to PPAD
NPHardness Results Related to PPAD Chuangyin Dang Dept. of Manufacturing Engineering & Engineering Management City University of Hong Kong Kowloon, Hong Kong SAR, China EMail: mecdang@cityu.edu.hk Yinyu
More informationRecovery of primal solutions from dual subgradient methods for mixed binary linear programming; a branchandbound approach
MASTER S THESIS Recovery of primal solutions from dual subgradient methods for mixed binary linear programming; a branchandbound approach PAULINE ALDENVIK MIRJAM SCHIERSCHER Department of Mathematical
More informationHøgskolen i Narvik Sivilingeniørutdanningen STE6237 ELEMENTMETODER. Oppgaver
Høgskolen i Narvik Sivilingeniørutdanningen STE637 ELEMENTMETODER Oppgaver Klasse: 4.ID, 4.IT Ekstern Professor: Gregory A. Chechkin email: chechkin@mech.math.msu.su Narvik 6 PART I Task. Consider twopoint
More informationOptimal shift scheduling with a global service level constraint
Optimal shift scheduling with a global service level constraint Ger Koole & Erik van der Sluis Vrije Universiteit Division of Mathematics and Computer Science De Boelelaan 1081a, 1081 HV Amsterdam The
More informationThe Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Linesearch Method
The Steepest Descent Algorithm for Unconstrained Optimization and a Bisection Linesearch Method Robert M. Freund February, 004 004 Massachusetts Institute of Technology. 1 1 The Algorithm The problem
More informationWhat is Linear Programming?
Chapter 1 What is Linear Programming? An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x to
More information15 Kuhn Tucker conditions
5 Kuhn Tucker conditions Consider a version of the consumer problem in which quasilinear utility x 2 + 4 x 2 is maximised subject to x +x 2 =. Mechanically applying the Lagrange multiplier/common slopes
More information24. The Branch and Bound Method
24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NPcomplete. Then one can conclude according to the present state of science that no
More informationSolving Linear Systems, Continued and The Inverse of a Matrix
, Continued and The of a Matrix Calculus III Summer 2013, Session II Monday, July 15, 2013 Agenda 1. The rank of a matrix 2. The inverse of a square matrix Gaussian Gaussian solves a linear system by reducing
More informationTOPIC 4: DERIVATIVES
TOPIC 4: DERIVATIVES 1. The derivative of a function. Differentiation rules 1.1. The slope of a curve. The slope of a curve at a point P is a measure of the steepness of the curve. If Q is a point on the
More information1 Norms and Vector Spaces
008.10.07.01 1 Norms and Vector Spaces Suppose we have a complex vector space V. A norm is a function f : V R which satisfies (i) f(x) 0 for all x V (ii) f(x + y) f(x) + f(y) for all x,y V (iii) f(λx)
More informationLinear Programming Problems
Linear Programming Problems Linear programming problems come up in many applications. In a linear programming problem, we have a function, called the objective function, which depends linearly on a number
More informationLinear Programming. Solving LP Models Using MS Excel, 18
SUPPLEMENT TO CHAPTER SIX Linear Programming SUPPLEMENT OUTLINE Introduction, 2 Linear Programming Models, 2 Model Formulation, 4 Graphical Linear Programming, 5 Outline of Graphical Procedure, 5 Plotting
More informationMATH2210 Notebook 1 Fall Semester 2016/2017. 1 MATH2210 Notebook 1 3. 1.1 Solving Systems of Linear Equations... 3
MATH0 Notebook Fall Semester 06/07 prepared by Professor Jenny Baglivo c Copyright 009 07 by Jenny A. Baglivo. All Rights Reserved. Contents MATH0 Notebook 3. Solving Systems of Linear Equations........................
More informationThe Graphical Method: An Example
The Graphical Method: An Example Consider the following linear program: Maximize 4x 1 +3x 2 Subject to: 2x 1 +3x 2 6 (1) 3x 1 +2x 2 3 (2) 2x 2 5 (3) 2x 1 +x 2 4 (4) x 1, x 2 0, where, for ease of reference,
More informationDefinition 11.1. Given a graph G on n vertices, we define the following quantities:
Lecture 11 The Lovász ϑ Function 11.1 Perfect graphs We begin with some background on perfect graphs. graphs. First, we define some quantities on Definition 11.1. Given a graph G on n vertices, we define
More informationOperation Research. Module 1. Module 2. Unit 1. Unit 2. Unit 3. Unit 1
Operation Research Module 1 Unit 1 1.1 Origin of Operations Research 1.2 Concept and Definition of OR 1.3 Characteristics of OR 1.4 Applications of OR 1.5 Phases of OR Unit 2 2.1 Introduction to Linear
More information9.4 THE SIMPLEX METHOD: MINIMIZATION
SECTION 9 THE SIMPLEX METHOD: MINIMIZATION 59 The accounting firm in Exercise raises its charge for an audit to $5 What number of audits and tax returns will bring in a maximum revenue? In the simplex
More informationSensitivity Analysis 3.1 AN EXAMPLE FOR ANALYSIS
Sensitivity Analysis 3 We have already been introduced to sensitivity analysis in Chapter via the geometry of a simple example. We saw that the values of the decision variables and those of the slack and
More informationIntroduction to Linear Programming (LP) Mathematical Programming (MP) Concept
Introduction to Linear Programming (LP) Mathematical Programming Concept LP Concept Standard Form Assumptions Consequences of Assumptions Solution Approach Solution Methods Typical Formulations Massachusetts
More informationSolving Systems of Linear Equations
LECTURE 5 Solving Systems of Linear Equations Recall that we introduced the notion of matrices as a way of standardizing the expression of systems of linear equations In today s lecture I shall show how
More information[1] Diagonal factorization
8.03 LA.6: Diagonalization and Orthogonal Matrices [ Diagonal factorization [2 Solving systems of first order differential equations [3 Symmetric and Orthonormal Matrices [ Diagonal factorization Recall:
More informationChapter 6. Cuboids. and. vol(conv(p ))
Chapter 6 Cuboids We have already seen that we can efficiently find the bounding box Q(P ) and an arbitrarily good approximation to the smallest enclosing ball B(P ) of a set P R d. Unfortunately, both
More information! Solve problem to optimality. ! Solve problem in polytime. ! Solve arbitrary instances of the problem. #approximation algorithm.
Approximation Algorithms 11 Approximation Algorithms Q Suppose I need to solve an NPhard problem What should I do? A Theory says you're unlikely to find a polytime algorithm Must sacrifice one of three
More informationLinear Programming: Chapter 11 Game Theory
Linear Programming: Chapter 11 Game Theory Robert J. Vanderbei October 17, 2007 Operations Research and Financial Engineering Princeton University Princeton, NJ 08544 http://www.princeton.edu/ rvdb RockPaperScissors
More informationLecture 2: The SVM classifier
Lecture 2: The SVM classifier C19 Machine Learning Hilary 2015 A. Zisserman Review of linear classifiers Linear separability Perceptron Support Vector Machine (SVM) classifier Wide margin Cost function
More information5 INTEGER LINEAR PROGRAMMING (ILP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1
5 INTEGER LINEAR PROGRAMMING (ILP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 General Integer Linear Program: (ILP) min c T x Ax b x 0 integer Assumption: A, b integer The integrality condition
More informationUsing the Simplex Method to Solve Linear Programming Maximization Problems J. Reeb and S. Leavengood
PERFORMANCE EXCELLENCE IN THE WOOD PRODUCTS INDUSTRY EM 8720E October 1998 $3.00 Using the Simplex Method to Solve Linear Programming Maximization Problems J. Reeb and S. Leavengood A key problem faced
More informationMinkowski Sum of Polytopes Defined by Their Vertices
Minkowski Sum of Polytopes Defined by Their Vertices Vincent Delos, Denis Teissandier To cite this version: Vincent Delos, Denis Teissandier. Minkowski Sum of Polytopes Defined by Their Vertices. Journal
More informationDynamic TCP Acknowledgement: Penalizing Long Delays
Dynamic TCP Acknowledgement: Penalizing Long Delays Karousatou Christina Network Algorithms June 8, 2010 Karousatou Christina (Network Algorithms) Dynamic TCP Acknowledgement June 8, 2010 1 / 63 Layout
More informationFRACTIONAL COLORINGS AND THE MYCIELSKI GRAPHS
FRACTIONAL COLORINGS AND THE MYCIELSKI GRAPHS By G. Tony Jacobs Under the Direction of Dr. John S. Caughman A math 501 project submitted in partial fulfillment of the requirements for the degree of Master
More informationAlgorithm Design and Analysis
Algorithm Design and Analysis LECTURE 27 Approximation Algorithms Load Balancing Weighted Vertex Cover Reminder: Fill out SRTEs online Don t forget to click submit Sofya Raskhodnikova 12/6/2011 S. Raskhodnikova;
More information