CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 20: Consequences of the Ellipsoid Algorithm. Instructor: Shaddin Dughmi


2 Announcements

3 Outline
1 Recapping the Ellipsoid Method
2 Complexity of Convex Optimization
3 Complexity of Linear Programming
4 Equivalence of Separation and Optimization

4 Recall: Feasibility Problem
The ellipsoid method solves the following problem.
Convex Feasibility Problem. Given as input:
A description of a compact convex set K ⊆ R^n
An ellipsoid E(c, Q) (typically a ball) containing K
A rational number R > 0 satisfying vol(E) ≤ R
A rational number r > 0 such that if K is nonempty, then vol(K) ≥ r
Find a point x ∈ K or declare that K is empty.
Equivalent variant: drop the lower bound on vol(K), and either find a point x ∈ K or an ellipsoid E ⊇ K with vol(E) < r.
Recapping the Ellipsoid Method 1/27

5 All the ellipsoid method needed was the following subroutine.
Separation oracle
An algorithm that takes as input x ∈ R^n, and either certifies x ∈ K or outputs a hyperplane separating x from K, i.e. a vector h ∈ R^n with h·x ≥ h·y for all y ∈ K. Equivalently, K is contained in the halfspace with x at its boundary:
H(h, x) = {y : h·y ≤ h·x}
Examples:
Explicitly written polytope Ay ≤ b: take h = a_i, the row of A corresponding to a constraint violated by x.
Convex set given by a family of convex inequalities f_i(y) ≤ 0: let h = ∇f_i(x) for some violated constraint.
The positive semidefinite cone S^n_+: let H be the outer product vv^T of an eigenvector v of X corresponding to a negative eigenvalue.
Recapping the Ellipsoid Method 2/27
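The first and third examples above can be sketched in a few lines of numpy. This is an illustrative sketch only; the function names are my own, and tolerances are chosen arbitrarily.

```python
import numpy as np

def separate_polytope(A, b, x, tol=1e-9):
    """Separation oracle for {y : A y <= b}.  Returns None if x is
    feasible, else a row h = a_i of A with h.y <= b_i < h.x for all
    feasible y, i.e. a separating hyperplane."""
    viol = A @ x - b
    i = int(np.argmax(viol))
    if viol[i] <= tol:
        return None          # x is (numerically) inside the polytope
    return A[i]

def separate_psd(X, tol=1e-9):
    """Separation oracle for the PSD cone: returns None if X is PSD,
    else H = v v^T for an eigenvector v with negative eigenvalue, so
    that <H, X> < 0 <= <H, Y> for every PSD matrix Y."""
    w, V = np.linalg.eigh(X)   # eigenvalues in ascending order
    if w[0] >= -tol:
        return None
    v = V[:, 0]
    return np.outer(v, v)
```

For the polytope oracle, any violated row works; taking the most violated one is a common heuristic.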

10 Recapping the Ellipsoid Method 3/27
Ellipsoid Method
1 Start with an initial ellipsoid E = E(c, Q) ⊇ K.
2 Using the separation oracle, check whether the center c ∈ K. If so, terminate and output c. Otherwise, we get a separating hyperplane h such that K is contained in the half-ellipsoid E ∩ {y : h·y ≤ h·c}.
3 Let E' = E(c', Q') be the minimum-volume ellipsoid containing the half-ellipsoid above.
4 If vol(E') ≥ r then set E = E' and repeat (step 2); otherwise stop and declare K empty.
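The loop above can be sketched directly, using the standard central-cut update formulas for the minimum-volume enclosing ellipsoid (these formulas are standard but not derived in this lecture; the function name is mine, n ≥ 2 is assumed, and the volume-based stopping rule is replaced by a crude iteration cap).

```python
import numpy as np

def ellipsoid_feasibility(oracle, n, c, Q, max_iters=500):
    """Minimal central-cut ellipsoid method sketch.  oracle(x)
    returns None if x is feasible, else h with h.y <= h.x for all
    feasible y.  E(c, Q) = {x : (x-c)^T Q^{-1} (x-c) <= 1}."""
    for _ in range(max_iters):
        h = oracle(c)
        if h is None:
            return c                        # center is feasible
        # Keep the half-ellipsoid {y : h.y <= h.c}; standard updates:
        g = Q @ h / np.sqrt(h @ Q @ h)
        c = c - g / (n + 1)
        Q = (n * n / (n * n - 1.0)) * (Q - (2.0 / (n + 1)) * np.outer(g, g))
    return None                             # iteration cap hit; give up
```

A proper implementation would stop when vol(E) drops below r and declare K empty, as in step 4.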

14 Properties Recapping the Ellipsoid Method 4/27
Using T to denote the runtime of the separation oracle:
Theorem. The ellipsoid algorithm terminates in time polynomial in n, ln(R/r), and T, and either outputs x ∈ K or correctly declares that K is empty.
We proved most of this. For the rest, see references.
Note: for runtime polynomial in the input size, we need T to be polynomial in the input size, and R/r to be at most exponential in the input size.

16 Outline
1 Recapping the Ellipsoid Method
2 Complexity of Convex Optimization
3 Complexity of Linear Programming
4 Equivalence of Separation and Optimization

17 Complexity of Convex Optimization 5/27
Recall: Convex Optimization Problem
A problem of minimizing a convex function (or maximizing a concave function) over a convex set:
minimize f(x) subject to x ∈ X
where X ⊆ R^n is convex and closed, and f : R^n → R is convex.
Recall: a problem Π is a family of instances I = (f, X). When represented explicitly, an instance is often given in standard form:
minimize f(x) subject to g_i(x) ≤ 0 for i ∈ C_1, and a_i·x = b_i for i ∈ C_2,
where the functions f, {g_i} are given in some parametric form allowing evaluation of each function and its derivatives.
We will abstract away the details of how instances of a problem are represented, but denote the length of the description by |I|. We simply require polynomial-time (in |I| and n) separation oracles and the like.

20 Complexity of Convex Optimization 6/27
Solvability of Convex Optimization
There are many subtly different solvability statements. This one is the most useful, yet simple to describe, IMO.
Requirements. We say an algorithm weakly solves a convex optimization problem in polynomial time if it:
Takes an approximation parameter ε > 0
Terminates in time poly(|I|, n, log(1/ε))
Returns an ε-optimal x ∈ X:
f(x) ≤ min_{y∈X} f(y) + ε [max_{y∈X} f(y) − min_{y∈X} f(y)]

21 Solvability of Convex Optimization
Theorem (Polynomial Solvability of CP)
Consider a family Π of convex optimization problems I = (f, X) admitting the following operations in polynomial time (in |I| and n):
A separation oracle for the feasible set X ⊆ R^n
A first-order oracle for f: evaluates f(x) and ∇f(x)
An algorithm which computes a starting ellipsoid E ⊇ X with vol(E)/vol(X) at most exponential in |I| and n
Then there is a polynomial-time algorithm which weakly solves Π.
Let's now prove this, by reducing to the ellipsoid method.
Complexity of Convex Optimization 6/27

23 Proof (Simplified) Complexity of Convex Optimization 7/27
Simplifying Assumption: assume we are given min_{y∈X} f(y) and max_{y∈X} f(y); without loss of generality, they are 0 and 1 respectively.
Our task reduces to the following convex feasibility problem:
find x subject to x ∈ X, f(x) ≤ ε
We can feed this into the ellipsoid method!
Needed Ingredients
1 Separation oracle for the new feasible set K: use the separation oracle for X and the first-order oracle for f.
2 Ellipsoid E containing K: use the ellipsoid containing X.
3 Guarantee that vol(E)/vol(K) is at most exponential in n, |I|, and log(1/ε): uh oh!
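The first ingredient, combining the two oracles into a separation oracle for K = {x ∈ X : f(x) ≤ ε}, can be sketched as follows. This is a sketch with hypothetical function names; the gradient cut is valid because convexity gives ∇f(x)·(y − x) ≤ f(y) − f(x) < 0 for every y ∈ K when f(x) > ε.

```python
import numpy as np

def sublevel_oracle(sep_X, f, grad_f, eps):
    """Separation oracle for K = {x in X : f(x) <= eps}, built from a
    separation oracle sep_X for X and a first-order oracle (f, grad_f)."""
    def oracle(x):
        h = sep_X(x)
        if h is not None:
            return h            # x is outside X: reuse X's hyperplane
        if f(x) <= eps:
            return None         # x is in K
        return grad_f(x)        # gradient cut separates x from {f <= eps}
    return oracle
```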

29 Proof (Simplified) Complexity of Convex Optimization 8/27
K = {x ∈ X : f(x) ≤ ε}
Lemma. vol(K) ≥ ε^n vol(X).
This shows that vol(K) is only exponentially smaller (in n and log(1/ε)) than vol(X), and therefore also than vol(E), so it suffices.
Assume wlog that 0 ∈ X and f(0) = min_{x∈X} f(x) = 0.
Consider scaling X by ε to get εX; then vol(εX) = ε^n vol(X).
We show that εX ⊆ K by showing f(y) ≤ ε for all y ∈ εX.
Let y = εx for x ∈ X, and invoke Jensen's inequality:
f(y) = f(εx + (1 − ε)0) ≤ εf(x) + (1 − ε)f(0) ≤ ε.

34 Complexity of Convex Optimization 9/27
Proof (General)
Denote L = min_{y∈X} f(y) and H = max_{y∈X} f(y).
If we knew the target T = L + ε[H − L], we could reduce to solving the feasibility problem over K = {x ∈ X : f(x) ≤ T}.
If we knew T lay in a sufficiently narrow range, we could binary search for it.
But we don't need to know anything about T!
Key Observation: we don't really need to know T, H, or L to simulate the same execution of the ellipsoid method on K!

37 Proof (General) Complexity of Convex Optimization 10/27
find x subject to x ∈ X, f(x) ≤ T = L + ε[H − L]
Simulate the execution of the ellipsoid method on K:
Polynomial number of iterations, terminating with a point in K.
Require the separation oracle for K to use f only as a last resort. This is allowed: it tries to certify feasibility whenever possible.
The action of the algorithm in each iteration other than the last can be described independently of T: if the ellipsoid center c ∉ X, use a separating hyperplane with X; else use ∇f(c).
Run this simulation until enough iterations have passed, and take the best feasible point encountered. This must be in K.
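This simulation idea, cutting with X's hyperplane when infeasible and with ∇f otherwise, and returning the best feasible center seen, is sometimes called a sliding-objective ellipsoid method. A minimal sketch, assuming n ≥ 2 and central-cut updates as in the recap (function names are mine, and the iteration count stands in for the "enough iterations" bound):

```python
import numpy as np

def ellipsoid_minimize(sep_X, f, grad_f, n, c, Q, iters=200):
    """Simulate the ellipsoid run on K = {x in X : f(x) <= T}
    without knowing T: return the best feasible center seen."""
    best_x, best_val = None, np.inf
    for _ in range(iters):
        h = sep_X(c)
        if h is None:                  # c feasible: record, then
            val = f(c)                 # cut with the objective
            if val < best_val:
                best_x, best_val = c.copy(), val
            h = grad_f(c)
            if np.allclose(h, 0.0):
                break                  # exact minimizer found
        g = Q @ h / np.sqrt(h @ Q @ h)
        c = c - g / (n + 1)
        Q = (n * n / (n * n - 1.0)) * (Q - (2.0 / (n + 1)) * np.outer(g, g))
    return best_x, best_val
```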

41 Outline
1 Recapping the Ellipsoid Method
2 Complexity of Convex Optimization
3 Complexity of Linear Programming
4 Equivalence of Separation and Optimization

42 Recall: Linear Programming Complexity of Linear Programming 11/27
Recall: Linear Programming Problem
A problem of maximizing a linear function over a polyhedron:
maximize c·x subject to Ax ≤ b
When stated in standard form, an optimal solution occurs at a vertex.
We will consider both explicit and implicit LPs:
Explicit: given by A, b and c.
Implicit: given by c and a separation oracle for Ax ≤ b.
In both cases, we require all numbers to be rational.
In the explicit case, we require polynomial time in ⟨A⟩, ⟨b⟩, and ⟨c⟩, the number of bits used to represent the parameters of the LP.
In the implicit case, we require polynomial time in the bit complexity of individual entries of A, b, c.

48 Complexity of Linear Programming 12/27
Theorem (Polynomial Solvability of Explicit LP)
There is a polynomial-time algorithm for linear programming, when the linear program is represented explicitly.
Proof Sketch (Informal)
Using the result for weakly solving convex programs, we need 4 things:
A separation oracle for Ax ≤ b: trivial when explicitly represented.
A first-order oracle for c·x: also trivial.
A bounding ellipsoid of volume at most an exponential times the volume of the feasible polyhedron: tricky.
A way of rounding an ε-optimal solution to an optimal vertex solution: tricky.
The solution to both tricky issues involves tedious accounting of numerical details.

50 Ellipsoid and Volume Bound (Informal)
Key to tackling both difficulties is the following observation:
Lemma. Let v be a vertex of the polyhedron Ax ≤ b. Then v has polynomial bit complexity, i.e. ⟨v⟩ ≤ M, where M = O(poly(⟨A⟩, ⟨b⟩)). Specifically, the solution of a system of linear equations has bit complexity polynomially related to that of the equations.
Bounding ellipsoid: all vertices are contained in the box −2^M ≤ x ≤ 2^M, which in turn is contained in an ellipsoid of volume exponential in M and n.
Volume lower bound: relax to Ax ≤ b + ε for a sufficiently small ε with ⟨ε⟩ = poly(M). This gives volume exponentially small in M, but no smaller, and is still close enough to the original polyhedron that a solution to the relaxed problem can be rounded to a solution of the original.
Rounding to a vertex: if a point y is ε-optimal, for a sufficiently small ε chosen carefully to be polynomial in the description of the input, then rounding to the nearest x with M bits recovers the optimal vertex.
Complexity of Linear Programming 13/27
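The rounding step can be illustrated with Python's exact rational arithmetic: snapping each coordinate of a near-optimal point to the nearest rational with bounded denominator (a toy stand-in for "the nearest x with M bits"; the function name and bound are mine).

```python
from fractions import Fraction

def round_to_low_bit_rational(x, max_den=2**10):
    """Toy version of the vertex-rounding step: snap each coordinate
    of a near-optimal point to the nearest rational with denominator
    at most max_den, recovering the exact vertex coordinates."""
    return [Fraction(xi).limit_denominator(max_den) for xi in x]
```

For example, an ε-optimal point like (0.3333338, 0.49999991) rounds back to the exact vertex (1/3, 1/2), because no other rational with small denominator is nearly as close.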

54 Complexity of Linear Programming 14/27
Theorem (Polynomial Solvability of Implicit LP)
Consider a family Π of linear programming problems I = (A, b, c) admitting the following operations in polynomial time (in |I| and n):
A separation oracle for the polyhedron Ax ≤ b
Explicit access to c
Moreover, assume that ⟨a_ij⟩, ⟨b_i⟩, ⟨c_j⟩ are at most poly(|I|, n). Then there is a polynomial-time algorithm for Π (both primal and dual).
Informal Proof Sketch (Primal)
The separation oracle and first-order oracle are given.
Rounding to a vertex works exactly as in the explicit case: every vertex v still has polynomial bit complexity M.
Bounding ellipsoid: it is still true that we get a bounding ellipsoid of volume exponential in |I| and n.
However, there is no lower bound on the volume of Ax ≤ b, and we can't relax to Ax ≤ b + ε as in the explicit case. It turns out this is still OK, but takes a lot of work.
For the dual, we need the equivalence of separation and optimization (HW?).

61 Outline
1 Recapping the Ellipsoid Method
2 Complexity of Convex Optimization
3 Complexity of Linear Programming
4 Equivalence of Separation and Optimization

62 Equivalence of Separation and Optimization 15/27
Separation and Optimization
One interpretation of the previous theorem is that optimizing linear functions over a polytope of polynomial bit complexity reduces to implementing a separation oracle.
As it turns out, the two tasks are polynomial-time equivalent.
Let's formalize the two problems, parametrized by a polytope P.
Linear Optimization Problem. Input: a linear objective c ∈ R^n. Output: argmax_{x∈P} c·x.
Separation Problem. Input: y ∈ R^n. Output: decide that y ∈ P, or else find h ∈ R^n s.t. h·x < h·y for all x ∈ P.

64 Recall: Minimum Cost Spanning Tree Equivalence of Separation and Optimization 16/27
Given a connected undirected graph G = (V, E), and costs c_e on edges e, find a minimum-cost spanning tree of G.
Spanning Tree Polytope
Σ_{e ∈ E(X)} x_e ≤ |X| − 1, for X ⊆ V
Σ_{e ∈ E} x_e = n − 1
x_e ≥ 0, for e ∈ E
(Here E(X) denotes the edges with both endpoints in X.)
Optimization: find the minimum/maximum weight spanning tree.
Separation: find X ⊆ V with Σ_{e ∈ E(X)} x_e > |X| − 1, if one exists; i.e., when edge weights are x, find a dense subgraph.
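For intuition, the separation problem above can be solved by brute force on small graphs: enumerate subsets X and check each subtour constraint. This sketch is exponential in |V| and purely illustrative; the polynomial-time oracle instead reduces to minimum-cut computations.

```python
from itertools import combinations

def st_polytope_violation(vertices, edges, x, tol=1e-9):
    """Brute-force separation for the subtour constraints of the
    spanning tree polytope: find X with sum_{e in E(X)} x_e > |X| - 1.
    edges: list of frozensets {u, v}; x: dict edge -> value."""
    for k in range(2, len(vertices) + 1):
        for X in combinations(vertices, k):
            Xs = set(X)
            total = sum(x[e] for e in edges if e <= Xs)  # e inside X
            if total > k - 1 + tol:
                return Xs        # violated subset: a 'dense subgraph'
    return None                  # all subtour constraints satisfied
```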

67 Equivalence of Separation and Optimization 17/27
Theorem (Equivalence of Separation and Optimization for Polytopes)
Consider a family P of polytopes P = {x : Ax ≤ b} described implicitly using ⟨P⟩ bits, and satisfying ⟨a_ij⟩, ⟨b_i⟩ ≤ poly(⟨P⟩, n). Then the separation problem is solvable in poly(⟨P⟩, n, ⟨y⟩) time for P ∈ P if and only if the linear optimization problem is solvable in poly(⟨P⟩, n, ⟨c⟩) time.
Colloquially, we say such polytope families are solvable.
E.g. spanning tree polytopes, represented by graphs, are solvable.
We already sketched the proof of the forward direction: separation ⇒ optimization.
For the other direction, we need polars.

71 Recall: Polar Duality of Convex Sets
A way of representing all halfspaces containing a convex set.
Polar
Let S ⊆ R^n be a closed convex set containing the origin. The polar of S is defined as follows:
S° = {y : x·y ≤ 1 for all x ∈ S}
Note: every halfspace a·x ≤ b with b > 0 can be written as a normalized inequality y·x ≤ 1, by dividing by b. So S° can be thought of as the set of normalized representations of halfspaces containing S.
Equivalence of Separation and Optimization 18/27

72 Equivalence of Separation and Optimization 19/27
Properties of the Polar
1 If S is bounded and 0 ∈ interior(S), then the same holds for S°.
2 S°° = S, where
S°° = {x : y·x ≤ 1 for all y ∈ S°}
S° = {y : x·y ≤ 1 for all x ∈ S}

73 Equivalence of Separation and Optimization 20/27
Polarity of Polytopes
Given a polytope P represented as Ax ≤ 1 (with 0 in its interior), the polar P° is the convex hull of the rows of A. Facets of P correspond to vertices of P°. Dually, vertices of P correspond to facets of P°.
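A concrete instance of this duality: the square S = {x : |x_i| ≤ 1} has facet description Ax ≤ 1 with rows ±e_i, so its polar is the convex hull of ±e_i, the cross-polytope {y : Σ_i |y_i| ≤ 1}. A tiny membership check (function name is mine):

```python
import numpy as np

def in_polar_of_square(y, tol=1e-9):
    """Membership in the polar of the square S = {x : |x_i| <= 1}.
    max_{x in S} x.y is attained at a vertex of S (sign pattern of y),
    and equals sum_i |y_i|; so y is in the polar iff that sum is <= 1."""
    return float(np.abs(y).sum()) <= 1 + tol
```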

74 Proof Outline: Optimization ⇒ Separation Equivalence of Separation and Optimization 21/27

77 S°° = {x : y·x ≤ 1 for all y ∈ S°}    S° = {y : x·y ≤ 1 for all x ∈ S}
Lemma
Separation over S° reduces in constant time to optimization over S, and vice versa since S°° = S.
Proof
We are given a vector x, and must check whether x ∈ S°, and if not output a separating hyperplane.
x ∈ S° iff y·x ≤ 1 for all y ∈ S; equivalently, iff max_{y ∈ S} y·x ≤ 1.
If we find y ∈ S s.t. y·x > 1, then y is the separating hyperplane: y·z ≤ 1 < y·x for every z ∈ S°.
Equivalence of Separation and Optimization 22/27
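The reduction in the lemma can be sketched directly in code (illustrative Python of my own; the optimization oracle over S is implemented by brute force over a vertex list rather than as a true polynomial-time oracle):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def optimize_over_S(c, S_vertices):
    """Optimization oracle for S = conv(S_vertices): argmax of c.y over y in S."""
    return max(S_vertices, key=lambda y: dot(c, y))

def separate_from_polar(x, S_vertices, tol=1e-9):
    """Separation over S deg via one optimization call over S:
    x in S deg iff max_{y in S} y.x <= 1; a maximizer y with y.x > 1
    is itself a separating hyperplane (y.z <= 1 < y.x for all z in S deg)."""
    y = optimize_over_S(x, S_vertices)
    if dot(y, x) <= 1 + tol:
        return None      # x lies in S deg
    return y             # normal of a separating hyperplane

# S = l1 ball conv{+-e1, +-e2}; then S deg is the square [-1, 1]^2.
S_vertices = [(1, 0), (-1, 0), (0, 1), (0, -1)]
print(separate_from_polar((0.9, -0.9), S_vertices))   # None: inside the square
print(separate_from_polar((1.5, 0.0), S_vertices))    # (1, 0): 1 * 1.5 > 1
```

Note that the reduction makes exactly one oracle call, matching the "constant time" claim in the lemma.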

83 Optimization ⇒ Separation Equivalence of Separation and Optimization 23/27

84 Beyond Polytopes
Essentially everything we proved about the equivalence of separation and optimization for polytopes extends (approximately) to arbitrary convex sets.
Problems are parametrized by P, a closed convex set.
Weak Optimization Problem
Input: a linear objective c ∈ R^n.
Output: x ∈ P + ɛ with c·x ≥ max_{x' ∈ P} c·x' − ɛ.
Weak Separation Problem
Input: y ∈ R^n.
Output: decide that y ∈ P + ɛ, or else find h ∈ R^n with ‖h‖ = 1 s.t. h·x < h·y + ɛ for all x ∈ P.
I could have equivalently stated the weak optimization problem for convex objective functions instead of linear ones.
Equivalence of Separation and Optimization 24/27

87 Equivalence of Separation and Optimization 25/27
Theorem (Equivalence of Separation and Optimization for Convex Sets)
Consider a family P of convex sets described implicitly using |P| bits. Then the weak separation problem is solvable in poly(|P|, n, |y|) time for P ∈ P if and only if the weak optimization problem is solvable in poly(|P|, n, |c|) time.
The approximation in this statement is necessary, since we can't solve convex optimization problems exactly.
Weak separation suffices for the ellipsoid method, which is only approximately optimal anyway.
By polarity, weak optimization is equivalent to weak separation.
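To make the role of the separation oracle concrete, here is a bare central-cut ellipsoid sketch for the feasibility problem (illustrative Python I am adding, with an exact oracle, a hypothetical tolerance and iteration cap, and none of the weak-oracle machinery the theorem actually needs):

```python
import numpy as np

def ellipsoid_feasibility(oracle, n, R=2.0, max_iter=200):
    """Central-cut ellipsoid method. oracle(x) returns None if x is feasible,
    else a row a with a.x > b for some violated constraint a.z <= b.
    Start from the ball of radius R around the origin."""
    x = np.zeros(n)
    P = (R ** 2) * np.eye(n)           # E = {z : (z - x)^T P^-1 (z - x) <= 1}
    for _ in range(max_iter):
        a = oracle(x)
        if a is None:
            return x                   # center is a feasible point
        a = np.asarray(a, dtype=float)
        g = a / np.sqrt(a @ P @ a)     # normalize the cut in the ellipsoid metric
        x = x - (P @ g) / (n + 1)      # shift center into the kept halfspace
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(P @ g, P @ g))
    return None                        # gave up: K may be empty or too small

# K = {x : x1 + x2 <= 1, x1 >= 0.2, x2 >= 0.2}, written as Az <= b
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, -0.2, -0.2])

def oracle(x):
    for a_i, b_i in zip(A, b):
        if a_i @ x > b_i + 1e-9:
            return a_i
    return None

x = ellipsoid_feasibility(oracle, n=2)
print(x is not None and oracle(x) is None)   # True: a point of K was found
```

Since K here is full-dimensional (a triangle of area 0.18 inside the starting ball), the volume-shrinkage argument from the recap guarantees a feasible center within a few dozen iterations.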

88 Implication: Constructive Carathéodory Equivalence of Separation and Optimization 26/27

89 Implication: Solvability is closed under intersection Equivalence of Separation and Optimization 27/27
