

$\tau = 0$, $v = 0$, $\|u\|_2 \le \mu$. The first two conditions give $u = x_0$, $\mu = t_0$. The third condition implies $\|x_0\|_2 \le t_0$. In this case $(x_0, t_0)$ is in the second-order cone, so it is its own projection.

$\|u\|_2 = \mu > 0$, $\|v\|_2 = \tau > 0$, $\tau u = \mu v$. We can express $v$ as $v = (\tau/\mu)u$. From $x_0 = u + v$, $x_0 = (1 + \tau/\mu)u$, $\mu = \|u\|_2$, and therefore $\mu + \tau = \|x_0\|_2$. Also, $t_0 = \mu - \tau$. Solving for $\mu$ and $\tau$ gives $\mu = (1/2)(t_0 + \|x_0\|_2)$, $\tau = (1/2)(-t_0 + \|x_0\|_2)$. $\tau$ is only positive if $t_0 < \|x_0\|_2$. We obtain

$$u = \frac{t_0 + \|x_0\|_2}{2\|x_0\|_2}\, x_0, \qquad \mu = \frac{\|x_0\|_2 + t_0}{2}, \qquad v = \frac{\|x_0\|_2 - t_0}{2\|x_0\|_2}\, x_0, \qquad \tau = \frac{\|x_0\|_2 - t_0}{2}.$$

The Euclidean projection of a point on a convex set yields a simple separating hyperplane

$$(P_C(x_0) - x_0)^T \left(x - (1/2)(x_0 + P_C(x_0))\right) = 0.$$

Find a counterexample that shows that this construction does not work for general norms.

Solution. We use the $\ell_1$-norm, with

$$C = \{x \in \mathbf{R}^2 \mid x_1 + x_2/2 \le 1\}, \qquad x_0 = (1, 1).$$

The projection is $P_C(x_0) = (1/2, 1)$, so the hyperplane as above, $(P_C(x_0) - x_0)^T (x - (1/2)(x_0 + P_C(x_0))) = 0$, simplifies to $x_1 = 3/4$. This does not separate $(1, 1)$ from $C$.

8.5 [HUL93, volume 1, page 154] Depth function and signed distance to boundary. Let $C \subseteq \mathbf{R}^n$ be a nonempty convex set, and let $\mathrm{dist}(x, C)$ be the distance of $x$ to $C$ in some norm. We already know that $\mathrm{dist}(x, C)$ is a convex function of $x$.

(a) Show that the depth function,

$$\mathrm{depth}(x, C) = \mathrm{dist}(x, \mathbf{R}^n \setminus C),$$

is concave for $x \in C$.

Solution. We will show that the depth function can be expressed as

$$\mathrm{depth}(x, C) = \inf_{\|y\|_* = 1} \left( S_C(y) - y^T x \right),$$

where $S_C$ is the support function of $C$. This proves that the depth function is concave because it is the infimum of a family of affine functions of $x$.

We first prove the following result. Suppose $a \ne 0$. The distance of a point $x_0$, in the norm $\|\cdot\|$, to the hyperplane defined by $a^T x = b$, is given by $|a^T x_0 - b| / \|a\|_*$. We can show this by applying Lagrange duality to the problem

minimize $\|x - x_0\|$
subject to $a^T x = b$.

The dual function is

$$g(\nu) = \inf_x \left( \|x - x_0\| + \nu(a^T x - b) \right) = \inf_x \left( \|x - x_0\| + \nu a^T(x - x_0) + \nu(a^T x_0 - b) \right) = \begin{cases} \nu(a^T x_0 - b) & \|\nu a\|_* \le 1 \\ -\infty & \text{otherwise,} \end{cases}$$

so we obtain the dual problem

maximize $\nu(a^T x_0 - b)$
subject to $|\nu| \le 1/\|a\|_*$.

If $a^T x_0 \ge b$, the solution is $\nu = 1/\|a\|_*$. If $a^T x_0 \le b$, the solution is $\nu = -1/\|a\|_*$. In both cases the optimal value is $|a^T x_0 - b| / \|a\|_*$.

We now give a geometric interpretation and proof of the expression for the depth function. Let $\mathcal{H}$ be the set of all halfspaces defined by supporting hyperplanes of $C$, and containing $C$. We can describe any $H \in \mathcal{H}$ by a linear inequality $x^T y \le S_C(y)$, where $y$ is a nonzero vector in $\mathrm{dom}\, S_C$. Let $H \in \mathcal{H}$. The function $\mathrm{dist}(x, \mathbf{R}^n \setminus H)$ is affine for $x \in C$:

$$\mathrm{dist}(x, \mathbf{R}^n \setminus H) = \frac{S_C(y) - x^T y}{\|y\|_*}.$$

The intersection of all $H$ in $\mathcal{H}$ is equal to $\mathrm{cl}\, C$ and therefore

$$\mathrm{depth}(x, C) = \inf_{H \in \mathcal{H}} \mathrm{dist}(x, \mathbf{R}^n \setminus H) = \inf_{y \ne 0} \frac{S_C(y) - x^T y}{\|y\|_*} = \inf_{\|y\|_* = 1} \left( S_C(y) - x^T y \right).$$

(b) The signed distance to the boundary of $C$ is defined as

$$s(x) = \begin{cases} \mathrm{dist}(x, C) & x \notin C \\ -\mathrm{depth}(x, C) & x \in C. \end{cases}$$

Thus, $s(x)$ is positive outside $C$, zero on its boundary, and negative on its interior. Show that $s$ is a convex function.

Solution. We will show that if we extend the expression in part (a) to points $x \notin C$, we obtain the signed distance:

$$s(x) = \sup_{\|y\|_* = 1} \left( y^T x - S_C(y) \right).$$

In part (a) we have shown that this is true for $x \in C$. If $x \in \mathrm{bd}\, C$, then $y^T x \le S_C(y)$ for all $y$ with $\|y\|_* = 1$, with equality if $y$ is the normalized normal vector of a supporting hyperplane at $x$, so the expression for $s$ holds. If $x \notin \mathrm{cl}\, C$, then for all $y$ with $\|y\|_* = 1$, $y^T x - S_C(y)$ is the distance of $x$ to a hyperplane supporting $C$ (as proved in part (a)), and therefore

$$y^T x - S_C(y) \le \mathrm{dist}(x, C).$$

Equality holds if we take $y$ equal to the optimal solution of

maximize $y^T x - S_C(y)$
subject to $\|y\|_* \le 1$,

with variable $y$. As we have seen, the optimal value of this problem is equal to $\mathrm{dist}(x, C)$.
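As a quick sanity check of the $\ell_1$ counterexample above (this check is not part of the original solution, and assumes NumPy and SciPy are available), the following sketch computes the $\ell_1$ projection of $x_0 = (1,1)$ onto $C$ by linear programming and verifies that the resulting hyperplane $x_1 = 3/4$ fails to separate $x_0$ from $C$.

```python
# Sanity check of the l1-norm counterexample (not from the original text).
# The l1 projection of x0 onto C = {x : x1 + x2/2 <= 1} is computed as an LP
# in the variables (x1, x2, t1, t2):  minimize t1 + t2  s.t.  |x - x0| <= t,
# x1 + x2/2 <= 1.
import numpy as np
from scipy.optimize import linprog

x0 = np.array([1.0, 1.0])
c = np.array([0.0, 0.0, 1.0, 1.0])
A_ub = np.array([
    [ 1.0,  0.0, -1.0,  0.0],   #  x1 - t1 <= x0_1
    [-1.0,  0.0, -1.0,  0.0],   # -x1 - t1 <= -x0_1
    [ 0.0,  1.0,  0.0, -1.0],   #  x2 - t2 <= x0_2
    [ 0.0, -1.0,  0.0, -1.0],   # -x2 - t2 <= -x0_2
    [ 1.0,  0.5,  0.0,  0.0],   #  x1 + x2/2 <= 1   (membership in C)
])
b_ub = np.array([x0[0], -x0[0], x0[1], -x0[1], 1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None), (0, None)])
proj = res.x[:2]
print("l1 projection:", proj)        # expected (0.5, 1.0)
print("l1 distance:  ", res.fun)     # expected 0.5

# The hyperplane (P - x0)^T (x - (x0 + P)/2) = 0 reduces to x1 = 3/4.
# The point z = (1, 0) lies in C but on the same side of the hyperplane as x0,
# so the hyperplane does not separate x0 from C.
z = np.array([1.0, 0.0])
print("z in C:", z[0] + z[1] / 2 <= 1)                       # True
print("same side as x0:", (z[0] > 0.75) and (x0[0] > 0.75))  # True
```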

The geometric interpretation is as follows. As in part (a), we let $\mathcal{H}$ be the set of all halfspaces defined by supporting hyperplanes of $C$, and containing $C$. From part (a), we already know that for $x \in C$,

$$-\mathrm{depth}(x, C) = \max_{H \in \mathcal{H}} s(x, H),$$

where $s(x, H)$ denotes the signed distance from $x$ to $\mathbf{R}^n \setminus H$. We now have to show that for $x$ outside of $C$,

$$\mathrm{dist}(x, C) = \sup_{H \in \mathcal{H}} s(x, H).$$

By construction, we know that for all $G \in \mathcal{H}$, we must have $\mathrm{dist}(x, C) \ge s(x, G)$. Now, let $B$ be a ball of radius $\mathrm{dist}(x, C)$ centered at $x$. Because both $B$ and $C$ are convex with $B$ closed, there is a separating hyperplane, and hence a halfspace $\bar{H} \in \mathcal{H}$ with $s(x, \bar{H}) \ge \mathrm{dist}(x, C)$. Hence

$$\mathrm{dist}(x, C) \le \sup_{H \in \mathcal{H}} s(x, H),$$

and the desired result follows.

Distance between sets

8.6 Let $C$, $D$ be convex sets.

(a) Show that $\mathrm{dist}(C, x + D)$ is a convex function of $x$.
(b) Show that $\mathrm{dist}(tC, x + tD)$ is a convex function of $(x, t)$ for $t > 0$.

Solution. To prove the first, we note that

$$\mathrm{dist}(C, x + D) = \inf_{u, v} \left( I_C(u) + I_D(v) + \|u - (x + v)\| \right).$$

The righthand side is convex in $(u, v, x)$. Therefore $\mathrm{dist}(C, x + D)$ is convex by the minimization rule.

To prove the second, we note that $\mathrm{dist}(tC, x + tD) = t\, \mathrm{dist}(C, x/t + D)$. The righthand side is the perspective of the convex function from part (a).

8.7 Separation of ellipsoids. Let $\mathcal{E}_1$ and $\mathcal{E}_2$ be two ellipsoids defined as

$$\mathcal{E}_1 = \{x \mid (x - x_1)^T P_1^{-1}(x - x_1) \le 1\}, \qquad \mathcal{E}_2 = \{x \mid (x - x_2)^T P_2^{-1}(x - x_2) \le 1\},$$

where $P_1, P_2 \in \mathbf{S}^n_{++}$. Show that $\mathcal{E}_1 \cap \mathcal{E}_2 = \emptyset$ if and only if there exists an $a \in \mathbf{R}^n$ with

$$\|P_2^{1/2} a\|_2 + \|P_1^{1/2} a\|_2 < a^T(x_1 - x_2).$$

Solution. The two sets are closed and bounded, so the intersection is empty if and only if there is an $a \ne 0$ satisfying

$$\inf_{x \in \mathcal{E}_1} a^T x > \sup_{x \in \mathcal{E}_2} a^T x.$$

The infimum is given by the optimal value of

minimize $a^T x$
subject to $(x - x_1)^T P_1^{-1}(x - x_1) \le 1$.

A change of variables $y = P_1^{-1/2}(x - x_1)$ yields

minimize $a^T x_1 + a^T P_1^{1/2} y$
subject to $y^T y \le 1$,

which has optimal value $a^T x_1 - \|P_1^{1/2} a\|_2$. Similarly, $\sup_{x \in \mathcal{E}_2} a^T x = a^T x_2 + \|P_2^{1/2} a\|_2$. The condition therefore reduces to

$$a^T x_1 - \|P_1^{1/2} a\|_2 > a^T x_2 + \|P_2^{1/2} a\|_2.$$

We can also derive this result directly from duality, without using the separating hyperplane theorem. The distance between the two sets is the optimal value of the problem

minimize $\|x - y\|_2$
subject to $\|P_1^{-1/2}(x - x_1)\|_2 \le 1$
$\|P_2^{-1/2}(y - x_2)\|_2 \le 1$,

with variables $x$ and $y$. The optimal value is positive if and only if the intersection of the ellipsoids is empty, and zero otherwise. To derive a dual, we first reformulate the problem as

minimize $\|u\|_2$
subject to $\|v\|_2 \le 1$, $\|w\|_2 \le 1$
$P_1^{1/2} v = x - x_1$
$P_2^{1/2} w = y - x_2$
$u = x - y$,

with new variables $u$, $v$, $w$. The Lagrangian is

$$L(x, y, u, v, w, \lambda_1, \lambda_2, z_1, z_2, z) = \|u\|_2 + \lambda_1(\|v\|_2 - 1) + \lambda_2(\|w\|_2 - 1) + z_1^T(P_1^{1/2} v - x + x_1) + z_2^T(P_2^{1/2} w - y + x_2) + z^T(u - x + y)$$

$$= -\lambda_1 - \lambda_2 + z_1^T x_1 + z_2^T x_2 - (z + z_1)^T x + (z - z_2)^T y + \|u\|_2 + z^T u + \lambda_1\|v\|_2 + z_1^T P_1^{1/2} v + \lambda_2\|w\|_2 + z_2^T P_2^{1/2} w.$$

The minimum over $x$ is unbounded below unless $z_1 = -z$. The minimum over $y$ is unbounded below unless $z_2 = z$. Eliminating $z_1$ and $z_2$ we can therefore write the dual function as

$$g(\lambda_1, \lambda_2, z) = -\lambda_1 - \lambda_2 + z^T(x_2 - x_1) + \inf_u \left( \|u\|_2 + z^T u \right) + \inf_v \left( \lambda_1\|v\|_2 - z^T P_1^{1/2} v \right) + \inf_w \left( \lambda_2\|w\|_2 + z^T P_2^{1/2} w \right).$$

We have

$$\inf_u \left( \|u\|_2 + z^T u \right) = \begin{cases} 0 & \|z\|_2 \le 1 \\ -\infty & \text{otherwise.} \end{cases}$$

This follows from the Cauchy-Schwarz inequality: if $\|z\|_2 \le 1$, then $z^T u \ge -\|z\|_2\|u\|_2 \ge -\|u\|_2$, with equality if $u = 0$. If $\|z\|_2 > 1$, we can take $u = -tz$ with $t \to \infty$ to show that $\|u\|_2 + z^T u = t\|z\|_2(1 - \|z\|_2)$ is unbounded below. We also have

$$\inf_v \left( \lambda_1\|v\|_2 - z^T P_1^{1/2} v \right) = \begin{cases} 0 & \|P_1^{1/2} z\|_2 \le \lambda_1 \\ -\infty & \text{otherwise.} \end{cases}$$

This can be shown by distinguishing two cases: if $\lambda_1 = 0$ then the infimum is zero if $P_1^{1/2} z = 0$ and $-\infty$ otherwise. If $\lambda_1 < 0$ the minimum is $-\infty$. If $\lambda_1 > 0$, we have

$$\inf_v \left( \lambda_1\|v\|_2 - z^T P_1^{1/2} v \right) = \lambda_1 \inf_v \left( \|v\|_2 - (1/\lambda_1) z^T P_1^{1/2} v \right) = \begin{cases} 0 & \|P_1^{1/2} z\|_2 \le \lambda_1 \\ -\infty & \text{otherwise.} \end{cases}$$

Similarly,

$$\inf_w \left( \lambda_2\|w\|_2 + z^T P_2^{1/2} w \right) = \begin{cases} 0 & \|P_2^{1/2} z\|_2 \le \lambda_2 \\ -\infty & \text{otherwise.} \end{cases}$$

Putting this all together, we obtain the dual problem

maximize $-\lambda_1 - \lambda_2 + z^T(x_2 - x_1)$
subject to $\|z\|_2 \le 1$, $\|P_1^{1/2} z\|_2 \le \lambda_1$, $\|P_2^{1/2} z\|_2 \le \lambda_2$,

which is equivalent to

maximize $-\|P_1^{1/2} z\|_2 - \|P_2^{1/2} z\|_2 + z^T(x_2 - x_1)$
subject to $\|z\|_2 \le 1$.

The intersection of the ellipsoids is empty if and only if the optimal value is positive, i.e., there exists a $z$ with

$$-\|P_1^{1/2} z\|_2 - \|P_2^{1/2} z\|_2 + z^T(x_2 - x_1) > 0.$$

Setting $a = -z$ gives the desired inequality.

Intersection and containment of polyhedra. Let $\mathcal{P}_1$ and $\mathcal{P}_2$ be two polyhedra defined as

$$\mathcal{P}_1 = \{x \mid Ax \preceq b\}, \qquad \mathcal{P}_2 = \{x \mid Fx \preceq g\},$$

with $A \in \mathbf{R}^{m \times n}$, $b \in \mathbf{R}^m$, $F \in \mathbf{R}^{p \times n}$, $g \in \mathbf{R}^p$. Formulate each of the following problems as an LP feasibility problem, or a set of LP feasibility problems.

(a) Find a point in the intersection $\mathcal{P}_1 \cap \mathcal{P}_2$.
(b) Determine whether $\mathcal{P}_1 \subseteq \mathcal{P}_2$.

For each problem, derive a set of linear inequalities and equalities that forms a strong alternative, and give a geometric interpretation of the alternative. Repeat the question for two polyhedra defined as

$$\mathcal{P}_1 = \mathbf{conv}\{v_1, \ldots, v_K\}, \qquad \mathcal{P}_2 = \mathbf{conv}\{w_1, \ldots, w_L\}.$$

Solution. Inequality description.

(a) Solve

$$Ax \preceq b, \qquad Fx \preceq g.$$

The alternative is

$$A^T u + F^T v = 0, \qquad u \succeq 0, \quad v \succeq 0, \qquad b^T u + g^T v < 0.$$

Interpretation: if the sets do not intersect, then they can be separated by a hyperplane with normal vector $a = A^T u = -F^T v$. If $Ax \preceq b$ and $Fy \preceq g$,

$$a^T x = u^T A x \le u^T b < -v^T g \le -v^T F y = a^T y.$$
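The LP feasibility problem in part (a) and its strong alternative can be checked numerically. The sketch below is not part of the original solution; it assumes SciPy's linprog and uses made-up example data (two disjoint boxes). It looks for a point in $\mathcal{P}_1 \cap \mathcal{P}_2$ and, if the intersection is empty, computes a certificate $u, v \succeq 0$ with $A^T u + F^T v = 0$ and $b^T u + g^T v < 0$.

```python
# LP feasibility for P1 ∩ P2 and its strong alternative (not from the original
# text).  Example data: P1 = [0,1]^2 and P2 = [2,3] x [0,1] are disjoint boxes.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([1., 0., 1., 0.])            # P1:  0 <= x1 <= 1,  0 <= x2 <= 1
F = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
g = np.array([3., -2., 1., 0.])           # P2:  2 <= x1 <= 3,  0 <= x2 <= 1

# (a) Find a point with Ax <= b, Fx <= g (zero objective, pure feasibility).
res = linprog(np.zeros(2), A_ub=np.vstack([A, F]), b_ub=np.concatenate([b, g]),
              bounds=[(None, None)] * 2)
if res.status == 0:
    print("point in the intersection:", res.x)
else:
    # Strong alternative: u, v >= 0, A^T u + F^T v = 0, b^T u + g^T v <= -1
    # (the right-hand side -1 simply normalizes the strict inequality "< 0").
    m, p = A.shape[0], F.shape[0]
    cert = linprog(np.zeros(m + p),
                   A_ub=np.concatenate([b, g]).reshape(1, -1), b_ub=np.array([-1.0]),
                   A_eq=np.hstack([A.T, F.T]), b_eq=np.zeros(2),
                   bounds=[(0, None)] * (m + p))
    u, v = cert.x[:m], cert.x[m:]
    print("empty intersection; separating certificate found")
    print("A^T u + F^T v =", A.T @ u + F.T @ v)   # ~ 0
    print("b^T u + g^T v =", b @ u + g @ v)       # <= -1
```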

For $x = e_i$, we get the condition

$$\alpha = \frac{1 + \beta}{2(1 + n\beta)}.$$

The volume of the ellipsoid is proportional to

$$\gamma^n \det(I + \beta \mathbf{1}\mathbf{1}^T)^{-1} = \frac{\gamma^n}{1 + \beta n},$$

and its logarithm is

$$n \log\gamma - \log(1 + \beta n) = n \log\left(\alpha^2 n(1 + n\beta)\right) - \log(1 + \beta n) = n \log\frac{n(1 + \beta)^2}{4(1 + n\beta)} - \log(1 + \beta n) = n\log(n/4) + 2n\log(1 + \beta) - (n + 1)\log(1 + n\beta).$$

Setting the derivative with respect to $\beta$ equal to zero gives $\beta = 1$, and hence

$$\alpha = \frac{1}{n+1}, \qquad \beta = 1, \qquad \gamma = \frac{n}{1+n}.$$

We conclude that $\mathcal{E}_{\mathrm{lj}}$ is the solution set of the quadratic inequality

$$\left(x - \frac{1}{n+1}\mathbf{1}\right)^T (I + \mathbf{1}\mathbf{1}^T) \left(x - \frac{1}{n+1}\mathbf{1}\right) \le \frac{n}{1+n},$$

which simplifies to

$$x^T x + (1 - \mathbf{1}^T x)^2 \le 1.$$

The shrunk ellipsoid is the solution set of the quadratic inequality

$$\left(x - \frac{1}{n+1}\mathbf{1}\right)^T (I + \mathbf{1}\mathbf{1}^T) \left(x - \frac{1}{n+1}\mathbf{1}\right) \le \frac{1}{n(1+n)},$$

which simplifies to

$$x^T x + (1 - \mathbf{1}^T x)^2 \le \frac{1}{n}.$$

We verify that the shrunk ellipsoid lies in $C$ by maximizing the linear function $\mathbf{1}^T x$ and minimizing the linear functions $x_i$, $i = 1, \ldots, n$, subject to the quadratic inequality. The solution of

maximize $\mathbf{1}^T x$
subject to $x^T x + (1 - \mathbf{1}^T x)^2 \le 1/n$

is the point $(1/n)\mathbf{1}$. The solution of

minimize $x_i$
subject to $x^T x + (1 - \mathbf{1}^T x)^2 \le 1/n$

is the point $(1/n)(\mathbf{1} - e_i)$.

8.14 Efficiency of ellipsoidal inner approximation. Let $C$ be a polyhedron in $\mathbf{R}^n$ described as $C = \{x \mid Ax \preceq b\}$, and suppose that $\{x \mid Ax \prec b\}$ is nonempty.

(a) Show that the maximum volume ellipsoid enclosed in $C$, expanded by a factor $n$ about its center, is an ellipsoid that contains $C$.

(b) Show that if $C$ is symmetric about the origin, i.e., of the form $C = \{x \mid -\mathbf{1} \preceq Ax \preceq \mathbf{1}\}$, then expanding the maximum volume inscribed ellipsoid by a factor $\sqrt{n}$ gives an ellipsoid that contains $C$.

Solution.

(a) The ellipsoid $\mathcal{E} = \{Bu + d \mid \|u\|_2 \le 1\}$ is the maximum volume inscribed ellipsoid if $B$ and $d$ solve

minimize $\log\det B^{-1}$
subject to $\|Ba_i\|_2 \le b_i - a_i^T d$, $i = 1, \ldots, m$,

or in generalized inequality notation

minimize $\log\det B^{-1}$
subject to $(Ba_i, b_i - a_i^T d) \succeq_K 0$, $i = 1, \ldots, m$,

where $K$ is the second-order cone. The Lagrangian is

$$L(B, d, u, v) = \log\det B^{-1} - \sum_i u_i^T B a_i - v^T(b - Ad).$$

Minimizing over $B$ and $d$ gives

$$B^{-1} = -\frac{1}{2}\sum_i \left(a_i u_i^T + u_i a_i^T\right), \qquad A^T v = 0.$$

The dual problem is

maximize $\log\det\left(-\frac{1}{2}\sum_{i=1}^m (a_i u_i^T + u_i a_i^T)\right) - b^T v + n$
subject to $A^T v = 0$
$\|u_i\|_2 \le v_i$, $i = 1, \ldots, m$.

The optimality conditions are: primal and dual feasibility and

$$B^{-1} = -\frac{1}{2}\sum_i \left(a_i u_i^T + u_i a_i^T\right), \qquad u_i^T B a_i + v_i(b_i - a_i^T d) = 0, \quad i = 1, \ldots, m.$$

To simplify the notation we will assume that $B = I$, $d = 0$, so the optimality conditions reduce to

$$\|a_i\|_2 \le b_i, \quad i = 1, \ldots, m, \qquad A^T v = 0, \qquad \|u_i\|_2 \le v_i, \quad i = 1, \ldots, m,$$

and

$$I = -\frac{1}{2}\sum_i \left(a_i u_i^T + u_i a_i^T\right), \qquad u_i^T a_i + v_i b_i = 0, \quad i = 1, \ldots, m. \qquad (8.14.\mathrm{A})$$

From the Cauchy-Schwarz inequality, the last equality, combined with $\|a_i\|_2 \le b_i$ and $\|u_i\|_2 \le v_i$, implies that $u_i = 0$, $v_i = 0$ if $\|a_i\|_2 < b_i$, and $u_i = -(\|u_i\|_2/b_i) a_i$, $v_i = \|u_i\|_2$ if $\|a_i\|_2 = b_i$.

We need to show that $\|x\|_2 \le n$ if $Ax \preceq b$. The optimality conditions (8.14.A) give

$$x^T x = -\sum_i (u_i^T x)(a_i^T x) = \sum_i \frac{\|u_i\|_2}{\|a_i\|_2}(a_i^T x)^2 \le \sum_i \frac{\|u_i\|_2}{\|a_i\|_2} b_i^2$$

and

$$n = -\sum_i a_i^T u_i = b^T v.$$

Since $u_i = 0$, $v_i = 0$ if $\|a_i\|_2 < b_i$, the last sum further simplifies and we obtain

$$x^T x \le \sum_i \|u_i\|_2 b_i = b^T v = n.$$

(b) Let $\mathcal{E} = \{x \mid x^T Q^{-1} x \le 1\}$ be the maximum volume ellipsoid with center at the origin inscribed in $C$, where $Q \in \mathbf{S}^n_{++}$. We are asked to show that the ellipsoid $\sqrt{n}\,\mathcal{E} = \{x \mid x^T Q^{-1} x \le n\}$ contains $C$.

We first formulate this problem as a convex optimization problem. $x \in \mathcal{E}$ if $x = Q^{1/2} y$ for some $y$ with $\|y\|_2 \le 1$, so we have $\mathcal{E} \subseteq C$ if and only if for $i = 1, \ldots, p$,

$$\sup_{\|y\|_2 \le 1} a_i^T Q^{1/2} y = \|Q^{1/2} a_i\|_2 \le 1, \qquad \inf_{\|y\|_2 \le 1} a_i^T Q^{1/2} y = -\|Q^{1/2} a_i\|_2 \ge -1,$$

or in other words $a_i^T Q a_i = \|Q^{1/2} a_i\|_2^2 \le 1$. We find the maximum volume inscribed ellipsoid by solving

minimize $\log\det Q^{-1}$
subject to $a_i^T Q a_i \le 1$, $i = 1, \ldots, p$.    (8.14.B)

The variable is the matrix $Q \in \mathbf{S}^n$. The dual function is

$$g(\lambda) = \inf_{Q \succ 0} L(Q, \lambda) = \inf_{Q \succ 0}\left( \log\det Q^{-1} + \sum_{i=1}^p \lambda_i (a_i^T Q a_i - 1) \right).$$

Minimizing over $Q$ gives

$$Q^{-1} = \sum_{i=1}^p \lambda_i a_i a_i^T,$$

and hence

$$g(\lambda) = \begin{cases} \log\det\left(\sum_{i=1}^p \lambda_i a_i a_i^T\right) - \sum_{i=1}^p \lambda_i + n & \sum_{i=1}^p \lambda_i a_i a_i^T \succ 0 \\ -\infty & \text{otherwise.} \end{cases}$$

The resulting dual problem is

maximize $\log\det\left(\sum_{i=1}^p \lambda_i a_i a_i^T\right) - \sum_{i=1}^p \lambda_i + n$
subject to $\lambda \succeq 0$.

The KKT conditions are primal and dual feasibility ($Q \succ 0$, $a_i^T Q a_i \le 1$, $\lambda \succeq 0$), plus

$$Q^{-1} = \sum_{i=1}^p \lambda_i a_i a_i^T, \qquad \lambda_i(1 - a_i^T Q a_i) = 0, \quad i = 1, \ldots, p. \qquad (8.14.\mathrm{C})$$

The third condition (the complementary slackness condition) implies that $a_i^T Q a_i = 1$ if $\lambda_i > 0$. Note that Slater's condition for (8.14.B) holds ($a_i^T Q a_i < 1$ for $Q = \epsilon I$ and $\epsilon > 0$ small enough), so we have strong duality, and the KKT conditions are necessary and sufficient for optimality.
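A small computational illustration of exercise 8.14(a), not part of the original solution: assuming CVXPY with a solver that supports log_det (such as SCS) is available, the sketch below computes the maximum volume inscribed ellipsoid of the simplex $\{x \succeq 0,\ \mathbf{1}^T x \le 1\}$ in $\mathbf{R}^2$ and checks that expanding it by the factor $n = 2$ about its center reaches the vertices of the simplex.

```python
# Maximum volume inscribed ellipsoid of the simplex {x >= 0, x1 + x2 <= 1}
# and the factor-n expansion of exercise 8.14(a) (not from the original text;
# assumes CVXPY with a solver supporting log_det, e.g. SCS).
import cvxpy as cp
import numpy as np

A = np.array([[-1., 0.], [0., -1.], [1., 1.]])   # rows a_i^T of  a_i^T x <= b_i
b = np.array([0., 0., 1.])
n = 2

B = cp.Variable((n, n), symmetric=True)          # E = {Bu + d : ||u||_2 <= 1}
d = cp.Variable(n)
constraints = [cp.norm(B @ A[i], 2) + A[i] @ d <= b[i] for i in range(len(b))]
cp.Problem(cp.Maximize(cp.log_det(B)), constraints).solve()

Bv, dv = B.value, d.value
print("center d =", dv)                          # expected near the centroid (1/3, 1/3)

# Each vertex x of the simplex should satisfy ||B^{-1}(x - d)||_2 <= n,
# i.e. it lies in the ellipsoid expanded by the factor n about its center.
for vertex in np.array([[0., 0.], [1., 0.], [0., 1.]]):
    r = np.linalg.norm(np.linalg.solve(Bv, vertex - dv))
    print("vertex", vertex, "expansion factor needed:", r)   # expected ~ 2 = n
```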

so we have to choose $c$ and $\gamma$ such that

$$\frac{c}{\gamma - c^T x^\star} = -\frac{1}{q} A^T d,$$

where $d_i = 1/(b_i - a_i^T x^\star)$. We can choose $c = -A^T d$, and for $q$ any integer satisfying

$$q \ge \max\{c^T x \mid Ax \preceq b\} - c^T x^\star,$$

and $\gamma = q + c^T x^\star$.

8.19 Let $x_{\mathrm{ac}}$ be the analytic center of a set of linear inequalities

$$a_i^T x \le b_i, \quad i = 1, \ldots, m,$$

and define $H$ as the Hessian of the logarithmic barrier function at $x_{\mathrm{ac}}$:

$$H = \sum_{i=1}^m \frac{1}{(b_i - a_i^T x_{\mathrm{ac}})^2}\, a_i a_i^T.$$

Show that the $k$th inequality is redundant (i.e., it can be deleted without changing the feasible set) if

$$b_k - a_k^T x_{\mathrm{ac}} \ge m \left( a_k^T H^{-1} a_k \right)^{1/2}.$$

Solution. We have an enclosing ellipsoid defined by

$$(x - x_{\mathrm{ac}})^T H (x - x_{\mathrm{ac}}) \le m(m-1).$$

The maximum of $a_k^T x$ over the enclosing ellipsoid is

$$a_k^T x_{\mathrm{ac}} + \left( m(m-1)\, a_k^T H^{-1} a_k \right)^{1/2},$$

so if

$$a_k^T x_{\mathrm{ac}} + \left( m(m-1)\, a_k^T H^{-1} a_k \right)^{1/2} \le b_k,$$

the inequality is redundant.

8.20 Ellipsoidal approximation from analytic center of linear matrix inequality. Let $C$ be the solution set of the LMI

$$x_1 A_1 + x_2 A_2 + \cdots + x_n A_n \preceq B,$$

where $A_i, B \in \mathbf{S}^m$, and let $x_{\mathrm{ac}}$ be its analytic center. Show that

$$\mathcal{E}_{\mathrm{inner}} \subseteq C \subseteq \mathcal{E}_{\mathrm{outer}},$$

where

$$\mathcal{E}_{\mathrm{inner}} = \{x \mid (x - x_{\mathrm{ac}})^T H (x - x_{\mathrm{ac}}) \le 1\}, \qquad \mathcal{E}_{\mathrm{outer}} = \{x \mid (x - x_{\mathrm{ac}})^T H (x - x_{\mathrm{ac}}) \le m(m-1)\},$$

and $H$ is the Hessian of the logarithmic barrier function

$$\log\det\left(B - x_1 A_1 - x_2 A_2 - \cdots - x_n A_n\right)^{-1}$$

evaluated at $x_{\mathrm{ac}}$.

Solution. Define $F(x) = B - \sum_i x_i A_i$ and $F_{\mathrm{ac}} = F(x_{\mathrm{ac}})$. The Hessian is given by

$$H_{ij} = \mathbf{tr}\left(F_{\mathrm{ac}}^{-1} A_i F_{\mathrm{ac}}^{-1} A_j\right),$$

so we have

$$\begin{aligned} (x - x_{\mathrm{ac}})^T H (x - x_{\mathrm{ac}}) &= \sum_{i,j} (x_i - x_{\mathrm{ac},i})(x_j - x_{\mathrm{ac},j})\, \mathbf{tr}\left(F_{\mathrm{ac}}^{-1} A_i F_{\mathrm{ac}}^{-1} A_j\right) \\ &= \mathbf{tr}\left( F_{\mathrm{ac}}^{-1}(F(x) - F_{\mathrm{ac}})\, F_{\mathrm{ac}}^{-1}(F(x) - F_{\mathrm{ac}}) \right) \\ &= \mathbf{tr}\left( \left( F_{\mathrm{ac}}^{-1/2}(F(x) - F_{\mathrm{ac}}) F_{\mathrm{ac}}^{-1/2} \right)^2 \right). \end{aligned}$$

We first consider the inner ellipsoid. Suppose $x \in \mathcal{E}_{\mathrm{inner}}$, i.e.,

$$\mathbf{tr}\left( \left( F_{\mathrm{ac}}^{-1/2}(F(x) - F_{\mathrm{ac}}) F_{\mathrm{ac}}^{-1/2} \right)^2 \right) = \left\| F_{\mathrm{ac}}^{-1/2} F(x) F_{\mathrm{ac}}^{-1/2} - I \right\|_F^2 \le 1.$$

This implies that

$$-1 \le \lambda_i\left(F_{\mathrm{ac}}^{-1/2} F(x) F_{\mathrm{ac}}^{-1/2}\right) - 1 \le 1,$$

i.e., $0 \le \lambda_i(F_{\mathrm{ac}}^{-1/2} F(x) F_{\mathrm{ac}}^{-1/2}) \le 2$ for $i = 1, \ldots, m$. In particular, $F(x) \succeq 0$, i.e., $x \in C$.

To prove that $C \subseteq \mathcal{E}_{\mathrm{outer}}$, we first note that the gradient of the logarithmic barrier function vanishes at $x_{\mathrm{ac}}$, and therefore

$$\mathbf{tr}\left(F_{\mathrm{ac}}^{-1} A_i\right) = 0, \quad i = 1, \ldots, n,$$

and therefore

$$\mathbf{tr}\left( F_{\mathrm{ac}}^{-1}(F(x) - F_{\mathrm{ac}}) \right) = 0, \qquad \text{i.e.,} \qquad \mathbf{tr}\left( F_{\mathrm{ac}}^{-1} F(x) \right) = m.$$

Now assume $x \in C$. Then

$$\begin{aligned} (x - x_{\mathrm{ac}})^T H (x - x_{\mathrm{ac}}) &= \mathbf{tr}\left( F_{\mathrm{ac}}^{-1}(F(x) - F_{\mathrm{ac}})\, F_{\mathrm{ac}}^{-1}(F(x) - F_{\mathrm{ac}}) \right) \\ &= \mathbf{tr}\left( F_{\mathrm{ac}}^{-1} F(x) F_{\mathrm{ac}}^{-1} F(x) \right) - 2\, \mathbf{tr}\left( F_{\mathrm{ac}}^{-1} F(x) \right) + \mathbf{tr}\left( F_{\mathrm{ac}}^{-1} F_{\mathrm{ac}} F_{\mathrm{ac}}^{-1} F_{\mathrm{ac}} \right) \\ &= \mathbf{tr}\left( F_{\mathrm{ac}}^{-1} F(x) F_{\mathrm{ac}}^{-1} F(x) \right) - 2m + m \\ &= \mathbf{tr}\left( \left( F_{\mathrm{ac}}^{-1/2} F(x) F_{\mathrm{ac}}^{-1/2} \right)^2 \right) - m \\ &\le \left( \mathbf{tr}\left( F_{\mathrm{ac}}^{-1/2} F(x) F_{\mathrm{ac}}^{-1/2} \right) \right)^2 - m \\ &= m^2 - m. \end{aligned}$$

The inequality follows by applying the inequality $\sum_i \lambda_i^2 \le (\sum_i \lambda_i)^2$ for $\lambda \succeq 0$ to the eigenvalues of $F_{\mathrm{ac}}^{-1/2} F(x) F_{\mathrm{ac}}^{-1/2}$.

8.21 [BYT99] Maximum likelihood interpretation of analytic center. We use the linear measurement model of page 352,

$$y = Ax + v,$$

where $A \in \mathbf{R}^{m \times n}$. We assume the noise components $v_i$ are IID with support $[-1, 1]$. The set of parameters $x$ consistent with the measurements $y \in \mathbf{R}^m$ is the polyhedron defined by the linear inequalities

$$-\mathbf{1} + y \preceq Ax \preceq \mathbf{1} + y. \qquad (8.37)$$

Suppose the probability density function of $v_i$ has the form

$$p(v) = \begin{cases} \alpha_r (1 - v^2)^r & -1 \le v \le 1 \\ 0 & \text{otherwise,} \end{cases}$$
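Returning to exercise 8.19 above, the redundancy test can be tried out numerically. The sketch below is not part of the original solution and uses only NumPy; it computes the analytic center of a small, made-up set of inequalities by Newton's method on the log barrier, forms the barrier Hessian $H$, and flags inequality $k$ as redundant when $b_k - a_k^T x_{\mathrm{ac}} \ge m (a_k^T H^{-1} a_k)^{1/2}$.

```python
# Redundancy test of exercise 8.19 (not from the original text).
# Example: the unit box in R^2 plus one clearly redundant inequality x1 <= 10.
import numpy as np

A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.], [1., 0.]])
b = np.array([1., 1., 1., 1., 10.])
m = len(b)

x = np.zeros(2)                      # strictly feasible starting point
for _ in range(50):                  # Newton's method for -sum log(b - Ax)
    s = b - A @ x                    # slacks (kept positive by the damping below)
    grad = A.T @ (1.0 / s)
    H = A.T @ np.diag(1.0 / s**2) @ A
    dx = np.linalg.solve(H, -grad)
    t = 1.0                          # crude damping to keep iterates feasible
    while np.min(b - A @ (x + t * dx)) <= 0:
        t *= 0.5
    x = x + t * dx
    if np.linalg.norm(grad) < 1e-10:
        break

x_ac = x
s = b - A @ x_ac
H = A.T @ np.diag(1.0 / s**2) @ A
for k in range(m):
    margin = b[k] - A[k] @ x_ac - m * np.sqrt(A[k] @ np.linalg.solve(H, A[k]))
    print(f"constraint {k}: certificate = {margin:.3f}",
          "(redundant)" if margin >= 0 else "")
```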

(b) Suppose $a$, $b$, $t$ are feasible in problem (8.23), with $t > 0$. Then $\tilde{a}$, $\tilde{b}$ are feasible in the QP, with objective value $\|\tilde{a}\|_2 = \|a\|_2/t \le 1/t$. Conversely, if $\tilde{a}$, $\tilde{b}$ are feasible in the QP, then $t = 1/\|\tilde{a}\|_2$, $a = \tilde{a}/\|\tilde{a}\|_2$, $b = \tilde{b}/\|\tilde{a}\|_2$ are feasible in problem (8.23), with objective value $t = 1/\|\tilde{a}\|_2$.

Linear discrimination maximally robust to weight errors. Suppose we are given two sets of points $\{x_1, \ldots, x_N\}$ and $\{y_1, \ldots, y_M\}$ in $\mathbf{R}^n$ that can be linearly separated. We showed earlier how to find the affine function that discriminates the sets, and gives the largest gap in function values. We can also consider robustness with respect to changes in the vector $a$, which is sometimes called the weight vector. For a given $a$ and $b$ for which $f(x) = a^T x - b$ separates the two sets, we define the weight error margin as the norm of the smallest $u \in \mathbf{R}^n$ such that the affine function $(a + u)^T x - b$ no longer separates the two sets of points. In other words, the weight error margin is the maximum $\rho$ such that

$$(a + u)^T x_i \ge b, \quad i = 1, \ldots, N, \qquad (a + u)^T y_j \le b, \quad j = 1, \ldots, M,$$

holds for all $u$ with $\|u\|_2 \le \rho$. Show how to find $a$ and $b$ that maximize the weight error margin, subject to the normalization constraint $\|a\|_2 \le 1$.

Solution. The weight error margin is the maximum $\rho$ such that

$$(a + u)^T x_i \ge b, \quad i = 1, \ldots, N, \qquad (a + u)^T y_j \le b, \quad j = 1, \ldots, M,$$

for all $u$ with $\|u\|_2 \le \rho$, i.e.,

$$a^T x_i - \rho\|x_i\|_2 \ge b, \qquad a^T y_j + \rho\|y_j\|_2 \le b.$$

This shows that the weight error margin is given by

$$\min_{i = 1, \ldots, N,\; j = 1, \ldots, M} \left\{ \frac{a^T x_i - b}{\|x_i\|_2},\; \frac{b - a^T y_j}{\|y_j\|_2} \right\}.$$

We can maximize the weight error margin by solving the problem

maximize $t$
subject to $a^T x_i - b \ge t\|x_i\|_2$, $i = 1, \ldots, N$
$b - a^T y_j \ge t\|y_j\|_2$, $j = 1, \ldots, M$
$\|a\|_2 \le 1$,

with variables $a$, $b$, $t$.

Most spherical separating ellipsoid. We are given two sets of vectors $x_1, \ldots, x_N \in \mathbf{R}^n$ and $y_1, \ldots, y_M \in \mathbf{R}^n$, and wish to find the ellipsoid with minimum eccentricity (i.e., minimum condition number of the defining matrix) that contains the points $x_1, \ldots, x_N$, but not the points $y_1, \ldots, y_M$. Formulate this as a convex optimization problem.

Solution. This can be solved as the SDP

minimize $\gamma$
subject to $x_i^T P x_i + q^T x_i + r \le 0$, $i = 1, \ldots, N$
$y_j^T P y_j + q^T y_j + r \ge 0$, $j = 1, \ldots, M$
$I \preceq P \preceq \gamma I$,

with variables $P \in \mathbf{S}^n$, $q \in \mathbf{R}^n$, and $r, \gamma \in \mathbf{R}$.
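The SDP above is straightforward to prototype. The following sketch is not part of the original solution; it assumes CVXPY with an SDP-capable solver (such as SCS) and uses made-up two-dimensional point sets X (to be contained) and Y (to be excluded) purely for illustration.

```python
# Minimum-eccentricity separating ellipsoid SDP (not from the original text;
# assumes CVXPY with an SDP solver such as SCS).  X and Y are illustration data.
import cvxpy as cp
import numpy as np

X = np.array([[2., 0.], [-2., 0.], [0., 0.5], [0., -0.5], [0., 0.]])
Y = np.array([[0., 1.5], [0., -1.5], [3., 0.], [-3., 0.]])
n = 2

P = cp.Variable((n, n), symmetric=True)
q = cp.Variable(n)
r = cp.Variable()
gamma = cp.Variable()

cons  = [x @ P @ x + q @ x + r <= 0 for x in X]      # points to contain
cons += [y @ P @ y + q @ y + r >= 0 for y in Y]      # points to exclude
cons += [P >> np.eye(n), P << gamma * np.eye(n)]     # I <= P <= gamma I
cp.Problem(cp.Minimize(gamma), cons).solve()

print("gamma (bound on condition number):", gamma.value)
print("eigenvalues of P:", np.linalg.eigvalsh(P.value))
```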
