An Introduction to Semidefinite Programming from the Viewpoint of Computation
1 An Introduction to Semidefinite Programming from the Viewpoint of Computation
Hayato Waki, Institute of Mathematics for Industry, Kyushu University
Combinatorial Optimization at Work, Berlin
2 Contents and Purpose of this Lecture
Subject: Semidefinite Programming (SDP)
Contents:
Part I: Formulations & strong duality for SDP
Part II: Algorithms for SDP (primal-dual interior-point methods)
Part III: Comments on computation for SDP
Survey: M. Todd, Semidefinite optimization, Acta Numerica 10 (2001).
Purpose:
A better understanding of the next lecture (MOSEK on SDP) by Dr. Dahl
Knowing the difficulties in solving SDP, discussed in Part III
Message: SDP is convex, but also nonlinear.
3 Properties and Applications of SDP
Properties:
SDP is an extension of LP.
A duality theorem holds.
SDP is solvable by primal-dual interior-point methods up to a given tolerance.
Applications:
Combinatorial problems, e.g., Max-Cut by Goemans and Williamson
Control theory, e.g., the H-infinity control problem
Lift-and-project approach for nonconvex quadratic problems
Lasserre's hierarchy for polynomial optimization problems, and complexity theory
Embedding problems, e.g., sensor networks and molecular conformation
Statistics and machine learning, etc.
4 From LP to SDP
LP primal and dual:
(Primal) min_x c^T x  s.t. a_j^T x = b_j (for all j), x ∈ R^n_+
(Dual)   max_{(y,s)} b^T y  s.t. s = c − Σ_{j=1}^m y_j a_j, s ∈ R^n_+
Minimize/maximize a linear function over the intersection of an affine set and R^n_+; R^n_+ is a closed convex cone in R^n.
Extension to SDP: move to the space of symmetric matrices S^n:
c ∈ R^n → C ∈ S^n, a_j ∈ R^n → A_j ∈ S^n.
Minimize/maximize a linear function over the intersection of an affine set and the set of positive semidefinite matrices.
5 LP primal and dual:
(Primal) min_x c^T x  s.t. a_j^T x = b_j (for all j), x ∈ R^n_+
(Dual)   max_{(y,s)} b^T y  s.t. s = c − Σ_{j=1}^m y_j a_j, s ∈ R^n_+
SDP primal and dual:
(Primal) min_X C • X  s.t. A_j • X = b_j (for all j), X ∈ S^n_+
(Dual)   max_{(y,S)} b^T y  s.t. S = C − Σ_{j=1}^m y_j A_j, S ∈ S^n_+
Here S^n is the set of n x n symmetric matrices, S^n_+ is the set of n x n symmetric positive semidefinite matrices, and A • X := Σ_{k=1}^n Σ_{l=1}^n A_kl X_kl.
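For concreteness, here is how the primal SDP above looks in CVXPY, a Python modeling layer analogous to the CVX/YALMIP tools mentioned later in the lecture; the random data and the default solver are placeholder assumptions, this is only a sketch of the standard form:

import numpy as np
import cvxpy as cp

n, m = 4, 3
rng = np.random.default_rng(0)

def sym(M):
    # Symmetrize a square matrix.
    return (M + M.T) / 2

G = rng.standard_normal((n, n))
C = G @ G.T                                   # PSD objective matrix keeps the problem bounded below
A = [sym(rng.standard_normal((n, n))) for _ in range(m)]
X_feas = np.eye(n)                            # used only to manufacture a feasible right-hand side
b = np.array([np.trace(Aj @ X_feas) for Aj in A])

# Primal SDP:  min C•X  s.t.  A_j•X = b_j (j = 1,...,m),  X in S^n_+.
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0] + [cp.trace(A[j] @ X) == b[j] for j in range(m)]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()
print(prob.status, prob.value)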
6 1. Definition of positive semidefinite matrices
X ∈ S^n is positive semidefinite if z^T X z ≥ 0 for all z ∈ R^n. Equivalently, all eigenvalues of X are nonnegative.
Remark (eigendecomposition / spectral decomposition): there exist an orthogonal Q ∈ R^{n x n} and λ_i ≥ 0 such that X = Q diag(λ_1, λ_2, ..., λ_n) Q^T. See textbooks on linear algebra for a proof. In particular, there exists B ∈ R^{n x n} such that X = B B^T.
2. Zero diagonal entries of positive semidefinite matrices
For X ∈ S^n_+, each X_ii is nonnegative. In addition, if X_ii = 0 for some i, then X_ij = X_ji = 0 for all j = 1, ..., n.
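A short NumPy illustration of these facts; the matrices below are arbitrary examples, not data from the lecture:

import numpy as np

rng = np.random.default_rng(1)

# Any matrix of the form X = B B^T is positive semidefinite.
B = rng.standard_normal((4, 3))
X = B @ B.T

# z^T X z >= 0 for an arbitrary direction z ...
z = rng.standard_normal(4)
print(z @ X @ z >= -1e-12)

# ... equivalently, all eigenvalues are nonnegative, and the spectral
# decomposition X = Q diag(lambda) Q^T holds with an orthogonal Q.
lam, Q = np.linalg.eigh(X)
print(lam)
print(np.allclose(X, Q @ np.diag(lam) @ Q.T))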
7 Example of SDP
C = [2 0; 0 1], A_1 = [10 -4; -4 0], A_2 = [0 0; 0 8], A_3 = [0 9; 9 2],
X = [x_11 x_12; x_12 x_22], b = (42, 8, 20)^T.
Primal SDP is formulated as follows:
inf_X { 2x_11 + x_22 : 10x_11 − 8x_12 = 42, 8x_22 = 8, 18x_12 + 2x_22 = 20, [x_11 x_12; x_12 x_22] ∈ S^2_+ }
(Fortunately) the primal solution is uniquely fixed by the equality constraints:
X = [5 1; 1 1] is positive definite and the obj. val. = 11.
8 Primal SDP is formulated as follows:
inf_X { 2x_11 + x_22 : 10x_11 − 8x_12 = 42, 8x_22 = 8, 18x_12 + 2x_22 = 20, [x_11 x_12; x_12 x_22] ∈ S^2_+ }
Dual SDP is formulated as follows:
sup_{(y,S)} { 42y_1 + 8y_2 + 20y_3 : S = [2 − 10y_1, 4y_1 − 9y_3; 4y_1 − 9y_3, 1 − 8y_2 − 2y_3] ∈ S^2_+ }
A dual solution is (1/5, 37/360, 4/45) with obj. val. = 11.
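The example can be reproduced with CVXPY, using the data as reconstructed above (CVXPY and its default solver are assumptions; the lecture itself shows no code):

import numpy as np
import cvxpy as cp

# Data of the small example (as reconstructed above).
C  = np.array([[2.0, 0.0], [0.0, 1.0]])
A1 = np.array([[10.0, -4.0], [-4.0, 0.0]])
A2 = np.array([[0.0, 0.0], [0.0, 8.0]])
A3 = np.array([[0.0, 9.0], [9.0, 2.0]])
b  = np.array([42.0, 8.0, 20.0])

X = cp.Variable((2, 2), symmetric=True)
cons = [X >> 0,
        cp.trace(A1 @ X) == b[0],
        cp.trace(A2 @ X) == b[1],
        cp.trace(A3 @ X) == b[2]]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), cons)
prob.solve()

print(prob.value)    # ~11, attained at X = [[5, 1], [1, 1]]
print(X.value)
# Dual multipliers of the equality constraints: ~(1/5, 37/360, 4/45),
# up to the sign convention of the solver.
print([c.dual_value for c in cons[1:]])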
9 Application: computation of lower bounds for nonconvex QP
QP: θ := inf_x { x^T Q x + 2c^T x : x^T Q_j x + 2c_j^T x + r_j ≤ 0 (j = 1, ..., m) }
SDP relaxation: add the constraint [1, x^T; x, X] ∈ S^{n+1}_+ and replace each product x_i x_j by X_ij:
η := inf_X { [0, c^T; c, Q] • X : [r_j, c_j^T; c_j, Q_j] • X ≤ 0 (j = 1, ..., m), X_00 = 1, X ∈ S^{n+1}_+ }
Remark: this can be handled as an SDP, and η ≤ θ.
For binary variables, x ∈ {0, 1} is equivalent to x^2 − x = 0, so MIQP with binary variables can be treated as QP.
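A sketch of this lift-and-relax construction in CVXPY, on a toy instance of our own (min −‖x‖² over the box x_i² ≤ 1, not an example from the slides):

import numpy as np
import cvxpy as cp

# Toy nonconvex QP:  inf -||x||^2  s.t.  x_i^2 - 1 <= 0 (i = 1, ..., n);
# its optimal value is -n.
n = 3
Q, c = -np.eye(n), np.zeros(n)
Qj = [np.outer(np.eye(n)[i], np.eye(n)[i]) for i in range(n)]
cj = [np.zeros(n) for _ in range(n)]
rj = [-1.0] * n

def lift(M, v, r):
    # The (n+1)x(n+1) matrix [[r, v^T], [v, M]] used in the relaxation.
    return np.block([[np.array([[r]]), v[None, :]], [v[:, None], M]])

# SDP relaxation: replace x_i x_j by X_ij and require [[1, x^T], [x, X]] >= 0.
Y = cp.Variable((n + 1, n + 1), symmetric=True)     # Y plays the role of [[1, x^T], [x, X]]
cons = [Y >> 0, Y[0, 0] == 1]
cons += [cp.trace(lift(Qj[i], cj[i], rj[i]) @ Y) <= 0 for i in range(n)]
eta = cp.Problem(cp.Minimize(cp.trace(lift(Q, c, 0.0) @ Y)), cons).solve()
print("eta =", eta)   # a lower bound on theta; here eta = theta = -3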
10 Application: Lasserre's SDP relaxation for polynomial optimization problems
POP (f and g_j are polynomials in x ∈ R^n):
θ := inf_x { f(x) : g_j(x) ≥ 0 (j = 1, ..., m) }
Lasserre's SDP relaxation generates a sequence of SDP problems {P_r}_{r ≥ 1} with optimal values θ_r satisfying θ_r ≤ θ_{r+1} ≤ θ (for all r).
Under assumptions, θ_r → θ as r → ∞; in practice θ_r = θ already for r = 2, 3.
The construction is strongly connected to sums of squares of polynomials.
11 Compared with LP
Similar points:
Weak and strong duality hold.
PDIPM also works for SDP.
Different points:
SDP may have an irrational optimal solution.
E.g., sup_y { y : [2 y; y 1] ∈ S^2_+ }: the optimal solution is y = sqrt(2), not rational.
SDP may have a finite optimal value but no optimal solution.
E.g., inf_y { y_1 : [y_1 1; 1 y_2] ∈ S^2_+ }: the optimal value 0 is not attained (y_1 → 0 forces y_2 → ∞).
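The first example can be checked numerically; a minimal CVXPY sketch (CVXPY is an assumption, not part of the lecture), and the second example can be explored in the same way:

import cvxpy as cp

# sup { y : [2 y; y 1] is positive semidefinite } = sqrt(2)
Y = cp.Variable((2, 2), symmetric=True)
cons = [Y >> 0, Y[0, 0] == 2, Y[1, 1] == 1]
prob = cp.Problem(cp.Maximize(Y[0, 1]), cons)
prob.solve()
print(Y[0, 1].value)   # ~1.41421: the optimum is irrational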
12 Different points (cont'd)
Two types of infeasibility:
(LP) ∃y with −A^T y ∈ R^n_+ and b^T y > 0 ⟺ the primal LP is infeasible.
(SDP) ∃y with −Σ_j y_j A_j ∈ S^n_+ and b^T y > 0 ⟹ the primal SDP is infeasible (but the converse may fail).
Remark: one needs to consider the following cases:
finite optimal value, but no optimal solutions for the primal and/or dual;
it is difficult to detect infeasibility completely.
13 Duality on SDP
Weak duality: for any X ∈ F_P and (y, S) ∈ F_D, C • X ≥ b^T y, hence θ_P ≥ θ_D.
Slater condition (S^n_++ denotes the set of positive definite matrices):
The primal satisfies the Slater condition if there exists X ∈ F_P with X ∈ S^n_++.
The dual satisfies the Slater condition if there exists (y, S) ∈ F_D with S ∈ S^n_++.
Strong duality: if the primal satisfies the Slater condition and the dual is feasible, then θ_P = θ_D and the dual has an optimal solution.
Slater conditions are required for both primal and dual for the theoretical results on PDIPMs.
See the survey on SDP for proofs.
14 3. Inner products of positive semidefinite matrices
For all X, S ∈ S^n_+, X • S ≥ 0. Moreover, X • S = 0 iff XS = O_n.
Proof: take B with X = B B^T and D with S = D D^T. Then
X • S = Trace(B B^T D D^T) = Trace(D^T B B^T D) = Trace((B^T D)^T (B^T D)) ≥ 0.
Moreover, X • S = 0 ⟺ B^T D = O_n ⟹ XS = O_n.
Proof of weak duality: in fact, for X ∈ F_P and (y, S) ∈ F_D,
C • X − b^T y = (C − Σ_{j=1}^m y_j A_j) • X = S • X ≥ 0,
because both matrices are positive semidefinite.
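A quick NumPy check of both facts, with illustrative data only:

import numpy as np

rng = np.random.default_rng(3)

# X•S = trace(XS) >= 0 for positive semidefinite X and S.
B = rng.standard_normal((4, 4)); X = B @ B.T
D = rng.standard_normal((4, 4)); S = D @ D.T
print(np.trace(X @ S) >= 0)

# X•S = 0 forces XS = O: e.g. X = u u^T and S = v v^T with u orthogonal to v.
u = np.array([1.0, 1.0, 0.0, 0.0])
v = np.array([0.0, 0.0, 1.0, -1.0])
X2, S2 = np.outer(u, u), np.outer(v, v)
print(np.trace(X2 @ S2), np.allclose(X2 @ S2, 0))   # 0.0 and True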
15 Remark on 3 (cont'd)
Let X ∈ F_P be optimal in the primal and (y, S) ∈ F_D be optimal in the dual. Then
θ_P − θ_D = X • S, so θ_P = θ_D ⟺ X • S = 0 ⟺ XS = O_n.
The complementarity condition XS = O_n is used in PDIPM.
16 SDP with multiple positive semidefinite cones
SDP: inf_{X_k} Σ_{k=1}^N C_k • X_k  s.t. Σ_{k=1}^N A_j^k • X_k = b_j (j = 1, ..., m), X_k ∈ S^{n_k}_+ (k = 1, ..., N),
where C_k, A_j^k ∈ S^{n_k}.
Example: an inequality constraint A • X ≤ d with X ∈ S^n_+ becomes A • X + s = d with X ∈ S^n_+ and s ∈ S^1_+ (= R_+).
Dual: sup_y { b^T y : S_k = C_k − Σ_{j=1}^m y_j A_j^k ∈ S^{n_k}_+ (k = 1, ..., N) }.
17 Remark
SDP with R^n_+, the second-order cone L^n and S^n_+ can be handled as SDP, and PDIPM works, where L^n := {(x_0, x) ∈ R^n : ||x||_2 ≤ x_0}.
Free variables can also be accepted:
A • X + a^T x = d, X ∈ S^n_+, x ∈ R^n   becomes   A • X + a^T x_1 − a^T x_2 = d, X ∈ S^n_+ and x_1, x_2 ∈ R^n_+.
18 Classification of algorithms for SDP
Algorithms for SDP: the ellipsoid method, interior-point methods, the bundle method, first-order methods, etc.
Interior-point methods: path-following algorithms (= logarithmic barrier function), potential reduction algorithms, self-dual homogeneous embeddings.
Path-following algorithms: primal, dual, primal-dual.
19 Path-following method
Optimality conditions: a pair of optimal solutions (X, y, S) satisfies
A_j • X = b_j, X ∈ S^n_+, S = C − Σ_{j=1}^m y_j A_j, S ∈ S^n_+, XS = O_n (⟺ C • X − b^T y = 0).
Perturbed system: for μ > 0,
A_j • X = b_j, X ∈ S^n_++, S = C − Σ_{j=1}^m y_j A_j, S ∈ S^n_++, XS = μ I_n.
Remark: for any μ > 0 the perturbed system has a unique solution (X(μ), y(μ), S(μ)).
The central path {(X(μ), y(μ), S(μ)) : μ > 0} is a smooth curve and converges to a pair of optimal solutions of the primal and dual.
Following the central path = path-following method.
20 Algorithm 1: general framework of the path-following method
Input: (X^0, y^0, S^0) ∈ F_P x F_D with X^0, S^0 ∈ S^n_++, ϵ > 0, 0 < θ < 1 and some parameters
X ← X^0, y ← y^0, S ← S^0
while X • S > ϵ do
    compute the direction (ΔX, Δy, ΔS) from the central-path equations CPE(μ)
    compute step sizes α_P, α_D > 0
    X ← X + α_P ΔX;  y ← y + α_D Δy;  S ← S + α_D ΔS
    μ ← θ μ
end while
return (X, y, S)
Remark:
An infeasible initial guess is acceptable.
The number of iterations is polynomial in n, m and log(1/ϵ).
Computational cost = computation of the direction.
21 Computation of the direction: find (ΔX, Δy, ΔS) such that X + ΔX ∈ F_P, (y + Δy, S + ΔS) ∈ F_D and
A_j • ΔX = 0,  ΔS + Σ_{j=1}^m Δy_j A_j = O_n,  XS + ΔX S + X ΔS = μ I_n.
Remark: the resulting ΔX may not be symmetric. So XS = μ I_n is replaced by
(1/2)(P X S P^{-1} + P^{-T} S X P^T) = μ I_n,
where P is nonsingular. Possible choices of P:
P = S^{1/2} (HRVW/KSH/M)
P = X^{-1/2} (dual HRVW/KSH/M)
P = W^{-1/2}, W = X^{1/2}(X^{1/2} S X^{1/2})^{-1/2} X^{1/2} (NT)
P = ...
More than 20 types of directions were classified by Todd.
22 Computational cost in PDIPM
1. Construction of the linear system in Δy: for the HRVW/KSH/M direction,
M Δy = (RHS), where M = (Trace(A_i X A_j S^{-1}))_{1 ≤ i,j ≤ m}.
Exploiting sparsity in the A_j is necessary for the computation of M; the situation is almost the same for other search directions.
2. Solving the linear system:
If M is dense, Cholesky decomposition takes O(m^3) operations.
M is often sparse in SDP relaxations of sparse POPs, and sparse Cholesky decomposition works well.
After that, ΔS = −Σ_{j=1}^m Δy_j A_j, and ΔX is obtained from the remaining equation.
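A dense sketch of how the matrix M is assembled for the HRVW/KSH/M direction, with NumPy and random placeholder iterates X, S; real solvers exploit the sparsity of the A_j instead of this dense textbook loop:

import numpy as np

def schur_complement_matrix(A, X, S):
    """Dense assembly of M_ij = trace(A_i X A_j S^{-1}).

    This is only meant to show where the cost comes from; production codes
    exploit sparsity in the A_j when forming M.
    """
    S_inv = np.linalg.inv(S)        # S is positive definite at interior iterates
    m = len(A)
    M = np.empty((m, m))
    for i in range(m):
        AiX = A[i] @ X
        for j in range(m):
            M[i, j] = np.trace(AiX @ A[j] @ S_inv)
    return M

def sym(G):
    return (G + G.T) / 2

# Tiny illustration with random data (placeholders, not a full PDIPM).
rng = np.random.default_rng(4)
n, m = 5, 3
A = [sym(rng.standard_normal((n, n))) for _ in range(m)]
B = rng.standard_normal((n, n)); X = B @ B.T + np.eye(n)
D = rng.standard_normal((n, n)); S = D @ D.T + np.eye(n)
print(schur_complement_matrix(A, X, S))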
23 Sparsity in SDP
Example: Q is nonsingular and dense. Then P1 is equivalent to P2:
P1: inf_X { C • X : E_i • X = 1 (i = 1, ..., n), X ∈ S^n_+ }
P2: inf_X { (Q^T C Q) • X : (Q^T E_i Q) • X = 1 (i = 1, ..., n), X ∈ S^n_+ }
where (E_i)_pq = 1 if p = q = i and 0 otherwise (p, q = 1, ..., n).
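A small NumPy experiment of our own showing why P2 is so much more expensive: rotating a one-nonzero constraint matrix E_i by a dense Q produces a fully dense matrix.

import numpy as np

n = 200
rng = np.random.default_rng(5)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a dense nonsingular (orthogonal) Q

i = 0
Ei = np.zeros((n, n)); Ei[i, i] = 1.0              # constraint matrix of P1: one nonzero
Ei_rot = Q.T @ Ei @ Q                              # constraint matrix of P2

print(np.count_nonzero(Ei), np.count_nonzero(np.abs(Ei_rot) > 1e-12))
# 1 versus ~n^2: P2 carries the same information but loses all sparsity,
# which is what the CPU-time comparison on the next slide measures.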
24 CPU time: solved by SeDuMi 1.3 on a MacBook Air (1.7 GHz Intel Core i7)
Figure: CPU time on P1 and P2
25 Software
SeDuMi, SDPT3 (MATLAB); SDPA (C++, MATLAB); CSDP (C, MATLAB); DSDP (C, MATLAB); MOSEK
Remark: almost all of this software is based on PDIPM, and performance depends on the SDP problem.
Modelling languages for SDP (they can call the software above): YALMIP, CVX
26 Slater conditions
Strong duality requires the Slater condition for the primal or the dual.
PDIPM requires Slater conditions for both primal and dual; they are sufficient conditions for the existence of optimal solutions.
If either the primal or the dual does not satisfy the Slater condition, ...
E.g., Lasserre's SDP relaxation of P: inf_x { x : x^2 − 1 ≥ 0, x ≥ 0 } generates SDP relaxation problems P_1, P_2, ....
The Slater condition fails in all of these SDP relaxations and all their optimal values are 0, yet SeDuMi and SDPA return the wrong value 1.
All of these SDP relaxation problems are sensitive to numerical errors in floating-point computation.
27 E.g., graph equipartition
G(V, E): a weighted undirected graph. Partition the vertex set V into L and R so as to minimize the total weight of the cut, subject to |L| = |R|.
QOP formulation:
inf_{x ∈ R^n} { (1/2) Σ w_ij (1 − x_i x_j) : Σ_{i=1}^n x_i = 0, x_i^2 = 1 (i = 1, ..., n) }
28 E.g., graph equipartition (cont'd)
SDP relaxation problem, with constant matrices W, E and E_i (E = ee^T, the all-ones matrix):
inf_{X ∈ S^n_+} { W • X : E • X = 0, E_i • X = 1 (i = 1, ..., n) }
Since E ∈ S^n_+, there is no X ∈ S^n_++ with E • X = 0, so the Slater condition fails ⟹ inaccurate values and/or many iterations.
Table: SeDuMi 1.3 with ϵ = 1.0e-8 on SDPLIB gpp instances (columns: iterations, CPU seconds, duality gap); the reported duality gaps range from about 1e-8 to 1e-4.
29 E.g., graph equipartition (cont'd)
inf_{X ∈ S^n_+} { W • X : E • X = 0, E_i • X = 1 }
Transformation of the SDP by a nonsingular V: X ↦ V^{-T} X V^{-1} =: Z and E ↦ V E V^T. Then X ∈ S^n_+ ⟺ Z ∈ S^n_+, and for a suitable V, E • X = 0 ⟺ Z_nn = 0.
Eliminating the nth row and column from the transformed SDP gives an SDP in which the Slater condition holds.
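A NumPy sketch of one concrete choice of V (an assumption on our part; the slide leaves V unspecified): an orthogonal V whose last column is e/√n, so the two transformation conventions coincide.

import numpy as np

n = 6
rng = np.random.default_rng(6)

# Weighted adjacency-type matrix W and the all-ones vector e.
Wsym = rng.random((n, n)); W = (Wsym + Wsym.T) / 2; np.fill_diagonal(W, 0.0)
e = np.ones(n)

# With X = V Z V^T and V orthogonal whose last column is e / sqrt(n):
#   X >= 0            <=>  Z >= 0,
#   (e e^T) • X = 0   <=>  Z[n-1, n-1] = 0  <=>  last row/column of Z vanish,
# so the last row/column can be eliminated and Slater's condition is restored.
M = np.eye(n); M[:, -1] = e / np.sqrt(n)
V, _ = np.linalg.qr(M[:, ::-1])            # orthonormalize, keeping the e-direction first
V = V[:, ::-1]                             # put the e-direction back in the last column
assert np.allclose(np.abs(V[:, -1]), 1 / np.sqrt(n))

# Transformed objective data, then drop the last row and column
# (the E_i constraints are transformed in the same way).
W_t = V.T @ W @ V
W_reduced = W_t[:-1, :-1]                  # data of the reduced (n-1)x(n-1) SDP
print(W_reduced.shape)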
30 E.g., graph equipartition (cont'd)
Table: numerical results by SeDuMi 1.3 with ϵ = 1.0e-8 on the gpp instances, comparing iterations, CPU time and duality gap for the original SDP (Slater fails) and the reduced SDP (Slater holds).
Comments: if the Slater condition is not satisfied, PDIPM computes inaccurate values and/or spends many iterations; on the other hand, the transformation reduces the size of the SDP.
31 Comments
A simple (?) transformation generates an SDP in which the Slater condition holds.
A more elementary approach:
(QOP): inf_{x ∈ R^n} { (1/2) Σ w_ij (1 − x_i x_j) : Σ_{i=1}^n x_i = 0, x_i^2 = 1 }
(QOP'): obtained by substituting x_1 = −Σ_{i=2}^n x_i in (QOP).
(QOP) → SDP relaxation → (SDP) is equivalent (via V) to (QOP') → SDP relaxation → (SDP').
General case: for
inf { x^T Q x + 2c^T x : a_j^T x = b_j (j = 1, ..., m), x_k ∈ {0, 1} },
separate x into basic and nonbasic variables, substitute out the basic variables, and then apply the SDP relaxation.
32 Extension
SDP: inf_X { C • X : A_j • X = b_j, X ∈ S^n_+ }
If the Slater condition fails in the primal, then there exists y ∈ R^m \ {0} such that b^T y ≥ 0 and −Σ_j y_j A_j ∈ S^n_+; conversely, the existence of such a y implies that the Slater condition fails. (*)
Moreover, if there is such a y with b^T y > 0, then the primal is infeasible.
Proof of the converse in (*): suppose to the contrary that the Slater condition holds in the primal, i.e., there exists X̂ with A_j • X̂ = b_j and X̂ ∈ S^n_++. Then
b^T y = Σ_j (A_j • X̂) y_j = (Σ_j y_j A_j) • X̂ < 0
(since X̂ is positive definite and Σ_j y_j A_j is nonzero and negative semidefinite), a contradiction.
33 Facial Reduction
Idea: let W := Σ_j y_j A_j ∈ S^n_+ with b^T y = 0.
For any feasible solution X of the primal, W • X = Σ_j (A_j • X) y_j = b^T y = 0.
Hence the primal is equivalent to
inf_X { C • X : A_j • X = b_j, X ∈ S^n_+ ∩ {W}^⊥ },
where {W}^⊥ := {X : X • W = 0}.
The set S^n_+ ∩ {W}^⊥ has a nice structure:
S^n_+ ∩ {W}^⊥ = { X ∈ S^n : X = Q [M O; O O] Q^T, M ∈ S^r_+ },
where Q is an orthogonal matrix diagonalizing W and r = n − rank(W).
34 Idea (cont'd)
S^n_+ ∩ {W}^⊥ = { X ∈ S^n : X = Q [M O; O O] Q^T, M ∈ S^r_+ }
Assume Q = I_n. Then the primal is equivalent to
inf_X { C̃ • X : Ã_j • X = b_j, X ∈ S^r_+ },
where Ã_j (and similarly C̃) is the leading r x r principal submatrix of A_j.
Compared with the original primal, the size shrinks from n to r.
The smaller SDP may still fail to satisfy the Slater condition: then find y and W again for the smaller primal.
This procedure terminates after finitely many iterations; it is called the Facial Reduction Algorithm (FRA), and it is also applicable to the dual.
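A NumPy sketch of a single facial-reduction step, assuming a certificate y with W = Σ_j y_j A_j ⪰ 0 and b^T y = 0 has already been found (finding y is itself a conic subproblem and is not shown); the function name and the toy data are ours, for illustration only.

import numpy as np

def facial_reduction_step(A, b, C, y, tol=1e-9):
    """One facial-reduction step: every feasible X satisfies W•X = 0 for
    W = sum_j y_j A_j >= 0, so X lives on the face Q diag(M, 0) Q^T where the
    columns of Q0 span the null space of W; restrict the data to that face."""
    W = sum(yj * Aj for yj, Aj in zip(y, A))
    lam, U = np.linalg.eigh(W)
    Q0 = U[:, lam <= tol]                  # orthonormal basis of the null space of W
    A_red = [Q0.T @ Aj @ Q0 for Aj in A]   # restricted constraint matrices
    C_red = Q0.T @ C @ Q0                  # restricted objective matrix
    return A_red, b, C_red

# Tiny illustration: A_1 = e_1 e_1^T with b_1 = 0 certifies that every feasible X
# has a zero first row/column, so the SDP shrinks from 3x3 to 2x2.
A = [np.diag([1.0, 0.0, 0.0]), np.diag([0.0, 1.0, 1.0])]
b = np.array([0.0, 2.0])
C = np.eye(3)
y = np.array([1.0, 0.0])                   # W = A_1 >= 0 and b^T y = 0
A_red, b_red, C_red = facial_reduction_step(A, b, C, y)
print(C_red.shape)                         # (2, 2)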
35 History of FRA
Borwein-Wolkowicz in 1980, for general convex optimization
Ramana and Ramana-Tunçel-Wolkowicz, for SDP
Pataki simplified FRA and extended it
FRA applied to SDP relaxations for graph partition, quadratic assignment and sensor network problems by Wolkowicz's group
FRA applied to SDP relaxations for polynomial optimization in Waki-Muramatsu
36 Summary on the Slater condition
We hope that both primal and dual satisfy the Slater conditions. Otherwise:
the SDP may not have any optimal solution, and a wrong value may be obtained;
inaccurate solutions may be obtained even if optimal solutions exist; on the other hand, one can reduce the size of the SDP.
FRA is a general framework for removing the difficulty with the Slater condition.
When modeling a problem as an SDP, one needs to be careful even about the dual, to guarantee the existence of optimal solutions in the dual.
A rigorous solution of the FRA subproblems is necessary.
37 Status of infeasibility
Feasibility and infeasibility of inf_X { C • X : A_j • X = b_j, X ∈ S^n_+ }:
Strongly feasible if the SDP satisfies the Slater condition.
Weakly feasible if the SDP is feasible but does not satisfy the Slater condition.
Strongly infeasible if there exists an improving ray d, i.e., b^T d > 0 and −Σ_j d_j A_j ∈ S^n_+.
Weakly infeasible if the SDP is infeasible but there is no improving ray.
Remark: weak infeasibility does not occur in LP; SOCP and general conic optimization also have these four statuses.
38 Example: infeasible SDPs
P1: inf_X { C • X : [1 1; 1 1] • X = 0, [0 1; 1 0] • X = 2, X ∈ S^2_+ }
P2: inf_X { C • X : [1 0; 0 0] • X = 0, [0 1; 1 0] • X = 2, X ∈ S^2_+ }
Comments:
P1 is strongly infeasible, because y = (−1, 1) is a certificate (b^T y = 2 > 0 and A_1 − A_2 ∈ S^2_+).
P2 is weakly infeasible: it is infeasible (X_11 = 0 forces X_12 = 0 by the zero-diagonal property, contradicting 2X_12 = 2), but there is no such certificate.
39 Characterization of weak infeasibility
A weakly infeasible SDP has the property that, for all ϵ > 0, there exists X ∈ S^n_+ with |A_j • X − b_j| < ϵ (j = 1, ..., m): the SDP is infeasible, yet feasibility can be approached arbitrarily well.
A more elementary characterization of weak infeasibility is given in recent work by Liu and Pataki.
Example P2: perturb b_1 = 0 to ϵ > 0:
P2(ϵ): inf_X { C • X : [1 0; 0 0] • X = ϵ, [0 1; 1 0] • X = 2, X ∈ S^2_+ }
Then the perturbed P2 is feasible: X = [ϵ 1; 1 1/ϵ].
40 Pathological?
(POP): inf_{x,y} { x + y : xy ≥ 1/2, x ≤ 1/2, y ≥ 1/2 }
The optimal value is 1.5.
Applying Lasserre's SDP hierarchy, all the SDP relaxations are weakly infeasible (Waki 2012).
SeDuMi and SDPA return 1.5 for the higher-order SDP relaxations.
Sufficient conditions on (POP) for its SDP relaxations to be weakly infeasible are given in Waki 2012.
41 Summary on infeasibility
Weak infeasibility may occur in SDP, SOCP and conic optimization, but not in LP.
It is difficult for software to detect this type of infeasibility.
Nevertheless, software sometimes returns good values for weakly infeasible SDPs.
42 Summary
This lecture introduced a part of the theoretical and practical aspects of SDP.
Applications of SDP, e.g., SDP relaxations for combinatorial problems, were skipped; you can now read papers on SDP.
SDP is not so easy to handle, because it is convex but nonlinear programming.
43 Further Reading I
M. Anjos and J.B. Lasserre, Handbook on Semidefinite, Conic and Polynomial Optimization: Theory, Algorithms. International Series in Operations Research & Management Science, Springer US.
E. de Klerk, Aspects of Semidefinite Programming: Interior Point Algorithms and Selected Applications. Applied Optimization, Springer US.
B. Gärtner and J. Matoušek, Approximation Algorithms and Semidefinite Programming. Springer.
44 Further Reading II
M. Todd, Semidefinite optimization. Acta Numerica 10 (2001).
L. Tunçel, Polyhedral and SDP Methods in Combinatorial Optimization. Fields Institute Monographs, American Mathematical Society.
H. Wolkowicz, R. Saigal and L. Vandenberghe, Handbook of Semidefinite Programming. International Series in Operations Research & Management Science, Springer US.