An Introduction on SemiDefinite Program



An Introduction on SemiDefinite Program from the viewpoint of computation
Hayato Waki, Institute of Mathematics for Industry, Kyushu University
2015-10-08, Combinatorial Optimization at Work, Berlin, 2015

Contents and Purpose of this lecture

Subject: SemiDefinite Program (SDP)

Contents:
- Part I: Formulations and strong duality on SDP
- Part II: Algorithms for SDP: primal-dual interior-point methods (PDIPM)
- Part III: Comments on the computation of SDP

Survey: M. Todd, Semidefinite optimization, Acta Numerica 10 (2001), pp. 515-560.

Purpose:
- Better understanding for the next lecture (MOSEK on SDP) by Dr. Dahl
- Know the difficulty in solving SDP (Part III)

Message: SDP is convex, but also nonlinear

Properties and applications of SDP

Properties:
- SDP is an extension of LP
- Duality theorem holds
- Solvable by primal-dual interior-point methods up to a given tolerance

Applications:
- Combinatorial problems, e.g., Max-Cut by Goemans and Williamson
- Control theory, e.g., the H-infinity control problem
- Lift-and-project approach for nonconvex quadratic problems
- Lasserre's hierarchy for polynomial optimization problems, and complexity theory
- Embedding problems, e.g., sensor networks and molecular conformation
- Statistics and machine learning, etc.

From LP to SDP

LP primal and dual:
$\min_x\ c^T x$ s.t. $a_j^T x = b_j\ (\forall j)$, $x \in \mathbb{R}^n_+$;
$\max_{(y,s)}\ b^T y$ s.t. $s = c - \sum_{j=1}^m y_j a_j$, $s \in \mathbb{R}^n_+$.

- Minimize/maximize a linear function over the intersection of an affine set and $\mathbb{R}^n_+$
- $\mathbb{R}^n_+$ is a closed convex cone in $\mathbb{R}^n$

Extension to SDP: pass to the space $\mathbb{S}^n$ of symmetric matrices,
$c \in \mathbb{R}^n \to C \in \mathbb{S}^n$, $a_j \in \mathbb{R}^n \to A_j \in \mathbb{S}^n$.
Minimize/maximize a linear function over the intersection of an affine set and the set of positive semidefinite matrices.

LP primal and dual:
$\min_x\ c^T x$ s.t. $a_j^T x = b_j\ (\forall j)$, $x \in \mathbb{R}^n_+$;
$\max_{(y,s)}\ b^T y$ s.t. $s = c - \sum_{j=1}^m y_j a_j$, $s \in \mathbb{R}^n_+$.

SDP primal and dual:
$\min_X\ C \bullet X$ s.t. $A_j \bullet X = b_j\ (\forall j)$, $X \in \mathbb{S}^n_+$;
$\max_{(y,S)}\ b^T y$ s.t. $S = C - \sum_{j=1}^m y_j A_j$, $S \in \mathbb{S}^n_+$.

Here $\mathbb{S}^n$ is the set of $n \times n$ symmetric matrices, $\mathbb{S}^n_+$ is the set of $n \times n$ symmetric positive semidefinite matrices, and $A \bullet X := \sum_{k=1}^n \sum_{l=1}^n A_{kl} X_{kl}$.

1. Definition of positive semidefinite matrices
$X \in \mathbb{S}^n$ is positive semidefinite if $z^T X z \ge 0$ for all $z \in \mathbb{R}^n$. Equivalently, all eigenvalues of $X$ are nonnegative.

Remark (eigendecomposition / spectral decomposition): there exist an orthogonal $Q \in \mathbb{R}^{n \times n}$ and $\lambda_i \ge 0$ such that
$X = Q\,\mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)\,Q^T$.
See textbooks of linear algebra for a proof. Consequently, there exists $B \in \mathbb{R}^{n \times n}$ such that $X = BB^T$.

2. Zero diagonal for positive semidefinite matrices
For $X \in \mathbb{S}^n_+$, each $X_{ii}$ is nonnegative. In addition, if $X_{ii} = 0$ for some $i$, then $X_{ij} = X_{ji} = 0$ for all $j = 1, \ldots, n$.
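The two characterizations above can be checked numerically; a minimal sketch (the helper names `is_psd` and `psd_factor` are our own, not from the lecture):

```python
import numpy as np

def is_psd(X, tol=1e-10):
    """PSD test via the eigenvalue criterion: all eigenvalues nonnegative."""
    return bool(np.all(np.linalg.eigvalsh(X) >= -tol))

def psd_factor(X):
    """Return B with X = B B^T, built from X = Q diag(lam) Q^T."""
    lam, Q = np.linalg.eigh(X)
    return Q @ np.diag(np.sqrt(np.clip(lam, 0.0, None)))  # clip tiny round-off

X = np.array([[2.0, 1.0], [1.0, 1.0]])
B = psd_factor(X)
print(is_psd(X), np.allclose(B @ B.T, X))   # True True

# Zero-diagonal property: X_ii = 0 with a nonzero X_ij in the same row
# would violate positive semidefiniteness, so this Y is not PSD.
Y = np.array([[0.0, 0.3], [0.3, 1.0]])
print(is_psd(Y))                            # False
```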

Example of SDP
$C = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$, $A_1 = \begin{pmatrix} 10 & -4 \\ -4 & 0 \end{pmatrix}$, $A_2 = \begin{pmatrix} 0 & 0 \\ 0 & 8 \end{pmatrix}$, $A_3 = \begin{pmatrix} 0 & 9 \\ 9 & 2 \end{pmatrix}$, $X = \begin{pmatrix} x_{11} & x_{12} \\ x_{12} & x_{22} \end{pmatrix}$, $b = (42,\ 8,\ 20)^T$.

Primal SDP is formulated as follows:
$\inf_X \left\{ 2x_{11} + x_{22} \ :\ 10x_{11} - 8x_{12} = 42,\ 8x_{22} = 8,\ 18x_{12} + 2x_{22} = 20,\ \begin{pmatrix} x_{11} & x_{12} \\ x_{12} & x_{22} \end{pmatrix} \in \mathbb{S}^2_+ \right\}$

(Fortunately) the primal solution is uniquely fixed by the three linear equations:
$X = \begin{pmatrix} 5 & 1 \\ 1 & 1 \end{pmatrix}$ is positive definite and obj. val. $= 11$.

Primal SDP is formulated as follows:
$\inf_X \left\{ 2x_{11} + x_{22} \ :\ 10x_{11} - 8x_{12} = 42,\ 8x_{22} = 8,\ 18x_{12} + 2x_{22} = 20,\ \begin{pmatrix} x_{11} & x_{12} \\ x_{12} & x_{22} \end{pmatrix} \in \mathbb{S}^2_+ \right\}$

Dual SDP is formulated as follows:
$\sup_{(y,S)} \left\{ 42y_1 + 8y_2 + 20y_3 \ :\ \begin{pmatrix} 2 - 10y_1 & 4y_1 - 9y_3 \\ 4y_1 - 9y_3 & 1 - 8y_2 - 2y_3 \end{pmatrix} \in \mathbb{S}^2_+ \right\}$

A dual solution is $(y_1, y_2, y_3) = (1/5,\ 37/360,\ 4/45)$ with obj. val. $= 11$.
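The example can be verified exactly with rational arithmetic; a sketch (the constraint signs above were reconstructed so that the stated solutions and the value 11 match, and this check confirms that reconstruction is self-consistent):

```python
from fractions import Fraction as F

# Primal: X = [[5,1],[1,1]] satisfies all three equations and is PD.
x11, x12, x22 = F(5), F(1), F(1)
assert 10*x11 - 8*x12 == 42 and 8*x22 == 8 and 18*x12 + 2*x22 == 20
assert x11 > 0 and x11*x22 - x12**2 > 0     # 2x2 positive definiteness test
print(2*x11 + x22)                           # primal objective: 11

# Dual: S = C - y1*A1 - y2*A2 - y3*A3 must be PSD; here it is even zero.
y1, y2, y3 = F(1, 5), F(37, 360), F(4, 45)
S11, S12, S22 = 2 - 10*y1, 4*y1 - 9*y3, 1 - 8*y2 - 2*y3
assert (S11, S12, S22) == (0, 0, 0)
print(42*y1 + 8*y2 + 20*y3)                  # dual objective: 11
```

Primal and dual objective values agree, so weak duality already certifies optimality of both solutions.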

Application: Computation of lower bounds of nonconvex QP

QP: $\theta := \inf_x \left\{ x^T Q x + 2c^T x \ :\ x^T Q_j x + 2c_j^T x + r_j \le 0\ (j = 1, \ldots, m) \right\}$

SDP relaxation: introduce $X \in \mathbb{S}^{n+1}_+$ in place of $\begin{pmatrix} 1 \\ x \end{pmatrix}(1,\ x^T)$, i.e., replace each product $x_i x_j$ by $X_{ij}$:
$\eta := \inf_X \left\{ \begin{pmatrix} 0 & c^T \\ c & Q \end{pmatrix} \bullet X \ :\ \begin{pmatrix} r_j & c_j^T \\ c_j & Q_j \end{pmatrix} \bullet X \le 0,\ X_{00} = 1,\ X \in \mathbb{S}^{n+1}_+ \right\}$

Remark:
- Handled as SDP; $\eta \le \theta$
- binary $x \in \{0,1\} \Leftrightarrow x^2 - x = 0$, so MIQP with binary variables $=$ QP
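Why $\eta \le \theta$: for any QP-feasible $x$, the rank-one lift $M = (1, x^T)^T (1, x^T)$ is feasible for the relaxation and gives the same objective value. A small sketch on a hypothetical instance (data chosen by us, not from the lecture):

```python
import numpy as np

Q  = np.array([[1.0, 0.0], [0.0, -1.0]])      # indefinite: a nonconvex QP
c  = np.array([0.5, 0.0])
Q1 = np.eye(2); c1 = np.zeros(2); r1 = -1.0   # constraint x^T x - 1 <= 0

x = np.array([0.2, 0.9])                      # a QP-feasible point (|x| <= 1)
qp_obj = x @ Q @ x + 2 * c @ x

v = np.concatenate(([1.0], x))
M = np.outer(v, v)                            # rank-one lift: M ⪰ 0, M_00 = 1
Cbar  = np.block([[np.zeros((1, 1)), c[None, :]],  [c[:, None],  Q]])
Abar1 = np.block([[np.array([[r1]]), c1[None, :]], [c1[:, None], Q1]])

print(np.isclose((Cbar * M).sum(), qp_obj))   # same objective value on the lift
print((Abar1 * M).sum() <= 0)                 # lifted constraint holds
```

The relaxation drops the rank-one requirement on $M$, enlarging the feasible set, hence its optimal value $\eta$ can only drop below $\theta$.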

Application: Lasserre's SDP relaxation for Polynomial Optimization Problems

POP ($f$, $g_j$ are polynomials in $x \in \mathbb{R}^n$):
$\theta := \inf_x \left\{ f(x) \ :\ g_j(x) \ge 0\ (j = 1, \ldots, m) \right\}$

Lasserre's SDP relaxation:
- Generates a sequence of SDP problems $\{P_r\}_{r \ge 1}$
- Optimal values: $\theta_r \le \theta_{r+1} \le \theta\ (\forall r)$
- Under assumptions, $\theta_r \to \theta$ as $r \to \infty$; in practice, $\theta_r$ often reaches $\theta$ already for $r = 2, 3$
- Strongly connected to sums of squares of polynomials

Compared with LP

Similar points:
- Weak and strong duality hold
- PDIPM also works in SDP

Different points:
- SDP may have an irrational optimal solution. E.g., $\sup_y \left\{ y \ :\ \begin{pmatrix} 2 & y \\ y & 1 \end{pmatrix} \in \mathbb{S}^2_+ \right\}$ has the optimal solution $y = \sqrt{2}$, which is not rational.
- Finite optimal value, but no optimal solutions. E.g., $\inf_y \left\{ y_1 \ :\ \begin{pmatrix} y_1 & 1 \\ 1 & y_2 \end{pmatrix} \in \mathbb{S}^2_+ \right\}$ has optimal value 0, which is not attained.
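Both differences can be checked with the 2x2 PSD criterion (nonnegative diagonal and determinant); a small sketch:

```python
import math

# First example: [[2, y], [y, 1]] is PSD iff det = 2 - y^2 >= 0,
# so the supremum is y = sqrt(2), irrational despite rational data.
def feasible(y, tol=1e-12):
    return 2.0 - y * y >= -tol

y_star = math.sqrt(2.0)
print(feasible(y_star), feasible(y_star + 1e-8))   # True False

# Second example: [[y1, 1], [1, y2]] PSD means y1*y2 >= 1, y1 >= 0;
# (y1, y2) = (1/t, t) is feasible for every t > 0, so inf y1 = 0, unattained.
for t in (1e2, 1e4, 1e6):
    print(1.0 / t)
```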

Different points (cont'd)

Two types of infeasibility:
- (LP) $\exists y$: $-A^T y \in \mathbb{R}^n_+$, $b^T y > 0$ $\Leftrightarrow$ primal LP is infeasible
- (SDP) $\exists y$: $-\sum_j y_j A_j \in \mathbb{S}^n_+$, $b^T y > 0$ $\Rightarrow$ primal SDP is infeasible (the converse may fail)

Remark: one needs to consider the following cases:
- Finite optimal value, but no optimal solutions for primal and/or dual
- Difficult to detect the infeasibility completely

Duality on SDP

Weak duality: for any $X \in F_P$ and $(y, S) \in F_D$, $C \bullet X \ge b^T y$, hence $\theta_P \ge \theta_D$.

Slater condition ($\mathbb{S}^n_{++}$ is the set of positive definite matrices):
- Primal satisfies the Slater condition if $\exists X \in F_P$ such that $X \in \mathbb{S}^n_{++}$
- Dual satisfies the Slater condition if $\exists (y, S) \in F_D$ such that $S \in \mathbb{S}^n_{++}$

Strong duality: if primal satisfies the Slater condition and dual is feasible, then $\theta_P = \theta_D$ and dual has an optimal solution.

- Slater conditions are required for both primal and dual for theoretical results on PDIPMs
- See the survey on SDP for a proof

3. Inner products of positive semidefinite matrices
For all $X, S \in \mathbb{S}^n_+$, $X \bullet S \ge 0$. Moreover, $X \bullet S = 0$ iff $XS = O_n$.

Proof: take $B$ s.t. $X = BB^T$ and $D$ s.t. $S = DD^T$. Then
$X \bullet S = \mathrm{Trace}(BB^T DD^T) = \mathrm{Trace}(D^T BB^T D) = \mathrm{Trace}((B^T D)^T (B^T D)) \ge 0$.
Moreover, $X \bullet S = 0 \Leftrightarrow B^T D = O_n \Leftrightarrow XS = O_n$.

Proof of weak duality: for $X \in F_P$ and $(y, S) \in F_D$,
$C \bullet X - b^T y = \left( C - \sum_{j=1}^m y_j A_j \right) \bullet X = S \bullet X \ge 0$
because both matrices are positive semidefinite.
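A quick numerical companion to the two facts above (random PSD data of our own choosing):

```python
import numpy as np

# X•S = Trace(XS) >= 0 for PSD X, S.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)); X = B @ B.T
D = rng.standard_normal((4, 4)); S = D @ D.T
print((X * S).sum() >= 0)                    # True

# Complementary pair: ranges of X2 and S2 are orthogonal,
# so X2•S2 = 0 and X2 S2 = O -- the condition exploited by PDIPM.
u = np.array([1.0, 0.0]); w = np.array([0.0, 1.0])
X2, S2 = np.outer(u, u), np.outer(w, w)
print(np.isclose((X2 * S2).sum(), 0.0), np.allclose(X2 @ S2, 0.0))   # True True
```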

Remark on 3 (cont'd)
Let $X \in F_P$ be optimal in primal and $(y, S) \in F_D$ optimal in dual. Then
$\theta_P - \theta_D = X \bullet S$, so $\theta_P = \theta_D$ gives $X \bullet S = 0 \Leftrightarrow XS = O_n$.
$XS = O_n$ is used in PDIPM.

SDP with multiple positive semidefinite cones

SDP:
$\inf_{X_k}\ \sum_{k=1}^N C_k \bullet X_k$ s.t. $\sum_{k=1}^N A_j^k \bullet X_k = b_j\ (j = 1, \ldots, m)$, $X_k \in \mathbb{S}^{n_k}_+\ (k = 1, \ldots, N)$,
where $C_k, A_j^k \in \mathbb{S}^{n_k}$.

Example:
$A \bullet X \le d,\ X \in \mathbb{S}^n_+ \quad\Leftrightarrow\quad A \bullet X + s = d,\ X \in \mathbb{S}^n_+$ and $s \in \mathbb{S}^1_+\ (= \mathbb{R}_+)$

Dual:
$\sup_{y, S_k}\ b^T y$ s.t. $S_k = A_0^k - \sum_{j=1}^m y_j A_j^k \in \mathbb{S}^{n_k}_+\ (k = 1, \ldots, N)$, where $A_0^k := C_k$.

Remark
- SDP with $\mathbb{R}^n_+$, the second-order cone $L^n$ and $\mathbb{S}^n_+$ can be handled as SDP, and PDIPM works; here $L^n := \{(x_0, x) \in \mathbb{R}^n : \|x\|_2 \le x_0\}$.
- Free variables can be accepted:
$A \bullet X + a^T x = d,\ X \in \mathbb{S}^n_+,\ x \in \mathbb{R}^n \quad\Leftrightarrow\quad A \bullet X + a^T x_1 - a^T x_2 = d,\ X \in \mathbb{S}^n_+$ and $x_1, x_2 \in \mathbb{R}^n_+$

Classification of Algorithms for SDP

Algorithms for SDP:
- Ellipsoid method
- Interior-point methods
- Bundle method
- First-order methods, etc.

Interior-point methods:
- Path-following algorithms (= logarithmic barrier function)
- Potential reduction algorithms
- Self-dual homogeneous embeddings

Path-following algorithms:
- Primal
- Dual
- Primal-dual

Path-following method

Optimality conditions: a pair of optimal solutions $(X, y, S)$ satisfies
$A_j \bullet X = b_j$, $X \in \mathbb{S}^n_+$, $S = C - \sum_{j=1}^m y_j A_j$, $S \in \mathbb{S}^n_+$, $XS = O_n$ ($\Leftrightarrow C \bullet X - b^T y = 0$).

Perturbed system: for $\mu > 0$,
$A_j \bullet X = b_j$, $X \in \mathbb{S}^n_{++}$, $S = C - \sum_{j=1}^m y_j A_j$, $S \in \mathbb{S}^n_{++}$, $XS = \mu I_n$.

Remark:
- For any $\mu > 0$, there is a unique solution $(X(\mu), y(\mu), S(\mu))$
- The central path $\{(X(\mu), y(\mu), S(\mu)) : \mu > 0\}$ is a smooth curve and goes to a pair of optimal solutions of primal and dual as $\mu \to 0$
- Following the central path = path-following method
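For intuition, here is a toy SDP whose central path is nearly closed-form (a hypothetical illustration of ours, not from the lecture): for min $C \bullet X$ s.t. $I \bullet X = 1$, $X \succeq 0$ (optimal value $\lambda_{\min}(C)$), the perturbed conditions $S = C - yI$, $XS = \mu I$ give $X(\mu) = \mu (C - yI)^{-1}$ with $y(\mu) < \lambda_{\min}(C)$ fixed by $\mathrm{Trace}(X(\mu)) = 1$:

```python
import numpy as np

C = np.diag([1.0, 2.0, 4.0])
lam_min = 1.0                                 # optimal value of the toy SDP

def central_point(mu):
    # Trace(mu * (C - y I)^{-1}) is increasing in y on (-inf, lam_min);
    # find y(mu) by bisection, then X(mu) = mu (C - y I)^{-1}.
    lo, hi = lam_min - 100.0, lam_min - 1e-12
    for _ in range(200):
        y = 0.5 * (lo + hi)
        t = mu * np.trace(np.linalg.inv(C - y * np.eye(3)))
        lo, hi = (y, hi) if t < 1.0 else (lo, y)
    return mu * np.linalg.inv(C - y * np.eye(3)), y

for mu in (1.0, 1e-3, 1e-8):
    X, _ = central_point(mu)
    print((C * X).sum())      # objective decreases toward lam_min = 1 as mu -> 0
```

A path-following method follows exactly this curve, but replaces the inner bisection by one Newton step per decrease of $\mu$.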

Algorithm 1: General framework of the path-following method

Input: $(X^0, y^0, S^0) \in F_P \times F_D$ such that $X^0, S^0 \in \mathbb{S}^n_{++}$, $\epsilon > 0$, $0 < \theta < 1$ and some parameters
$X \leftarrow X^0$, $y \leftarrow y^0$ and $S \leftarrow S^0$;
while $X \bullet S > \epsilon$ do
  Compute the direction $(\Delta X, \Delta y, \Delta S)$ from CPE($\mu$), the central path equations with parameter $\mu$;
  Compute step sizes $\alpha_P, \alpha_D > 0$;
  $X \leftarrow X + \alpha_P \Delta X$; $y \leftarrow y + \alpha_D \Delta y$; $S \leftarrow S + \alpha_D \Delta S$;
  $\mu \leftarrow \theta \mu$;
end
return $(X, y, S)$;

Remark:
- An infeasible initial guess is acceptable
- The number of iterations is polynomial in $n$, $m$ and $\log(1/\epsilon)$
- Computational cost = computation of the direction

Computation of direction: find $(\Delta X, \Delta y, \Delta S)$ such that $X + \Delta X \in F_P$, $(y + \Delta y, S + \Delta S) \in F_D$ and
$A_j \bullet \Delta X = 0$, $\quad \Delta S + \sum_{j=1}^m \Delta y_j A_j = O_n$, $\quad XS + \Delta X\, S + X\, \Delta S = \mu I_n$.

Remark: $\Delta X$ may not be symmetric. So replace $XS = \mu I_n$ by
$\frac{1}{2}\left( P X S P^{-1} + P^{-T} S X P^T \right) = \mu I_n$,
where $P$ is nonsingular.

Possible choices of $P$:
- $P = S^{1/2}$ (HRVW/KSH/M)
- $P = X^{-1/2}$ (dual HRVW/KSH/M)
- $P = W^{-1/2}$, $W = X^{1/2} (X^{1/2} S X^{1/2})^{-1/2} X^{1/2}$ (NT)
- $P = \ldots$

More than 20 types of directions are surveyed by Todd.

Computational cost in PDIPM

1. Construction of the linear system in $\Delta y$: for the HRVW/KSH/M direction,
$M \Delta y = (\mathrm{RHS})$, where $M = (\mathrm{Trace}(A_i X A_j S^{-1}))_{1 \le i, j \le m}$.
- Use of sparsity in the $A_j$ is necessary for the computation of $M$
- Almost the same for other search directions

2. Solving the linear system:
- $M$ dense: takes $O(m^3)$ computation by Cholesky decomposition
- $M$ is often sparse in SDP relaxations of POPs: sparse Cholesky decomposition works well

After that, $\Delta S = -\sum_{j=1}^m \Delta y_j A_j$, and obtain $\Delta X$.
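Assembling $M$ is the per-iteration bottleneck; a dense sketch on random data of our own (real solvers exploit the sparsity of the $A_j$ here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
A = [(lambda G: (G + G.T) / 2)(rng.standard_normal((n, n))) for _ in range(m)]
B = rng.standard_normal((n, n)); X = B @ B.T + np.eye(n)     # current X, PD
D = rng.standard_normal((n, n)); S = D @ D.T + np.eye(n)     # current S, PD

# M_ij = Trace(A_i X A_j S^{-1}): the m x m system solved every iteration.
Sinv = np.linalg.inv(S)
M = np.array([[np.trace(A[i] @ X @ A[j] @ Sinv) for j in range(m)]
              for i in range(m)])

rhs = rng.standard_normal(m)                 # placeholder right-hand side
dy = np.linalg.solve(M, rhs)
dS = -sum(dy[j] * A[j] for j in range(m))    # then Delta S = -sum dy_j A_j
print(M.shape, np.allclose(M @ dy, rhs))
```

The naive assembly above already costs $O(m^2 n^3)$; with sparse $A_j$, each trace touches only the nonzero entries, which is why sparsity dominates the practical cost.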

Sparsity in SDP

Example: $Q$ is nonsingular and dense. Then $P_1$ is equivalent to $P_2$:
$P_1: \inf_X \left\{ C \bullet X \ :\ E_i \bullet X = 1\ (i = 1, \ldots, n),\ X \in \mathbb{S}^n_+ \right\}$
$P_2: \inf_X \left\{ (Q^T C Q) \bullet X \ :\ (Q^T E_i Q) \bullet X = 1\ (i = 1, \ldots, n),\ X \in \mathbb{S}^n_+ \right\}$
where
$(E_i)_{pq} = 1$ if $p = q = i$, and $0$ otherwise $(p, q = 1, \ldots, n)$.

CPU time: solved by SeDuMi 1.3 on a MacBook Air (1.7 GHz Intel Core i7).

Figure: CPU time on $P_1$ and $P_2$

Software

Information from http://plato.asu.edu/ftp/sparse_sdp.html:
- SeDuMi, SDPT3 (MATLAB)
- SDPA (C++, MATLAB)
- CSDP (C, MATLAB)
- DSDP (C, MATLAB)
- MOSEK

Remark:
- Almost all of this software is based on PDIPM
- Performance depends on the SDP problem

Modelling languages for SDP (they can call the above software):
- YALMIP
- CVX

Slater conditions

- Strong duality requires Slater conditions for primal or dual
- PDIPM requires Slater conditions for both primal and dual
- Sufficient conditions for the existence of optimal solutions

If either primal or dual does not satisfy the Slater condition, ...

E.g., Lasserre's SDP relaxation of $P: \inf_x \{ x : x^2 - 1 \le 0,\ x \ge 0 \}$:
- Generate the SDP relaxation problems $P_1, P_2, \ldots$
- The Slater condition fails in all SDP relaxations, and all their optimal values are 0
- SeDuMi and SDPA return the wrong value $-1$
- All the SDP relaxation problems are sensitive to numerical errors in floating-point computation

E.g., Graph Equipartition
$G(V, E)$: a weighted undirected graph. Partition the vertex set $V$ into $L$ and $R$ with the minimum total weight of the cut, subject to $|L| = |R|$.

QOP formulation:
$\inf_{x \in \mathbb{R}^n} \left\{ \frac{1}{2} \sum w_{ij} (1 - x_i x_j) \ :\ \sum_{i=1}^n x_i = 0,\ x_i^2 = 1\ (i = 1, \ldots, n) \right\}$

E.g., Graph Equipartition (cont'd)
SDP relaxation problem (constant matrices $W$, $E$ and $E_i$):
$\inf_{X \in \mathbb{S}^n_+} \left\{ W \bullet X \ :\ E \bullet X = 0,\ E_i \bullet X = 1 \right\}$

Since $E \in \mathbb{S}^n_+ \setminus \{O\}$, there is no $X \in \mathbb{S}^n_{++}$ with $E \bullet X = 0$ $\Rightarrow$ the Slater condition fails $\Rightarrow$ inaccurate values and/or many iterations.

Table: SeDuMi 1.3 with $\epsilon$ = 1.0e-8

SDPLIB      iter   cpusec   duality gap
gpp124-1     30     2.40    -4.63e-05
gpp250-1     29    10.19    -1.60e-04
gpp500-1     34    61.58    -1.90e-04
gpp124-4     40     3.02    -2.14e-08
gpp500-2     40    76.88    -8.26e-06

E.g., Graph Equipartition (cont'd)
$\inf_{X \in \mathbb{S}^n_+} \left\{ W \bullet X \ :\ E \bullet X = 0,\ E_i \bullet X = 1 \right\}$

Transformation of the SDP by a nonsingular $V$ with $Ve$ proportional to $e_n$ (e.g., rows $e_i^T - e_{i+1}^T$ for $i < n$ and a final all-ones row):
$X \mapsto V^{-T} X V^{-1} =: Z$ and $E \mapsto V E V^T$.
Then $X \in \mathbb{S}^n_+ \Leftrightarrow Z \in \mathbb{S}^n_+$, and $E \bullet X = 0 \Leftrightarrow Z_{nn} = 0$.
Eliminate the $n$th row and column from the transformed SDP $\Rightarrow$ the Slater condition holds.
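The transformation, made concrete (the particular $V$ below is one choice consistent with the construction above): with $E = ee^T$, every PSD $X$ with $E \bullet X = e^T X e = 0$ satisfies $Xe = 0$, so no feasible $X$ is positive definite; after the congruence, the degeneracy is confined to the last diagonal entry of $Z$ and can be cut away.

```python
import numpy as np

n = 4
e = np.ones(n)
X = np.eye(n) - np.outer(e, e) / n            # a PSD matrix with X e = 0
assert abs(e @ X @ e) < 1e-12                 # i.e. E•X = 0 for E = e e^T

V = np.zeros((n, n))                          # rows e_i - e_{i+1}, then all ones
for i in range(n - 1):
    V[i, i], V[i, i + 1] = 1.0, -1.0
V[n - 1, :] = 1.0                             # V e = n * e_n

Z = np.linalg.inv(V).T @ X @ np.linalg.inv(V)
print(abs(Z[-1, -1]) < 1e-12)                 # last diagonal entry vanishes
Zr = Z[:-1, :-1]                              # variable of the reduced SDP
print(bool(np.all(np.linalg.eigvalsh(Zr) > -1e-10)))   # still PSD
```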

E.g., Graph Equipartition (cont'd)
Table: Numerical results by SeDuMi 1.3 with $\epsilon$ = 1.0e-8.

                Slater fails                 Slater holds
Problems    iter  cpusec      d.gap        d.gap   cpusec  iter
gpp100       30    1.78   -2.46e-07   -4.97e-09     0.73    16
gpp124-1     30    2.34   -4.63e-05   -1.75e-08     1.12    19
gpp124-2     26    1.76   -1.41e-06   -1.11e-09     1.03    18
gpp124-3     30    2.56   -4.41e-07   -3.05e-09     1.01    17
gpp124-4     40    2.93   -2.14e-08   -9.52e-11     1.09    17
gpp250-1     29    8.81   -1.60e-04   -1.82e-08     4.71    21
gpp250-2     29    8.61   -1.49e-05   -9.74e-09     4.19    19
gpp250-3     34    9.48   -3.97e-07   -8.12e-10     4.08    18
gpp250-4     35   11.28   -8.80e-07   -7.43e-10     4.37    19
gpp500-1     34   53.45   -1.90e-04   -2.76e-08    31.49    24
gpp500-2     40   68.47   -8.26e-06   -2.20e-09    28.98    22
gpp500-3     28   54.81   -1.00e-05   -2.39e-09    31.35    21
gpp500-4     28   55.06   -1.02e-06   -8.96e-10    32.06    23

Comments: if the Slater condition does not hold, PDIPM computes inaccurate values and/or spends many iterations; but one can reduce the size of the SDP.

Comments
A simple (?) transformation generates an SDP in which the Slater condition holds.

More elementary approach:
(QOP): $\inf_{x \in \mathbb{R}^n} \left\{ \frac{1}{2} \sum w_{ij} (1 - x_i x_j) \ :\ \sum_{i=1}^n x_i = 0,\ x_i^2 = 1 \right\}$
(QOP'): obtained by substituting $x_1 = -\sum_{i=2}^n x_i$ in (QOP)

(QOP)  --SDP relax.-->  (SDP)
  | equiv.                | equiv. via V
(QOP') --SDP relax.-->  (SDP')

General case: separate $x$ into basic and nonbasic variables and substitute the basic variables before the SDP relaxation of
$\inf \left\{ x^T Q x + 2c^T x \ :\ a_j^T x = b_j\ (j = 1, \ldots, m),\ x_k \in \{0, 1\} \right\}$

Extension
SDP: $\inf_X \left\{ C \bullet X \ :\ A_j \bullet X = b_j,\ X \in \mathbb{S}^n_+ \right\}$

If $\exists y \in \mathbb{R}^m \setminus \{0\}$ such that $b^T y \ge 0$ and $-\sum_j y_j A_j \in \mathbb{S}^n_+$ (with $\sum_j y_j A_j \neq O$), then the Slater condition fails in primal.
Moreover, if such a $y$ satisfies $b^T y > 0$, then primal is infeasible.

Proof: suppose to the contrary that the Slater condition holds in primal, i.e., $\exists \hat{X}$ such that $A_j \bullet \hat{X} = b_j$ and $\hat{X} \in \mathbb{S}^n_{++}$. Then
$0 \le b^T y = \sum_j (A_j \bullet \hat{X}) y_j = \left( \sum_j y_j A_j \right) \bullet \hat{X} < 0$, a contradiction
(the strict inequality uses $\hat{X} \succ 0$ and $\sum_j y_j A_j \preceq 0$, $\neq O$).

Facial Reduction
Idea: let $W := -\sum_j y_j A_j \in \mathbb{S}^n_+$ with $b^T y = 0$. For any feasible solution $X$ of primal,
$W \bullet X = -\sum_j (A_j \bullet X)\, y_j = -b^T y = 0$.
Primal is equivalent to
$\inf_X \left\{ C \bullet X \ :\ A_j \bullet X = b_j,\ X \in \mathbb{S}^n_+ \cap \{W\}^\perp \right\}$
where $\{W\}^\perp := \{X : X \bullet W = 0\}$.

The set $\mathbb{S}^n_+ \cap \{W\}^\perp$ has nice structure:
$\mathbb{S}^n_+ \cap \{W\}^\perp = \left\{ X \in \mathbb{S}^n : X = Q \begin{pmatrix} M & O \\ O & O \end{pmatrix} Q^T,\ M \in \mathbb{S}^r_+ \right\}$
(the columns of the orthogonal $Q$ are eigenvectors of $W$, with the $r$ eigenvectors of the kernel of $W$ first).

Idea (cont'd)
$\mathbb{S}^n_+ \cap \{W\}^\perp = \left\{ X \in \mathbb{S}^n : X = Q \begin{pmatrix} M & O \\ O & O \end{pmatrix} Q^T,\ M \in \mathbb{S}^r_+ \right\}$

Assume $Q = I_n$. Then primal is equivalent to
$\inf_X \left\{ \tilde{C} \bullet X \ :\ \tilde{A}_j \bullet X = b_j,\ X \in \mathbb{S}^r_+ \right\}$
where $\tilde{C}$, $\tilde{A}_j$ are the leading $r \times r$ principal submatrices.

- Compare this SDP with primal: the size shrinks from $n$ to $r$
- It may still not satisfy the Slater condition: find $y$ and $W$ for the smaller primal
- This procedure terminates in finitely many iterations
- This procedure is called the Facial Reduction Algorithm (FRA) and is also applicable to the dual

History of FRA
- Borwein-Wolkowicz in 1980 for general convex optimization
- Ramana, Ramana-Tunçel-Wolkowicz for SDP
- Pataki simplified FRA for the extension
- FRA applied to SDP relaxations for Graph Partition, Quadratic Assignment and Sensor Network problems by Wolkowicz's group
- FRA applied to SDP relaxations for Polynomial Optimization in Waki-Muramatsu

Summary on Slater condition
- Hope that both primal and dual satisfy the Slater conditions
- Otherwise, the SDP may not have any optimal solutions, and a wrong value may be obtained
- Inaccurate solutions may be obtained even if optimal solutions exist; but one can reduce the size of the SDP
- FRA is a general framework to remove the difficulty with the Slater condition
- In modeling to SDP, one needs to be careful even about the dual, to guarantee the existence of optimal solutions in the dual
- A rigorous solution for FRA is necessary

Status of infeasibility
Feasibility and infeasibility of $\inf_X \left\{ C \bullet X : A_j \bullet X = b_j,\ X \in \mathbb{S}^n_+ \right\}$:
- Strongly feasible if the SDP satisfies the Slater condition
- Weakly feasible if the SDP is feasible but does not satisfy the Slater condition
- Strongly infeasible if there is an improving ray $d$, i.e., $b^T d > 0$ and $-\sum_j d_j A_j \in \mathbb{S}^n_+$
- Weakly infeasible if the SDP is infeasible but has no improving ray

Remark:
- Weak infeasibility does not occur in LP
- SOCP and conic optimization also have these four statuses

Example: Infeasible SDPs
$P_1: \inf_X \left\{ C \bullet X \ :\ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \bullet X = 0,\ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \bullet X = 2,\ X \in \mathbb{S}^2_+ \right\}$
$P_2: \inf_X \left\{ C \bullet X \ :\ \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \bullet X = 0,\ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \bullet X = 2,\ X \in \mathbb{S}^2_+ \right\}$

Comments:
- $P_1$ is strongly infeasible because of the certificate $y = (-1, 1)$
- $P_2$ is weakly infeasible because no certificate exists

Characterization of weak infeasibility
An SDP is weakly infeasible iff it is infeasible but, for all $\epsilon > 0$, there exists $X \in \mathbb{S}^n_+$ with $|A_j \bullet X - b_j| < \epsilon$ $(j = 1, \ldots, m)$.
A more elementary characterization of weak infeasibility appears in recent work by Liu and Pataki.

Example $P_2$: perturb $b_1 = 0$ to $\epsilon > 0$:
$P_2^\epsilon: \inf_X \left\{ C \bullet X \ :\ \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \bullet X = \epsilon,\ \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \bullet X = 2,\ X \in \mathbb{S}^2_+ \right\}$
Then the perturbed $P_2$ is feasible:
$X = \begin{pmatrix} 1/\epsilon & 1 \\ 1 & \epsilon \end{pmatrix}$
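This near-feasibility can be seen numerically: $X(\epsilon)$ satisfies the second constraint exactly, violates the first only by $\epsilon$, and stays PSD (determinant exactly zero) even as $\epsilon \to 0$, although no truly feasible point exists. A small sketch:

```python
# X(eps) = [[1/eps, 1], [1, eps]] for shrinking eps:
# - 2x2 PSD test: nonnegative diagonal and det = (1/eps)*eps - 1 = 0,
# - constraint E22•X = x22 = eps -> 0 (target 0),
# - constraint 2*x12 = 2 holds exactly.
for eps in (1e-2, 1e-4, 1e-8):
    x11, x12, x22 = 1.0 / eps, 1.0, eps
    psd = x11 >= 0 and x22 >= 0 and x11 * x22 - x12**2 >= -1e-12
    print(psd, x22, 2 * x12)
```

Note the price of near-feasibility: $x_{11} = 1/\epsilon$ blows up, which is exactly what makes weak infeasibility hard for floating-point solvers to detect.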

Pathological?
(POP): $\inf_{x,y} \left\{ x + y \ :\ xy \ge 1/2,\ x \ge 1/2,\ y \le 1/2 \right\}$; the optimal value is 1.5.
- Apply Lasserre's SDP hierarchy: all the SDP relaxations are weakly infeasible (Waki 2012)
- SeDuMi and SDPA return 1.5 for higher-order SDP relaxations
- Sufficient conditions on (POP) for the SDP relaxations to be weakly infeasible are given in Waki 2012

Summary on infeasibility
- Weak infeasibility may occur in SDP, SOCP and conic optimization, but not in LP
- It is difficult to detect this type of infeasibility by software
- But software may still return good values for weakly infeasible SDPs

Summary
- Introduced a part of the theoretical and practical aspects of SDP
- Skipped applications of SDP, e.g., SDP relaxation for combinatorial problems
- You can now read papers on SDP
- It is not so easy to handle SDP, because it is convex but nonlinear programming

Further Reading I
- M. Anjos and J.B. Lasserre, Handbook on Semidefinite, Conic and Polynomial Optimization. International Series in Operations Research & Management Science, Springer US, 2012.
- E. de Klerk, Aspects of Semidefinite Programming: Interior Point Algorithms and Selected Applications. Applied Optimization, Springer US, 2002.
- B. Gärtner and J. Matoušek, Approximation Algorithms and Semidefinite Programming. Springer, 2012.

Further Reading II
- M. Todd, Semidefinite optimization. Acta Numerica 10 (2001), pp. 515-560.
- L. Tunçel, Polyhedral and Semidefinite Programming Methods in Combinatorial Optimization. Fields Institute Monographs, American Mathematical Society, 2012.
- H. Wolkowicz, R. Saigal and L. Vandenberghe, Handbook of Semidefinite Programming. International Series in Operations Research & Management Science, Springer US, 2000.