Semidefinite Programming in Polynomial Optimization
Semidefinite Programming in Polynomial Optimization
Monique Laurent, CWI, Amsterdam
SIAM Conference on Optimization, Stockholm, 2005
Polynomial Optimization Problem

(P)   p_min := inf_{x in K} p(x)

where K := {x in R^n : g_1(x) >= 0, ..., g_m(x) >= 0} and p, g_1, ..., g_m are polynomials in n variables.
Unconstrained Polynomial Minimization

p_min := inf_{x in R^n} p(x)

Example: an n x n matrix M is copositive if p_min >= 0 for p(x) := sum_{i,j=1}^n x_i^2 x_j^2 M_ij, i.e., if p(x) is nonnegative on R^n.

Example: a sequence a_1, ..., a_n in N can be partitioned if p_min = 0 for p(x) = (sum_{i=1}^n a_i x_i)^2 + sum_{i=1}^n (x_i^2 - 1)^2.
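The partition example can be checked numerically; a small sketch (plain Python, function name ours):

```python
# The partition polynomial p(x) = (sum_i a_i x_i)^2 + sum_i (x_i^2 - 1)^2
# vanishes exactly at sign vectors x in {-1, +1}^n that split a_1, ..., a_n
# into two parts of equal sum; otherwise p(x) > 0.

def partition_poly(a, x):
    linear_part = sum(ai * xi for ai, xi in zip(a, x)) ** 2
    sign_part = sum((xi * xi - 1) ** 2 for xi in x)
    return linear_part + sign_part

a = [1, 2, 3, 4]        # partitionable, since 1 + 4 = 2 + 3
x = [1, -1, -1, 1]      # signs encoding the partition {1, 4} vs {2, 3}
print(partition_poly(a, x))  # 0, certifying p_min = 0
```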
0/1 Linear Programming

min c^T x  s.t.  Ax <= b,  x_i^2 = x_i for all i

Example: the stability number alpha(G) of a graph G = (V, E) can be computed via either of the programs:

alpha(G) = max sum_{i in V} x_i  s.t.  x_i + x_j <= 1 (ij in E),  x_i^2 = x_i (i in V)

1/alpha(G) = min x^T (I + A_G) x  s.t.  sum_{i in V} x_i = 1,  x_i >= 0 (i in V)

Hence (P) is an NP-hard problem.
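For a concrete sanity check, the first formulation can be brute-forced on a tiny graph (a sketch in plain Python; the graph and variable names are ours):

```python
from itertools import product

# Brute-force the 0/1 program for the stability number
#   alpha(G) = max sum x_i  s.t.  x_i + x_j <= 1 on edges, x_i in {0, 1},
# on the 5-cycle C5, whose stability number is 2.

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
alpha = max(sum(x) for x in product([0, 1], repeat=5)
            if all(x[i] + x[j] <= 1 for i, j in edges))
print(alpha)  # 2
```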
Strategy

Approximate (P) by a hierarchy of convex (semidefinite) relaxations [Shor 1987; Nesterov, Lasserre, Parrilo 2000-].

Such relaxations can be constructed using representations of nonnegative polynomials as sums of squares of polynomials, together with the dual theory of moments.
Underlying paradigm: testing whether a polynomial p is nonnegative is hard, but one can test whether p is a sum of squares of polynomials efficiently, via semidefinite programming.
Some notation about polynomials

Multivariate polynomial:  p(x) = sum_{alpha in Z^n_+} p_alpha x_1^{alpha_1} ... x_n^{alpha_n} = sum_{alpha} p_alpha x^alpha

deg(x^alpha) = |alpha| := sum_{i=1}^n alpha_i,   deg(p) = max{ |alpha| : p_alpha != 0 }

We identify a polynomial p(x) of degree d with its sequence of coefficients p = (p_alpha)_{|alpha| <= d}.
p(x) is a sum of squares of polynomials (SOS) if p(x) = sum_{j=1}^m [u_j(x)]^2 for some polynomials u_1, ..., u_m. Then deg(p) is even and deg(u_j) <= deg(p)/2.

Example: x^2 + y^2 + 2xy + z^6 = (x + y)^2 + (z^3)^2 is SOS.
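The example identity can be spot-checked numerically (a throwaway sketch, not part of the original slides):

```python
import random

# Check the SOS identity  x^2 + y^2 + 2xy + z^6 = (x + y)^2 + (z^3)^2
# at random points; agreement up to floating-point rounding.

for _ in range(1000):
    x, y, z = (random.uniform(-5, 5) for _ in range(3))
    lhs = x*x + y*y + 2*x*y + z**6
    rhs = (x + y)**2 + (z**3)**2
    assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(lhs))
print("identity holds at 1000 random points")
```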
Recognizing SOS polynomials via SDP

p(x) = sum_{|alpha| <= 2d} p_alpha x^alpha is a sum of squares of polynomials, say p = sum_j [u_j]^2 with deg(u_j) <= d,

iff  p(x) = z^T X z with X := sum_j u_j u_j^T PSD, where z := (x^beta)_{|beta| <= d},

iff the following semidefinite program is feasible:

X PSD,   sum_{beta + gamma = alpha, |beta|, |gamma| <= d} X_{beta,gamma} = p_alpha   for all |alpha| <= 2d.
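A minimal instance of this feasibility system, for the univariate polynomial p(x) = 1 + 2x + x^2 = (1 + x)^2 (our example, with the PSD check specialized to 2x2 matrices):

```python
# Gram-matrix condition for n = 1, d = 1: p is SOS iff some PSD matrix X,
# with rows/columns indexed by z = (1, x), satisfies
#   sum_{beta + gamma = alpha} X[beta][gamma] = p_alpha.
# Here X = u u^T with u = (1, 1) works, since p = (1 + x)^2.

p = {0: 1.0, 1: 2.0, 2: 1.0}          # coefficients p_alpha
X = [[1.0, 1.0], [1.0, 1.0]]          # candidate Gram matrix

# linear constraints: entries on each anti-diagonal sum to p_alpha
for alpha in range(3):
    s = sum(X[b][alpha - b] for b in range(2) if 0 <= alpha - b <= 1)
    assert abs(s - p[alpha]) < 1e-12

# PSD check for a symmetric 2x2 matrix: nonnegative trace and determinant
trace = X[0][0] + X[1][1]
det = X[0][0]*X[1][1] - X[0][1]*X[1][0]
assert trace >= 0 and det >= -1e-12
print("X certifies that p is a sum of squares")
```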
This is a system in a matrix variable X of order C(n+d, d) x C(n+d, d), with C(n+2d, 2d) equations, i.e., of polynomial size when either n or d is fixed.
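These binomial sizes are easy to tabulate (a sketch using Python's math.comb; the function name is ours):

```python
from math import comb

# Size of the SOS feasibility SDP: the matrix X has order C(n+d, d)
# and there are C(n+2d, 2d) linear equations -- polynomial in n for
# fixed d, and vice versa.

def sos_sdp_size(n, d):
    return comb(n + d, d), comb(n + 2*d, 2*d)

print(sos_sdp_size(2, 2))   # (6, 15): quartics in two variables
print(sos_sdp_size(10, 2))  # (66, 1001)
```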
When does a polynomial have a decomposition as a sum of squares?

Hilbert [1888] classified the pairs (n, d) for which every nonnegative polynomial of degree d in n variables is a sum of squares of polynomials: n = 1, or d = 2, or (n, d) = (2, 4).

E.g., the Motzkin polynomial x^4 y^2 + x^2 y^4 - 3 x^2 y^2 + 1 is nonnegative but not a SOS.
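A quick numerical check of the Motzkin polynomial (this verifies nonnegativity only on a sample grid, of course, not a proof):

```python
# The Motzkin polynomial m(x, y) = x^4 y^2 + x^2 y^4 - 3 x^2 y^2 + 1
# is nonnegative (by AM-GM applied to x^4 y^2, x^2 y^4 and 1), with
# minimum value 0 attained at (+-1, +-1), yet it is not a sum of squares.

def motzkin(x, y):
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

assert motzkin(1, 1) == 0 and motzkin(-1, 1) == 0
grid = [i / 10 for i in range(-30, 31)]
assert min(motzkin(x, y) for x in grid for y in grid) >= 0
print("nonnegative on the grid, zero at (+-1, +-1)")
```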
Artin [1927] solved Hilbert's 17th problem [1900]:

p >= 0 on R^n  ==>  p = sum_i (p_i / q_i)^2, where p_i, q_i in R[x_1, ..., x_n]

That is, p q^2 is SOS for some q in R[x_1, ..., x_n].

Sometimes the common denominator is known. Pólya [1928] + Reznick [1995]: for p homogeneous,

p > 0 on R^n \ {0}  ==>  p (sum_{i=1}^n x_i^2)^r is SOS for some r in N.
(Dual) SOS approximations for (P), following [Lasserre 2001]

Recall K = {x in R^n : g_l(x) >= 0 (l = 1, ..., m)} and

p_min := inf_{x in K} p(x) = sup lambda  s.t.  p(x) - lambda >= 0 for all x in K.

Replacing the nonnegativity condition by the stronger SOS condition, formulate

(SOS_t)   sigma_t := sup lambda  s.t.  p(x) - lambda = s_0(x) + sum_{l=1}^m s_l(x) g_l(x),
          where s_0, s_l are SOS and deg(s_0), deg(s_l g_l) <= 2t,

for any t >= max(ceil(deg(p)/2), max_l ceil(deg(g_l)/2)).
sigma_t <= sigma_{t+1} <= p_min

Example: K = R^n (unconstrained case), deg(p) = 2d. Then sigma_t = sigma_d <= p_min, with equality iff p(x) - p_min is SOS. Equality holds in the univariate case (Shor [1987], Nesterov [2000]).
(Primal) Moment Approximations

p_min = inf_{x in K} p(x) = inf_{mu in P_K} ∫ p(x) mu(dx) = inf_{y in M_K} p^T y

since ∫ p(x) mu(dx) = sum_alpha p_alpha ∫ x^alpha mu(dx). Here:

mu in P_K if mu is a probability measure supported by K;

y in M_K if y_alpha = ∫ x^alpha mu(dx) (the moment of order alpha of the measure mu) for all alpha, for some mu in P_K.
Lemma: If y is the sequence of moments of some mu in P_K, then

y_0 = 1;

M_t(y) := (y_{alpha+beta})_{|alpha|, |beta| <= t} is PSD for all t >= 0   (moment matrix of order t);

M_t(g_l y) := (sum_gamma (g_l)_gamma y_{alpha+beta+gamma})_{|alpha|, |beta| <= t} is PSD for all t >= 0   (localizing matrix).

Moreover, supp(mu) is contained in the set of common zeros of the polynomials p in Ker M_t(y).
Proof: For p = (p_alpha)_{|alpha| <= t},

p^T M_t(y) p = ∫ p(x)^2 mu(dx) >= 0,   p^T M_t(g_l y) p = ∫ g_l(x) p(x)^2 mu(dx) >= 0.
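The lemma and its proof can be illustrated for n = 1, t = 1 with a two-atom measure (a hedged sketch; the atoms and weights are ours):

```python
import random

# For mu = lam * delta_a + (1 - lam) * delta_b in one variable, the
# moments y_k = lam * a^k + (1 - lam) * b^k give the moment matrix
# M_1(y) = [[y0, y1], [y1, y2]], which is PSD, and for any linear
# polynomial p(x) = p0 + p1*x we have p^T M_1(y) p = integral of p^2 dmu.

lam, a, b = 0.3, -1.0, 2.0
y = [lam * a**k + (1 - lam) * b**k for k in range(3)]   # y_0, y_1, y_2
M = [[y[0], y[1]], [y[1], y[2]]]

assert abs(y[0] - 1.0) < 1e-12                              # y_0 = 1
assert M[0][0] >= 0 and M[0][0]*M[1][1] - M[0][1]**2 >= -1e-12  # PSD (2x2)

for _ in range(100):
    p0, p1 = random.uniform(-1, 1), random.uniform(-1, 1)
    quad = p0*p0*M[0][0] + 2*p0*p1*M[0][1] + p1*p1*M[1][1]
    integ = lam*(p0 + p1*a)**2 + (1 - lam)*(p0 + p1*b)**2
    assert abs(quad - integ) < 1e-9 and quad >= -1e-12
print("M_1(y) is PSD and p^T M_1(y) p equals the integral of p^2 dmu")
```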
If we relax the moment condition "y in M_K" by requiring positive semidefiniteness of the moment and localizing matrices, we obtain a hierarchy of tractable SDP relaxations:

(MOM_t)   p_t := inf p^T y  s.t.  y_0 = 1,  M_t(y) PSD,  M_{t - d_l}(g_l y) PSD (l = 1, ..., m)

for any t >= ceil(deg(p)/2), where d_l := ceil(deg(g_l)/2). Then p_t <= p_{t+1} <= p_min.
Lemma: The semidefinite programs (SOS_t) and (MOM_t) are dual to each other.

Weak duality: sigma_t <= p_t. There is no duality gap if, e.g., K has a nonempty interior.
Properties of the bounds sigma_t <= p_t <= p_min

- For fixed t, one can compute sigma_t, p_t in polynomial time (to any fixed precision).
- Asymptotic (sometimes finite) convergence to p_min, via representation theorems for positive polynomials from real algebraic geometry.
- Optimality certificate, via theorems of Curto and Fialkow about moment matrices.
- Extraction of global minimizers, via the eigenvalue method for solving systems of polynomial equations.
Asymptotic convergence

Representation theorem [Putinar 1993] (more elementary proof by [Schweighofer 2003]): If

(*)  there exist SOS polynomials u_l such that {x : sum_l u_l(x) g_l(x) >= 0} is compact,

then p > 0 on K  ==>  p = s_0 + sum_l s_l g_l with s_0, s_l SOS.

Convergence theorem [Lasserre 2001]: If (*) holds, then lim_{t -> inf} sigma_t = lim_{t -> inf} p_t = p_min.
Note: (*) holds, e.g.,

- if {x : g_l(x) >= 0} is compact for some l;
- if the equations x_i^2 = x_i are present in the description of K (0/1 case);
- if the radius R of a ball containing K is known: then add the (redundant) constraint R^2 - sum_i x_i^2 >= 0 to the description of K.

Note: a representation result valid for "p >= 0 on K" would give a finite convergence result.
Finite Convergence: Finite Variety

Assume the equations h_1(x) = 0, ..., h_k(x) = 0 are present in the description of K and that

V := {x in C^n : h_1(x) = 0, ..., h_k(x) = 0} is finite.

Then p_t = p_min for t large enough [Laurent 2002]. Moreover, sigma_t = p_t = p_min for t large enough if h_1, ..., h_k form a Groebner basis [Laurent 2002] or if they generate a radical ideal [Parrilo 2002].

Representation result in the radical case [Parrilo 2002]: p >= 0 on K  ==>  p = s_0 + sum_l s_l g_l (s_0, s_l SOS).
In the 0/1 case, p_t = p_min for t >= n + d, where d := max_l ceil(deg(g_l)/2) [Lasserre 2001].

The hierarchy of bounds p_t refines other combinatorial hierarchies, e.g., that of Lovász-Schrijver [1991] (cf. [Laurent 2003]).

Note: one can use the equations h_1(x) = 0, ..., h_k(x) = 0 to reduce the number of variables in the moment relaxations (MOM_t).
Optimality Certificate

Theorem [Henrion-Lasserre 2005]: If y is an optimal solution to the program (MOM_t) satisfying the rank condition

(RC)   rank M_t(y) = rank M_{t-d}(y),   where d = max_l ceil(deg(g_l)/2),

then p_t = p_min.
Proof: By a theorem of [Curto-Fialkow 1996] (cf. [Laurent 2004]), y is the sequence of moments of an r-atomic nonnegative measure mu:

mu = sum_{i=1}^r lambda_i delta_{x_i},   where lambda_i > 0, sum_i lambda_i = 1, x_i in K

==>  y = sum_{i=1}^r lambda_i (x_i^alpha)_{|alpha| <= 2t}

==>  p_min >= p_t = p^T y = sum_{i=1}^r lambda_i p(x_i) >= p_min.

Moreover, x_1, ..., x_r are global minimizers of p over K.
Extracting global minimizers

From the proof, supp(mu) = {x_1, ..., x_r} is contained in the set of global minimizers. Moreover, supp(mu) is contained in the set of common zeros of the polynomials p in Ker M_t(y).

Therefore one can find the global minimizers x_1, ..., x_r by computing the common zeros of a system of polynomial equations. Use the eigenvalue method: compute the eigenvalues of the r x r multiplication matrices in the quotient ring R[x_1, ..., x_n]/I, where I is the ideal generated by the kernel of M_t(y).
Implementations of the SOS/moment relaxation method

- GloptiPoly, by Henrion and Lasserre
- SOSTOOLS, by Prajna, Papachristodoulou, and Parrilo
Example 1

min p = -25(x_1 - 2)^2 - (x_2 - 2)^2 - (x_3 - 1)^2 - (x_4 - 4)^2 - (x_5 - 1)^2 - (x_6 - 4)^2
s.t. (x_3 - 3)^2 + x_4 >= 4,  (x_5 - 3)^2 + x_6 >= 4,
     x_1 - 3x_2 <= 2,  -x_1 + x_2 <= 2,  2 <= x_1 + x_2 <= 6,
     1 <= x_3 <= 5,  0 <= x_4 <= 6,  1 <= x_5 <= 5,  0 <= x_6 <= 10,  x_1, x_2 >= 0

The relaxation of order t = 1 is unbounded and extracts no solution. The global minimum is found at the relaxation of order t = 2, with extracted minimizer (5, 1, 5, 0, 5, 10).
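The transcription of this slide lost the minus signs of the objective; assuming the standard form of this classical test problem (concave quadratic objective), one can at least verify that the extracted point is feasible and evaluate p there:

```python
# Assumed objective (signs reconstructed):
#   p = -25(x1-2)^2 - (x2-2)^2 - (x3-1)^2 - (x4-4)^2 - (x5-1)^2 - (x6-4)^2

def p(x1, x2, x3, x4, x5, x6):
    return -(25*(x1-2)**2 + (x2-2)**2 + (x3-1)**2
             + (x4-4)**2 + (x5-1)**2 + (x6-4)**2)

x = (5, 1, 5, 0, 5, 10)   # minimizer extracted by the order-2 relaxation
x1, x2, x3, x4, x5, x6 = x
feasible = all([
    (x3-3)**2 + x4 >= 4, (x5-3)**2 + x6 >= 4,
    x1 - 3*x2 <= 2, -x1 + x2 <= 2, 2 <= x1 + x2 <= 6,
    1 <= x3 <= 5, 0 <= x4 <= 6, 1 <= x5 <= 5, 0 <= x6 <= 10,
    x1 >= 0, x2 >= 0,
])
print(feasible, p(*x))   # True -310
```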
Example 2

min p = -x_1 - x_2
s.t. x_2 <= 2x_1^4 - 8x_1^3 + 8x_1^2 + 2,
     x_2 <= 4x_1^4 - 32x_1^3 + 88x_1^2 - 96x_1 + 36,
     0 <= x_1 <= 3,  0 <= x_2 <= 4

The low-order relaxations extract no solution. The global minimum is found at the relaxation of order t = 4, with extracted minimizer (2.3295, 3.1785).
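Under the same caveat that signs were reconstructed from a garbled transcription, one can check that the extracted point lies (up to rounding) on both quartic constraint curves, i.e., both constraints are active at the optimum:

```python
# Assumed constraints (signs reconstructed):
#   x2 <= 2*x1^4 - 8*x1^3 + 8*x1^2 + 2
#   x2 <= 4*x1^4 - 32*x1^3 + 88*x1^2 - 96*x1 + 36

x1, x2 = 2.3295, 3.1785   # minimizer extracted at order t = 4
g1 = 2*x1**4 - 8*x1**3 + 8*x1**2 + 2 - x2
g2 = 4*x1**4 - 32*x1**3 + 88*x1**2 - 96*x1 + 36 - x2
assert abs(g1) < 1e-3 and abs(g2) < 1e-3   # both constraints bind
print("both quartic constraints are active at the extracted point")
```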
Unconstrained polynomial minimization

As K = R^n, the relaxation scheme gives just one bound: sigma_d = p_d <= p_min, with equality iff p(x) - p_min is SOS.

Idea: get better bounds by transforming the unconstrained problem into a constrained one. If p has a minimum, then

p_min = p_grad := inf_{x in V_grad^R} p(x),   where V_grad^R := {x in R^n : ∂p/∂x_i(x) = 0 (i = 1, ..., n)}.
If, moreover, a bound R is known on the norm of a global minimizer, then

p_min = p_ball := inf_{R^2 - sum_i x_i^2 >= 0} p(x).
When p attains its minimum:

The ball approach: p_ball can be approximated via Lasserre's relaxation scheme (asymptotic convergence, as Putinar's assumption holds!), but it seems to work well only if the radius R of the ball is not too large.

The gradient variety approach. Representation result [Demmel, Nie, Sturmfels 2004]:

p > 0 on V_grad^R  ==>  p = s_0 + sum_{i=1}^n u_i ∂p/∂x_i,   where s_0 is SOS and u_i in R[x_1, ..., x_n].
Convergence result [Demmel, Nie, Sturmfels 2004]: the SOS and moment bounds converge asymptotically to p_grad.

Moreover, when ∂p/∂x_1, ..., ∂p/∂x_n generate a radical ideal, the representation result holds for p >= 0, implying finite convergence.
What if p is not known to have a minimum?

Strategy: perturb the polynomial p [Hanzon-Jibetean 2003], [Jibetean-Laurent 2004]. For deg(p) = 2d, set

p_eps(x) := p(x) + eps (sum_{i=1}^n x_i^{2d+2})   for small eps > 0.

Then:
- p_eps has a minimum, and lim_{eps -> 0} (p_eps)_min = p_min;
- the gradient variety of p_eps is finite, so one can compute (p_eps)_min via the gradient variety approach;
- the moment bounds converge to (p_eps)_min in finitely many steps.
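A crude grid search illustrates the perturbation on the later example p = (xy - 1)^2 + y^2, whose infimum 0 is not attained. Here deg p = 4, so the perturbation term is eps (x^6 + y^6); the grid bounds and resolution are our own choices:

```python
# For eps > 0 the perturbed polynomial attains its minimum, and the
# minimum value shrinks toward p_min = 0 as eps decreases.

def p_eps(x, y, eps):
    return (x*y - 1)**2 + y**2 + eps * (x**6 + y**6)

def grid_min(eps):
    xs = [i / 50 for i in range(0, 501)]      # x in [0, 10]
    ys = [j / 200 for j in range(0, 201)]     # y in [0, 1]
    return min(p_eps(x, y, eps) for x in xs for y in ys)

m4, m6 = grid_min(1e-4), grid_min(1e-6)
assert 0 < m6 < m4 < 1   # positive minima, decreasing with eps
print(m4, m6)
```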
Furthermore:
- one can use the gradient equations ∂p_eps/∂x_i = eps (2d + 2) x_i^{2d+1} + ∂p/∂x_i = 0 to reduce the number of variables (only the variables y_alpha with alpha_i <= 2d for all i are needed) and to express (p_eps)_min via an SDP with a single LMI;
- the limits of global minimizers of p_eps (as eps -> 0) are global minimizers of p (if any).
Examples

Motzkin polynomial: p = x^2 y^2 (x^2 + y^2 - 3) + 1. Then p_min = 0, attained at (±1, ±1), and p is not SOS.

At order t = 3 the ranks (r_t, r_{t-1}, r_{t-2}) = (10, 6, 3) and no solution is extracted. For decreasing values of eps the rank pattern becomes (4, 4, 4) and the extracted solutions (±0.9453, ±0.9453), (±0.9935, ±0.9935), (±0.9993, ±0.9993), (±0.9999, ±0.9999) converge to the global minimizers (±1, ±1).
Example: p = (xy - 1)^2 + y^2. Then p_min = 0 is not attained.

At the first order the ranks (r_t, r_{t-1}, r_{t-2}) = (4, 2, 1) and no solution is extracted. As eps decreases, the extracted points ±(1.3981, ...), ±(1.9499, ...), ±(2.6674, ...), ±(3.6085, ...), ±(4.8511, ...), ±(6.4986, ...), ±(8.6882, ...) wander off to infinity, reflecting the fact that the infimum is not attained; for the smallest values of eps no solution is extracted.
Testing copositivity of a matrix M

M is copositive iff p_min = 0 for p(x) := sum_{i,j=1}^n x_i^2 x_j^2 M_ij.

Example 1: a matrix M (entries lost in the transcription) which is copositive, e.g., since (sum_i x_i^2) p is SOS [Parrilo 2000].

At the order with rank pattern (4, 4, 4) the solution (±0.7058, 0, 0, ±0.7058) is extracted; at the other orders no solution is extracted.
Example 2: another matrix M (entries lost in the transcription). At order t = 3 with ranks (8, 7, 4) no solution is extracted; at t = 4 with ranks (8, 8, 7) the points ±(0.8165, ..., 0, 0) and, for smaller eps, ±(2.5820, ..., 2.5820, 0, 0) are extracted, giving a certificate that M is not copositive.
Approximating the stability number via matrix copositivity and Pólya's representation result [de Klerk-Pasechnik 2002], [Parrilo 2000]

M strictly copositive, i.e., p_M := sum_{i,j=1}^n x_i^2 x_j^2 M_ij > 0 on R^n \ {0}

==> [Pólya]  (sum_i x_i^2)^r p_M is SOS for some r in N  ==>  M copositive.

[Motzkin-Straus 1965]:  alpha(G) = min t  s.t.  t(I + A_G) - J is copositive  (J the all-ones matrix).
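The Motzkin-Straus connection can be illustrated on the 5-cycle C5, where alpha = 2, via the equivalent quadratic form 1/alpha(G) = min {x^T (I + A_G) x : sum x_i = 1, x >= 0}. A coarse simplex grid suffices here because the true minimum 1/2 is attained at a grid point (the graph and grid are our choices):

```python
from itertools import product

# Adjacency matrix of the 5-cycle C5
A = [[0]*5 for _ in range(5)]
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:
    A[i][j] = A[j][i] = 1

def q(x):  # x^T (I + A) x
    return sum(x[i] * x[j] * ((i == j) + A[i][j])
               for i in range(5) for j in range(5))

# minimize q over the grid points of the simplex with denominator N
N = 10
best = min(q([k / N for k in ks])
           for ks in product(range(N + 1), repeat=5) if sum(ks) == N)
print(round(1 / best))  # 2 = alpha(C5); minimum 1/2 at e.g. (1/2,0,1/2,0,0)
```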
Define

ϑ^(r)(G) := min t  s.t.  t(I + A_G) - J in K^(r),   where K^(r) := {M : (sum_i x_i^2)^r p_M is SOS}.

Theorem [dKP]: ϑ^(r)(G) = alpha(G) if r >= alpha(G)^2.

Conjecture [dKP]: ϑ^(r)(G) = alpha(G) if r >= alpha(G) - 1. Motivation: this rate of convergence holds for other hierarchies, e.g., those of Lovász-Schrijver and Lasserre.

Theorem [Gvozdenović-Laurent 2004]: the conjecture is true for alpha(G) <= 8. Lasserre's hierarchy refines the hierarchy ϑ^(r)(G). Other results by [Peña-Vera-Zuluaga 2005].
Yet another representation result

Theorem [Lasserre 2004]: If p >= 0 on R^n, then for every eps > 0 there is r in N for which

p_{r,eps} := p + eps sum_{k=0}^r (sum_{i=1}^n x_i^{2k}) / k!

is SOS. Note that ||p_{r,eps} - p||_1 -> 0 as eps -> 0, although there are many more nonnegative polynomials than SOS polynomials (at fixed degree, as n -> infinity) [Blekherman 2003].
This gives a new relaxation scheme for p_min = inf_{x in R^n} p(x):

inf y^T p_{r,eps}  s.t.  M_r(y) PSD,  y_0 = 1,

with an extension to the constrained case.
Many other interesting results, unfortunately not covered here...

- SDP versus LP relaxations [Sherali-Adams], [Lasserre]
- Exploiting symmetry to reduce the size of the SDP [Gatermann-Parrilo], [Schrijver], [Dukanovic-Rendl], [de Klerk-Pasechnik-Schrijver], [Laurent], ...: invariant theory, block-diagonalization of C*-algebras
- Exploiting the sparsity of the polynomials to reduce the size of the SDP [Reznick 1978], [Kim-Kojima-Muramatsu-Waki], ...
- Extension to matrix polynomial optimization [Kojima], [Hol-Scherer], [Henrion-Lasserre], ...
More informationOctober 3rd, 2012. Linear Algebra & Properties of the Covariance Matrix
Linear Algebra & Properties of the Covariance Matrix October 3rd, 2012 Estimation of r and C Let rn 1, rn, t..., rn T be the historical return rates on the n th asset. rn 1 rṇ 2 r n =. r T n n = 1, 2,...,
More informationAN INTRODUCTION TO NUMERICAL METHODS AND ANALYSIS
AN INTRODUCTION TO NUMERICAL METHODS AND ANALYSIS Revised Edition James Epperson Mathematical Reviews BICENTENNIAL 0, 1 8 0 7 z ewiley wu 2007 r71 BICENTENNIAL WILEY-INTERSCIENCE A John Wiley & Sons, Inc.,
More informationNonlinear Optimization: Algorithms 3: Interior-point methods
Nonlinear Optimization: Algorithms 3: Interior-point methods INSEAD, Spring 2006 Jean-Philippe Vert Ecole des Mines de Paris Jean-Philippe.Vert@mines.org Nonlinear optimization c 2006 Jean-Philippe Vert,
More informationALMOST COMMON PRIORS 1. INTRODUCTION
ALMOST COMMON PRIORS ZIV HELLMAN ABSTRACT. What happens when priors are not common? We introduce a measure for how far a type space is from having a common prior, which we term prior distance. If a type
More informationPractical Guide to the Simplex Method of Linear Programming
Practical Guide to the Simplex Method of Linear Programming Marcel Oliver Revised: April, 0 The basic steps of the simplex algorithm Step : Write the linear programming problem in standard form Linear
More informationA New Method for Estimating Maximum Power Transfer and Voltage Stability Margins to Mitigate the Risk of Voltage Collapse
A New Method for Estimating Maximum Power Transfer and Voltage Stability Margins to Mitigate the Risk of Voltage Collapse Bernie Lesieutre Dan Molzahn University of Wisconsin-Madison PSERC Webinar, October
More informationNonlinear Programming Methods.S2 Quadratic Programming
Nonlinear Programming Methods.S2 Quadratic Programming Operations Research Models and Methods Paul A. Jensen and Jonathan F. Bard A linearly constrained optimization problem with a quadratic objective
More informationNotes on Symmetric Matrices
CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.
More informationThe Characteristic Polynomial
Physics 116A Winter 2011 The Characteristic Polynomial 1 Coefficients of the characteristic polynomial Consider the eigenvalue problem for an n n matrix A, A v = λ v, v 0 (1) The solution to this problem
More information24. The Branch and Bound Method
24. The Branch and Bound Method It has serious practical consequences if it is known that a combinatorial problem is NP-complete. Then one can conclude according to the present state of science that no
More informationLECTURE 5: DUALITY AND SENSITIVITY ANALYSIS. 1. Dual linear program 2. Duality theory 3. Sensitivity analysis 4. Dual simplex method
LECTURE 5: DUALITY AND SENSITIVITY ANALYSIS 1. Dual linear program 2. Duality theory 3. Sensitivity analysis 4. Dual simplex method Introduction to dual linear program Given a constraint matrix A, right
More informationSimilarity and Diagonalization. Similar Matrices
MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that
More informationAround Hilbert s 17th Problem
Documenta Math. 433 Around Hilbert s 17th Problem Konrad Schmüdgen 2010 Mathematics Subject Classification: 14P10 Keywords and Phrases: Positive polynomials, sums of squares The starting point of the history
More informationDuality in General Programs. Ryan Tibshirani Convex Optimization 10-725/36-725
Duality in General Programs Ryan Tibshirani Convex Optimization 10-725/36-725 1 Last time: duality in linear programs Given c R n, A R m n, b R m, G R r n, h R r : min x R n c T x max u R m, v R r b T
More informationSF2940: Probability theory Lecture 8: Multivariate Normal Distribution
SF2940: Probability theory Lecture 8: Multivariate Normal Distribution Timo Koski 24.09.2015 Timo Koski Matematisk statistik 24.09.2015 1 / 1 Learning outcomes Random vectors, mean vector, covariance matrix,
More informationEquilibrium computation: Part 1
Equilibrium computation: Part 1 Nicola Gatti 1 Troels Bjerre Sorensen 2 1 Politecnico di Milano, Italy 2 Duke University, USA Nicola Gatti and Troels Bjerre Sørensen ( Politecnico di Milano, Italy, Equilibrium
More informationMetric Spaces. Chapter 7. 7.1. Metrics
Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some
More informationLecture Topic: Low-Rank Approximations
Lecture Topic: Low-Rank Approximations Low-Rank Approximations We have seen principal component analysis. The extraction of the first principle eigenvalue could be seen as an approximation of the original
More informationPerron vector Optimization applied to search engines
Perron vector Optimization applied to search engines Olivier Fercoq INRIA Saclay and CMAP Ecole Polytechnique May 18, 2011 Web page ranking The core of search engines Semantic rankings (keywords) Hyperlink
More informationSeveral Views of Support Vector Machines
Several Views of Support Vector Machines Ryan M. Rifkin Honda Research Institute USA, Inc. Human Intention Understanding Group 2007 Tikhonov Regularization We are considering algorithms of the form min
More informationBANACH AND HILBERT SPACE REVIEW
BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but
More informationNOV - 30211/II. 1. Let f(z) = sin z, z C. Then f(z) : 3. Let the sequence {a n } be given. (A) is bounded in the complex plane
Mathematical Sciences Paper II Time Allowed : 75 Minutes] [Maximum Marks : 100 Note : This Paper contains Fifty (50) multiple choice questions. Each question carries Two () marks. Attempt All questions.
More informationTHE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS
THE FUNDAMENTAL THEOREM OF ALGEBRA VIA PROPER MAPS KEITH CONRAD 1. Introduction The Fundamental Theorem of Algebra says every nonconstant polynomial with complex coefficients can be factored into linear
More informationInner Product Spaces
Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and
More information5 INTEGER LINEAR PROGRAMMING (ILP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1
5 INTEGER LINEAR PROGRAMMING (ILP) E. Amaldi Fondamenti di R.O. Politecnico di Milano 1 General Integer Linear Program: (ILP) min c T x Ax b x 0 integer Assumption: A, b integer The integrality condition
More informationNotes on Determinant
ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without
More informationVariational approach to restore point-like and curve-like singularities in imaging
Variational approach to restore point-like and curve-like singularities in imaging Daniele Graziani joint work with Gilles Aubert and Laure Blanc-Féraud Roma 12/06/2012 Daniele Graziani (Roma) 12/06/2012
More informationCONTROLLABILITY. Chapter 2. 2.1 Reachable Set and Controllability. Suppose we have a linear system described by the state equation
Chapter 2 CONTROLLABILITY 2 Reachable Set and Controllability Suppose we have a linear system described by the state equation ẋ Ax + Bu (2) x() x Consider the following problem For a given vector x in
More informationF. ABTAHI and M. ZARRIN. (Communicated by J. Goldstein)
Journal of Algerian Mathematical Society Vol. 1, pp. 1 6 1 CONCERNING THE l p -CONJECTURE FOR DISCRETE SEMIGROUPS F. ABTAHI and M. ZARRIN (Communicated by J. Goldstein) Abstract. For 2 < p
More informationStationary random graphs on Z with prescribed iid degrees and finite mean connections
Stationary random graphs on Z with prescribed iid degrees and finite mean connections Maria Deijfen Johan Jonasson February 2006 Abstract Let F be a probability distribution with support on the non-negative
More information3 1. Note that all cubes solve it; therefore, there are no more
Math 13 Problem set 5 Artin 11.4.7 Factor the following polynomials into irreducible factors in Q[x]: (a) x 3 3x (b) x 3 3x + (c) x 9 6x 6 + 9x 3 3 Solution: The first two polynomials are cubics, so if
More informationMATHEMATICAL METHODS OF STATISTICS
MATHEMATICAL METHODS OF STATISTICS By HARALD CRAMER TROFESSOK IN THE UNIVERSITY OF STOCKHOLM Princeton PRINCETON UNIVERSITY PRESS 1946 TABLE OF CONTENTS. First Part. MATHEMATICAL INTRODUCTION. CHAPTERS
More informationThe Algebraic Degree of Semidefinite Programming
Mathematical Programming manuscript No. (will be inserted by the editor) Jiawang Nie Kristian Ranestad Bernd Sturmfels The Algebraic Degree of Semidefinite Programming the date of receipt and acceptance
More informationChapter 5. Banach Spaces
9 Chapter 5 Banach Spaces Many linear equations may be formulated in terms of a suitable linear operator acting on a Banach space. In this chapter, we study Banach spaces and linear operators acting on
More information