
1 Summer course on Convex Optimization. Fifth Lecture: Interior-Point Methods (1). Michel Baes, K.U.Leuven; Bharath Rangarajan, U. Minnesota

2 Interior-Point Methods: the rebirth of an old idea

Suppose that $f$ is convex and $g_1, \ldots, g_m$ are concave:
\[
\min f(x) \quad \text{s.t.} \quad g_i(x) \ge 0, \; 1 \le i \le m, \; x \in X.
\]
We want to solve this by Newton's method, but constraints are difficult to handle with this method. Idea: put them in the objective,
\[
\min f(x) + \mu \sum_{i=1}^m \varphi(g_i(x)),
\]
where $\varphi$ is convex, nonincreasing, and $\varphi(t) \to +\infty$ as $t \to 0^+$. Then solve it for various $\mu \to 0$. The function $\Phi(x) := \sum_{i=1}^m \varphi(g_i(x))$ is a barrier: $\Phi$ is convex, and $\Phi(x) \to +\infty$ as $x$ approaches the boundary of $X$. Problems: tiny convergence zone, numerical problems when $\mu \to 0$.
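
This classical scheme is easy to prototype. Here is a minimal Python sketch (my own illustration, not from the lecture), applied to the hypothetical one-dimensional problem $\min x$ s.t. $x - 1 \ge 0$ with $\varphi(t) = -\ln(t)$:

```python
import numpy as np

# Hypothetical instance: min f(x) = x  s.t.  g(x) = x - 1 >= 0,
# with the classical barrier phi(t) = -ln(t).
def F(x, mu):       # barrier-penalized objective
    return x + mu * (-np.log(x - 1.0))

def dF(x, mu):      # first derivative
    return 1.0 - mu / (x - 1.0)

def d2F(x, mu):     # second derivative
    return mu / (x - 1.0) ** 2

x, mu = 2.0, 1.0
for _ in range(30):                 # decrease mu geometrically
    for _ in range(50):             # Newton's method on F(., mu)
        step = dF(x, mu) / d2F(x, mu)
        while x - step <= 1.0:      # keep the iterate strictly feasible
            step *= 0.5
        x -= step
        if abs(dF(x, mu)) < 1e-10:
            break
    mu *= 0.5
print(x)   # approaches the optimum x* = 1 as mu -> 0
```

The unconstrained minimizer of $F(\cdot, \mu)$ here is $x = 1 + \mu$, so the iterates trace the path toward $x^* = 1$, illustrating both the idea and the numerical trouble as $\mu \to 0$.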

3 Interior-Point Methods: the rebirth of an old idea

Reign of Augmented Lagrangian methods. In the language of yesterday's lecture, the family used was the quadratic smoothing
\[
F = \left\{ x \mapsto \max_{u} \left( \sum_i u_i x_i - \frac{\mu}{2} \|u\|_2^2 \right) : \mu > 0 \right\}.
\]
1984: Narendra Karmarkar creates a new polynomial-time algorithm for Linear Programming. Soon after, people realized it fits Fiacco and McCormick's barrier framework.

4 The blooming of Interior-Point Methods

1988: Yurii Nesterov and Arkadii Nemirovski generalize interior-point methods to convex optimization. Nonlinearity is not an issue anymore. Largest nonlinear optimization problem solved: $10^9$ variables and constraints (Gondzio, 2006).

1992: Yurii Nesterov and Mike Todd define efficient algorithms for semidefinite optimization. A new way of modelling appears, with applications in mechanics, control, finance, structural design, ... (Boyd, Vandenberghe).

5 Something odd in Black-Box methods for convex programming

How do Black-Box methods deal with convexity? First, you realize that your problem is convex (or even strongly convex). Thus, you investigate its global properties. Then you hide your problem in a mysterious black box: you only interact with it through an oracle that gives you local information (if $x$ is the current point, it returns $f(x)$, and/or $\nabla f(x)$, and/or $\nabla^2 f(x)$, ...). These methods act as if they didn't know the problem is convex.

6 By the way, how do you check convexity?

Directly from the definition. Try this one: for $x > 0$,
\[
f(x) := \max\left\{ \exp(\|x\|_2^2),\ \lambda_{\max}\left(\sum_{i=1}^n x_i A_i\right),\ -\ln(x_1) + 5x_n^4 \right\}.
\]
Or by using the structure of the function: you know several simple convex functions ($t^2$, $\exp(t)$, ...) and several operations that preserve convexity ($\max$, $+$, ...). And after all this work, you hand this beautiful structure to a Black-Box method that doesn't care about it! Interior-point methods, in contrast, explicitly use this structure to construct a barrier for the feasible set (see below).
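
As an aside (not from the slides), checking convexity "directly from the definition" can at least be falsified numerically. A minimal Python sketch that samples random pairs and tests the midpoint inequality; a passed test is of course no proof, while a failed one is a certificate of non-convexity:

```python
import numpy as np

def probably_convex(f, dim, trials=10_000, tol=1e-9):
    """Random test of f((x+y)/2) <= (f(x)+f(y))/2.

    Returns (False, (x, y)) with a certificate pair if the midpoint
    inequality fails; (True, None) means "no counterexample found",
    which is NOT a proof of convexity.
    """
    rng = np.random.default_rng(0)
    for _ in range(trials):
        x, y = rng.standard_normal(dim), rng.standard_normal(dim)
        if f(0.5 * (x + y)) > 0.5 * (f(x) + f(y)) + tol:
            return False, (x, y)
    return True, None

print(probably_convex(lambda x: np.sum(x**2), dim=3)[0])    # True
print(probably_convex(lambda x: -np.sum(x**2), dim=3)[0])   # False
```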

7 Newton's Method under scrutiny

\[
x_{k+1} = x_k - \nabla^2 f(x_k)^{-1} \nabla f(x_k)
\]
We use $\|\cdot\|$ for the Euclidean norm and the induced matrix norm.

Theorem 1 (Kantorovich). Suppose that the function $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ satisfies:
$f$ is twice continuously differentiable;
there exists $M > 0$ such that $\|\nabla^2 f(x) - \nabla^2 f(y)\| \le M \|x - y\|$ for all $x, y$;
$\nabla^2 f(x^*) \succeq l I \succ 0$.
Then, when $\|x_0 - x^*\| < 2l/(3M)$, the iterates $x_k$ of Newton's method are well-defined and
\[
\|x_{k+1} - x^*\| \le \frac{M \|x_k - x^*\|^2}{2\left(l - M \|x_k - x^*\|\right)}.
\]
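
For concreteness, a minimal Python sketch of the pure Newton iteration above (my own illustration), run on a hypothetical test function $f(x) = \cosh(x_1) + x_2^2$ that satisfies the assumptions of Theorem 1; the printed errors display the predicted quadratic decay:

```python
import numpy as np

# Hypothetical test problem: f(x) = cosh(x1) + x2^2, minimizer x* = 0,
# with a Lipschitz Hessian and nabla^2 f(x*) = diag(1, 2) > 0.
grad = lambda x: np.array([np.sinh(x[0]), 2.0 * x[1]])
hess = lambda x: np.diag([np.cosh(x[0]), 2.0])

x = np.array([1.0, 1.0])    # inside the convergence zone
for k in range(6):
    x = x - np.linalg.solve(hess(x), grad(x))   # Newton step
    print(k, np.linalg.norm(x))                 # error: quadratic decay
```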

8 Newton's Method under scrutiny: Kantorovich's proof

\[
x_{k+1} = x_k - \nabla^2 f(x_k)^{-1} \nabla f(x_k)
\]
We have:
\[
x_{k+1} - x^* = x_k - x^* - \nabla^2 f(x_k)^{-1} \nabla f(x_k)
= \nabla^2 f(x_k)^{-1} \int_0^1 \left[ \nabla^2 f(x_k) - \nabla^2 f(x^* + t(x_k - x^*)) \right] (x_k - x^*) \, dt.
\]
Hence, with $r_k := \|x_k - x^*\|$,
\[
r_{k+1} \le \|\nabla^2 f(x_k)^{-1}\| \left( \int_0^1 \|\nabla^2 f(x_k) - \nabla^2 f(x^* + t(x_k - x^*))\| \, dt \right) r_k
\le \|\nabla^2 f(x_k)^{-1}\| \, \frac{M}{2} r_k^2 \le \frac{M r_k^2}{2(l - M r_k)},
\]
because $\nabla^2 f(x_k) \succeq \nabla^2 f(x^*) - M r_k I_n \succeq (l - M r_k) I_n$. Note that $r_{k+1} < r_k$ when $r_k < 2l/(3M)$, because then
\[
r_{k+1} \le \frac{M r_k^2}{2(l - M r_k)} < r_k.
\]

9 Kantorovich's result is very strange

The iterates of Newton's Method are affine invariant.
Proof: Consider an invertible $A$, the iteration $x_{k+1} = x_k - \nabla^2 f(x_k)^{-1} \nabla f(x_k)$, the function $\phi(y) := f(Ay)$, and $y_0 := A^{-1} x_0$. Note that
\[
\langle \nabla \phi(y_0), h \rangle = \lim_{t \to 0} \frac{f(Ay_0 + tAh) - f(Ay_0)}{t} = \langle \nabla f(Ay_0), Ah \rangle,
\]
so $\nabla \phi(y_0) = A^{\top} \nabla f(Ay_0)$. Similarly, $\nabla^2 \phi(y_0) = A^{\top} \nabla^2 f(Ay_0) A$. Hence
\[
y_1 = y_0 - \nabla^2 \phi(y_0)^{-1} \nabla \phi(y_0) = A^{-1} \left( x_0 - \nabla^2 f(Ay_0)^{-1} \nabla f(Ay_0) \right) = A^{-1} x_1.
\]
However, the convergence zone $\|x_0 - x^*\| < 2l/(3M)$ is not affine invariant!
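
This invariance is easy to observe numerically; a small sketch (mine, with a hypothetical test function and matrix): running Newton on $f$ from $x_0$ and on $\phi(y) = f(Ay)$ from $A^{-1} x_0$ produces iterates related exactly by $A$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)) + 3 * np.eye(2)   # some invertible matrix

# f(x) = cosh(x1) + cosh(x2); derivatives of f and of phi(y) = f(Ay)
gf = lambda x: np.sinh(x)
Hf = lambda x: np.diag(np.cosh(x))
gp = lambda y: A.T @ gf(A @ y)            # grad phi(y) = A^T grad f(Ay)
Hp = lambda y: A.T @ Hf(A @ y) @ A        # hess phi(y) = A^T hess f(Ay) A

x = np.array([0.9, -0.4])
y = np.linalg.solve(A, x)                 # y0 = A^{-1} x0
for _ in range(5):
    x = x - np.linalg.solve(Hf(x), gf(x))
    y = y - np.linalg.solve(Hp(y), gp(y))
    print(np.linalg.norm(x - A @ y))      # ~1e-16: iterates match under A
```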

10 What's wrong with the assumptions?

\[
x_{k+1} = x_k - \nabla^2 f(x_k)^{-1} \nabla f(x_k)
\]
We use $\|\cdot\|$ for the Euclidean norm and the induced matrix norm.

Theorem 1 (Kantorovich). Suppose that the function $f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$ satisfies:
$f$ is twice continuously differentiable;
there exists $M > 0$ such that $\|\nabla^2 f(x) - \nabla^2 f(y)\| \le M \|x - y\|$ for all $x, y$;
$\nabla^2 f(x^*) \succeq l I \succ 0$.
Then, when $\|x_0 - x^*\| < 2l/(3M)$, the iterates $x_k$ of Newton's method are well-defined and
\[
\|x_{k+1} - x^*\| \le \frac{M \|x_k - x^*\|^2}{2\left(l - M \|x_k - x^*\|\right)}.
\]
(The culprit: the Euclidean norm, which is not affine invariant.)

11 Nesterov and Nemirovski's solution

Instead of using the Euclidean norm, use a local norm:
\[
\|u\|_x = \sqrt{\langle \nabla^2 f(x) u, u \rangle}.
\]
This norm is affine invariant. Let $\phi(y) := f(Ay)$, $x = Ay$, and $v = A^{-1} u$. We have
\[
\langle \nabla^2 \phi(y) v, v \rangle = \langle (A^{\top} \nabla^2 f(Ay) A)(A^{-1} u), A^{-1} u \rangle = \langle \nabla^2 f(x) u, u \rangle.
\]
The property $\|\nabla^2 f(x) - \nabla^2 f(y)\| \le M \|x - y\|$ should then be replaced by:
\[
\forall x, y, h: \quad \nabla^3 f(x)[h, h, x - y] \le M \|h\|_x^2 \, \|x - y\|_x.
\]

12 Self-concordancy: one of the two big properties

There exists $M > 0$ for which:
\[
\forall x, y, h: \quad \nabla^3 f(x)[h, h, x - y] \le M \|h\|_x^2 \, \|x - y\|_x,
\]
or equivalently
\[
\forall x, h: \quad |\nabla^3 f(x)[h, h, h]| \le M \|h\|_x^3;
\]
after the normalization $M = 2$:
\[
\forall x, h: \quad |\nabla^3 f(x)[h, h, h]| \le 2 \|h\|_x^3.
\]
Such functions are called self-concordant. Examples (check them as an exercise):
$-\ln(t)$ (domain: $\mathbb{R}_{++}$);
$-\ln\det(X)$ (domain: $\mathbb{S}^n_{++}$);
$-\ln(t^2 - \|x\|_2^2)$ (domain: the ice-cream cone);
$-\ln(t) - \ln(\ln(t) - x)$ (domain: $\{(t, x) : t > \exp(x)\}$).
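
For the first example, the exercise is a one-line computation; here is the worked check (not on the slide):
\[
f(t) = -\ln t, \qquad f''(t) = \frac{1}{t^2}, \qquad f'''(t) = -\frac{2}{t^3},
\]
so for every $t > 0$ and $h \in \mathbb{R}$,
\[
|f'''(t)\, h^3| = \frac{2|h|^3}{t^3} = 2 \left( \frac{h^2}{t^2} \right)^{3/2} = 2 \left( f''(t)\, h^2 \right)^{3/2} = 2 \|h\|_t^3,
\]
and $-\ln t$ is self-concordant with the normalized constant $M = 2$ (with equality, in fact).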

13 Self-concordant functions: the right thing for Newton's method

These functions have MANY properties, among which: for every $x \in \operatorname{dom} f$,
\[
\{ y : \|y - x\|_x < 1 \} \subseteq \operatorname{dom} f
\]
(interesting for Karmarkar's method).
Proof: Let $x \in \operatorname{dom} f$ and $h \in \mathbb{R}^n$. Let
\[
\phi(t) = \frac{1}{\|h\|_{x + th}} = \langle \nabla^2 f(x + th) h, h \rangle^{-1/2}.
\]
Then $\phi(t) \to 0^+$ when $x + th$ approaches the boundary of $\operatorname{dom} f$ (because the Hessian blows up there), and as long as $\phi(t) > 0$, $x + th \in \operatorname{dom} f$. Moreover,
\[
\phi'(t) = -\frac{\nabla^3 f(x + th)[h, h, h]}{2 \, \langle \nabla^2 f(x + th) h, h \rangle^{3/2}},
\]
so $|\phi'(t)| \le 1$ by self-concordancy. Hence $\phi(t) > 0$ for $-\phi(0) < t < \phi(0)$, i.e. $x \pm \phi(0) h \in \operatorname{dom} f$, i.e. $x \pm h / \|h\|_x \in \operatorname{dom} f$.

14 Self-concordant functions: the right thing for Newton's method

These functions have MANY properties, among which: if
\[
\|\nabla f(x)\|_x^* := \sqrt{\langle \nabla^2 f(x)^{-1} \nabla f(x), \nabla f(x) \rangle} \le \frac{3 - \sqrt{5}}{2},
\]
then $x$ is in the quadratic convergence zone (an automatic test: no $x^*$ needed). Moreover, the following damped method ALWAYS converges:
\[
x_{k+1} = x_k - \frac{\nabla^2 f(x_k)^{-1} \nabla f(x_k)}{1 + \|\nabla f(x_k)\|_{x_k}^*}.
\]
Exercise: the dual norm $\|h\|_x^*$ is $\sqrt{\langle \nabla^2 f(x)^{-1} h, h \rangle}$.
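
A minimal Python sketch of this damped scheme (my own illustration, on a hypothetical self-concordant test function $f(x) = x_1 + x_2 - \ln x_1 - \ln x_2$ with minimizer $(1, 1)$); note that no line search and no knowledge of the convergence zone are needed:

```python
import numpy as np

# Hypothetical self-concordant test function on R^2_{++}:
#   f(x) = x1 + x2 - ln(x1) - ln(x2),   minimizer x* = (1, 1)
grad = lambda x: 1.0 - 1.0 / x
hess = lambda x: np.diag(1.0 / x**2)

x = np.array([25.0, 0.01])        # far from x*, but strictly feasible
for k in range(40):
    n = np.linalg.solve(hess(x), grad(x))    # Newton direction
    lam = np.sqrt(grad(x) @ n)               # ||grad f(x)||_x^*
    x = x - n / (1.0 + lam)                  # damped Newton step
    if lam < 1e-10:
        break
print(k, x)                       # converges to (1, 1) from anywhere
```

The step length $1/(1 + \lambda)$ keeps each iterate inside the unit ball of the local norm, hence (by the previous slide) inside the domain.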

15 Finally! Interior-point methods

Main idea: formulate your problem in its conic form, and use as barrier for your cone a self-concordant function $f$:
\[
\begin{array}{lcl}
\min\ \langle c, x \rangle & & \min\ \langle c, x \rangle + \mu f(x) \\
\text{s.t. } Ax = b & \longrightarrow & \text{s.t. } Ax = b \\
\phantom{\text{s.t. }} x \in K & &
\end{array}
\]
The set of minimizers $x(\mu)$ is called the primal central path, and $x(\mu) \to x^*$ when $\mu \to 0$. But wait: is $\langle c, x \rangle + \mu f(x)$ a self-concordant function? Yes! Up to the positive scaling $1/\mu$ it is $\langle c, x \rangle / \mu + f(x)$, and adding a linear term to $f$ changes neither $\nabla^2 f$ nor $\nabla^3 f$.

16 How to decrease µ?

Main goal: we want to decrease it linearly: $\mu^+ = (1 - \theta)\mu$. Main idea: use our knowledge of the quadratic convergence zone. Current point: $x(\mu)$. Target: $x(\mu^+)$. We have $c + \mu \nabla f(x(\mu)) = 0$, and we want
\[
\left\| c + (1 - \theta)\mu \nabla f(x(\mu)) \right\|_{x(\mu)}^* < \frac{3 - \sqrt{5}}{2},
\quad \text{i.e.} \quad
\theta \mu \left\| \nabla f(x(\mu)) \right\|_{x(\mu)}^* < \frac{3 - \sqrt{5}}{2},
\]
hence, we would like to have a bound for $\|\nabla f(x)\|_x^*$. Note: this bound is responsible for the complexity. The smaller it is, the bigger the decrease $\theta$ can be.

17 The two crucial properties of barriers

Self-concordancy:
\[
\forall x, h: \quad |\nabla^3 f(x)[h, h, h]| \le 2 \|h\|_x^3.
\]
Bound for $\|\nabla f(x)\|_x^*$:
\[
\forall x \in \operatorname{dom} f: \quad \langle \nabla^2 f(x)^{-1} \nabla f(x), \nabla f(x) \rangle \le \nu.
\]
Functions with these two properties are called $\nu$-self-concordant barriers. The theoretical complexity of the best IPMs is $O(\sqrt{\nu} \ln(C/\epsilon))$ Newton iterations (in practice, even better: $O(\ln(\nu/\epsilon))$).
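
Putting the two slides together explains the complexity bound; here is a sketch that suppresses the absolute constants the rigorous analysis tracks. The barrier bound gives $\|\nabla f(x(\mu))\|_{x(\mu)}^* \le \sqrt{\nu}$, so the centering condition of the previous slide holds whenever
\[
\theta \mu \left\| \nabla f(x(\mu)) \right\|_{x(\mu)}^* \le \theta \mu \sqrt{\nu} \lesssim \mu^+,
\quad \text{i.e.} \quad \theta \approx \frac{1}{\sqrt{\nu}},
\]
and after $k$ steps $\mu_k = (1 - \theta)^k \mu_0 \le e^{-k/\sqrt{\nu}} \mu_0$, so $\mu_k \le \epsilon$ after $k = O(\sqrt{\nu} \ln(\mu_0 / \epsilon))$ Newton iterations.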

18 An interior-point algorithm

Algorithm 1. Let $\epsilon > 0$, $\mu_0 > 0$, and $x_0$ be feasible and approximately centered, i.e. with $\|c + \mu_0 \nabla f(x_0)\|_{x_0}^*$ below a fixed threshold. Let $\theta := \alpha / \sqrt{\nu}$ (for a suitable absolute constant $\alpha$) and $k := 0$.
While $2.58\, \mu_k \nu \ge \epsilon$:
1. $\mu_{k+1} := \mu_k (1 - \theta)$;
2. $x_{k+1} := x_k - \nabla^2 f(x_k)^{-1} \left( \nabla f(x_k) + \frac{c}{\mu_{k+1}} \right)$;
3. Increment $k$.
Complexity upper bound: $O(\sqrt{\nu}) \ln(2.58\, \mu_0 \nu / \epsilon)$ iterations. Proof of the constants: PhD thesis of François Glineur.
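
A self-contained Python sketch of Algorithm 1 (my own illustration; the constant $\alpha = 0.25$, the damping guard, and the toy instance are hypothetical choices, not the tuned constants of the cited thesis) for a tiny LP over the simplex, using the log-barrier $f(x) = -\sum_i \ln x_i$ with $\nu = n$; the Newton step is computed through the KKT system so that $Ax = b$ is preserved:

```python
import numpy as np

# Tiny LP over the simplex: min c^T x  s.t.  1^T x = 1, x >= 0,
# with barrier f(x) = -sum(ln x_i), a nu-self-concordant barrier, nu = n.
c = np.array([1.0, 2.0, 3.0])
A = np.ones((1, 3)); b = np.array([1.0])
n, nu = 3, 3.0

x = np.array([1/3, 1/3, 1/3])        # feasible, well centered for mu0
mu = 10.0                            # mu0
theta = 0.25 / np.sqrt(nu)           # theta = alpha/sqrt(nu), alpha = 0.25
eps = 1e-8

while 2.58 * mu * nu >= eps:
    mu *= (1.0 - theta)
    # One Newton step for  <c,x>/mu + f(x)  restricted to  Ax = b:
    g = c / mu - 1.0 / x                       # gradient
    H = np.diag(1.0 / x**2)                    # Hessian of the barrier
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    d = np.linalg.solve(K, np.concatenate([-g, np.zeros(1)]))[:n]
    t = 1.0                                    # safety net: stay in the cone
    while np.any(x + t * d <= 0):
        t *= 0.5
    x = x + t * d
print(x)    # close to the LP optimum (1, 0, 0)
```

At termination the duality gap is bounded by a multiple of $\mu \nu$, which is what the stopping test controls.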

19 How do you construct self-concordant barriers?

1. Basic barriers:

Domain: $\mathbb{R}_+$ — barrier $-\ln(t)$ — complexity parameter $\nu = 1$
Domain: $\mathbb{S}^n_+$ — barrier $-\ln(\det(X))$ — complexity parameter $\nu = n$
Domain: $\operatorname{epi} \|\cdot\|_2$ — barrier $-\ln(t^2 - \|x\|_2^2)$ — complexity parameter $\nu = 2$
Domain: $\operatorname{epi} \exp$ — barrier $-\ln(t) - \ln(\ln(t) - x)$ — complexity parameter $\nu = 2$

2. Combining barriers: Let $f_1$ be a barrier for $K_1$ with parameter $\nu_1$, and $f_2$ be a barrier for $K_2$ with parameter $\nu_2$. Then $f_1 + f_2$ is a barrier for $K_1 \cap K_2$ with parameter $\nu_1 + \nu_2$. Let $f$ be the barrier for $K$ with parameter $\nu$. Then $f^*(s) := \sup_{x \in \mathbb{R}^n} \left( \langle s, x \rangle - f(x) \right)$ is a barrier for the dual cone $K^*$ (up to a sign convention in the pairing) with parameter $\nu$.

20 How do you construct self-concordant barriers?

1. Basic barriers:

Domain: $\mathbb{R}_+$ — barrier $-\ln(t)$ — complexity parameter $\nu = 1$
Domain: $\mathbb{S}^n_+$ — barrier $-\ln(\det(X))$ — complexity parameter $\nu = n$
Domain: $\operatorname{epi} \|\cdot\|_2$ — barrier $-\ln(t^2 - \|x\|_2^2)$ — complexity parameter $\nu = 2$
Domain: $\operatorname{epi} \exp$ — barrier $-\ln(t) - \ln(\ln(t) - x)$ — complexity parameter $\nu = 2$

2. Combining barriers: Let $f$ be the barrier for $K$ with parameter $\nu$. The restriction of $f$ to an affine subspace $S$ is a barrier for the set $S \cap K$ with parameter $\nu$.
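
As a worked consequence of these rules (my own example, not on the slide): for a polyhedron $\{x : a_i^{\top} x \le b_i,\ i = 1, \ldots, m\}$, each constraint set is an affine preimage of $\mathbb{R}_+$, so
\[
F(x) = -\sum_{i=1}^m \ln\left( b_i - a_i^{\top} x \right)
\]
is a self-concordant barrier with parameter $\nu = m$: each term is the basic barrier $-\ln(t)$ ($\nu = 1$) composed with an affine map, and the $m$ parameters add up.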

21 What about primal-dual problems?

Everything is the same:
\[
\begin{array}{lcl}
\min\ \langle c, x \rangle & \qquad & \max\ \langle y, b \rangle \\
\text{s.t. } Ax = b & & \text{s.t. } A^{\top} y + s = c \\
\phantom{\text{s.t. }} x \in K & & \phantom{\text{s.t. }} s \in K^*
\end{array}
\]
i.e. (what's the optimal value?)
\[
\begin{array}{lcl}
\min\ \langle c, x \rangle - \langle b, y \rangle & & \min\ \langle s, x \rangle + \mu \left( f(x) + f^*(s) \right) \\
\text{s.t. } Ax = b & \longrightarrow & \text{s.t. } Ax = b \\
\phantom{\text{s.t. }} A^{\top} y + s = c & & \phantom{\text{s.t. }} A^{\top} y + s = c \\
\phantom{\text{s.t. }} x \in K, \; s \in K^* & &
\end{array}
\]
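
The parenthetical question has a one-line answer (a step the slide leaves as an aside): for any point feasible in the combined problem,
\[
\langle c, x \rangle - \langle b, y \rangle = \langle A^{\top} y + s, x \rangle - \langle Ax, y \rangle = \langle s, x \rangle \ge 0,
\]
since $x \in K$ and $s \in K^*$; so the two objectives above coincide, and under strong duality the optimal value of the combined problem is $0$.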

22 Strangely enough, primal-dual IPMs work very well

All IPM optimization packages (SeDuMi, MOSEK, ...) are primal-dual. Efficient IPMs can solve: linear problems; second-order cone problems (ice-cream cone), in particular quadratic problems; semidefinite problems; and (sometimes) geometric problems, i.e. problems involving posynomials (see Thursday's lecture), thanks to the properties of their self-concordant barriers.

23 And in practice?

Many speed-ups and tricks are used:
for computing the starting point (and dealing with infeasible starting points);
for solving the Newton system (reduction of variables);
for updating $\mu$: decrease $\mu$ much faster than the theory allows, then take several steps targeting the central path.
THOUSANDS of research papers deal with these questions.

24 Some references

[1] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, Kluwer, 2004.
[2] Y. Nesterov and A. Nemirovski, Interior-Point Polynomial Algorithms in Convex Programming, SIAM, 1994.
[3] J. Renegar, A Mathematical View of Interior-Point Methods in Convex Optimization, SIAM, 2001.
