17. Path-following methods

L. Vandenberghe, EE236C (Spring 2016)

17. Path-following methods

- central path
- short-step barrier method
- predictor-corrector method

Path-following methods 17-1

Introduction

Primal-dual pair of conic LPs:

$$\begin{array}{llll}
\mbox{minimize}   & c^T x        & \qquad \mbox{maximize}   & -b^T z \\
\mbox{subject to} & Ax \preceq b & \qquad \mbox{subject to} & A^T z + c = 0, \quad z \succeq 0
\end{array}$$

- $A \in \mathbf{R}^{m \times n}$ with $\mathop{\bf rank}(A) = n$
- inequalities are with respect to a proper cone $K$ and its dual cone $K^*$
- we will assume the primal and dual problems are strictly feasible

This lecture:

- feasible methods that follow the central path to find the solution
- complexity analysis based on the theory of self-concordant functions

Path-following methods 17-2

Outline

- central path
- short-step barrier method
- predictor-corrector method

Barrier for the feasible set

Definition: as a barrier function for the feasible set we will use
$$\psi(x) = \varphi(b - Ax)$$
where $\varphi$ is a $\theta$-normal barrier for $K$.

Notation (in this lecture): $\|v\|_x = \left(v^T \nabla^2\psi(x)^{-1} v\right)^{1/2}$

Properties:

- $\psi$ is self-concordant, with domain $\{x \mid Ax \prec b\}$
- the Newton decrement of $\psi$ is bounded by $\sqrt{\theta}$, i.e.,
  $$\|\nabla\psi(x)\|_x^2 = \nabla\psi(x)^T \nabla^2\psi(x)^{-1} \nabla\psi(x) \le \theta \qquad \forall x \in \mathop{\bf dom}\psi$$
  (proof on next page)

Path-following methods 17-3
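To make these objects concrete, here is a minimal NumPy sketch (not from the slides) for the LP special case $K = \mathbf{R}^m_+$ with the logarithmic barrier $\varphi(s) = -\sum_i \log s_i$, for which $\theta = m$; the function name and the random test data are illustrative assumptions.

```python
import numpy as np

def psi_derivatives(A, b, x):
    # gradient and Hessian of psi(x) = phi(b - A x) for the log barrier phi(s) = -sum(log s)
    s = b - A @ x                      # slack vector, must be strictly positive
    grad_phi = -1.0 / s                # grad phi(s)
    hess_phi = np.diag(1.0 / s**2)     # hess phi(s)
    grad_psi = -A.T @ grad_phi         # grad psi(x) = -A^T grad phi(s)
    hess_psi = A.T @ hess_phi @ A      # hess psi(x) = A^T hess phi(s) A
    return grad_psi, hess_psi

# numerical check of the bound  grad psi^T (hess psi)^{-1} grad psi <= theta  (theta = m)
rng = np.random.default_rng(0)
m, n = 20, 5
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
b = A @ x + rng.uniform(0.5, 2.0, m)   # makes x strictly feasible: b - A x > 0
g, H = psi_derivatives(A, b, x)
print(g @ np.linalg.solve(H, g), "<=", m)
```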

Proof of the bound on the Newton decrement: the gradient and Hessian of $\psi$ are (with $s = b - Ax$)
$$\nabla\psi(x) = -A^T\nabla\varphi(s), \qquad \nabla^2\psi(x) = A^T\nabla^2\varphi(s)A$$

From page 16-24, $\nabla\varphi(s)^T\nabla^2\varphi(s)^{-1}\nabla\varphi(s) = \theta$; therefore
$$\begin{aligned}
\nabla\psi(x)^T\nabla^2\psi(x)^{-1}\nabla\psi(x)
&= \sup_v \left(-v^T\nabla^2\psi(x)v + 2\nabla\psi(x)^Tv\right) \\
&= \sup_v \left(-(Av)^T\nabla^2\varphi(s)(Av) - 2\nabla\varphi(s)^T(Av)\right) \\
&\le \sup_w \left(-w^T\nabla^2\varphi(s)w - 2\nabla\varphi(s)^Tw\right) \\
&= \nabla\varphi(s)^T\nabla^2\varphi(s)^{-1}\nabla\varphi(s) = \theta
\end{aligned}$$

Path-following methods 17-4

Central path

Definition: the set of minimizers $x^\star(t)$, for $t > 0$, of
$$tc^Tx + \psi(x) = tc^Tx + \varphi(b - Ax)$$

Optimality conditions:
$$A^T\nabla\varphi(s) = tc, \qquad s = b - Ax$$

- this implies that $z = -\dfrac{1}{t}\nabla\varphi(s)$ is strictly dual feasible
- by weak duality,
  $$c^Tx^\star(t) - p^\star \le c^Tx^\star(t) + b^Tz = z^Ts = \frac{\theta}{t}$$
- hence, $c^Tx^\star(t) \to p^\star$ as $t \to \infty$

Path-following methods 17-5
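As a quick sanity check (an illustrative special case, not part of the slides): for $K = \mathbf{R}^m_+$ with $\varphi(s) = -\sum_i \log s_i$ and $\theta = m$, the dual point and the gap on the central path work out explicitly:
$$\nabla\varphi(s) = -\begin{bmatrix}1/s_1 \\ \vdots \\ 1/s_m\end{bmatrix},
\qquad
z = -\frac{1}{t}\nabla\varphi(s) \;\Longrightarrow\; z_i = \frac{1}{t\,s_i},
\qquad
s^Tz = \sum_{i=1}^m \frac{s_i}{t\,s_i} = \frac{m}{t} = \frac{\theta}{t}.$$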

Existence and uniqueness

Centering problem:
$$\begin{array}{ll}
\mbox{minimize}   & tc^Tx + \varphi(s) \\
\mbox{subject to} & Ax + s = b
\end{array}$$

Lagrange dual (with the dual cone barrier $\varphi^*$ of page 16-27):
$$\begin{array}{ll}
\mbox{maximize}   & -tb^Tz - \varphi^*(z) + \theta\log t \\
\mbox{subject to} & A^Tz + c = 0
\end{array}$$

- a strictly feasible $z$ for the dual conic LP is feasible for the dual centering problem
- if the dual conic LP is strictly feasible, $tc^Tx + \varphi(b - Ax)$ is bounded below
- from self-concordance theory (page 16-12), $x^\star(t)$ exists and is unique

Path-following methods 17-6

Dual points in neighborhood of central path

The Newton step $\Delta x$ for $tc^Tx + \psi(x) = tc^Tx + \varphi(b - Ax)$ satisfies the Newton equation
$$A^T\nabla^2\varphi(s)A\,\Delta x = -tc + A^T\nabla\varphi(s), \qquad s = b - Ax$$

The Newton decrement is
$$\lambda_t(x) = \left(\Delta x^T\nabla^2\psi(x)\,\Delta x\right)^{1/2}$$

Dual feasible point: define
$$z = -\frac{1}{t}\left(\nabla\varphi(s) - \nabla^2\varphi(s)A\,\Delta x\right)$$

- satisfies $A^Tz + c = 0$ by the definition of $\Delta x$ (the Newton equation)
- satisfies $z \succ 0$ if $\lambda_t(x) < 1$ (see next page)

Path-following methods 17-7
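The sketch below (continuing the hypothetical log-barrier example from page 17-3) computes the Newton step, the Newton decrement, and the candidate dual point of this page; `newton_step_and_dual` is an assumed helper name, not part of the slides.

```python
import numpy as np

def newton_step_and_dual(A, b, c, x, t):
    # Newton step for f_t(x) = t c^T x + phi(b - A x), with phi(s) = -sum(log s),
    # together with the dual point z = -(1/t)(grad phi(s) - hess phi(s) A dx)
    s = b - A @ x
    grad_phi = -1.0 / s
    hess_phi = np.diag(1.0 / s**2)
    hess = A.T @ hess_phi @ A                  # Hessian of f_t (= Hessian of psi)
    grad = t * c - A.T @ grad_phi              # gradient of f_t
    dx = -np.linalg.solve(hess, grad)          # Newton step
    lam = np.sqrt(dx @ hess @ dx)              # Newton decrement lambda_t(x)
    z = -(grad_phi - hess_phi @ (A @ dx)) / t  # candidate dual point
    return dx, lam, z

# usage: A^T z + c = 0 always holds; z > 0 whenever lam < 1 (Dikin ellipsoid argument)
# dx, lam, z = newton_step_and_dual(A, b, c, x, t)
# print(np.linalg.norm(A.T @ z + c), lam, np.all(z > 0))
```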

Proof: $z \succ 0$ follows from the Dikin ellipsoid theorem.

- the Newton decrement satisfies
  $$\lambda_t(x)^2 = \Delta x^T\nabla^2\psi(x)\,\Delta x = \Delta x^TA^T\nabla^2\varphi(s)A\,\Delta x = v^T\nabla^2\varphi(s)^{-1}v,
  \qquad \mbox{where } v = \nabla^2\varphi(s)A\,\Delta x$$
- define $u = -\nabla\varphi(s)$; then $\nabla^2\varphi^*(u) = \nabla^2\varphi(s)^{-1}$ (see page 16-28) and
  $$\lambda_t(x)^2 = v^T\nabla^2\varphi^*(u)\,v$$
- by the Dikin ellipsoid theorem, $\lambda_t(x) < 1$ implies
  $$u + v = -\nabla\varphi(s) + \nabla^2\varphi(s)A\,\Delta x \succ 0;$$
  since $tz = u + v$, this gives $z \succ 0$

Path-following methods 17-8

Duality gap in neighborhood of central path

$$c^Tx - p^\star \le \left(1 + \frac{\lambda_t(x)}{\sqrt{\theta}}\right)\frac{\theta}{t} \qquad \mbox{if } \lambda_t(x) < 1$$

- from weak duality, using the dual point $z$ on page 17-7:
  $$\begin{aligned}
  s^Tz &= -\frac{1}{t}\left(s^T\nabla\varphi(s) - s^T\nabla^2\varphi(s)A\,\Delta x\right) \\
  &\le \frac{1}{t}\left(\theta + \|\nabla^2\varphi(s)^{1/2}s\|_2\,\|\nabla^2\varphi(s)^{1/2}A\,\Delta x\|_2\right) \\
  &= \frac{\theta + \sqrt{\theta}\,\lambda_t(x)}{t}
  \end{aligned}$$
- this implies $c^Tx - p^\star \le 2\theta/t$, since $\theta \ge 1$ holds for any $\theta$-normal barrier $\varphi$
  ($\varphi$ is unbounded below, so its Newton decrement $\sqrt{\theta}$ is at least 1 everywhere)

Path-following methods 17-9

Outline

- central path
- short-step barrier method
- predictor-corrector method

Short-step methods

General idea: keep the iterates in the region of quadratic convergence of Newton's method for $tc^Tx + \psi(x)$, by limiting the rate at which $t$ is increased (hence, "short-step").

Quadratic convergence results (from self-concordance theory):

- if $\lambda_t(x) \le 1/4$, a full Newton step gives $\lambda_t(x^+) \le 2\lambda_t(x)^2$
- started at a point with $\lambda_t(x) \le 1/4$, an accuracy $\epsilon_{\rm cent}$ is reached in $\log_2\log_2(1/\epsilon_{\rm cent})$ iterations
- for practical purposes this is a constant (4-6 iterations for typical values of $\epsilon_{\rm cent}$)

Path-following methods 17-10

Short-step method with exact centering

Simplifying assumptions: $x^\star(t)$ is computed exactly; a central point $x^\star(t_0)$ is given.

Algorithm: define a tolerance $\epsilon \in (0,1)$ and a parameter
$$\mu = 1 + \frac{1}{4\sqrt{\theta}}$$

Starting at $t = t_0$, repeat until $\theta/t \le \epsilon$:

- compute $x^\star(\mu t)$ by Newton's method started at $x^\star(t)$
- set $t := \mu t$

Path-following methods 17-11
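A minimal sketch of this algorithm for the hypothetical log-barrier LP example; `center` is a plain (damped) Newton recentering loop, and the stopping thresholds and helper names are illustrative assumptions, not prescriptions from the slides.

```python
import numpy as np

def center(A, b, c, x, t, tol=1e-10, max_iter=100):
    # Newton's method for f_t(x) = t c^T x - sum(log(b - A x)), started at a feasible x
    for _ in range(max_iter):
        s = b - A @ x
        grad = t * c + A.T @ (1.0 / s)
        hess = A.T @ np.diag(1.0 / s**2) @ A
        dx = -np.linalg.solve(hess, grad)
        lam = np.sqrt(dx @ hess @ dx)
        step = 1.0 if lam <= 0.25 else 1.0 / (1.0 + lam)   # damped step outside quadratic region
        x = x + step * dx
        if lam**2 / 2.0 < tol:
            break
    return x

def short_step_exact(A, b, c, x0, t0, eps):
    # short-step barrier method with (numerically) exact centering, theta = m
    theta = A.shape[0]
    mu = 1.0 + 1.0 / (4.0 * np.sqrt(theta))
    x, t = x0, t0
    while theta / t > eps:
        x = center(A, b, c, x, mu * t)   # recenter at the increased parameter
        t = mu * t
    return x, t
```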

Newton iterations for recentering

The Newton decrement at $x = x^\star(t)$ for the new value $t^+ = \mu t$ is
$$\begin{aligned}
\lambda_{t^+}(x) &= \|\mu tc + \nabla\psi(x)\|_x \\
&= \|\mu(tc + \nabla\psi(x)) - (\mu - 1)\nabla\psi(x)\|_x \\
&= (\mu - 1)\|\nabla\psi(x)\|_x \\
&\le (\mu - 1)\sqrt{\theta} \\
&= 1/4
\end{aligned}$$

- line 3 follows because $tc + \nabla\psi(x) = 0$ for $x = x^\star(t)$
- line 4 follows from $\|\nabla\psi(x)\|_x \le \sqrt{\theta}$ (see page 17-3)

Conclusion: the number of Newton iterations needed to compute $x^\star(t^+)$ from $x^\star(t)$ is bounded by a small constant.

Path-following methods 17-12

Iteration complexity

Number of outer iterations: $t^{(k)} = \mu^k t_0 \ge \theta/\epsilon$ when
$$k \ge \frac{\log(\theta/(\epsilon t_0))}{\log\mu}$$

Cumulative number of Newton iterations:
$$O\!\left(\sqrt{\theta}\,\log\frac{\theta}{\epsilon t_0}\right)$$
(we used $\log\mu \ge (\log 2)/(4\sqrt{\theta})$, by concavity of $\log(1+u)$)

- multiply by the flops per iteration to get a polynomial worst-case complexity
- the $\sqrt{\theta}$ dependence is the lowest known complexity for interior-point methods

Path-following methods 17-13

Short-step method with inexact centering

Improvements over the short-step method with exact centering:

- keep the iterates in the region of quadratic convergence, but avoid complete centering
- at each iteration, make a small increase in $t$, followed by one Newton step

Algorithm: define a tolerance $\epsilon \in (0,1)$ and parameters
$$\beta = \frac{1}{8}, \qquad \mu = 1 + \frac{1}{1 + 8\sqrt{\theta}}$$

Select $x$ and $t$ with $\lambda_t(x) \le \beta$; repeat until $2\theta/t \le \epsilon$:
$$t := \mu t, \qquad x := x - \nabla^2\psi(x)^{-1}\left(tc + \nabla\psi(x)\right)$$

Path-following methods 17-14
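A sketch of the inexact-centering variant for the same hypothetical log-barrier example: one full Newton step per increase of $t$. It assumes the starting pair $(x, t)$ already satisfies $\lambda_t(x) \le 1/8$.

```python
import numpy as np

def short_step_inexact(A, b, c, x, t, eps):
    # short-step barrier method with inexact centering: beta = 1/8,
    # mu = 1 + 1/(1 + 8*sqrt(theta)), one Newton step per update of t (theta = m)
    theta = A.shape[0]
    mu = 1.0 + 1.0 / (1.0 + 8.0 * np.sqrt(theta))
    while 2.0 * theta / t > eps:
        t = mu * t
        s = b - A @ x
        grad = t * c + A.T @ (1.0 / s)           # gradient of t c^T x + psi(x)
        hess = A.T @ np.diag(1.0 / s**2) @ A     # Hessian of psi(x)
        x = x - np.linalg.solve(hess, grad)      # single full Newton step
    return x, t
```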

Newton decrement after update

We first show that $\lambda_t(x) \le \beta$ holds at the end of each iteration.

If $\lambda_t(x) \le \beta$ and $t^+ = \mu t$, then
$$\begin{aligned}
\lambda_{t^+}(x) &= \|t^+c + \nabla\psi(x)\|_x \\
&= \|\mu(tc + \nabla\psi(x)) - (\mu - 1)\nabla\psi(x)\|_x \\
&\le \mu\|tc + \nabla\psi(x)\|_x + (\mu - 1)\|\nabla\psi(x)\|_x \\
&\le \mu\beta + (\mu - 1)\sqrt{\theta} \\
&= 1/4
\end{aligned}$$

From the theory of Newton's method for self-concordant functions (page 16-16),
$$\lambda_{t^+}(x^+) \le 2\lambda_{t^+}(x)^2 \le \frac{1}{8} = \beta$$

Path-following methods 17-15

Iteration complexity

- from page 17-9, the stopping criterion implies $c^Tx - p^\star \le \epsilon$
- the stopping criterion is satisfied when
  $$\frac{t^{(k)}}{t_0} = \mu^k \ge \frac{2\theta}{\epsilon t_0}$$
- taking the logarithm on both sides gives an upper bound of
  $$O\!\left(\sqrt{\theta}\,\log\frac{\theta}{\epsilon t_0}\right)$$
  iterations (using $\log\mu \ge (\log 2)/(1 + 8\sqrt{\theta})$)

Path-following methods 17-16

Outline

- central path
- short-step barrier method
- predictor-corrector method

Predictor-corrector methods

Short-step methods:

- stay in a narrow neighborhood of the central path (defined by a limit on $\lambda_t$)
- make small, fixed increases $t^+ = \mu t$
- as a result, they are quite slow in practice

Predictor-corrector method:

- select the new $t$ using a linear approximation to the central path ("predictor")
- recenter with the new $t$ ("corrector")
- allows faster and adaptive increases in $t$

Path-following methods 17-17

Global convergence bound

For the centering problem
$$\mbox{minimize} \quad f_t(x) = tc^Tx + \varphi(b - Ax)$$

Convergence result (damped Newton algorithm of lecture 16, started at $x$):
$$\#\mbox{iterations} \le \frac{f_t(x) - \inf_u f_t(u)}{\omega(\eta)} + \log_2\log_2(1/\epsilon_{\rm cent})$$

- $\epsilon_{\rm cent}$ is the accuracy in centering
- $\omega(\eta) = \eta - \log(1 + \eta)$, with $\eta \in (0, 1/4]$
- for practical purposes, the second term is a small constant
- the first term depends on the unknown optimal value $\inf_u f_t(u)$

Path-following methods 17-18

Bound from duality

Dual centering problem (see page 17-6):
$$\begin{array}{ll}
\mbox{maximize}   & -tb^Tz - \varphi^*(z) + \theta\log t \\
\mbox{subject to} & A^Tz + c = 0
\end{array}$$

A strictly feasible $z$ provides a lower bound on $\inf_u f_t(u)$:
$$\inf_u f_t(u) \ge -tb^Tz - \varphi^*(z) + \theta\log t$$

Bound on centering cost: $f_t(x) - \inf_u f_t(u) \le V_t(x, s, z)$, where
$$\begin{aligned}
V_t(x, s, z) &= t\left(c^Tx + b^Tz\right) + \varphi(s) + \varphi^*(z) - \theta\log t \\
&= t\,s^Tz + \varphi(s) + \varphi^*(z) - \theta\log t
\end{aligned}$$

Path-following methods 17-19

Potential function

Definition (for strictly feasible $x$, $s$, $z$):
$$\Psi(x, s, z) = \inf_t V_t(x, s, z) = \theta\log\frac{s^Tz}{\theta} + \varphi(s) + \varphi^*(z) + \theta$$
(the optimal $t$ is $t = \mathop{\rm argmin}_t V_t(x, s, z) = \theta/(s^Tz)$)

Properties:

- homogeneous of degree zero: $\Psi(\alpha x, \alpha s, \alpha z) = \Psi(x, s, z)$ for $\alpha > 0$
- nonnegative for all strictly feasible $x$, $s$, $z$
- zero only if $x$, $s$, $z$ are centered
- can be used as a global measure of proximity to the central path

Path-following methods 17-20
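For the hypothetical log-barrier example the potential is easy to evaluate, since $\varphi^*(z) = -\sum_i \log z_i - \theta$ under the dual-barrier convention used here; the sketch below is an assumed helper, used again in the predictor-corrector sketch later.

```python
import numpy as np

def potential(s, z):
    # Psi(x, s, z) = theta*log(s^T z / theta) + phi(s) + phi*(z) + theta for the log barrier,
    # with phi(s) = -sum(log s) and phi*(z) = -sum(log z) - theta  (theta = len(s))
    theta = len(s)
    phi_s = -np.sum(np.log(s))
    phi_conj_z = -np.sum(np.log(z)) - theta
    return theta * np.log(s @ z / theta) + phi_s + phi_conj_z + theta

# on the central path z_i = 1/(t*s_i), so the potential vanishes:
s = np.array([0.5, 2.0, 1.5]); t = 4.0
print(potential(s, 1.0 / (t * s)))              # ~ 0.0
print(potential(s, np.array([1.0, 1.0, 1.0])))  # > 0 off the central path
```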

Tangent to central path

Central path equation:
$$\begin{bmatrix} 0 \\ s^\star(t) \end{bmatrix}
= \begin{bmatrix} 0 & A^T \\ -A & 0 \end{bmatrix}
  \begin{bmatrix} x^\star(t) \\ z^\star(t) \end{bmatrix}
+ \begin{bmatrix} c \\ b \end{bmatrix},
\qquad
z^\star(t) = -\frac{1}{t}\nabla\varphi(s^\star(t))$$

Derivatives $\dot{x} = dx^\star(t)/dt$, $\dot{s} = ds^\star(t)/dt$, $\dot{z} = dz^\star(t)/dt$ satisfy
$$\begin{bmatrix} 0 \\ \dot{s} \end{bmatrix}
= \begin{bmatrix} 0 & A^T \\ -A & 0 \end{bmatrix}
  \begin{bmatrix} \dot{x} \\ \dot{z} \end{bmatrix},
\qquad
\dot{z} = -\frac{1}{t}z^\star(t) - \frac{1}{t}\nabla^2\varphi(s^\star(t))\,\dot{s}$$

Tangent direction: the derivatives scaled by $t$ (to simplify notation),
$$\Delta x_t = t\dot{x}, \qquad \Delta s_t = t\dot{s}, \qquad \Delta z_t = t\dot{z}$$

Path-following methods 17-21

Predictor equations

With $x = x^\star(t)$, $s = s^\star(t)$, $z = z^\star(t)$:
$$\begin{bmatrix} (1/t)\nabla^2\varphi(s) & 0 & I \\ 0 & 0 & A^T \\ I & A & 0 \end{bmatrix}
\begin{bmatrix} \Delta s_t \\ \Delta x_t \\ \Delta z_t \end{bmatrix}
= \begin{bmatrix} -z \\ 0 \\ 0 \end{bmatrix}
\qquad (1)$$

Equivalent equations:
$$\begin{bmatrix} I & 0 & (1/t)\nabla^2\varphi^*(z) \\ 0 & 0 & A^T \\ I & A & 0 \end{bmatrix}
\begin{bmatrix} \Delta s_t \\ \Delta x_t \\ \Delta z_t \end{bmatrix}
= \begin{bmatrix} -s \\ 0 \\ 0 \end{bmatrix}
\qquad (2)$$

Equivalence follows from the primal-dual relations on the central path:
$$z = -\frac{1}{t}\nabla\varphi(s), \qquad s = -\frac{1}{t}\nabla\varphi^*(z), \qquad
\frac{1}{t}\nabla^2\varphi(s) = \left(\frac{1}{t}\nabla^2\varphi^*(z)\right)^{-1}$$

Path-following methods 17-22
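A sketch that solves system (1) for the hypothetical log-barrier example by eliminating $\Delta s_t$ and $\Delta z_t$, which reduces (1) to $A^T\nabla^2\varphi(s)A\,\Delta x_t = -tc$; the reduction is my own rearrangement, not spelled out on the slide.

```python
import numpy as np

def tangent_direction(A, b, c, x, t):
    # tangent (predictor) direction at a central point, log-barrier special case
    s = b - A @ x
    hess_phi = np.diag(1.0 / s**2)      # hess phi(s) for phi(s) = -sum(log s)
    z = 1.0 / (t * s)                   # central-path dual point z = -(1/t) grad phi(s)
    # eliminate ds_t = -A dx_t and dz_t from (1):  A^T hess_phi A dx_t = -t c
    dx = np.linalg.solve(A.T @ hess_phi @ A, -t * c)
    ds = -A @ dx                        # third block row of (1)
    dz = -z - (hess_phi @ ds) / t       # first block row of (1)
    return dx, ds, dz
```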

Properties of tangent direction

- from the 2nd and 3rd block rows of (1): $\Delta s_t^T\Delta z_t = 0$
- from the first block row of (1) and $\nabla^2\varphi(s)s = -\nabla\varphi(s)$:
  $$s^T\Delta z_t + z^T\Delta s_t = -s^Tz$$
- hence, the gap in the tangent direction is
  $$(s + \alpha\Delta s_t)^T(z + \alpha\Delta z_t) = (1 - \alpha)\,s^Tz$$
- from the first block row of (1):
  $$\|\Delta s_t\|_s^2 = \Delta s_t^T\nabla^2\varphi(s)\,\Delta s_t = -t\,z^T\Delta s_t$$
- similarly, from the first block row of (2):
  $$\|\Delta z_t\|_z^2 = \Delta z_t^T\nabla^2\varphi^*(z)\,\Delta z_t = -t\,s^T\Delta z_t$$

Path-following methods 17-23
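These identities are easy to confirm numerically for the log-barrier example; the snippet below is a usage example that reuses the hypothetical `center` and `tangent_direction` helpers sketched earlier, and the random test problem is an illustrative assumption.

```python
import numpy as np
# numerical check of the tangent-direction identities above (log-barrier example)
rng = np.random.default_rng(2)
m, n, t = 40, 6, 5.0
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
b = A @ x0 + rng.uniform(0.5, 2.0, m)   # x0 strictly feasible
c = -A.T @ rng.uniform(0.1, 1.0, m)     # dual strictly feasible, so f_t is bounded below
x = center(A, b, c, x0, t)              # x ~ x*(t), using the earlier centering sketch
s = b - A @ x
z = 1.0 / (t * s)
dx, ds, dz = tangent_direction(A, b, c, x, t)
print(ds @ dz)                          # ~ 0
print(s @ dz + z @ ds, -s @ z)          # these two agree
print(ds @ (ds / s**2), -t * (z @ ds))  # ||ds_t||_s^2 = -t z^T ds_t
```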

Predictor-corrector method with exact centering

Simplifying assumptions: exact centering; a central point $x^\star(t_0)$ is given.

Algorithm: define a tolerance $\epsilon \in (0,1)$ and a parameter $\beta > 0$, and set
$$t := t_0, \qquad (x, s, z) := (x^\star(t_0), s^\star(t_0), z^\star(t_0))$$

Repeat until $\theta/t \le \epsilon$:

- compute the tangent direction $(\Delta x_t, \Delta s_t, \Delta z_t)$ at $(x, s, z)$
- set $(x, s, z) := (x, s, z) + \alpha(\Delta x_t, \Delta s_t, \Delta z_t)$, with $\alpha$ determined from
  $$\Psi(x + \alpha\Delta x_t,\; s + \alpha\Delta s_t,\; z + \alpha\Delta z_t) = \beta$$
- set $t := \theta/(s^Tz)$ and compute $(x, s, z) := (x^\star(t), s^\star(t), z^\star(t))$

Path-following methods 17-24
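The loop below is a minimal sketch of this algorithm for the log-barrier example, tying together the hypothetical `tangent_direction`, `potential`, and `center` helpers from the earlier sketches; the predictor step length is found by bisection on the condition $\Psi \le \beta$ (with strict feasibility), a simple stand-in for solving $\Psi = \beta$ exactly.

```python
import numpy as np

def predictor_corrector(A, b, c, x, t, eps, beta=0.5):
    # predictor-corrector method with exact centering (log-barrier sketch, theta = m)
    theta = A.shape[0]
    while theta / t > eps:
        s = b - A @ x
        z = 1.0 / (t * s)
        dx, ds, dz = tangent_direction(A, b, c, x, t)
        # predictor: approximately the largest alpha in [0, 1) with Psi <= beta
        lo, hi = 0.0, 1.0
        for _ in range(50):
            alpha = 0.5 * (lo + hi)
            sp, zp = s + alpha * ds, z + alpha * dz
            ok = np.all(sp > 0) and np.all(zp > 0) and potential(sp, zp) <= beta
            lo, hi = (alpha, hi) if ok else (lo, alpha)
        x, s, z = x + lo * dx, s + lo * ds, z + lo * dz
        # corrector: recenter at the new parameter t = theta / (s^T z)
        t = theta / (s @ z)
        x = center(A, b, c, x, t)
    return x, t
```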

Iteration complexity

Potential function in the tangent direction (proof on next page):
$$\Psi(x + \alpha\Delta x_t,\; s + \alpha\Delta s_t,\; z + \alpha\Delta z_t)
\le \omega_*(\alpha\sqrt{\theta}) = -\alpha\sqrt{\theta} - \log(1 - \alpha\sqrt{\theta})$$

Lower bound on the predictor step length: since $\omega_*$ is an increasing function,
$$\alpha \ge \gamma/\sqrt{\theta}, \qquad \mbox{where } \omega_*(\gamma) = \beta$$

Reduction in the duality gap after one predictor-corrector cycle:
$$t/t^+ = 1 - \alpha \le 1 - \gamma/\sqrt{\theta} \le \exp(-\gamma/\sqrt{\theta})$$

Cumulative Newton iterations: $t^{(k)} \ge \theta/\epsilon$ after
$$O\!\left(\sqrt{\theta}\,\log\frac{\theta}{t_0\epsilon}\right)$$
Newton iterations

Path-following methods 17-25

Proof of the upper bound on $\Psi$ (with $s^+ = s + \alpha\Delta s_t$, $z^+ = z + \alpha\Delta z_t$):

Bounds on $\varphi(s^+)$ and $\varphi^*(z^+)$: from the inequality on page 16-8,
$$\varphi(s^+) \le \varphi(s) + \alpha\nabla\varphi(s)^T\Delta s_t + \omega_*(\alpha\|\Delta s_t\|_s)
= \varphi(s) - \alpha t\,z^T\Delta s_t + \omega_*(\alpha\|\Delta s_t\|_s)$$
$$\varphi^*(z^+) \le \varphi^*(z) + \alpha\nabla\varphi^*(z)^T\Delta z_t + \omega_*(\alpha\|\Delta z_t\|_z)
= \varphi^*(z) - \alpha t\,s^T\Delta z_t + \omega_*(\alpha\|\Delta z_t\|_z)$$

Add the inequalities and use the properties on page 17-23 (in particular, $-\alpha t(z^T\Delta s_t + s^T\Delta z_t) = \alpha t\,s^Tz = \alpha\theta$, since $s^Tz = \theta/t$ on the central path):
$$\begin{aligned}
\varphi(s^+) - \varphi(s) + \varphi^*(z^+) - \varphi^*(z)
&\le \alpha\theta + \omega_*(\alpha\|\Delta s_t\|_s) + \omega_*(\alpha\|\Delta z_t\|_z) \\
&\le \alpha\theta + \omega_*\!\left(\alpha\left(\|\Delta s_t\|_s^2 + \|\Delta z_t\|_z^2\right)^{1/2}\right) \\
&= \alpha\theta + \omega_*(\alpha\sqrt{\theta})
\end{aligned}$$

Since $(s^+)^Tz^+ = (1 - \alpha)\,s^Tz$,
$$\Psi(x^+, s^+, z^+) \le \theta\log(1 - \alpha) + \alpha\theta + \omega_*(\alpha\sqrt{\theta}) \le \omega_*(\alpha\sqrt{\theta})$$
(the last step uses $\theta\log(1 - \alpha) + \alpha\theta \le 0$)

Path-following methods 17-26

References

- Yu. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course (2004), chapter 4.
- Yu. Nesterov, "Towards nonsymmetric conic optimization," Optimization Methods and Software (2012).

Path-following methods 17-27
