Constrained optimization: indirect methods


1 Constrained optimization: indirect methods Jussi Hakanen Post-doctoral researcher

2 On constrained optimization We have seen how to characterize optimal solutions in constrained optimization: the KKT optimality conditions include the balance of forces (∇f(x*), ∇g_i(x*) for i ∈ I, and ∇h_j(x*)) and the complementarity conditions (μ_i g_i(x*) = 0 ∀i). Regularity of x* needs to be assumed. Now we are interested in how to find such solutions.

3 Methods for constrained optimization Many methods utilize knowledge about the constraints: linear inequalities or equalities, nonlinear inequalities or equalities. For example, if a linear constraint is active at some point, you know that by taking steps along the direction of the constraint it remains active. For nonlinear constraints you don't have such a direction. Methods for constrained optimization can be characterized based on how they treat the constraints.

4 Classification of the methods Indirect methods: the constrained problem is converted into a sequence of unconstrained problems whose solutions approach the solution of the constrained problem; the intermediate solutions need not be feasible. Direct methods: the constraints are taken into account explicitly, and the intermediate solutions are feasible.

5 Transforming the optimization problem Constraints of the problem can be transformed if needed: g_i(x) ≤ 0 ⟺ g_i(x) + y_i² = 0, where y_i is a slack variable; the constraint is active if y_i = 0. By adding y_i² there is no need to require y_i ≥ 0. If g_i(x) is linear, linearity is preserved by g_i(x) + y_i = 0, y_i ≥ 0. Further, g_i(x) ≥ 0 ⟺ −g_i(x) ≤ 0, and h_i(x) = 0 ⟺ h_i(x) ≤ 0 & −h_i(x) ≤ 0. A small worked instance follows.
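
As a concrete illustration of the squared-slack transformation (the constraint x₁² + x₂ ≤ 3 is my own example, not from the slides), written out in LaTeX:

% The inequality constraint x_1^2 + x_2 <= 3 becomes an equality
% constraint with a squared slack variable y:
\[
  x_1^2 + x_2 \le 3
  \quad\Longleftrightarrow\quad
  x_1^2 + x_2 - 3 + y^2 = 0, \qquad y \in \mathbb{R},
\]
% with no sign restriction on y; the original constraint is active
% exactly when y = 0.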

6 Examples of indirect methods: penalty function methods and Lagrangian methods.

7 Penalty function methods Include the constraints in the objective function with the help of penalty functions that penalize constraint violations, or even penalize approaching the boundary of S. Different types: penalty function (penalizes constraint violations), barrier function (prevents leaving the feasible region), exact penalty function. The resulting unconstrained problems can be solved by using the methods presented earlier in the course.

8 Penalty function methods Generate a sequence of points that approach the feasible region from outside. The constrained problem is converted into min_{x∈Rⁿ} f(x) + r α(x), where α(x) is a penalty function and r > 0 is a penalty parameter. Requirements: α(x) ≥ 0 for all x ∈ Rⁿ, and α(x) = 0 if and only if x ∈ S.

9 On convergence When r → ∞, the solutions x_r of the penalty function problems converge to a constrained minimizer (x_r → x* and r α(x_r) → 0). All the functions should be continuous. For each r there should exist a solution of the penalty function problem, and {x_r} should belong to a compact subset of Rⁿ.

10 Examples of penalty functions Can you give an example of a penalty function α(x)? For equality constraints h_i(x) = 0: α(x) = Σ_{i=1}^l h_i(x)² or α(x) = Σ_{i=1}^l |h_i(x)|^p, p ≥ 2. For inequality constraints g_i(x) ≤ 0: α(x) = Σ_{i=1}^m max[0, g_i(x)] or α(x) = Σ_{i=1}^m max[0, g_i(x)]^p, p ≥ 2. A code sketch follows.
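
A minimal sketch of the quadratic (p = 2) penalty term in Python; the function names and calling convention are my own assumptions, not from the slides:

import numpy as np

def quadratic_penalty(g=(), h=()):
    """Build alpha(x) = sum_i max(0, g_i(x))**2 + sum_i h_i(x)**2.

    g: inequality constraint functions, feasible when g_i(x) <= 0
    h: equality constraint functions, feasible when h_i(x) = 0
    """
    def alpha(x):
        x = np.asarray(x, dtype=float)
        return (sum(max(0.0, gi(x)) ** 2 for gi in g)
                + sum(hi(x) ** 2 for hi in h))
    return alpha

With p = 2 the term max[0, g_i(x)]² is continuously differentiable, which is why it is a common default.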

11 How to choose r? It should be large enough for the solutions to be close enough to the feasible region. If r is too large, there can be numerical problems in solving the penalty problems. For large values of r the emphasis is on finding feasible solutions, and thus the solution can be feasible but far from the optimum. Typically r is updated iteratively. Different parameters can be used for different constraints (e.g. r_i for g_i, r_j for g_j); for the sake of simplicity, the same parameter is used here for all the constraints.

12 Algorithm 1) Choose the final tolerance ε > 0 and a starting point x^1. Choose r^1 > 0 (not too large) and set h = 1. 2) Solve min_{x∈Rⁿ} f(x) + r^h α(x) with some method for unconstrained problems (x^h as a starting point). Let the solution be x^{h+1} = x(r^h). 3) Test optimality: if r^h α(x^{h+1}) < ε, stop; the solution x^{h+1} is close enough to the optimum. Otherwise, set r^{h+1} > r^h (e.g. r^{h+1} = κ r^h, where κ can be e.g. 10). Set h = h + 1 and go to 2). A code sketch follows.
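
A sketch of steps 1)-3) in Python, using scipy.optimize.minimize as the unconstrained solver and the quadratic_penalty builder from the previous sketch (the parameter defaults are my own choices):

import numpy as np
from scipy.optimize import minimize

def penalty_method(f, alpha, x1, r1=1.0, kappa=10.0, eps=1e-6, max_iter=50):
    """Sequential penalty method: minimize f(x) + r^h * alpha(x) for growing r^h."""
    x, r = np.asarray(x1, dtype=float), r1
    for _ in range(max_iter):
        res = minimize(lambda z: f(z) + r * alpha(z), x)  # warm start at x^h
        x = res.x
        if r * alpha(x) < eps:   # penalty term small -> x is nearly feasible
            break
        r *= kappa               # increase the penalty parameter
    return x

# Example from slide 13 below: min x s.t. x >= 2, alpha(x) = max(0, -x + 2)^2
f = lambda z: z[0]
alpha = quadratic_penalty(g=[lambda z: -z[0] + 2.0])
print(penalty_method(f, alpha, x1=[0.0]))   # approaches 2 from outside S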

13 From Miettinen: Nonlinear optimization, 2007 (in Finnish) Example min x s.t. x ≥ 2. Let α(x) = max[0, −x + 2]². Then α(x) = 0 if x ≥ 2, and α(x) = (−x + 2)² if x < 2. The minimum of f + rα is at 2 − 1/(2r).
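
The minimizer can be checked by differentiation (a short derivation of the slide's claim, written out in LaTeX):

% For x < 2 the penalty problem is phi(x) = x + r(2 - x)^2, and
% phi'(x) = 1 - 2r(2 - x) = 0 gives 2 - x = 1/(2r). Hence
\[
  x(r) = 2 - \frac{1}{2r} \;\to\; 2 \quad (r \to \infty),
  \qquad
  r\,\alpha\bigl(x(r)\bigr) = r \cdot \frac{1}{4r^2} = \frac{1}{4r} \;\to\; 0,
\]
% in line with the convergence conditions on slide 9.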

14 Barrier function method Prevents leaving the feasible region. Suitable only for problems with inequality constraints: the set {x | g_i(x) < 0 ∀i} should not be empty. The problem to be solved is min θ(r) s.t. r ≥ 0, where θ(r) = inf{ f(x) + r β(x) | g_i(x) < 0 ∀i }. Here β is a barrier function: β(x) ≥ 0 when g_i(x) < 0 ∀i, and β(x) → ∞ when x approaches the boundary of S. The constraints g_i(x) < 0 can be omitted since β → ∞ on the boundary of S.

15 On convergence Denote θ(r) = f(x_r) + r β(x_r). Under some assumptions, the solutions x_r of the barrier problems converge to a constrained minimizer (x_r → x* and r β(x_r) → 0) when r → 0⁺. All functions should be continuous, and the set {x | g_i(x) < 0 ∀i} should be nonempty.

16 Properties of barrier functions Nonnegative and continuous in {x | g_i(x) < 0 ∀i}. Approaches ∞ when the boundary of the feasible region is approached from inside. Ideally β = 0 in {x | g_i(x) < 0 ∀i} and β = ∞ on the boundary, which would guarantee staying in the feasible region, but this kind of discontinuity causes problems for any numerical method. Examples of barrier functions: β(x) = −Σ_{i=1}^m 1/g_i(x) or β(x) = −Σ_{i=1}^m ln(min[1, −g_i(x)]).

17 Algorithm 1) Choose the final tolerance ε > 0 and a starting point x^1 s.t. g_i(x^1) < 0 ∀i. Choose r^1 > 0, not too small (and a parameter 0 < τ < 1 for reducing r). Set h = 1. 2) Solve min f(x) + r^h β(x) s.t. g_i(x) < 0 ∀i, by using x^h as the starting point. Let the solution be x^{h+1}. 3) Test optimality: if r^h β(x^{h+1}) < ε, stop; the solution x^{h+1} is close enough to the optimum. Otherwise, set r^{h+1} < r^h (e.g. r^{h+1} = τ r^h). Set h = h + 1 and go to 2). A code sketch follows.
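
A sketch of the barrier method in Python with the inverse barrier β(x) = −Σ 1/g_i(x); returning ∞ outside the interior keeps the iterates strictly feasible, and the derivative-free Nelder-Mead solver tolerates the infinite values (all names and defaults are my own assumptions):

import numpy as np
from scipy.optimize import minimize

def barrier_method(f, g, x1, r1=1.0, tau=0.1, eps=1e-6, max_iter=50):
    """Sequential barrier method for min f(x) s.t. g_i(x) <= 0, i = 1..m."""
    def beta(x):
        vals = [gi(x) for gi in g]
        if any(v >= 0 for v in vals):        # on or outside the boundary of S
            return np.inf                    # the barrier "wall"
        return sum(-1.0 / v for v in vals)   # inverse barrier, >= 0 inside S
    x, r = np.asarray(x1, dtype=float), r1
    for _ in range(max_iter):
        res = minimize(lambda z: f(z) + r * beta(z), x, method="Nelder-Mead")
        x = res.x
        if r * beta(x) < eps:
            break
        r *= tau                             # decrease the barrier parameter
    return x

# Example from slide 18 below: min x s.t. x >= -1, i.e. g(x) = -x - 1 <= 0
print(barrier_method(lambda z: z[0], [lambda z: -z[0] - 1.0], x1=[1.0]))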

18 From Miettinen: Nonlinear optimization, 2007 (in Finnish) Example min x s.t. x ≥ −1. Let β(x) = 1/(x + 1) when x > −1. The minimum of f(x) + r β(x) = x + r/(x + 1) is at −1 + √r.
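
Again the minimizer follows by differentiation (a short derivation of the slide's claim):

% phi(x) = x + r/(x + 1) on x > -1, and phi'(x) = 1 - r/(x + 1)^2 = 0
% gives (x + 1)^2 = r. Hence
\[
  x(r) = -1 + \sqrt{r} \;\to\; -1 \quad (r \to 0^+),
  \qquad
  r\,\beta\bigl(x(r)\bigr) = \frac{r}{\sqrt{r}} = \sqrt{r} \;\to\; 0.
\]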

19 Summary: penalty and barrier function methods Penalty and barrier functions are usually differentiable. The minimum is obtained in the limit: for a penalty function r^h → ∞, for a barrier function r^h → 0. Choosing the sequence {r^h} is essential for convergence: if r^h → ∞ or r^h → 0 too slowly, a large number of unconstrained problems needs to be solved; if r^h → ∞ or r^h → 0 too fast, the solutions of successive unconstrained problems are far from each other and the solution time increases.

20 Exact penalty function The idea is to have a method where the solution can be found with a small number of iterations. Suitable for both equality and inequality constraints. An exact penalty function problem is e.g. of the form min_{x∈Rⁿ} f(x) + r(Σ_{i=1}^m max[0, g_i(x)] + Σ_{i=1}^l |h_i(x)|). A code sketch follows.
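
The ℓ₁ exact penalty above is easy to write down but nonsmooth at the constraint boundaries, so this sketch minimizes it with derivative-free Nelder-Mead (the names and the choice r = 2 are my own; the example anticipates slide 23 below):

import numpy as np
from scipy.optimize import minimize

def exact_penalty(f, g, h, r):
    """Build phi(x) = f(x) + r*(sum max(0, g_i(x)) + sum |h_i(x)|)."""
    def phi(x):
        x = np.asarray(x, dtype=float)
        viol = (sum(max(0.0, gi(x)) for gi in g)
                + sum(abs(hi(x)) for hi in h))
        return f(x) + r * viol
    return phi

# min x1^2 + x2^2 s.t. x1 + x2 - 1 = 0; r = 2 >= |nu*| = 1 makes the penalty exact
phi = exact_penalty(lambda z: z[0]**2 + z[1]**2, g=[],
                    h=[lambda z: z[0] + z[1] - 1.0], r=2.0)
print(minimize(phi, x0=[0.0, 0.0], method="Nelder-Mead").x)   # ~ (0.5, 0.5)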

21 Exact penalty function method Theorem: Consider a point x* where the necessary KKT conditions hold. Let the corresponding Lagrange multipliers be μ* and ν*. Assume that the objective and inequality constraint functions are convex and the equality constraint functions are affine. Then x* is a solution of the exact penalty function problem for r ≥ max[μ_i*, i = 1,…,m; |ν_i*|, i = 1,…,l]. The solution can thus be obtained with a finite value of the penalty parameter r. The algorithm is similar to the penalty function method except that r^h is increased only if necessary, e.g. when the feasible region is not approached fast enough.

22 Properties of the exact penalty function Not differentiable at points x where g_i(x) = 0 or h_i(x) = 0, so gradient-based methods are not suitable. If r and the starting point could be chosen appropriately, only one minimization would be required in principle. If r is too large and the starting point is not close enough to the optimum, minimizing the exact penalty function can become difficult.

23 Example min f(x) = x₁² + x₂² s.t. x₁ + x₂ − 1 = 0. The optimal solution is x* = (1/2, 1/2)^T, with ν* = −2x₁* = −2x₂* = −1. Exact penalty function problem: min_{x∈R²} x₁² + x₂² + r|x₁ + x₂ − 1|. Solution: x = (r/2, r/2)^T when 0 ≤ r < 1 and x = (1/2, 1/2)^T when r ≥ 1 (obtained by using the KKT conditions of an equivalent differentiable problem where the absolute value term is replaced with a new variable and two inequality constraints). Thus, the solution can be found with r ≥ 1 (= |ν*|).
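
The threshold r = 1 can also be verified directly with the subdifferential of the absolute value (derivation mine; by symmetry the minimizer lies on the line x₁ = x₂ = t):

% psi(t) = 2t^2 + r|2t - 1|.
% For t < 1/2: psi'(t) = 4t - 2r = 0 gives t = r/2 (valid while r < 1).
% At t = 1/2:  0 \in 4t + r[-2, 2] = 2 + [-2r, 2r]  iff  r >= 1. Hence
\[
  x(r) =
  \begin{cases}
    (r/2,\; r/2)^T, & 0 \le r < 1,\\
    (1/2,\; 1/2)^T, & r \ge 1.
  \end{cases}
\]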

24 From Miettinen: Nonlinear optimization, 2007 (in Finnish) Example: barrier function, for a two-variable problem with a single constraint involving x₁² + x₂²; the optimum is x* = (…, …)^T and the constraint is active at x*. [Figure: (a) level curves of f(x) and the boundary of S; logarithmic barrier function for (b) r = 0.2, (c) r = 0.001.]

25 From Miettinen: Nonlinear optimization, 2007 (in Finnish) Example: penalty function, for the same problem; the optimum is x* = (…, …)^T and the constraint is active at x*. [Figure: quadratic penalty function for (a) r = 1, (b) r = 100.]

26 From Miettinen: Nonlinear optimization, 2007 (in Finnish) Example: exact penalty function, for the same problem; the optimum is x* = (…, …)^T, the constraint is active at x*, and μ* = …. [Figure: exact penalty function for (a) r = 1.2, (b) r = 5, (c) r = 100.]

27 Lagrangian function Consider the problem min f(x) s.t. h_i(x) = 0, i = 1,…,l. Lagrangian function: L(x, ν) = f(x) + Σ_{i=1}^l ν_i h_i(x). KKT conditions: ∇f(x) + Σ_{i=1}^l ν_i ∇h_i(x) = 0 and h_i(x) = 0, i = 1,…,l. Let x* be a minimizer and ν* the corresponding Lagrange multiplier vector.

28 Properties of the Lagrangian By the KKT conditions, x* is a critical point of the Lagrangian function, but x* is not necessarily a minimizer of L(x, ν*). Thus, minimizing the Lagrangian function doesn't necessarily give a minimum of f(x). The Hessian ∇²ₓₓL(x*, ν*) may be indefinite → x* can be a saddle point. Improve the Lagrangian function!

29 Augmented Lagrangian function Augmented Lagrangian function: L_A(x, ν, ρ) = f(x) + Σ_{i=1}^l ν_i h_i(x) + ½ ρ Σ_{i=1}^l h_i(x)², ρ > 0, i.e. the Lagrangian function plus a quadratic penalty function. The point (x*, ν*) is a critical point of the augmented Lagrangian: ∇ₓL_A(x*, ν*, ρ) = 0 and ½ ρ Σ_{i=1}^l h_i(x*)² = 0. Hessian: ∇²ₓₓL_A(x*, ν*, ρ) = ∇²ₓₓL(x*, ν*) + ρ ∇h(x*)^T ∇h(x*). It can be shown that for ρ > ρ̄, ∇²ₓₓL_A(x*, ν*, ρ) is positive definite → x* is a local minimizer of L_A(x, ν*, ρ). Need to know ν*!

30 Properties of L_A(x, ν*, ρ) Differentiable if the original functions are. x* is a minimizer of L_A(x, ν*, ρ) for a finite ρ (in contrast to the pure penalty approach, where r → ∞ is needed). Lagrangian function + quadratic penalty function.

31 Algorithm 1) Choose the final tolerance ε > 0. Choose x^1, ν_i^1 (i = 1,…,l) and ρ. Set h = 1. 2) Test optimality: if the optimality conditions are satisfied, stop. The solution is x^h. 3) Solve (with a suitable method) min_{x∈Rⁿ} L_A(x, ν^h, ρ) by using x^h as a starting point. Let the solution be x^{h+1}. 4) Update the Lagrange multipliers: e.g. ν^{h+1} = ν^h + ρ h(x^{h+1}). 5) Increase ρ if necessary, e.g. if ‖h(x^h)‖ − ‖h(x^{h+1})‖ < ε (the constraint violation is not decreasing fast enough). 6) Set h = h + 1 and go to 2). Note: x^h → x* only if ν^h → ν*. A code sketch follows.
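
A compact sketch of this method of multipliers in Python for equality constraints (illustration only; the growth rule for ρ, the tolerances, and all names are my own choices):

import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x1, rho=1.0, eps=1e-8, max_iter=50):
    """Method of multipliers for min f(x) s.t. h_i(x) = 0, i = 1..l."""
    hvec = lambda z: np.array([hi(z) for hi in h])
    x = np.asarray(x1, dtype=float)
    nu = np.zeros(len(h))                   # nu^1 = 0
    viol = np.linalg.norm(hvec(x))
    for _ in range(max_iter):
        L_A = lambda z: f(z) + nu @ hvec(z) + 0.5 * rho * np.sum(hvec(z) ** 2)
        x = minimize(L_A, x).x              # step 3: minimize L_A(., nu^h, rho)
        nu = nu + rho * hvec(x)             # step 4: multiplier update
        new_viol = np.linalg.norm(hvec(x))
        if new_viol < eps:                  # step 2 (simplified): feasible enough
            break
        if viol - new_viol < eps:           # step 5: slow progress -> increase rho
            rho *= 10.0
        viol = new_viol
    return x, nu

# Example as on slide 23: min x1^2 + x2^2 s.t. x1 + x2 - 1 = 0
x, nu = augmented_lagrangian(lambda z: z[0]**2 + z[1]**2,
                             [lambda z: z[0] + z[1] - 1.0], x1=[0.0, 0.0])
print(x, nu)   # ~ (0.5, 0.5) and nu ~ -1 = nu*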

32 From Miettinen: Nonlinear optimization, 2007 (in Finnish) Example: the same two-variable problem with a single constraint involving x₁² + x₂²; the constraint is active at the optimum x* = (…, …)^T. [Figure: the Lagrangian function, which has a saddle point at x*.]

33 From Miettinen: Nonlinear optimization, 2007 (in Finnish) Example (cont.) [Figure: the augmented Lagrangian function for (a) ρ = 0.075, (b) ρ = 0.2, (c) ρ = 100.]

34 From Miettinen: Nonlinear optimization, 2007 (in Finnish) Example (cont.) [Figure: the augmented Lagrangian function with ν* = … and ρ = 0.2, for (a) ν = 0.5, (b) ν = 0.9, (c) ν = 1.0.]

35 Topics of the lectures next week Mon, Feb 10th: Constrained optimization: gradient projection and the active set method. Wed, Feb 12th: Constrained optimization: the SQP method & Matlab. Study this before the lecture! Questions to be considered: What is the basic idea of gradient projection? What is the basic idea of active set methods? What is the basic idea of Sequential Quadratic Programming (SQP)?
