Numerisches Rechnen (für Informatiker). M. Grepl, J. Berger & J.T. Frings. Institut für Geometrie und Praktische Mathematik, RWTH Aachen


1 Numerisches Rechnen (für Informatiker). M. Grepl, J. Berger & J.T. Frings. Institut für Geometrie und Praktische Mathematik, RWTH Aachen. Wintersemester 2010/11

2 Problem Statement: Nonlinear Programming

We consider the problem
$$\min_{x \in X} f(x)$$
where $f : \mathbb{R}^n \to \mathbb{R}$ is a continuous (and usually differentiable) function of $n$ variables $x \in \mathbb{R}^n$, and $X = \mathbb{R}^n$ or, more generally, $X$ is a subset of $\mathbb{R}^n$. If $X = \mathbb{R}^n$, the problem is called unconstrained. If $f$ is linear and $X$ is polyhedral, the problem is a linear programming problem; otherwise it is a nonlinear programming problem.

3 Problem Statement: Equality and Inequality Constrained Problem

We consider the problem
$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad h(x) = 0, \; g(x) \le 0,$$
where $f : \mathbb{R}^n \to \mathbb{R}$, $h : \mathbb{R}^n \to \mathbb{R}^m$, and $g : \mathbb{R}^n \to \mathbb{R}^r$ are continuously differentiable functions. Here $h = (h_1, h_2, \ldots, h_m)$ are the equality constraints, and $g = (g_1, g_2, \ldots, g_r)$ are the inequality constraints.

4 Topics Covered

Unconstrained Optimization:
- Derivative-Free Optimization
- Gradient Methods
- Newton's Method and Variations
- Least-Squares Problems
- Conjugate Gradient Method

Constrained Optimization:
- Conditional Gradient Method
- Gradient Projection Method
- Penalty and Augmented Lagrangian Methods
- Interior-Point Methods

5 Local and Global Minima

[Figure: unconstrained local and global minima of $f(x)$ in one dimension, showing local minima, a strict local minimum, and a strict global minimum. Quelle: Bertsekas]

6 Necessary Optimality Conditions

First Order Necessary Conditions: Let $x^*$ be an unconstrained local minimum of $f : \mathbb{R}^n \to \mathbb{R}$, and assume that $f$ is continuously differentiable in an open neighbourhood of $x^*$. Then $\nabla f(x^*) = 0$.

Second Order Necessary Conditions: Let $x^*$ be an unconstrained local minimum of $f : \mathbb{R}^n \to \mathbb{R}$, and assume that $f$ is twice continuously differentiable in an open neighbourhood of $x^*$. Then $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*)$ is positive semidefinite.
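These conditions can be checked numerically. A minimal sketch on an illustrative quadratic test function (the function and its minimizer are assumptions for this example, not from the slides); for the $2 \times 2$ Hessian, positive trace and determinant are equivalent to positive definiteness:

```python
# Check the first- and second-order conditions for
# f(x1, x2) = (x1 - 1)^2 + 2*x2^2 at its minimizer x* = (1, 0).

def grad_f(x):
    return [2.0 * (x[0] - 1.0), 4.0 * x[1]]

def hess_f(x):
    return [[2.0, 0.0], [0.0, 4.0]]

x_star = [1.0, 0.0]

g = grad_f(x_star)
assert all(abs(gi) < 1e-12 for gi in g)      # first order: grad f(x*) = 0

H = hess_f(x_star)
trace = H[0][0] + H[1][1]
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
assert trace > 0 and det > 0                 # 2x2 positive definiteness test
print("x* =", x_star, "is a strict local minimum")
```

Since the Hessian is in fact positive definite, the second-order sufficient conditions of the next slide hold as well.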

7 Sufficient Optimality Conditions

Second Order Sufficient Conditions: Let $f : \mathbb{R}^n \to \mathbb{R}$ be twice continuously differentiable in an open neighbourhood of $x^*$ and suppose that $x^*$ satisfies $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*)$ is positive definite. Then $x^*$ is a strict unconstrained local minimum of $f$. In particular, there exist scalars $\gamma > 0$ and $\epsilon > 0$ such that
$$f(x) \ge f(x^*) + \frac{\gamma}{2} \|x - x^*\|^2, \quad \forall x \text{ with } \|x - x^*\| < \epsilon.$$

8 Problem Statement: Constrained Optimization Problem

We consider the problem
$$\min_{x \in X} f(x)$$
where $X \subset \mathbb{R}^n$ is nonempty, convex, and closed, and $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function over $X$.

9 Problem Statement: Constrained Optimization Problem

We consider the problem $\min_{x \in X} f(x)$, where $X \subset \mathbb{R}^n$ is nonempty, convex, and closed, and $f : \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function over $X$.

Proposition: If $f$ is a convex function, then a local minimum of $f$ over $X$ is a global minimum. If in addition $f$ is strictly convex over $X$, then there exists at most one global minimum of $f$ over $X$.

10 Necessary and Sufficient Optimality Conditions

Proposition (Optimality Condition):
(a) If $x^*$ is a local minimum of $f$ over $X$, then
$$\nabla f(x^*)^T (x - x^*) \ge 0, \quad \forall x \in X.$$
(b) If $f$ is convex over $X$, then the condition of part (a) is also sufficient for $x^*$ to minimize $f$ over $X$.

[Figure: constraint set $X$ and surfaces of equal cost $f(x)$. At a local minimum $x^*$, the gradient $\nabla f(x^*)$ makes an angle less than or equal to 90 degrees with all feasible variations $x - x^*$, $x \in X$. Quelle: Bertsekas]

11 Optimization Subject to Bounds

Consider a positive orthant constraint $X = \{x \mid x \ge 0\}$. The necessary optimality condition for $x^* = (x_1^*, \ldots, x_n^*)$ to be a local minimum is
$$\sum_{i=1}^n \frac{\partial f(x^*)}{\partial x_i} (x_i - x_i^*) \ge 0, \quad \forall x_i \ge 0, \; i = 1, \ldots, n.$$
Consider two cases:
- Fix $i$. Let $x_j = x_j^*$ for $j \ne i$ and $x_i = x_i^* + 1$: then $\partial f(x^*)/\partial x_i \ge 0$ for all $i$.
- If $x_i^* > 0$, let also $x_j = x_j^*$ for $j \ne i$ and $x_i = \frac{1}{2} x_i^*$. Then $\partial f(x^*)/\partial x_i \le 0$, so $\partial f(x^*)/\partial x_i = 0$ if $x_i^* > 0$.

12 Optimization Subject to Bounds

Optimality conditions for an orthant constraint: at a minimum, all partial derivatives $\partial f(x^*)/\partial x_i$ are nonnegative, and they are zero for the inactive constraint indices, i.e., the indices with $x_i^* > 0$.

[Figure: the gradient $\nabla f(x^*)$ at a minimum with $x^* > 0$ and at a minimum with $x^* = 0$. Quelle: Bertsekas]

Note: if all constraints are inactive, we obtain the unconstrained optimality condition $\nabla f(x^*) = 0$.

13 Optimization Subject to Bounds

Consider the constraints $X = \{x \mid \alpha_i \le x_i \le \beta_i, \; i = 1, \ldots, n\}$, where $\alpha_i$ and $\beta_i$ are scalars. If $x^*$ is a local minimum, then
$$\frac{\partial f(x^*)}{\partial x_i} \ge 0 \text{ if } x_i^* = \alpha_i, \qquad \frac{\partial f(x^*)}{\partial x_i} \le 0 \text{ if } x_i^* = \beta_i, \qquad \frac{\partial f(x^*)}{\partial x_i} = 0 \text{ if } \alpha_i < x_i^* < \beta_i.$$
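The three sign conditions can be checked on a one-dimensional example (the function and bounds below are illustrative assumptions): minimize $f(x) = (x-2)^2$ over $0 \le x \le 1$; the minimizer is $x^* = \beta = 1$ with $f'(1) = -2 \le 0$, matching the condition for an active upper bound:

```python
# Verify the bound-constraint optimality conditions for
# f(x) = (x - 2)^2 over the interval [alpha, beta] = [0, 1].

alpha, beta = 0.0, 1.0
df = lambda x: 2.0 * (x - 2.0)   # derivative of f

x_star = 1.0                     # minimizer of (x - 2)^2 on [0, 1]

if x_star == alpha:
    assert df(x_star) >= 0       # active lower bound
elif x_star == beta:
    assert df(x_star) <= 0       # active upper bound
else:
    assert abs(df(x_star)) < 1e-12   # interior: derivative vanishes
print("optimality conditions hold at x* =", x_star)
```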

14 Feasible Direction Methods: Feasible Directions

A feasible direction at an $x \in X$ is a vector $d \ne 0$ such that $x + \alpha d$ is feasible for all sufficiently small $\alpha > 0$.

[Figure: feasible directions $d$ at a point $x$ of a constraint set $X$. Quelle: Bertsekas]

Note: the set of feasible directions at $x$ is the set of all $\alpha(z - x)$ where $z \in X$, $z \ne x$, and $\alpha > 0$.

15 Feasible Direction Methods

A feasible direction method:
$$x^{k+1} = x^k + \alpha^k d^k$$
where $d^k$ is a feasible descent direction ($\nabla f(x^k)^T d^k < 0$), and $\alpha^k > 0$ is chosen such that $x^{k+1} \in X$.

Alternative definition:
$$x^{k+1} = x^k + \alpha^k (\bar{x}^k - x^k)$$
where $\alpha^k \in (0, 1]$ and, if $x^k$ is nonstationary, $\bar{x}^k \in X$ with $\nabla f(x^k)^T (\bar{x}^k - x^k) < 0$.

Stepsize rules: limited minimization, constant $\alpha^k = 1$, Armijo rule.
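The Armijo rule can be sketched as follows (the parameter names `sigma` and `shrink` and the test function are common illustrative choices, not fixed by the slides): shrink $\alpha$ until $f(x + \alpha d) \le f(x) + \sigma \alpha \nabla f(x)^T d$.

```python
# Armijo backtracking along a descent direction d.

def armijo(f, grad, x, d, sigma=1e-4, shrink=0.5, alpha0=1.0, max_iter=50):
    g_dot_d = sum(gi * di for gi, di in zip(grad(x), d))
    assert g_dot_d < 0, "d must be a descent direction"
    alpha = alpha0
    fx = f(x)
    for _ in range(max_iter):
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        if f(x_new) <= fx + sigma * alpha * g_dot_d:   # sufficient decrease
            return alpha
        alpha *= shrink
    return alpha

# Example: f(x) = x1^2 + x2^2 at x = (1, 1) with d = -grad f(x).
f = lambda x: x[0]**2 + x[1]**2
grad = lambda x: [2 * x[0], 2 * x[1]]
x = [1.0, 1.0]
d = [-g for g in grad(x)]
alpha = armijo(f, grad, x, d)
assert f([x[0] + alpha * d[0], x[1] + alpha * d[1]]) < f(x)
print("accepted stepsize:", alpha)
```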

16 Conditional Gradient Method

Iteration:
$$x^{k+1} = x^k + \alpha^k (\bar{x}^k - x^k), \qquad \bar{x}^k = \arg\min_{x \in X} \nabla f(x^k)^T (x - x^k).$$
Assume that $X$ is compact, so that $\bar{x}^k$ is guaranteed to exist by Weierstrass' theorem. $\bar{x}^k$ is a point in $X$ that lies "furthest along" the negative gradient direction $-\nabla f(x^k)$.

The subproblem is simpler to solve than the original: if $f$ is nonlinear and $X$ is specified by linear equality or inequality constraints, the subproblem is a linear program.

[Figure: constraint set $X$, surfaces of equal cost $f(x)$, and the subproblem solution $\bar{x}$. Quelle: Bertsekas]
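A minimal sketch of the iteration on the box $X = [0,1]^2$ with an illustrative quadratic objective (both are assumptions for this example). Over a box, the linear subproblem is solved coordinatewise: pick the lower bound where the gradient component is positive and the upper bound where it is negative. The stepsize uses limited minimization, which is exact for a quadratic:

```python
# Conditional gradient (Frank-Wolfe) on X = [0,1]^2.

lo, hi = [0.0, 0.0], [1.0, 1.0]
c = [0.8, 0.3]                                   # unconstrained minimizer of f
f = lambda x: sum((xi - ci)**2 for xi, ci in zip(x, c))
grad = lambda x: [2 * (xi - ci) for xi, ci in zip(x, c)]

x = [0.0, 1.0]
for k in range(200):
    g = grad(x)
    x_bar = [lo[i] if g[i] > 0 else hi[i] for i in range(2)]   # LP over the box
    d = [x_bar[i] - x[i] for i in range(2)]
    dd = sum(di * di for di in d)
    if dd == 0:
        break
    # limited minimization: exact stepsize for this quadratic, clipped to [0,1]
    gamma = max(0.0, min(1.0, -sum(gi * di for gi, di in zip(g, d)) / (2 * dd)))
    x = [x[i] + gamma * d[i] for i in range(2)]

assert f(x) < 1e-6
print(x)   # close to (0.8, 0.3)
```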

17 Conditional Gradient Method

[Figure: operation of the conditional gradient method with the limited minimization stepsize rule, generating iterates $x^0, \bar{x}^0, x^1, \bar{x}^1, x^2, \ldots \to x^*$ over the constraint set $X$ with surfaces of equal cost. Quelle: Bertsekas]

Possibly slow (sublinear) convergence.

18 Gradient Projection Method

Gradient projection methods determine the feasible direction by using a quadratic cost subproblem. Simplest variant:
$$x^{k+1} = x^k + \alpha^k (\bar{x}^k - x^k), \qquad \bar{x}^k = [x^k - s^k \nabla f(x^k)]^+$$
where $[\cdot]^+$ denotes the projection on the set $X$, $\alpha^k \in (0, 1]$ is a stepsize, and $s^k$ is a positive scalar.

Stepsize rules for $\alpha^k$ (assuming $s^k \equiv s$): limited minimization, Armijo along the feasible direction, constant stepsize. Also: Armijo along the projection arc ($\alpha^k \equiv 1$, $s^k$ variable).
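The simplest variant with $\alpha^k = 1$ and constant $s^k \equiv s$ can be sketched on a box constraint (the objective, box, and stepsize below are illustrative assumptions):

```python
# Projected gradient iteration x^{k+1} = [x^k - s * grad f(x^k)]^+ on a box.

lo, hi = [0.0, 0.0], [1.0, 1.0]
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 0.5)**2
grad = lambda x: [2 * (x[0] - 2.0), 2 * (x[1] - 0.5)]

def project(x):
    # [.]^+ : clip each coordinate to the box
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

s = 0.25                          # constant stepsize
x = [0.0, 0.0]
for _ in range(100):
    g = grad(x)
    x = project([x[i] - s * g[i] for i in range(2)])

# The constrained minimizer is x* = (1, 0.5): the upper bound is active in x1.
assert abs(x[0] - 1.0) < 1e-6 and abs(x[1] - 0.5) < 1e-6
print(x)
```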

19 Gradient Projection Method

Illustration of the gradient projection method for the case where $\alpha^k = 1$ for all $k$, and thus
$$x^{k+1} = \bar{x}^k = [x^k - s^k \nabla f(x^k)]^+.$$
If $\alpha^k < 1$, $x^{k+1}$ lies on the line segment connecting $x^k$ and $\bar{x}^k$.

[Figure: gradient projection iterations $x^k, x^{k+1}, x^{k+2}, \ldots$ on the constraint set $X$, each obtained by projecting $x^k - s^k \nabla f(x^k)$ back onto $X$. Quelle: Bertsekas]

Note: if $x^k - s^k \nabla f(x^k) \in X$, we obtain the unconstrained steepest descent iteration.

20 Gradient Projection Method

For practical purposes, the projection operation should be fairly simple. Example: the constraints are bounds on the variables, $X = \{x \mid \alpha_i \le x_i \le \beta_i, \; i = 1, \ldots, n\}$, where $\alpha_i$ and $\beta_i$ are scalars. The $i$th component of the projection of a vector $x$ is given by
$$[x]^+_i = \begin{cases} \alpha_i & \text{if } x_i \le \alpha_i, \\ \beta_i & \text{if } x_i \ge \beta_i, \\ x_i & \text{otherwise.} \end{cases}$$
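The componentwise formula is a one-liner (function and argument names are assumed for illustration):

```python
# Componentwise projection of x onto the box {x | alpha_i <= x_i <= beta_i}.

def project_box(x, alpha, beta):
    return [min(max(xi, a), b) for xi, a, b in zip(x, alpha, beta)]

# Each component is clipped independently.
assert project_box([-1.0, 0.5, 7.0],
                   [0.0, 0.0, 0.0],
                   [1.0, 1.0, 1.0]) == [0.0, 0.5, 1.0]
```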

21 Lagrangian Function: Equality Constrained Problem

We consider the problem
$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad h(x) = 0,$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $h : \mathbb{R}^n \to \mathbb{R}^m$ are continuously differentiable functions.

22 Lagrangian Function: Equality Constrained Problem

We consider the problem $\min_{x \in \mathbb{R}^n} f(x)$ subject to $h(x) = 0$, where $f : \mathbb{R}^n \to \mathbb{R}$ and $h : \mathbb{R}^n \to \mathbb{R}^m$ are continuously differentiable functions. We then define the Lagrangian function $L : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ given by
$$L(x, \lambda) = f(x) + \sum_{i=1}^m \lambda_i h_i(x),$$
where the scalars $\lambda_1, \ldots, \lambda_m$ are the Lagrange multipliers.

23 Lagrange Multiplier Theorem: Necessary Conditions

Let $x^*$ be a local minimum of $f$ subject to $h(x) = 0$, and assume that the constraint gradients $\nabla h_1(x^*), \ldots, \nabla h_m(x^*)$ are linearly independent. Then there exists a unique vector $\lambda^* = (\lambda_1^*, \ldots, \lambda_m^*)$, called a Lagrange multiplier vector, such that
$$\nabla_x L(x^*, \lambda^*) = \nabla f(x^*) + \sum_{i=1}^m \lambda_i^* \nabla h_i(x^*) = 0 \quad \text{and} \quad \nabla_\lambda L(x^*, \lambda^*) = 0.$$
If in addition $f$ and $h$ are twice continuously differentiable, we have
$$y^T \nabla^2_{xx} L(x^*, \lambda^*)\, y \ge 0, \quad \forall y \in V(x^*),$$
where $V(x^*)$ is the subspace of first order feasible variations
$$V(x^*) = \{y \mid \nabla h_i(x^*)^T y = 0, \; i = 1, \ldots, m\}.$$

24 Lagrange Multiplier Theorem: Example

If in addition $f$ and $h$ are twice continuously differentiable,
$$y^T \Big( \nabla^2 f(x^*) + \sum_{i=1}^m \lambda_i^* \nabla^2 h_i(x^*) \Big) y \ge 0, \quad \forall y \text{ s.t. } \nabla h(x^*)^T y = 0.$$

Example: minimize $x_1 + x_2$ subject to $x_1^2 + x_2^2 = 2$. The solution is $x^* = (-1, -1)$ with $\nabla f(x^*) = (1, 1)$ and $\nabla h(x^*) = (-2, -2)$; the Lagrange multiplier is $\lambda^* = 1/2$.

[Figure: contours of two examples. Left: the constraint circle $h(x) = 0$ with $x^* = (-1, -1)$. Right: minimize $x_1 + x_2$ subject to $(x_1 - 1)^2 + x_2^2 - 1 = 0$ and $(x_1 - 2)^2 + x_2^2 - 4 = 0$, where $\nabla h_1(x^*) = (-2, 0)$ and $\nabla h_2(x^*) = (-4, 0)$ are linearly dependent. Quelle: Bertsekas]
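The stationarity condition $\nabla f(x^*) + \lambda^* \nabla h(x^*) = 0$ for the first example can be verified directly:

```python
# Example from the slide: min x1 + x2  s.t.  x1^2 + x2^2 = 2,
# with x* = (-1, -1) and lambda* = 1/2.

x_star = (-1.0, -1.0)
lam = 0.5
grad_f = (1.0, 1.0)
grad_h = (2 * x_star[0], 2 * x_star[1])      # = (-2, -2)

residual = [grad_f[i] + lam * grad_h[i] for i in range(2)]
assert residual == [0.0, 0.0]                             # stationarity
assert abs(x_star[0]**2 + x_star[1]**2 - 2.0) < 1e-12     # feasibility
print("stationarity and feasibility verified")
```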

25 Lagrange Multiplier Theorem: Sufficient Conditions

Assume that $f$ and $h$ are twice continuously differentiable, and let $x^* \in \mathbb{R}^n$ and $\lambda^* \in \mathbb{R}^m$ satisfy
$$\nabla_x L(x^*, \lambda^*) = 0, \qquad \nabla_\lambda L(x^*, \lambda^*) = 0, \qquad y^T \nabla^2_{xx} L(x^*, \lambda^*)\, y > 0, \quad \forall y \ne 0, \; y \in V(x^*).$$
Then $x^*$ is a strict local minimum of $f$ subject to $h(x) = 0$. In fact, there exist scalars $\gamma > 0$ and $\epsilon > 0$ such that
$$f(x) \ge f(x^*) + \frac{\gamma}{2} \|x - x^*\|^2, \quad \forall x \text{ with } h(x) = 0 \text{ and } \|x - x^*\| < \epsilon.$$

26 Lagrange Multiplier Theorem: Sufficient Conditions

(Sufficient conditions as on the previous slide.) The approach can be extended to treat both equality and inequality constraints: this yields the Karush-Kuhn-Tucker (KKT) necessary optimality conditions and, more generally, the Fritz John optimality conditions.

27 Lagrangian Function: Inequality Constrained Problem

We consider the problem
$$\min_{x \in X} f(x) \quad \text{subject to} \quad g(x) \le 0,$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $g : \mathbb{R}^n \to \mathbb{R}^r$ are continuously differentiable functions and $X$ is a closed set. The interior of the feasible set is defined by
$$S = \{x \in X \mid g_j(x) < 0, \; j = 1, \ldots, r\}.$$
We assume that $S$ is nonempty and any feasible point is in the closure of $S$.

28 Barrier Method

Consider a barrier function that is continuous and goes to $\infty$ as any one of the constraints $g_j(x)$ approaches 0 from negative values. The two most common examples are
$$B(x) = -\sum_{j=1}^r \ln(-g_j(x)) \quad \text{(logarithmic)}, \qquad B(x) = -\sum_{j=1}^r \frac{1}{g_j(x)} \quad \text{(inverse)}.$$
Barrier method:
$$x^k = \arg\min_{x \in S} \{f(x) + \epsilon^k B(x)\}, \quad k = 0, 1, \ldots$$
where the parameter sequence $\{\epsilon^k\}$ satisfies $0 < \epsilon^{k+1} < \epsilon^k$ for all $k$ and $\epsilon^k \to 0$.
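A hedged one-dimensional sketch of the logarithmic barrier method (the problem is an illustrative assumption): minimize $f(x) = x$ subject to $g(x) = 1 - x \le 0$. The barrier subproblem $\min_x \, x - \epsilon \ln(x - 1)$ has the closed-form solution $x(\epsilon) = 1 + \epsilon$, so $x^k \to x^* = 1$ as $\epsilon^k \to 0$:

```python
import math

def barrier_obj(x, eps):
    # f(x) + eps * B(x) with B(x) = -ln(-g(x)) = -ln(x - 1)
    return x - eps * math.log(x - 1.0)

eps = 1.0
for _ in range(20):
    x_eps = 1.0 + eps            # analytic minimizer of the barrier subproblem
    # sanity check: the subproblem value is lower than at interior neighbours
    assert barrier_obj(x_eps, eps) < barrier_obj(x_eps + eps / 2, eps)
    assert barrier_obj(x_eps, eps) < barrier_obj(x_eps - eps / 2, eps)
    eps *= 0.5                   # 0 < eps^{k+1} < eps^k, eps^k -> 0

assert abs(x_eps - 1.0) < 1e-5   # limit point is the constrained minimum x* = 1
print("final barrier minimizer:", x_eps)
```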

29 Barrier Method

The barrier term $\epsilon^k B(x)$ goes to zero for all interior points $x \in S$ as $\epsilon^k \to 0$. Every limit point of a sequence $\{x^k\}$ generated by a barrier method is a global minimum of the original constrained problem.

[Figure: the barrier terms $\epsilon B(x)$ and $\epsilon' B(x)$ with $\epsilon' < \epsilon$, blowing up at the boundary of $S$. Quelle: Bertsekas]

30 Barrier and Interior Point Methods

Following approximately the central path by decreasing $\epsilon^k$ slowly, as in (a), or quickly, as in (b).

Short-step methods, as in (a): a single Newton step is required in each approximate minimization, at the expense of a large number of approximate minimizations.

Long-step methods, as in (b): decrease $\epsilon^k$ faster than dictated by complexity analysis, require more than one Newton step per (approximate) minimization, and use a line search as in unconstrained Newton's method; they require a much smaller number of (approximate) minimizations.

[Figure: iterates $x^k \approx x(\epsilon^k)$ following the central path to $x^*$ in $S$, for the short-step case (a) and the long-step case (b). Quelle: Bertsekas]

31 Quadratic Penalty Method: Equality Constrained Problem

We consider the problem
$$\min_{x \in X} f(x) \quad \text{subject to} \quad h(x) = 0,$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $h : \mathbb{R}^n \to \mathbb{R}^m$ are continuously differentiable functions, and $X \subset \mathbb{R}^n$.

32 Quadratic Penalty Method: Equality Constrained Problem

For this problem we define the augmented Lagrangian function $L_c : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ given by
$$L_c(x, \lambda) = f(x) + \lambda^T h(x) + \frac{c}{2} \|h(x)\|^2,$$
where $c$ is a positive penalty parameter.

33 Two Convergence Mechanisms

Unconstrained minimization of $L_c(\cdot, \lambda)$ can yield points close to $x^*$ by:

Taking $\lambda$ close to $\lambda^*$: for $c$ sufficiently large, $x^*$ is a strict local minimum of the augmented Lagrangian $L_c(\cdot, \lambda^*)$ corresponding to $\lambda^*$, i.e.,
$$L_c(x, \lambda^*) \ge L_c(x^*, \lambda^*) + \frac{\gamma}{2} \|x - x^*\|^2$$
for all $x$ with $\|x - x^*\| < \epsilon$, and for some $\gamma > 0$ and $\epsilon > 0$.

Taking $c$ very large: for high $c$, there is a high cost for infeasibility, so the unconstrained minima of $L_c(\cdot, \lambda)$ will be nearly feasible. We have
$$\lim_{c \to \infty} L_c(x, \lambda) = \begin{cases} f(x) & \text{if } x \in X \text{ and } h(x) = 0, \\ \infty & \text{otherwise.} \end{cases}$$

34 Example

Problem: $\min \frac{1}{2}(x_1^2 + x_2^2)$ subject to $x_1 = 1$; the Lagrange multiplier is $\lambda^* = -1$.

Augmented Lagrangian:
$$L_c(x, \lambda) = \frac{1}{2}(x_1^2 + x_2^2) + \lambda(x_1 - 1) + \frac{c}{2}(x_1 - 1)^2.$$
Unconstrained minimum:
$$x_1(\lambda, c) = \frac{c - \lambda}{c + 1}, \qquad x_2(\lambda, c) = 0.$$
For $c > 0$:
$$\lim_{\lambda \to \lambda^*} x_1(\lambda, c) = 1 = x_1^*, \qquad \lim_{\lambda \to \lambda^*} x_2(\lambda, c) = 0 = x_2^*.$$

[Figure: contours of $L_c(\cdot, \lambda)$ and the minimizers $x_1(\lambda, c)$ for $(c, \lambda) = (1, 0)$ giving $x_1 = 1/2$, for $(1, -1/2)$ giving $x_1 = 3/4$, and for $(10, 0)$ giving $x_1 = 10/11$. Quelle: Bertsekas]
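The closed-form minimizer $x_1(\lambda, c) = (c - \lambda)/(c + 1)$ can be checked by verifying that the derivative of $L_c$ in $x_1$ vanishes there, for several $(\lambda, c)$ pairs:

```python
# Stationarity check for the augmented Lagrangian of
# min (1/2)(x1^2 + x2^2)  s.t.  x1 = 1.

def grad_Lc(x1, lam, c):
    # d/dx1 of (1/2)*x1^2 + lam*(x1 - 1) + (c/2)*(x1 - 1)^2
    return x1 + lam + c * (x1 - 1.0)

for c in (1.0, 10.0, 100.0):
    for lam in (0.0, -0.5, -1.0):
        x1 = (c - lam) / (c + 1.0)          # closed form from the slide
        assert abs(grad_Lc(x1, lam, c)) < 1e-9   # stationary point of L_c

# lambda* = -1 recovers the constrained minimizer x1* = 1 for any c > 0.
assert (1.0 - (-1.0)) / (1.0 + 1.0) == 1.0
```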

35 Multiplier Methods

The multiplier method finds
$$x^k = \arg\min_{x \in \mathbb{R}^n} L_{c^k}(x, \lambda^k) = f(x) + (\lambda^k)^T h(x) + \frac{c^k}{2} \|h(x)\|^2$$
and updates $\lambda^k$ using
$$\lambda^{k+1} = \lambda^k + c^k h(x^k).$$
Key advantages:
- Less ill-conditioning: it is not necessary that $c^k \to \infty$ (only that $c^k$ exceeds some threshold).
- Faster convergence when $\lambda^k$ is updated than when $\lambda^k$ is kept constant (whether $c^k \to \infty$ or not).
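A sketch of the multiplier iteration on the example of slide 34, using the closed-form subproblem minimizer from that slide; the penalty parameter $c$ is kept fixed to illustrate that $c^k \to \infty$ is not needed for convergence:

```python
# Method of multipliers for min (1/2)(x1^2 + x2^2)  s.t.  h(x) = x1 - 1 = 0.

c = 1.0        # fixed penalty parameter, never increased
lam = 0.0
for k in range(60):
    x1 = (c - lam) / (c + 1.0)     # x^k = argmin L_c(., lam^k), closed form
    lam = lam + c * (x1 - 1.0)     # multiplier update lam^{k+1} = lam^k + c*h(x^k)

assert abs(lam - (-1.0)) < 1e-9    # lam^k -> lambda* = -1
assert abs(x1 - 1.0) < 1e-9        # x1^k -> x1* = 1
print("lambda:", lam, "x1:", x1)
```

Here the update contracts the multiplier error by the factor $1/(c+1)$ per iteration, so convergence is linear for any fixed $c > 0$.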

36 The End

37 The End

Thank you for your attention, and good luck with the exam and with the rest of your studies. For questions, comments, ...: Tel.:


More information

Proximal mapping via network optimization

Proximal mapping via network optimization L. Vandenberghe EE236C (Spring 23-4) Proximal mapping via network optimization minimum cut and maximum flow problems parametric minimum cut problem application to proximal mapping Introduction this lecture:

More information

Parameter Estimation: A Deterministic Approach using the Levenburg-Marquardt Algorithm

Parameter Estimation: A Deterministic Approach using the Levenburg-Marquardt Algorithm Parameter Estimation: A Deterministic Approach using the Levenburg-Marquardt Algorithm John Bardsley Department of Mathematical Sciences University of Montana Applied Math Seminar-Feb. 2005 p.1/14 Outline

More information

Stochastic Inventory Control

Stochastic Inventory Control Chapter 3 Stochastic Inventory Control 1 In this chapter, we consider in much greater details certain dynamic inventory control problems of the type already encountered in section 1.3. In addition to the

More information

BIG DATA PROBLEMS AND LARGE-SCALE OPTIMIZATION: A DISTRIBUTED ALGORITHM FOR MATRIX FACTORIZATION

BIG DATA PROBLEMS AND LARGE-SCALE OPTIMIZATION: A DISTRIBUTED ALGORITHM FOR MATRIX FACTORIZATION BIG DATA PROBLEMS AND LARGE-SCALE OPTIMIZATION: A DISTRIBUTED ALGORITHM FOR MATRIX FACTORIZATION Ş. İlker Birbil Sabancı University Ali Taylan Cemgil 1, Hazal Koptagel 1, Figen Öztoprak 2, Umut Şimşekli

More information

TOPIC 4: DERIVATIVES

TOPIC 4: DERIVATIVES TOPIC 4: DERIVATIVES 1. The derivative of a function. Differentiation rules 1.1. The slope of a curve. The slope of a curve at a point P is a measure of the steepness of the curve. If Q is a point on the

More information

Solutions for Review Problems

Solutions for Review Problems olutions for Review Problems 1. Let be the triangle with vertices A (,, ), B (4,, 1) and C (,, 1). (a) Find the cosine of the angle BAC at vertex A. (b) Find the area of the triangle ABC. (c) Find a vector

More information

10. Proximal point method

10. Proximal point method L. Vandenberghe EE236C Spring 2013-14) 10. Proximal point method proximal point method augmented Lagrangian method Moreau-Yosida smoothing 10-1 Proximal point method a conceptual algorithm for minimizing

More information

Numerical methods for American options

Numerical methods for American options Lecture 9 Numerical methods for American options Lecture Notes by Andrzej Palczewski Computational Finance p. 1 American options The holder of an American option has the right to exercise it at any moment

More information

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION

PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION PATTERN RECOGNITION AND MACHINE LEARNING CHAPTER 4: LINEAR MODELS FOR CLASSIFICATION Introduction In the previous chapter, we explored a class of regression models having particularly simple analytical

More information

Cyber-Security Analysis of State Estimators in Power Systems

Cyber-Security Analysis of State Estimators in Power Systems Cyber-Security Analysis of State Estimators in Electric Power Systems André Teixeira 1, Saurabh Amin 2, Henrik Sandberg 1, Karl H. Johansson 1, and Shankar Sastry 2 ACCESS Linnaeus Centre, KTH-Royal Institute

More information

Vectors, Gradient, Divergence and Curl.

Vectors, Gradient, Divergence and Curl. Vectors, Gradient, Divergence and Curl. 1 Introduction A vector is determined by its length and direction. They are usually denoted with letters with arrows on the top a or in bold letter a. We will use

More information

Discrete Optimization

Discrete Optimization Discrete Optimization [Chen, Batson, Dang: Applied integer Programming] Chapter 3 and 4.1-4.3 by Johan Högdahl and Victoria Svedberg Seminar 2, 2015-03-31 Todays presentation Chapter 3 Transforms using

More information

Minimize subject to. x S R

Minimize subject to. x S R Chapter 12 Lagrangian Relaxation This chapter is mostly inspired by Chapter 16 of [1]. In the previous chapters, we have succeeded to find efficient algorithms to solve several important problems such

More information

Constrained Least Squares

Constrained Least Squares Constrained Least Squares Authors: G.H. Golub and C.F. Van Loan Chapter 12 in Matrix Computations, 3rd Edition, 1996, pp.580-587 CICN may05/1 Background The least squares problem: min Ax b 2 x Sometimes,

More information

Inner Product Spaces

Inner Product Spaces Math 571 Inner Product Spaces 1. Preliminaries An inner product space is a vector space V along with a function, called an inner product which associates each pair of vectors u, v with a scalar u, v, and

More information

Several Views of Support Vector Machines

Several Views of Support Vector Machines Several Views of Support Vector Machines Ryan M. Rifkin Honda Research Institute USA, Inc. Human Intention Understanding Group 2007 Tikhonov Regularization We are considering algorithms of the form min

More information

Separation Properties for Locally Convex Cones

Separation Properties for Locally Convex Cones Journal of Convex Analysis Volume 9 (2002), No. 1, 301 307 Separation Properties for Locally Convex Cones Walter Roth Department of Mathematics, Universiti Brunei Darussalam, Gadong BE1410, Brunei Darussalam

More information

17.3.1 Follow the Perturbed Leader

17.3.1 Follow the Perturbed Leader CS787: Advanced Algorithms Topic: Online Learning Presenters: David He, Chris Hopman 17.3.1 Follow the Perturbed Leader 17.3.1.1 Prediction Problem Recall the prediction problem that we discussed in class.

More information

The equivalence of logistic regression and maximum entropy models

The equivalence of logistic regression and maximum entropy models The equivalence of logistic regression and maximum entropy models John Mount September 23, 20 Abstract As our colleague so aptly demonstrated ( http://www.win-vector.com/blog/20/09/the-simplerderivation-of-logistic-regression/

More information

Some Optimization Fundamentals

Some Optimization Fundamentals ISyE 3133B Engineering Optimization Some Optimization Fundamentals Shabbir Ahmed E-mail: sahmed@isye.gatech.edu Homepage: www.isye.gatech.edu/~sahmed Basic Building Blocks min or max s.t. objective as

More information

LAGRANGIAN RELAXATION TECHNIQUES FOR LARGE SCALE OPTIMIZATION

LAGRANGIAN RELAXATION TECHNIQUES FOR LARGE SCALE OPTIMIZATION LAGRANGIAN RELAXATION TECHNIQUES FOR LARGE SCALE OPTIMIZATION Kartik Sivaramakrishnan Department of Mathematics NC State University kksivara@ncsu.edu http://www4.ncsu.edu/ kksivara SIAM/MGSA Brown Bag

More information

Solving polynomial least squares problems via semidefinite programming relaxations

Solving polynomial least squares problems via semidefinite programming relaxations Solving polynomial least squares problems via semidefinite programming relaxations Sunyoung Kim and Masakazu Kojima August 2007, revised in November, 2007 Abstract. A polynomial optimization problem whose

More information

Least-Squares Intersection of Lines

Least-Squares Intersection of Lines Least-Squares Intersection of Lines Johannes Traa - UIUC 2013 This write-up derives the least-squares solution for the intersection of lines. In the general case, a set of lines will not intersect at a

More information

BANACH AND HILBERT SPACE REVIEW

BANACH AND HILBERT SPACE REVIEW BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but

More information

Nonlinear Algebraic Equations Example

Nonlinear Algebraic Equations Example Nonlinear Algebraic Equations Example Continuous Stirred Tank Reactor (CSTR). Look for steady state concentrations & temperature. s r (in) p,i (in) i In: N spieces with concentrations c, heat capacities

More information

Introduction to Support Vector Machines. Colin Campbell, Bristol University

Introduction to Support Vector Machines. Colin Campbell, Bristol University Introduction to Support Vector Machines Colin Campbell, Bristol University 1 Outline of talk. Part 1. An Introduction to SVMs 1.1. SVMs for binary classification. 1.2. Soft margins and multi-class classification.

More information

α α λ α = = λ λ α ψ = = α α α λ λ ψ α = + β = > θ θ β > β β θ θ θ β θ β γ θ β = γ θ > β > γ θ β γ = θ β = θ β = θ β = β θ = β β θ = = = β β θ = + α α α α α = = λ λ λ λ λ λ λ = λ λ α α α α λ ψ + α =

More information

ON A GLOBALIZATION PROPERTY

ON A GLOBALIZATION PROPERTY APPLICATIONES MATHEMATICAE 22,1 (1993), pp. 69 73 S. ROLEWICZ (Warszawa) ON A GLOBALIZATION PROPERTY Abstract. Let (X, τ) be a topological space. Let Φ be a class of realvalued functions defined on X.

More information

A FIRST COURSE IN OPTIMIZATION THEORY

A FIRST COURSE IN OPTIMIZATION THEORY A FIRST COURSE IN OPTIMIZATION THEORY RANGARAJAN K. SUNDARAM New York University CAMBRIDGE UNIVERSITY PRESS Contents Preface Acknowledgements page xiii xvii 1 Mathematical Preliminaries 1 1.1 Notation

More information

Problem 1 (10 pts) Find the radius of convergence and interval of convergence of the series

Problem 1 (10 pts) Find the radius of convergence and interval of convergence of the series 1 Problem 1 (10 pts) Find the radius of convergence and interval of convergence of the series a n n=1 n(x + 2) n 5 n 1. n(x + 2)n Solution: Do the ratio test for the absolute convergence. Let a n =. Then,

More information

The Multiplicative Weights Update method

The Multiplicative Weights Update method Chapter 2 The Multiplicative Weights Update method The Multiplicative Weights method is a simple idea which has been repeatedly discovered in fields as diverse as Machine Learning, Optimization, and Game

More information

c 2006 Society for Industrial and Applied Mathematics

c 2006 Society for Industrial and Applied Mathematics SIAM J. OPTIM. Vol. 17, No. 4, pp. 943 968 c 2006 Society for Industrial and Applied Mathematics STATIONARITY RESULTS FOR GENERATING SET SEARCH FOR LINEARLY CONSTRAINED OPTIMIZATION TAMARA G. KOLDA, ROBERT

More information

Optimization in R n Introduction

Optimization in R n Introduction Optimization in R n Introduction Rudi Pendavingh Eindhoven Technical University Optimization in R n, lecture Rudi Pendavingh (TUE) Optimization in R n Introduction ORN / 4 Some optimization problems designing

More information

Chapter 1. Metric Spaces. Metric Spaces. Examples. Normed linear spaces

Chapter 1. Metric Spaces. Metric Spaces. Examples. Normed linear spaces Chapter 1. Metric Spaces Metric Spaces MA222 David Preiss d.preiss@warwick.ac.uk Warwick University, Spring 2008/2009 Definitions. A metric on a set M is a function d : M M R such that for all x, y, z

More information

The Method of Lagrange Multipliers

The Method of Lagrange Multipliers The Method of Lagrange Multipliers S. Sawyer October 25, 2002 1. Lagrange s Theorem. Suppose that we want to maximize (or imize a function of n variables f(x = f(x 1, x 2,..., x n for x = (x 1, x 2,...,

More information

Optimal energy trade-off schedules

Optimal energy trade-off schedules Optimal energy trade-off schedules Neal Barcelo, Daniel G. Cole, Dimitrios Letsios, Michael Nugent, Kirk R. Pruhs To cite this version: Neal Barcelo, Daniel G. Cole, Dimitrios Letsios, Michael Nugent,

More information

Economics 2020a / HBS 4010 / HKS API-111 FALL 2010 Solutions to Practice Problems for Lectures 1 to 4

Economics 2020a / HBS 4010 / HKS API-111 FALL 2010 Solutions to Practice Problems for Lectures 1 to 4 Economics 00a / HBS 4010 / HKS API-111 FALL 010 Solutions to Practice Problems for Lectures 1 to 4 1.1. Quantity Discounts and the Budget Constraint (a) The only distinction between the budget line with

More information

Lecture Topic: Low-Rank Approximations

Lecture Topic: Low-Rank Approximations Lecture Topic: Low-Rank Approximations Low-Rank Approximations We have seen principal component analysis. The extraction of the first principle eigenvalue could be seen as an approximation of the original

More information

MATH 425, PRACTICE FINAL EXAM SOLUTIONS.

MATH 425, PRACTICE FINAL EXAM SOLUTIONS. MATH 45, PRACTICE FINAL EXAM SOLUTIONS. Exercise. a Is the operator L defined on smooth functions of x, y by L u := u xx + cosu linear? b Does the answer change if we replace the operator L by the operator

More information

1 Portfolio mean and variance

1 Portfolio mean and variance Copyright c 2005 by Karl Sigman Portfolio mean and variance Here we study the performance of a one-period investment X 0 > 0 (dollars) shared among several different assets. Our criterion for measuring

More information

A Distributed Line Search for Network Optimization

A Distributed Line Search for Network Optimization 01 American Control Conference Fairmont Queen Elizabeth, Montréal, Canada June 7-June 9, 01 A Distributed Line Search for Networ Optimization Michael Zargham, Alejandro Ribeiro, Ali Jadbabaie Abstract

More information

A Lagrangian-DNN Relaxation: a Fast Method for Computing Tight Lower Bounds for a Class of Quadratic Optimization Problems

A Lagrangian-DNN Relaxation: a Fast Method for Computing Tight Lower Bounds for a Class of Quadratic Optimization Problems A Lagrangian-DNN Relaxation: a Fast Method for Computing Tight Lower Bounds for a Class of Quadratic Optimization Problems Sunyoung Kim, Masakazu Kojima and Kim-Chuan Toh October 2013 Abstract. We propose

More information

Nonlinear Algebraic Equations. Lectures INF2320 p. 1/88

Nonlinear Algebraic Equations. Lectures INF2320 p. 1/88 Nonlinear Algebraic Equations Lectures INF2320 p. 1/88 Lectures INF2320 p. 2/88 Nonlinear algebraic equations When solving the system u (t) = g(u), u(0) = u 0, (1) with an implicit Euler scheme we have

More information

1 Introduction. Linear Programming. Questions. A general optimization problem is of the form: choose x to. max f(x) subject to x S. where.

1 Introduction. Linear Programming. Questions. A general optimization problem is of the form: choose x to. max f(x) subject to x S. where. Introduction Linear Programming Neil Laws TT 00 A general optimization problem is of the form: choose x to maximise f(x) subject to x S where x = (x,..., x n ) T, f : R n R is the objective function, S

More information

14. Nonlinear least-squares

14. Nonlinear least-squares 14 Nonlinear least-squares EE103 (Fall 2011-12) definition Newton s method Gauss-Newton method 14-1 Nonlinear least-squares minimize r i (x) 2 = r(x) 2 r i is a nonlinear function of the n-vector of variables

More information

a 1 x + a 0 =0. (3) ax 2 + bx + c =0. (4)

a 1 x + a 0 =0. (3) ax 2 + bx + c =0. (4) ROOTS OF POLYNOMIAL EQUATIONS In this unit we discuss polynomial equations. A polynomial in x of degree n, where n 0 is an integer, is an expression of the form P n (x) =a n x n + a n 1 x n 1 + + a 1 x

More information

Notes on Symmetric Matrices

Notes on Symmetric Matrices CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.

More information

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: FILTER METHODS AND MERIT FUNCTIONS

INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: FILTER METHODS AND MERIT FUNCTIONS INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: FILTER METHODS AND MERIT FUNCTIONS HANDE Y. BENSON, DAVID F. SHANNO, AND ROBERT J. VANDERBEI Operations Research and Financial Engineering Princeton

More information

Preprint 2009-02. Ayalew Getachew Mersha, Stephan Dempe Feasible Direction Method for Bilevel Programming Problem ISSN 1433-9307

Preprint 2009-02. Ayalew Getachew Mersha, Stephan Dempe Feasible Direction Method for Bilevel Programming Problem ISSN 1433-9307 Fakultät für Mathematik und Informatik Preprint 2009-02 Ayalew Getachew Mersha, Stephan Dempe Feasible Direction Method for Bilevel Programming Problem ISSN 1433-9307 Ayalew Getachew Mersha, Stephan Dempe

More information

Vector and Matrix Norms

Vector and Matrix Norms Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty

More information

4. Expanding dynamical systems

4. Expanding dynamical systems 4.1. Metric definition. 4. Expanding dynamical systems Definition 4.1. Let X be a compact metric space. A map f : X X is said to be expanding if there exist ɛ > 0 and L > 1 such that d(f(x), f(y)) Ld(x,

More information

CSCI567 Machine Learning (Fall 2014)

CSCI567 Machine Learning (Fall 2014) CSCI567 Machine Learning (Fall 2014) Drs. Sha & Liu {feisha,yanliu.cs}@usc.edu September 22, 2014 Drs. Sha & Liu ({feisha,yanliu.cs}@usc.edu) CSCI567 Machine Learning (Fall 2014) September 22, 2014 1 /

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 10 Boundary Value Problems for Ordinary Differential Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign

More information

Adaptive Online Gradient Descent

Adaptive Online Gradient Descent Adaptive Online Gradient Descent Peter L Bartlett Division of Computer Science Department of Statistics UC Berkeley Berkeley, CA 94709 bartlett@csberkeleyedu Elad Hazan IBM Almaden Research Center 650

More information