Duality in General Programs. Ryan Tibshirani, Convex Optimization 10-725/36-725


1 Duality in General Programs. Ryan Tibshirani, Convex Optimization 10-725/36-725

2 Last time: duality in linear programs

Given $c \in \mathbb{R}^n$, $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^m$, $G \in \mathbb{R}^{r \times n}$, $h \in \mathbb{R}^r$:

Primal LP: $\min_{x \in \mathbb{R}^n} \; c^T x$ subject to $Ax = b$, $Gx \leq h$

Dual LP: $\max_{u \in \mathbb{R}^m, \, v \in \mathbb{R}^r} \; -b^T u - h^T v$ subject to $-A^T u - G^T v = c$, $v \geq 0$

Explanation: for any $u$ and $v \geq 0$, and $x$ primal feasible,
$$u^T (Ax - b) + v^T (Gx - h) \leq 0, \quad \text{i.e.,} \quad (-A^T u - G^T v)^T x \geq -b^T u - h^T v$$
So if $c = -A^T u - G^T v$, we get a bound on the primal optimal value.
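
As a quick numerical aside (not from the original slides): the sketch below builds a random primal-dual LP pair with cvxpy and checks that the two optimal values coincide. All problem data are made up; choosing $c = -A^T u_0 - G^T v_0$ for some $v_0 \geq 0$ just guarantees both problems are feasible, hence bounded.

```python
# A made-up primal-dual LP pair; weak duality gives dual.value <= primal.value,
# and (by LP strong duality, since both are feasible) they are in fact equal.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, r = 5, 2, 4
A = rng.standard_normal((m, n))
G = rng.standard_normal((r, n))
x0 = rng.random(n)                       # a point we force to be primal feasible
b = A @ x0
h = G @ x0 + rng.random(r)               # Ax0 = b, Gx0 <= h
u0, v0 = rng.standard_normal(m), rng.random(r)
c = -A.T @ u0 - G.T @ v0                 # makes (u0, v0) dual feasible

# Primal LP: min c'x  s.t.  Ax = b, Gx <= h
x = cp.Variable(n)
primal = cp.Problem(cp.Minimize(c @ x), [A @ x == b, G @ x <= h])
primal.solve()

# Dual LP: max -b'u - h'v  s.t.  -A'u - G'v = c, v >= 0
u, v = cp.Variable(m), cp.Variable(r)
dual = cp.Problem(cp.Maximize(-b @ u - h @ v),
                  [-A.T @ u - G.T @ v == c, v >= 0])
dual.solve()

print(primal.value, dual.value)          # equal up to solver tolerance
```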

3 Explanation #2: for any $u$ and $v \geq 0$, and $x$ primal feasible,
$$c^T x \geq c^T x + u^T (Ax - b) + v^T (Gx - h) := L(x, u, v)$$
So if $C$ denotes the primal feasible set and $f^\star$ the primal optimal value, then for any $u$ and $v \geq 0$,
$$f^\star \geq \min_{x \in C} L(x, u, v) \geq \min_x L(x, u, v) := g(u, v)$$
In other words, $g(u, v)$ is a lower bound on $f^\star$ for any $u$ and $v \geq 0$. Note that
$$g(u, v) = \begin{cases} -b^T u - h^T v & \text{if } c = -A^T u - G^T v \\ -\infty & \text{otherwise} \end{cases}$$
This second explanation reproduces the same dual, but is actually completely general and applies to arbitrary optimization problems (even nonconvex ones).

4 Outline

Today:
- Lagrange dual function
- Lagrange dual problem
- Weak and strong duality
- Examples
- Preview of duality uses

5 Lagrangian

Consider the general minimization problem
$$\min_x \; f(x) \quad \text{subject to} \quad h_i(x) \leq 0, \; i = 1, \dots, m, \quad l_j(x) = 0, \; j = 1, \dots, r$$
Need not be convex, but of course we will pay special attention to the convex case.

We define the Lagrangian as
$$L(x, u, v) = f(x) + \sum_{i=1}^m u_i h_i(x) + \sum_{j=1}^r v_j l_j(x)$$
New variables $u \in \mathbb{R}^m$, $v \in \mathbb{R}^r$, with $u \geq 0$ (implicitly, we define $L(x, u, v) = -\infty$ for $u < 0$).

6 Important property: for any $u \geq 0$ and $v$,
$$f(x) \geq L(x, u, v) \quad \text{at each feasible } x$$
Why? For feasible $x$,
$$L(x, u, v) = f(x) + \underbrace{\sum_{i=1}^m u_i h_i(x)}_{\leq 0} + \underbrace{\sum_{j=1}^r v_j l_j(x)}_{= 0} \leq f(x)$$
(Figure from B & V page 217: the solid line is $f$, the dashed line is $h$, hence the feasible set is $[-0.46, 0.46]$; each dotted line shows $L(x, u, v)$ for a different choice of $u \geq 0$ and $v$.)
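
A tiny numeric illustration of this property (my own toy example, not from the slides): at any feasible $x$, the added terms $u \cdot h(x) \leq 0$ and $v \cdot l(x) = 0$ can only pull the Lagrangian below the objective.

```python
# Sketch: f(x) >= L(x, u, v) at a feasible x, for a made-up 1-d problem.
import numpy as np

f = lambda x: (x - 1.0) ** 2          # objective
h = lambda x: x ** 2 - 4.0            # inequality constraint, h(x) <= 0
l = lambda x: x - 0.5                 # equality constraint, l(x) = 0
L = lambda x, u, v: f(x) + u * h(x) + v * l(x)

x_feas = 0.5                          # h(0.5) = -3.75 <= 0 and l(0.5) = 0
rng = np.random.default_rng(1)
for _ in range(10):
    u, v = rng.random(), rng.standard_normal()   # any u >= 0 and any sign of v
    assert f(x_feas) >= L(x_feas, u, v) - 1e-12
```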

7 Lagrange dual function

Let $C$ denote the primal feasible set, $f^\star$ the primal optimal value. Minimizing $L(x, u, v)$ over all $x$ gives a lower bound:
$$f^\star \geq \min_{x \in C} L(x, u, v) \geq \min_x L(x, u, v) := g(u, v)$$
We call $g(u, v)$ the Lagrange dual function, and it gives a lower bound on $f^\star$ for any $u \geq 0$ and $v$, called dual feasible $u, v$.

(Figures from B & V page 217: in the first, the dashed horizontal line is $f^\star$, the optimal point and value are $x^\star = -0.46$, $f^\star = 1.54$, and the dotted curves show $L(x, \lambda)$ for $\lambda = 0.1, 0.2, \dots, 1.0$, each with minimum value smaller than $f^\star$; in the second, the solid line shows $g(\lambda)$, where the dual variable $\lambda$ is our $u$. Neither $f$ nor $h$ is convex, but the dual function is concave.)

8 Example: quadratic program

Consider the quadratic program
$$\min_{x \in \mathbb{R}^n} \; \frac{1}{2} x^T Q x + c^T x \quad \text{subject to} \quad Ax = b, \; x \geq 0$$
where $Q \succ 0$. Lagrangian:
$$L(x, u, v) = \frac{1}{2} x^T Q x + c^T x - u^T x + v^T (Ax - b)$$
Lagrange dual function:
$$g(u, v) = \min_{x \in \mathbb{R}^n} L(x, u, v) = -\frac{1}{2} (c - u + A^T v)^T Q^{-1} (c - u + A^T v) - b^T v$$
For any $u \geq 0$ and any $v$, this is a lower bound on the primal optimal value $f^\star$.
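
A numeric check of this formula (a sketch with random data; cvxpy is used only to get $f^\star$ and is not part of the slides):

```python
# Sketch: check g(u, v) = -1/2 (c - u + A'v)' Q^{-1} (c - u + A'v) - b'v
# lower-bounds f* for random dual points u >= 0, v.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 6, 2
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)              # Q positive definite
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = A @ rng.random(n)                    # ensures a feasible x >= 0 exists

# Primal optimal value via cvxpy
x = cp.Variable(n)
fstar = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x, Q) + c @ x),
                   [A @ x == b, x >= 0]).solve()

def g(u, v):
    q = c - u + A.T @ v
    return -0.5 * q @ np.linalg.solve(Q, q) - b @ v

for _ in range(10):
    u, v = rng.random(n), rng.standard_normal(m)   # any u >= 0 and any v work
    assert g(u, v) <= fstar + 1e-6
```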

9 Same problem, but now $Q \succeq 0$ (possibly singular):
$$\min_{x \in \mathbb{R}^n} \; \frac{1}{2} x^T Q x + c^T x \quad \text{subject to} \quad Ax = b, \; x \geq 0$$
Lagrangian:
$$L(x, u, v) = \frac{1}{2} x^T Q x + c^T x - u^T x + v^T (Ax - b)$$
Lagrange dual function:
$$g(u, v) = \begin{cases} -\frac{1}{2} (c - u + A^T v)^T Q^+ (c - u + A^T v) - b^T v & \text{if } c - u + A^T v \perp \mathrm{null}(Q) \\ -\infty & \text{otherwise} \end{cases}$$
where $Q^+$ denotes the generalized inverse of $Q$. For any $u \geq 0$, $v$, and $c - u + A^T v \perp \mathrm{null}(Q)$, $g(u, v)$ is a nontrivial lower bound on $f^\star$.
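
The null-space condition can be exercised numerically (a sketch with made-up data; the engineered point `u_col` ignores the sign constraint on $u$ and exists only to hit the finite branch):

```python
# Sketch: dual function with singular Q >= 0, via the pseudoinverse Q^+.
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2
B = rng.standard_normal((n, 3))
Q = B @ B.T                              # PSD with rank 3, so null(Q) is nontrivial
c, b = rng.standard_normal(n), rng.standard_normal(m)
A = rng.standard_normal((m, n))
Qp = np.linalg.pinv(Q)

def g(u, v, tol=1e-8):
    q = c - u + A.T @ v
    if np.linalg.norm(q - Q @ (Qp @ q)) > tol:   # q has a null(Q) component
        return -np.inf                           # L(., u, v) is unbounded below in x
    return -0.5 * q @ Qp @ q - b @ v

v0, z = rng.standard_normal(m), rng.standard_normal(n)
u_col = c + A.T @ v0 - Q @ z             # engineered so q = Qz, i.e. q _|_ null(Q)
u_gen = rng.random(n)                    # generic u: almost surely g = -inf
print(g(u_col, v0), g(u_gen, v0))        # finite value, then -inf
```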

10 Example: quadratic program in 2D

We choose $f(x)$ to be quadratic in 2 variables, subject to $x \geq 0$. The dual function $g(u)$ is also quadratic in 2 variables, also subject to $u \geq 0$.

(Figure: primal surface $f$ over $(x_1, x_2)$ and dual surface $g$ over $(u_1, u_2)$.)

The dual function $g(u)$ provides a bound on $f^\star$ for every $u \geq 0$. The largest bound this gives us turns out to be exactly $f^\star$... coincidence? More on this later, via the KKT conditions.

11 Lagrange dual problem

Given the primal problem
$$\min_x \; f(x) \quad \text{subject to} \quad h_i(x) \leq 0, \; i = 1, \dots, m, \quad l_j(x) = 0, \; j = 1, \dots, r$$
our constructed dual function $g(u, v)$ satisfies $f^\star \geq g(u, v)$ for all $u \geq 0$ and $v$. Hence the best lower bound is given by maximizing $g(u, v)$ over all dual feasible $u, v$, yielding the Lagrange dual problem:
$$\max_{u, v} \; g(u, v) \quad \text{subject to} \quad u \geq 0$$
Key property, called weak duality: if the dual optimal value is $g^\star$, then
$$f^\star \geq g^\star$$
Note that this always holds (even if the primal problem is nonconvex).

12 Another key property: the dual problem is a convex optimization problem (as written, it is a concave maximization problem). Again, this is always true (even when the primal problem is not convex).

By definition:
$$g(u, v) = \min_x \left\{ f(x) + \sum_{i=1}^m u_i h_i(x) + \sum_{j=1}^r v_j l_j(x) \right\} = -\underbrace{\max_x \left\{ -f(x) - \sum_{i=1}^m u_i h_i(x) - \sum_{j=1}^r v_j l_j(x) \right\}}_{\text{pointwise maximum of convex functions in } (u, v)}$$
I.e., $g$ is concave in $(u, v)$, and $u \geq 0$ is a convex constraint, hence the dual problem is a concave maximization problem.
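
The pointwise-minimum argument can be seen numerically (my own toy example, not from the slides): approximating $g$ by minimizing over a finite grid of $x$ values makes it exactly a finite minimum of functions affine in $u$, so concavity holds no matter how nonconvex $f$ and $h$ are.

```python
# Sketch: g(u) = min_x {f(x) + u h(x)} over a finite grid of x values is an
# exact pointwise minimum of functions affine in u, hence concave in u.
import numpy as np

xs = np.linspace(-6.0, 6.0, 2001)
fvals = xs**4 - 5.0 * np.cos(7.0 * xs)   # a deliberately nonconvex objective
hvals = xs - 1.0                         # constraint x <= 1

def g(u):
    return np.min(fvals + u * hvals)

rng = np.random.default_rng(1)
for _ in range(100):
    u1, u2 = rng.random(2) * 10.0
    lhs = g(0.5 * (u1 + u2))
    rhs = 0.5 * g(u1) + 0.5 * g(u2)
    assert lhs >= rhs - 1e-9             # midpoint concavity holds
```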

13 Example: nonconvex quartic minimization

Define $f(x) = x^4 - 50 x^2 + 100 x$ (nonconvex), and minimize subject to the constraint $x \geq -4.5$.

(Figure: primal objective $f$ over $x$, and dual function $g$ over $u$.)

The dual function $g$ can be derived explicitly, via the closed-form equation for the roots of a cubic equation.

14 The form of $g$ is quite complicated. Minimizing $L(x, u) = x^4 - 50 x^2 + (100 - u) x - 4.5 u$ over $x$ means finding the stationary points, i.e., the roots of the cubic $4 x^3 - 100 x + (100 - u) = 0$. Hence
$$g(u) = \min_{i = 1, 2, 3} \left\{ F_i(u)^4 - 50 F_i(u)^2 + (100 - u) F_i(u) \right\} - 4.5 u$$
where, for $i = 1, 2, 3$, the $F_i(u)$ are the roots of that cubic, given by Cardano's formula:
$$F_i(u) = -\frac{1}{12} \left( a_i C(u) + \frac{1200}{a_i C(u)} \right), \quad C(u) = \left( \frac{432 (100 - u) + \left( 432^2 (100 - u)^2 - 4 \cdot 1200^3 \right)^{1/2}}{2} \right)^{1/3}$$
and $a_1 = 1$, $a_2 = (-1 + i \sqrt{3})/2$, $a_3 = (-1 - i \sqrt{3})/2$ (the minimum is taken over the real roots).

Without the context of duality it would be difficult to tell whether or not $g$ is concave... but we know it must be!
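
A numerical version of this construction (my own sketch, using numpy's polynomial root finder instead of Cardano's formula) confirms both weak duality and the concavity we are promised:

```python
# Sketch: compute g(u) numerically with np.roots.
# Stationarity of L(x, u) = x^4 - 50x^2 + (100 - u)x - 4.5u in x gives the
# cubic 4x^3 - 100x + (100 - u) = 0; the minimum of L is at one of its real roots.
import numpy as np

def g(u):
    roots = np.roots([4.0, 0.0, -100.0, 100.0 - u])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.min(real**4 - 50.0 * real**2 + (100.0 - u) * real) - 4.5 * u

us = np.linspace(0.0, 200.0, 401)
gs = np.array([g(u) for u in us])

fstar = (-4.5)**4 - 50.0 * (-4.5)**2 + 100.0 * (-4.5)   # optimum is at x = -4.5
assert gs.max() <= fstar + 1e-6          # weak duality: every g(u) <= f*
assert np.all(np.diff(gs, 2) <= 1e-6)    # nonpositive second differences: concave
```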

15 Strong duality

Recall that we always have $f^\star \geq g^\star$ (weak duality). On the other hand, in some problems we have observed that actually
$$f^\star = g^\star$$
which is called strong duality.

Slater's condition: if the primal is a convex problem (i.e., $f$ and $h_1, \dots, h_m$ are convex, $l_1, \dots, l_r$ are affine), and there exists at least one strictly feasible $x \in \mathbb{R}^n$, meaning
$$h_1(x) < 0, \dots, h_m(x) < 0 \quad \text{and} \quad l_1(x) = 0, \dots, l_r(x) = 0$$
then strong duality holds.

This is a pretty weak condition. (Further refinement: we only require strict inequalities over the functions $h_i$ that are not affine.)

16 LPs: back to where we started

For linear programs:
- Easy to check that the dual of the dual LP is the primal LP
- Refined version of Slater's condition: strong duality holds for an LP if it is feasible
- Applying the same logic to its dual LP: strong duality holds if the dual is feasible

Hence strong duality holds for LPs, except when both the primal and the dual are infeasible. In other words, we nearly always have strong duality for LPs.

17 Example: support vector machine dual

Given $y \in \{-1, 1\}^n$ and $X \in \mathbb{R}^{n \times p}$ with rows $x_1, \dots, x_n$, recall the support vector machine problem:
$$\min_{\beta, \beta_0, \xi} \; \frac{1}{2} \|\beta\|_2^2 + C \sum_{i=1}^n \xi_i \quad \text{subject to} \quad \xi_i \geq 0, \;\; y_i (x_i^T \beta + \beta_0) \geq 1 - \xi_i, \; i = 1, \dots, n$$
Introducing dual variables $v, w \geq 0$, we form the Lagrangian:
$$L(\beta, \beta_0, \xi, v, w) = \frac{1}{2} \|\beta\|_2^2 + C \sum_{i=1}^n \xi_i - \sum_{i=1}^n v_i \xi_i + \sum_{i=1}^n w_i \left( 1 - \xi_i - y_i (x_i^T \beta + \beta_0) \right)$$

18 Minimizing over $\beta, \beta_0, \xi$ gives the Lagrange dual function:
$$g(v, w) = \begin{cases} -\frac{1}{2} w^T \tilde{X} \tilde{X}^T w + 1^T w & \text{if } w = C 1 - v, \; w^T y = 0 \\ -\infty & \text{otherwise} \end{cases}$$
where $\tilde{X} = \mathrm{diag}(y) X$. Thus the SVM dual problem, eliminating the slack variable $v$, becomes
$$\max_w \; -\frac{1}{2} w^T \tilde{X} \tilde{X}^T w + 1^T w \quad \text{subject to} \quad 0 \leq w \leq C 1, \; w^T y = 0$$
Check: Slater's condition is satisfied, and we have strong duality. Further, from the study of SVMs, you might recall that at optimality
$$\beta = \tilde{X}^T w$$
This is not a coincidence, as we'll see later via the KKT conditions.
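
Both claims can be verified numerically (a sketch on synthetic data; cvxpy and the data generation are my own, not part of the slides):

```python
# Sketch: solve SVM primal and dual with cvxpy, then check strong duality
# and the relation beta = Xtilde' w.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p, C = 40, 3, 1.0
X = rng.standard_normal((n, p))
y = np.sign(X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n))

beta, beta0, xi = cp.Variable(p), cp.Variable(), cp.Variable(n)
primal = cp.Problem(
    cp.Minimize(0.5 * cp.sum_squares(beta) + C * cp.sum(xi)),
    [xi >= 0, cp.multiply(y, X @ beta + beta0) >= 1 - xi])
primal.solve()

Xt = y[:, None] * X                      # Xtilde = diag(y) X
w = cp.Variable(n)
dual = cp.Problem(
    cp.Maximize(-0.5 * cp.sum_squares(Xt.T @ w) + cp.sum(w)),
    [w >= 0, w <= C, w @ y == 0])
dual.solve()

print(primal.value - dual.value)                      # ~0: strong duality
print(np.linalg.norm(beta.value - Xt.T @ w.value))    # ~0: beta = Xtilde' w
```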

19 Duality gap

Given primal feasible $x$ and dual feasible $u, v$, the quantity
$$f(x) - g(u, v)$$
is called the duality gap between $x$ and $u, v$. Note that
$$f(x) - f^\star \leq f(x) - g(u, v)$$
so if the duality gap is zero, then $x$ is primal optimal (and similarly, $u, v$ are dual optimal).

From an algorithmic viewpoint, this provides a stopping criterion: if $f(x) - g(u, v) \leq \epsilon$, then we are guaranteed that $f(x) - f^\star \leq \epsilon$. Very useful, especially in conjunction with iterative methods... more dual uses in coming lectures.
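
One way this stopping rule looks in practice (a sketch on a toy problem of my own choosing, where the dual function has the simple closed form $g(u) = a^T u - \frac{1}{2}\|u\|^2$):

```python
# Sketch: dual projected gradient ascent on a toy problem,
#   min_x 1/2 ||x - a||^2  subject to  x <= 0,
# stopping as soon as the duality gap certifies eps-suboptimality.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(10)

f = lambda x: 0.5 * np.sum((x - a) ** 2)
g = lambda u: a @ u - 0.5 * np.sum(u ** 2)   # dual function; minimizer is x = a - u

u, step, eps = np.zeros_like(a), 0.5, 1e-8
while True:
    x = np.minimum(a - u, 0.0)               # a primal feasible point built from u
    if f(x) - g(u) <= eps:                   # gap <= eps guarantees f(x) - f* <= eps
        break
    u = np.maximum(u + step * (a - u), 0.0)  # projected gradient ascent on g
```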

20 Dual norms

Let $\|x\|$ be a norm, e.g.,
- the $\ell_p$ norm: $\|x\|_p = \left( \sum_{i=1}^n |x_i|^p \right)^{1/p}$, for $p \geq 1$
- the trace norm: $\|X\|_{\mathrm{tr}} = \sum_{i=1}^r \sigma_i(X)$

We define its dual norm $\|x\|_*$ as
$$\|x\|_* = \max_{\|z\| \leq 1} z^T x$$
This gives us the inequality $|z^T x| \leq \|z\| \|x\|_*$, like Cauchy-Schwarz. Back to our examples:
- $\ell_p$ norm dual: $(\|x\|_p)_* = \|x\|_q$, where $1/p + 1/q = 1$
- trace norm dual: $(\|X\|_{\mathrm{tr}})_* = \|X\|_{\mathrm{op}} = \sigma_{\max}(X)$

Dual norm of the dual norm: it turns out that $\|x\|_{**} = \|x\|$... we'll see connections to duality (including this one) in coming lectures.
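
The $\ell_p$ claim is easy to check numerically (a sketch using cvxpy to carry out the maximization in the definition; not from the slides):

```python
# Sketch: check (||.||_p)* = ||.||_q numerically, 1/p + 1/q = 1, by maximizing
# z'x over the unit p-norm ball.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
p, q = 3.0, 1.5                              # conjugate exponents: 1/3 + 1/1.5 = 1

z = cp.Variable(8)
val = cp.Problem(cp.Maximize(z @ x), [cp.norm(z, p) <= 1]).solve()
print(val, np.linalg.norm(x, q))             # agree up to solver tolerance
```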

21 References

- S. Boyd and L. Vandenberghe (2004), Convex Optimization, Chapter 5
- R. T. Rockafellar (1970), Convex Analysis, Chapters 28-30
