Numerical Optimization

1 Numerical Optimization
Unit 8: Quadratic Programming, Active Set Method, and Sequential Quadratic Programming
Che-Rung Lee
Scribe:
April 27, 2011

2 Quadratic programming

The general form:

\[
\begin{aligned}
\min_{x}\quad & g(x) = \tfrac{1}{2}\, x^T G x + x^T c \\
\text{s.t.}\quad & a_i^T x = b_i, \quad i \in \mathcal{E} \\
& a_i^T x \ge b_i, \quad i \in \mathcal{I}
\end{aligned}
\]

The Lagrangian:

\[
L(x, \lambda) = \tfrac{1}{2}\, x^T G x + x^T c - \lambda^T (A x - b),
\qquad
A = \begin{pmatrix} a_1^T \\ a_2^T \\ \vdots \\ a_m^T \end{pmatrix} \in \mathbb{R}^{m \times n}
\quad \text{(assuming } m \le n\text{)}.
\]

3 KKT condition

The KKT conditions:

\[
\begin{aligned}
\nabla_x L(x^*, \lambda^*) &= 0 \\
a_i^T x^* &= b_i, && i \in \mathcal{E} \\
a_i^T x^* &\ge b_i, && i \in \mathcal{I} \\
\lambda_i^* &\ge 0, && i \in \mathcal{I} \\
\lambda_i^* \,(a_i^T x^* - b_i) &= 0, && i \in \mathcal{I}
\end{aligned}
\]

If G is positive definite and (x*, λ*) satisfy the KKT conditions, then x* is the global solution of the optimization problem. (Homework)
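The conditions above translate directly into a numerical test. The following is a minimal sketch, assuming numpy; the function name and tolerance handling are illustrative, not part of the lecture:

```python
import numpy as np

def kkt_satisfied(G, c, A, b, eq_idx, ineq_idx, x, lam, tol=1e-8):
    """Numerically check the KKT conditions for
    min 1/2 x^T G x + x^T c  s.t.  a_i^T x = b_i (i in eq_idx),
                                   a_i^T x >= b_i (i in ineq_idx)."""
    stationarity = G @ x + c - A.T @ lam       # grad_x L(x, lam) = 0
    r = A @ x - b                              # constraint residuals a_i^T x - b_i
    ok = np.linalg.norm(stationarity) < tol
    ok &= np.all(np.abs(r[eq_idx]) < tol)      # equality feasibility
    ok &= np.all(r[ineq_idx] > -tol)           # inequality feasibility
    ok &= np.all(lam[ineq_idx] > -tol)         # dual feasibility: lam_i >= 0
    ok &= np.all(np.abs(lam[ineq_idx] * r[ineq_idx]) < tol)  # complementarity
    return bool(ok)
```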

4 KKT Matrix

Let us first consider the equality constraints only:

\[
\nabla L(x, \lambda) = 0
\quad\Longrightarrow\quad
G x - A^T \lambda = -c, \qquad A x = b,
\]

which in matrix form is

\[
\begin{bmatrix} G & -A^T \\ A & 0 \end{bmatrix}
\begin{bmatrix} x \\ \lambda \end{bmatrix}
=
\begin{bmatrix} -c \\ b \end{bmatrix}.
\tag{1}
\]

The coefficient matrix in (1) is called the KKT matrix. If A has full row rank and the reduced Hessian Z^T G Z is positive definite, where the columns of Z span the null space of A, then the KKT matrix is nonsingular. (Homework)

If there are only equality constraints, solving (1) directly gives the optimal solution.
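As a concrete illustration, the following sketch assembles and solves system (1) with plain numpy; the helper name is mine, and the example constraint x_2 = 0 anticipates the active set example later in this unit:

```python
import numpy as np

def solve_eq_qp(G, c, A, b):
    """Solve min 1/2 x^T G x + x^T c  s.t.  A x = b  via the KKT system (1)."""
    n, m = G.shape[0], A.shape[0]
    K = np.block([[G, -A.T],
                  [A, np.zeros((m, m))]])     # KKT matrix of system (1)
    rhs = np.concatenate([-c, b])
    sol = np.linalg.solve(K, rhs)             # nonsingular under the stated conditions
    return sol[:n], sol[n:]                   # x, lambda

# min (x1-1)^2 + (x2-2.5)^2  s.t.  x2 = 0  (working set {(5)} of the later example)
G = 2 * np.eye(2); c = np.array([-2.0, -5.0])
A = np.array([[0.0, 1.0]]); b = np.array([0.0])
x, lam = solve_eq_qp(G, c, A, b)              # x = (1, 0), lambda = -5
```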

5 Variable elimination 1/2

A general strategy for linear equality constraints is variable elimination.

Let Ax = b be the linear equality constraints, with A ∈ R^{m×n}, x ∈ R^n, and b ∈ R^m; we assume m < n. We choose m linearly independent columns of A to define the basic variables and use them to solve the constraints; the remaining variables are the nonbasic variables (setting x_N = 0 yields one particular solution). Let

\[
AP = [\,B \;\; N\,],
\qquad
\begin{pmatrix} x_B \\ x_N \end{pmatrix} = P^T x,
\]

where P is a permutation matrix. Then

\[
b = A x = A P P^T x = B x_B + N x_N.
\]

6 Variable elimination 2/2

Therefore,

\[
x_B = B^{-1} b - B^{-1} N x_N.
\]

The original constrained problem becomes an unconstrained problem:

\[
\min_{x}\; f(x) \;\;\text{s.t.}\;\; A x = b
\quad=\quad
\min_{x_N}\; f\!\left( P \begin{pmatrix} B^{-1} b - B^{-1} N x_N \\ x_N \end{pmatrix} \right).
\]

For nonlinear equality constraints, variable elimination may not be feasible. For example, in

\[
\min_{x,y}\; x^2 + y^2 \quad \text{s.t.} \quad (x-1)^3 = y^2,
\]

the solution is at (x, y) = (1, 0). Eliminating y via y² = (x−1)³ turns the problem into min_x x² + (x−1)³, which is unbounded below.

For inequality constraints, we can use the active set method.
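For the linear case, the elimination takes only a few lines. A minimal numpy sketch, assuming the chosen basic columns form a nonsingular matrix B (the function name is illustrative):

```python
import numpy as np

def eliminate(A, b, basic):
    """Given A x = b with A of size m x n (m < n), return x_p and Z such that
    every solution is x = x_p + Z x_N: x_p is a particular solution and the
    columns of Z span the null space of A."""
    m, n = A.shape
    nonbasic = [j for j in range(n) if j not in basic]
    B, N = A[:, basic], A[:, nonbasic]
    x_p = np.zeros(n)
    x_p[basic] = np.linalg.solve(B, b)        # x_B = B^{-1} b with x_N = 0
    Z = np.zeros((n, n - m))
    Z[nonbasic] = np.eye(n - m)               # x_N enters unchanged
    Z[basic] = -np.linalg.solve(B, N)         # the -B^{-1} N x_N part of x_B
    return x_p, Z

# For f(x) = 1/2 x^T G x + x^T c, the reduced unconstrained problem in x_N is
#   min 1/2 x_N^T (Z^T G Z) x_N + (Z^T (G x_p + c))^T x_N + const.
```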

7 Active set method

The active set method solves constrained optimization problems by searching for solutions within the feasible set. If the constraints are linear and one can guess the active constraints at the optimal solution, then one can use those active constraints to reduce the number of unknowns, and then apply algorithms for unconstrained optimization problems.

Problem: how to guess the set of active constraints.

Linear programming (the simplex method) is an active set method.

8 Active set method for convex QP

Consider the following example:

\[
\begin{aligned}
\min_{x}\quad & g(x) = (x_1 - 1)^2 + (x_2 - 2.5)^2 \\
\text{s.t.}\quad & x_1 - 2x_2 + 2 \ge 0 && (1) \\
& -x_1 - 2x_2 + 6 \ge 0 && (2) \\
& -x_1 + 2x_2 + 2 \ge 0 && (3) \\
& x_1 \ge 0, \;\; x_2 \ge 0 && (4),(5)
\end{aligned}
\]

Initial step: x_0 = (2, 0)^T, with working set W_0 = {(3), (5)}.

(The slide shows a plot of the feasible region bounded by constraints (1)-(5).)

9 Solve the EQP

Expanding the objective,

\[
g(x) = (x_1 - 1)^2 + (x_2 - 2.5)^2
= \tfrac{1}{2}\, x^T \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} x
+ \begin{pmatrix} -2 \\ -5 \end{pmatrix}^T x + \tfrac{29}{4},
\]

so the EQP for the working set W_0 = {(3), (5)} is

\[
\min_x\; \tfrac{1}{2}\, x^T \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} x + \begin{pmatrix} -2 \\ -5 \end{pmatrix}^T x + \tfrac{29}{4}
\quad\text{s.t.}\quad
\begin{pmatrix} -1 & 2 \\ 0 & 1 \end{pmatrix} x = \begin{pmatrix} -2 \\ 0 \end{pmatrix}.
\]

Since it has equality constraints only, we solve the QP through the KKT system (1):

\[
K = \begin{pmatrix} 2 & 0 & 1 & 0 \\ 0 & 2 & -2 & -1 \\ -1 & 2 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix},
\qquad
K \begin{pmatrix} x \\ \lambda \end{pmatrix} = \begin{pmatrix} 2 \\ 5 \\ -2 \\ 0 \end{pmatrix}.
\]

10 Optimality check

The solution of the KKT system is

\[
x_1 = \begin{pmatrix} 2 \\ 0 \end{pmatrix},
\qquad
\lambda_1 = \begin{pmatrix} -2 \\ -1 \end{pmatrix}.
\]

Both Lagrange multipliers are negative, so this is not the optimal solution. Remove one of the constraints, W_1 = {(5)}, and solve the KKT system again:

\[
x_2 = \begin{pmatrix} 1 \\ 0 \end{pmatrix},
\qquad
\lambda_2 = -5.
\]

Let p_1 = x_2 − x_1 = (−1, 0)^T be the search direction, and search for α_1 ∈ [0, 1] such that x = x_1 + α_1 p_1 is feasible.

11 Feasibility check

The feasibility check: x_1 + α_1 p_1 = (2 − α_1, 0)^T is feasible for every α_1 ∈ [0, 1], so α_1 = 1. Move to x_2 = (1, 0)^T.

But λ_2 = −5 < 0, so this is not the optimal solution. Remove one more constraint: W_2 = ∅. The unconstrained minimizer is

\[
x_3 = \begin{pmatrix} 1 \\ 2.5 \end{pmatrix}.
\]

Let p_2 = x_3 − x_2 = (0, 2.5)^T be the search direction, and let

\[
x = x_2 + \alpha_2 p_2 = \begin{pmatrix} 1 \\ 2.5\,\alpha_2 \end{pmatrix},
\qquad \alpha_2 \in [0, 1].
\]

12 Adding new constraints

For α_2 = 1, constraint (1) would be violated:

\[
x_1 - 2x_2 + 2 = 1 - 5\alpha_2 + 2 \ge 0
\quad\Longrightarrow\quad
\alpha_2 \le 0.6.
\]

Move to x_3 = (1, 1.5)^T and add (1) to the working set: W_3 = {(1)}. Solving the KKT system gives

\[
\begin{pmatrix} x_4 \\ \lambda_4 \end{pmatrix} = \begin{pmatrix} 1.4 \\ 1.7 \\ 0.8 \end{pmatrix},
\qquad\text{i.e.}\qquad
x_4 = \begin{pmatrix} 1.4 \\ 1.7 \end{pmatrix}, \quad \lambda_4 = 0.8.
\]

Since λ ≥ 0 and all constraints are satisfied, x_4 is the optimal solution.
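As a sanity check on the worked example, one can hand the same QP to a generic solver. A sketch assuming scipy is available (SLSQP is a general-purpose method, not the lecture's active set iteration):

```python
import numpy as np
from scipy.optimize import minimize

# min (x1-1)^2 + (x2-2.5)^2 subject to constraints (1)-(5) of the example
obj = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
cons = [{'type': 'ineq', 'fun': lambda x, a=a, b=b: a @ x + b}
        for a, b in [(np.array([ 1.0, -2.0]), 2.0),    # (1)
                     (np.array([-1.0, -2.0]), 6.0),    # (2)
                     (np.array([-1.0,  2.0]), 2.0),    # (3)
                     (np.array([ 1.0,  0.0]), 0.0),    # (4)
                     (np.array([ 0.0,  1.0]), 0.0)]]   # (5)
res = minimize(obj, x0=np.array([2.0, 0.0]), constraints=cons)
print(res.x)   # approximately (1.4, 1.7), matching the active set iteration
```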

13 Gradient projection method

A special case of inequality constraints is bound constraints:

\[
\min_x\; q(x) = \tfrac{1}{2}\, x^T G x + x^T c
\quad\text{s.t.}\quad l \le x \le u
\]

(which means l_i ≤ x_i ≤ u_i for all i). Such problems can be solved by the gradient projection method.

Algorithm: Gradient projection method
1. Given x_0.
2. For k = 0, 1, 2, ... until convergence:
   (a) Find a search direction g.
   (b) Construct the piecewise linear path x(t) = P(x + t g, l, u).
   (c) On each line segment of x(t), find the optimal solution x_c.
   (d) Use x_c as an initial guess to solve
       min_x q(x)
       s.t. x_i = (x_c)_i for i ∈ A(x_c), and l_i ≤ x_i ≤ u_i for i ∉ A(x_c).

14 In the algorithm 1/2

For 2(a), the search direction can be any descent direction, such as −∇q.

For 2(b), the piecewise linear function is computed componentwise as

\[
P(x, l, u)_i =
\begin{cases}
l_i & \text{if } x_i < l_i, \\
x_i & \text{if } x_i \in [l_i, u_i], \\
u_i & \text{if } x_i > u_i.
\end{cases}
\]

For each component x_i, compute the breakpoint

\[
\bar t_i =
\begin{cases}
(l_i - x_i)/g_i & \text{if } g_i < 0 \text{ and } l_i > -\infty, \\
(u_i - x_i)/g_i & \text{if } g_i > 0 \text{ and } u_i < +\infty, \\
\infty & \text{otherwise.}
\end{cases}
\]

Sort \{\bar t_1, \bar t_2, \ldots, \bar t_n\} to get 0 = t_0 < t_1 < t_2 < \cdots < t_{m-1} < t_m = \infty. On each interval [t_{j-1}, t_j],

\[
x(t) = x(t_{j-1}) + (t - t_{j-1})\, p^{\,j-1},
\qquad
p_i^{\,j-1} =
\begin{cases}
g_i & \text{if } t_{j-1} < \bar t_i, \\
0 & \text{otherwise.}
\end{cases}
\]
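Both the projection and the breakpoints are cheap to compute. A minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def project(x, l, u):
    """Componentwise projection P(x, l, u) onto the box [l, u]."""
    return np.clip(x, l, u)

def breakpoints(x, g, l, u):
    """Breakpoints t_i at which the path x(t) = P(x + t g, l, u) bends."""
    t = np.full(len(x), np.inf)
    neg, pos = g < 0, g > 0
    t[neg] = (l[neg] - x[neg]) / g[neg]   # component i reaches its lower bound
    t[pos] = (u[pos] - x[pos]) / g[pos]   # component i reaches its upper bound
    return t   # entries stay infinite when the bound is -inf/+inf, as required
```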

15 In the algorithm 2/2

For 2(c), we search for the optimal solution segment by segment. On [t_{j-1}, t_j], let Δt = t − t_{j-1} and x(Δt) = x(t_{j-1}) + Δt p^{j-1}. Then

\[
q(x(\Delta t)) = c^T \big(x(t_{j-1}) + \Delta t\, p^{\,j-1}\big)
+ \tfrac{1}{2} \big(x(t_{j-1}) + \Delta t\, p^{\,j-1}\big)^T G \big(x(t_{j-1}) + \Delta t\, p^{\,j-1}\big)
= \tfrac{1}{2}\, a\, \Delta t^2 + b\, \Delta t + \text{const},
\]

where a = (p^{j-1})^T G p^{j-1} and b = (p^{j-1})^T (G x(t_{j-1}) + c). The minimizer is Δt* = −b/a (when a > 0), and t* = t_{j-1} + Δt*.
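Step 2(c) on one segment then reduces to minimizing this one-dimensional quadratic. A sketch under the same assumptions as above:

```python
import numpy as np

def segment_min(G, c, x_seg, p, t_lo, t_hi):
    """Minimize q(x) = 1/2 x^T G x + x^T c along x(t) = x_seg + (t - t_lo) p
    for t in [t_lo, t_hi]; returns the minimizer and its parameter t*."""
    a = p @ G @ p                                # quadratic coefficient a(p)
    b = p @ (G @ x_seg + c)                      # linear coefficient b(p)
    if a > 0:
        dt = np.clip(-b / a, 0.0, t_hi - t_lo)   # interior minimizer, clipped
    else:
        dt = 0.0 if b >= 0 else t_hi - t_lo      # degenerate case: an endpoint
    return x_seg + dt * p, t_lo + dt
```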

16 Sequential quadratic programming

Recall Newton's method for unconstrained problems: it builds a quadratic model at each x_k and solves that quadratic problem at every step. SQP uses a similar idea: it builds a QP at each step. With f : R^n → R and c : R^n → R^m, consider

\[
\min_x\; f(x) \quad \text{s.t.} \quad c(x) = 0.
\]

Let A(x) be the Jacobian of c(x):

\[
A(x) = \begin{pmatrix} \nabla c_1 & \nabla c_2 & \cdots & \nabla c_m \end{pmatrix}^T.
\]

The Lagrangian: L(x, λ) = f(x) − λ^T c(x).

The KKT conditions: ∇f(x*) − A(x*)^T λ* = 0 and c(x*) = 0.

17 Newton's method

Let

\[
F(x, \lambda) = \begin{bmatrix} \nabla_x L(x, \lambda) \\ c(x) \end{bmatrix}
= \begin{bmatrix} \nabla f(x) - A(x)^T \lambda \\ c(x) \end{bmatrix}.
\]

The optimal solution (x*, λ*) must satisfy the KKT condition F(x*, λ*) = 0, so we use Newton's method to solve F = 0. The Jacobian is

\[
F'(x, \lambda) = \begin{bmatrix} \nabla_{xx}^2 L & -A(x)^T \\ A(x) & 0 \end{bmatrix}.
\]

The Newton step is

\[
\begin{bmatrix} x_{k+1} \\ \lambda_{k+1} \end{bmatrix}
= \begin{bmatrix} x_k \\ \lambda_k \end{bmatrix}
+ \begin{bmatrix} p_k \\ l_k \end{bmatrix},
\qquad\text{where}\qquad
\begin{bmatrix} \nabla_{xx}^2 L & -A_k^T \\ A_k & 0 \end{bmatrix}
\begin{bmatrix} p_k \\ l_k \end{bmatrix}
= \begin{bmatrix} -\nabla f_k + A_k^T \lambda_k \\ -c_k \end{bmatrix}.
\tag{2}
\]

(We write A_k = A(x_k), ∇f_k = ∇f(x_k), and c_k = c(x_k).)

18 Alternative formulation

To simplify (2), examine its first block equation:

\[
\nabla_{xx}^2 L\, p_k - A_k^T l_k = -\nabla f_k + A_k^T \lambda_k.
\]

Since A_k^T (λ_k + l_k) = A_k^T λ_{k+1}, we can rewrite (2) as

\[
\begin{bmatrix} \nabla_{xx}^2 L & -A_k^T \\ A_k & 0 \end{bmatrix}
\begin{bmatrix} p_k \\ \lambda_{k+1} \end{bmatrix}
= \begin{bmatrix} -\nabla f_k \\ -c_k \end{bmatrix}.
\tag{3}
\]

If A_k has full row rank and Z^T ∇²_{xx}L Z is positive definite, where the columns of Z span the null space of A_k, then equation (3) solves the following QP:

\[
\min_p\; \tfrac{1}{2}\, p^T \nabla_{xx}^2 L\, p + \nabla f_k^T p
\quad\text{s.t.}\quad A_k p + c_k = 0.
\]

For inequality constraints, we can use a similar technique: A_k p + c_k ≥ 0.
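To make the step concrete, the following sketch applies the Newton-KKT system (3) to a toy problem of my own choosing (not from the lecture): min x_1 + x_2 subject to x_1² + x_2² = 2, whose solution is (−1, −1) with λ* = −1/2:

```python
import numpy as np

def sqp_step(x, lam):
    """One iteration of system (3) for min x1 + x2  s.t.  x1^2 + x2^2 - 2 = 0."""
    grad_f = np.array([1.0, 1.0])
    c = np.array([x[0]**2 + x[1]**2 - 2.0])
    A = np.array([[2 * x[0], 2 * x[1]]])      # Jacobian of c
    H = -2 * lam[0] * np.eye(2)               # Hessian of L = f - lam c (f is linear)
    K = np.block([[H, -A.T], [A, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.concatenate([-grad_f, -c]))
    return x + sol[:2], sol[2:]               # x_{k+1} and lambda_{k+1}

x, lam = np.array([-1.5, -0.5]), np.array([-1.0])
for _ in range(8):
    x, lam = sqp_step(x, lam)
print(x, lam)   # converges to (-1, -1) with lambda = -0.5
```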

19 The sequential quadratic programming

Algorithm: Sequential quadratic programming
1. Given x_0.
2. For k = 0, 1, 2, ... until convergence:
   (a) Solve
       \[
       \min_{p_k}\; \tfrac{1}{2}\, p_k^T \nabla_{xx}^2 L\, p_k + \nabla f_k^T p_k
       \quad\text{s.t.}\quad
       \nabla c_i(x_k)^T p_k + c_i(x_k) = 0,\ i \in \mathcal{E};
       \qquad
       \nabla c_i(x_k)^T p_k + c_i(x_k) \ge 0,\ i \in \mathcal{I}.
       \]
   (b) Set x_{k+1} = x_k + p_k.

20 Problem of SQP

For inequality constraints, linearization may produce an inconsistent subproblem. For example, consider the constraints

\[
c_1: \; x - 1 \le 0, \qquad c_2: \; x^2 - 4 \ge 0.
\]

The feasible region is x ≤ −2. The linearization of c_1 and c_2 at x = 1 becomes

\[
\nabla c_1^T p + c_1(x) \le 0 \;\Longrightarrow\; p \le 0,
\qquad
\nabla c_2^T p + c_2(x) \ge 0 \;\Longrightarrow\; 2p - 3 \ge 0,
\]

for which the feasible region is empty.
