Optimization: Unconstrained with a function of a single variable; Constrained with equality constraints; Constrained with inequality constraints


Optimization: Unconstrained; Constrained with equality constraints; Constrained with inequality constraints.

Unconstrained optimization with a function of a single variable

Given a real-valued function y = f(x), we will be concerned with the existence of extreme values of the dependent variable y and with the values of x which generate these extrema. f(x) is called the objective function and the independent variable x is called the choice variable. In order to avoid boundary optima, we will assume that f : X → R, where X is an open interval of R.

Local and global extrema

Def 1: f has a local maximum at a point x^o ∈ X if there exists a neighborhood N(x^o) such that f(x) − f(x^o) < 0 for all x ∈ N(x^o), x ≠ x^o.

Def 2: f has a local minimum at a point x^o ∈ X if there exists a neighborhood N(x^o) such that f(x) − f(x^o) > 0 for all x ∈ N(x^o), x ≠ x^o.

Def 3: A real-valued function f(x), f : X → R, has an absolute or global maximum (minimum) at a point x^o ∈ X if f(x) < f(x^o) (f(x) > f(x^o)) for all x ≠ x^o such that x ∈ X.

Illustrations: global.

Illustration: local.

Optimization and the First Derivative Test

Proposition 1: If f has a local maximum or minimum at x^o ∈ X, then f′(x^o) = 0. Points at which f′ = 0 are called critical points. The image f(x^o) is called a critical value of the function, and x^o is called a critical value of the independent variable. To distinguish maxima from minima we must study the curvature of the function.

Curvature

In a neighborhood of a maximum, a function has the property that it is locally concave, and in a neighborhood of a minimum it is locally convex. Concavity is implied by a nonpositive second derivative, and convexity is implied by a nonnegative second derivative.

Some definitions

Def 1: A set X ⊆ R^n is convex if for any x, x′ ∈ X and any α ∈ [0, 1], αx + (1 − α)x′ ∈ X.

Def 2: A real-valued function f(x), f : X → R, X ⊆ R (X convex), is concave (convex) if for any x, x′ ∈ X and any α ∈ [0, 1], f(αx + (1 − α)x′) ≥ (≤) αf(x) + (1 − α)f(x′).

Def 3: A real-valued function f(x), f : X → R, X ⊆ R (X convex), is strictly concave (strictly convex) if for any x ≠ x′, x, x′ ∈ X, and any α ∈ (0, 1), f(αx + (1 − α)x′) > (<) αf(x) + (1 − α)f(x′).

A result

Theorem 1: Let f(x), f : X → R, where X is an open interval of R, be differentiable over X. Then (i) f is strictly concave (strictly convex) if and only if f′(x) is strictly decreasing (increasing) over X; (ii) if f″ exists and is negative (positive) for all x, then f is strictly concave (convex).

Functions of n variables

The definition of strict concavity for a function of n variables is analogous to the definition above. In this case f : X → R, where X is a subset of R^n. We have

Def 4: A real-valued function f(x), f : X → R, X ⊆ R^n (X convex), is strictly concave (convex) if for any x ≠ x′, x, x′ ∈ X, and any α ∈ (0, 1), f(αx + (1 − α)x′) > (<) αf(x) + (1 − α)f(x′).

Derivative characterization

Theorem 2: Let f : X → R, X ⊆ R^n (X convex), and suppose that f is twice continuously differentiable on X. If d²f = dx′ H dx is negative (positive) definite for all x ∈ X, then f is strictly concave (convex). If d²f is negative (positive) semidefinite for all x ∈ X, then f is concave (convex).

A sufficiency result

Proposition 2: Let f be twice differentiable and let there exist an x^o ∈ X such that f′(x^o) = 0. (i) If f″(x^o) < 0, then f has a local maximum at x^o. If, in addition, f″ < 0 for all x, or if f is strictly concave, then the local maximum is a unique global maximum. (ii) If f″(x^o) > 0, then f has a local minimum at x^o. If, in addition, f″ > 0 for all x, or if f is strictly convex, then the local minimum is a unique global minimum.

Remark: terminology

The zero-derivative condition is called the first order condition (FOC), and the second-derivative condition is called the second order condition (SOC).

Examples

#1 Let f(x) = x + x^(−1). Then f′(x) = 1 − x^(−2), with critical points x = 1 and x = −1. Now calculate f″(x) = 2x^(−3). At x = 1, f″(1) = 2 > 0, so x = 1 is a local min. At x = −1, f″(−1) = −2 < 0, so x = −1 is a local max.

#2 Let f = ax − bx², with a, b > 0 and x > 0. Here f′ = 0 implies x = a/2b. Moreover, f″ = −2b < 0 for all x. Thus, we have a global max.
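A minimal sketch of example #1, checking the FOC and SOC symbolically (assuming sympy is available; any CAS would do):

```python
import sympy as sp

x = sp.symbols('x')
f = x + 1/x                       # f(x) = x + x**(-1)

fp = sp.diff(f, x)                # f'(x) = 1 - x**(-2)
crit = sp.solve(sp.Eq(fp, 0), x)  # critical points: x = -1, x = 1

fpp = sp.diff(f, x, 2)            # f''(x) = 2*x**(-3)
for c in crit:
    curvature = fpp.subs(x, c)
    kind = 'local min' if curvature > 0 else 'local max'
    print(c, kind)                # -1 local max, 1 local min
```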

Existence

In the above discussion, we have assumed the existence of a maximum or a minimum. In this section, we present sufficient conditions for the existence of an extremum.

Proposition: Let f : X → R, X ⊆ R, where X is a closed, bounded interval of R. Then if f is continuous, it has a maximum and a minimum on X. If f is strictly concave, then there is a unique maximum on X. If f is strictly convex, then there is a unique minimum on X.

Remarks: This proposition includes boundary optima in its assertion of existence. To ensure the existence of interior optima, one must show that boundary points are dominated by some interior point. If the boundary points are not part of the domain of the function, then one can take the limit of the function at the boundary points and show that those limit points are dominated by some interior point.

An n-variable Generalization

A set X is an open subset of R^n if for every x ∈ X there exists a neighborhood N(x) ⊆ X. In this case N(x^o) is defined as the set of points within a distance ε of x^o:

N(x^o) ≡ \{ x : [\, \sum_{i=1}^{n} (x_i - x_i^o)^2 \,]^{1/2} < \varepsilon \}, ε > 0.

Let y = f(x), where x ∈ R^n and f : X → R, with X an open subset of R^n.

A local extremum

Def: f has a local maximum (minimum) at a point x^o ∈ X if there exists N(x^o) such that for all x ∈ N(x^o), x ≠ x^o, f(x) − f(x^o) < 0 (f(x) − f(x^o) > 0 for a minimum).

The FOC

Proposition 1: If a differentiable function f has a maximum or a minimum at x^o ∈ X, then f_i(x^o) = 0 for all i.

Remark: The n equations generated by setting each partial derivative equal to zero represent the first order conditions. If a solution exists, they may be solved for the n solution values x_i^o.

The SOC

The SOC for a max is

(*) d^2 f(x^o) = \sum_{i=1}^{n} \sum_{j=1}^{n} f_{ij}(x^o)\, dx_i\, dx_j < 0 for all (dx_1, …, dx_n) ≠ 0.

This condition says that the quadratic form \sum_i \sum_j f_{ij}(x^o)\, dx_i\, dx_j is negative definite.

SOC for a max

In (*), the discriminant is the Hessian matrix of f (the objective function). As discussed above, the rather cumbersome condition (*) is equivalent to a fairly simple sign condition:

(SOC) (max): the leading principal minors PM_i of

H = \begin{bmatrix} f_{11} & \cdots & f_{1n} \\ \vdots & & \vdots \\ f_{n1} & \cdots & f_{nn} \end{bmatrix},

evaluated at x^o, have signs (−1)^i.

The SOC for a min

The analogous condition for a minimum is that

(**) d^2 f(x^o) = \sum_{i=1}^{n} \sum_{j=1}^{n} f_{ij}(x^o)\, dx_i\, dx_j > 0 for all (dx_1, …, dx_n) ≠ 0.

The matrix condition is

(SOC) (min): the leading principal minors PM_i of H, evaluated at x^o, are all positive.
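A small numerical sketch of the sign conditions via leading principal minors (assuming numpy; the test matrix is illustrative):

```python
import numpy as np

def leading_principal_minors(H):
    """Determinants of the upper-left 1x1, 2x2, ..., nxn submatrices."""
    return [np.linalg.det(H[:i, :i]) for i in range(1, H.shape[0] + 1)]

def classify(H, tol=1e-12):
    pms = leading_principal_minors(H)
    if all(pm > tol for pm in pms):
        return 'positive definite: local min'   # PM_i > 0 for all i
    if all((-1)**i * pm > tol for i, pm in enumerate(pms, start=1)):
        return 'negative definite: local max'   # sign of PM_i is (-1)^i
    return 'indefinite or semidefinite: test inconclusive'

H_min = np.array([[2.0, 1.0], [1.0, 4.0]])  # Hessian of x^2 + xy + 2y^2
print(classify(H_min))                       # positive definite: local min
```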

SOC

If f satisfies the SOC for a maximum globally, then f is strictly concave. If it satisfies the SOC for a minimum globally, then f is strictly convex.

A sufficiency result

Proposition 2: If at a point x^o we have (i) f_i(x^o) = 0 for all i, and (ii) the SOC for a maximum (minimum) is satisfied at x^o, then x^o is a local maximum (minimum). If, in addition, the SOC is met for all x ∈ X, or if f is strictly concave (convex), then x^o is a unique global maximum (minimum).

Examples

#1 Maximizing a profit function over two strategy variables. Let profit be a function of the two variables x_i, i = 1, 2. The profit function is π(x_1, x_2) = R(x_1, x_2) − Σ_i r_i x_i, where r_i is the unit cost of x_i and R is revenue. We wish to characterize a profit-maximal choice of the x_i. The problem is written as

Max_{x_1, x_2} π(x_1, x_2).

#1 Show the FOC and SOC. Characterize ∂x_i/∂r_1 using Cramer's rule.

#1 The FOC are

π_1(x_1, x_2) = 0
π_2(x_1, x_2) = 0.

The second order conditions are π_11 < 0 and π_11 π_22 − π_12² > 0 (recall Young's theorem: π_ij = π_ji).

#1 Differentiating the FOC with respect to r_1 gives

H \begin{bmatrix} \partial x_1/\partial r_1 \\ \partial x_2/\partial r_1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix},

where H is the relevant Hessian. Using Cramer's rule,

\partial x_1/\partial r_1 = \begin{vmatrix} 1 & \pi_{12} \\ 0 & \pi_{22} \end{vmatrix} / |H| = \pi_{22}/|H| < 0.

Likewise,

\partial x_2/\partial r_1 = \begin{vmatrix} \pi_{11} & 1 \\ \pi_{21} & 0 \end{vmatrix} / |H| = -\pi_{21}/|H|.
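The same comparative statics can be reproduced symbolically; a sketch with sympy, using symbolic second derivatives π_ij (the symbol names are illustrative):

```python
import sympy as sp

p11, p12, p22 = sp.symbols('pi_11 pi_12 pi_22')
H = sp.Matrix([[p11, p12], [p12, p22]])  # Hessian; Young's theorem: pi_12 = pi_21
b = sp.Matrix([1, 0])                    # from differentiating the FOC in r_1

dx = H.solve(b)                          # [dx1/dr1, dx2/dr1]
print(sp.simplify(dx[0]))                # pi_22/(pi_11*pi_22 - pi_12**2)
print(sp.simplify(dx[1]))                # -pi_12/(pi_11*pi_22 - pi_12**2)
```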

Remark

The comparative static derivative ∂x_2/∂r_1 has a sign which depends on whether x_1 and x_2 are complements or substitutes. It is negative in the case of complements and positive in the case of substitutes.

Examples

#2 Min_{x, y} x² + xy + 2y². The FOC are 2x + y = 0 and x + 4y = 0. Solving for the critical values gives x = 0 and y = 0. Further, f_11 = 2, f_12 = 1 and f_22 = 4. The Hessian is

H = \begin{bmatrix} 2 & 1 \\ 1 & 4 \end{bmatrix},

with f_11 = 2 > 0 and |H| = 8 − 1 = 7 > 0. Thus, (0, 0) is a minimum. Further, it is global, because the Hessian sign conditions are met for any x, y.
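A numerical confirmation of example #2 (assuming scipy is available); from any starting point, the minimizer found should be (0, 0) up to solver tolerance:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda v: v[0]**2 + v[0]*v[1] + 2*v[1]**2
res = minimize(f, x0=np.array([3.0, -2.0]))
print(np.round(res.x, 6))   # [0. 0.] up to solver tolerance
print(round(res.fun, 6))    # 0.0
```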

Existence with n Choice Variables

Def 1: A set X ⊆ R^n is said to be open if for all x ∈ X there exists N(x) such that N(x) ⊆ X. The set X is said to be closed if its complement is open.

Def 2: A set X ⊆ R^n is said to be bounded if the distance between any two of its points is finite; that is, if there exists a finite M such that

[\, \sum_{i=1}^{n} (x_i - x_i')^2 \,]^{1/2} \le M for all x, x′ ∈ X.

Existence

Def 3: A set X ⊆ R^n is said to be compact if it is both closed and bounded.

Existence result

Proposition: Let f : X → R, where X is a subset of R^n. If X is compact and f is continuous, then f has a maximum and a minimum on X. If X is both compact and convex and f is strictly concave, then f has a unique maximum on X. If X is both compact and convex and f is strictly convex, then f has a unique minimum on X.

Existence

Remark: This proposition does not distinguish between boundary optima and interior optima. The results presented here can be used to show the existence of interior optima by showing that boundary optima are dominated. The technique is as described above.

Constrained Optimization

The basic problem is to maximize a function of at least two independent variables subject to a constraint. We write the objective function as f(x) and the constraint as g(x) = 0. The constraint set is written as C = {x : g(x) = 0}. f maps a subset of R^n into the real line.

Examples

Max u(x_1, x_2) subject to I − p_1 x_1 − p_2 x_2 = 0.
Min r_1 x_1 + r_2 x_2 subject to q − f(x_1, x_2) = 0.

The basic problem

The optimization problem is written as

Max_{x_1, …, x_n} f(x) subject to g(x) = 0.

A local constrained extremum

Def: x^o is a local maximum (minimum) of f(x) subject to g(x) = 0 if x^o ∈ C and there exists N(x^o) such that for all x ∈ N(x^o) ∩ C, x ≠ x^o, f(x) < (>) f(x^o).

The FOC

Proposition 1: Let f be a differentiable function whose n independent variables are restricted by the differentiable constraint g(x) = 0. Form the function L(λ, x) ≡ f(x) + λg(x), where λ is an undetermined multiplier. If x^o is an interior maximizer or minimizer of f subject to g(x) = 0, then there is a λ^o such that

(1) ∂L(λ^o, x^o)/∂x_i = 0 for all i, and
(2) ∂L(λ^o, x^o)/∂λ = 0.

The SOC

The relevant SOC for a maximum is that

(*) d^2 f(x^o) = \sum_{i=1}^{n} \sum_{j=1}^{n} f_{ij}(x^o)\, dx_i\, dx_j < 0 for all (dx_1, …, dx_n) ≠ 0 such that dg = 0.

Condition (*) says that the quadratic form d²f is negative definite subject to the constraint dg = 0. This is equivalent to d²L = d²f being negative definite subject to dg = 0, because dL = df + λ dg + g dλ = df if dg = g = 0.
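A sketch of the Lagrangian FOC on a concrete problem, solved symbolically with sympy. The objective f = x_1 x_2 and constraint g = 12 − x_1 − 2x_2 are illustrative, not from the notes:

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda')
f = x1 * x2
g = 12 - x1 - 2*x2
L = f + lam * g

foc = [sp.diff(L, v) for v in (x1, x2, lam)]  # dL/dx1, dL/dx2, dL/dlambda
sol = sp.solve(foc, [x1, x2, lam], dict=True)
print(sol)   # [{x1: 6, x2: 3, lambda: 3}]
```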

SOC

Condition (*) is equivalent to a rather convenient condition involving the bordered Hessian matrix. The bordered Hessian is given by

\bar{H}(\lambda^o, x^o) = \begin{bmatrix} 0 & g_1 & \cdots & g_n \\ g_1 & L_{11} & \cdots & L_{1n} \\ \vdots & \vdots & & \vdots \\ g_n & L_{n1} & \cdots & L_{nn} \end{bmatrix}.

Remark

The bordered Hessian is the Jacobian of the FOC system

∂L/∂λ = 0
∂L/∂x_i = 0

in the variables (λ, x_i).

SOC

The sign condition for a max is

(SOC) (max): PM_i of \bar{H} of order i ≥ 3, evaluated at (λ^o, x^o), has sign (−1)^{i+1}.

SOC

For a minimum, the second order condition is

(**) d^2 f(x^o) = \sum_{i=1}^{n} \sum_{j=1}^{n} f_{ij}(x^o)\, dx_i\, dx_j > 0 for all (dx_1, …, dx_n) ≠ 0 such that dg = 0.

The equivalent sign condition is

(SOC) (min): PM_i of \bar{H} of order i ≥ 3, evaluated at (λ^o, x^o), are all negative.
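A bordered-Hessian check at the solution of the equality-constrained sketch above (L = x_1 x_2 + λ(12 − x_1 − 2x_2), still illustrative): for n = 2 and one constraint, the order-3 principal minor must be positive for a maximum.

```python
import numpy as np

# gradients and second derivatives of L = x1*x2 + lam*(12 - x1 - 2*x2)
g_grad = np.array([-1.0, -2.0])   # (g_1, g_2)
L_xx = np.array([[0.0, 1.0],      # [L_11, L_12]
                 [1.0, 0.0]])     # [L_21, L_22]

Hbar = np.block([[np.zeros((1, 1)), g_grad[None, :]],
                 [g_grad[:, None], L_xx]])
print(np.linalg.det(Hbar))        # 4.0 > 0 -> SOC for a max holds
```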

Sufficiency Result

Proposition 2: Let f be a differentiable function whose n independent variables are restricted by the differentiable constraint g(x) = 0. Form the function L(λ, x) ≡ f(x) + λg(x), where λ is an undetermined multiplier. Let there exist an x^o and a λ^o such that (1) ∂L(λ^o, x^o)/∂x_i = 0 for all i, and (2) ∂L(λ^o, x^o)/∂λ = 0. Then x^o is a local maximum (minimum) of f(x) subject to g(x) = 0 if, in addition to (1) and (2), the SOC for a maximum (minimum) is satisfied. If the SOC is met for all x ∈ C, then x^o is a unique global maximizer (minimizer).

Curvature conditions and the SOC

In the above maximization problem, as long as the relevant constraint set is convex, the maximum will be a global maximum if the objective function is strictly quasi-concave. The latter property means that the sets {x : f(x) ≥ f(x′)} = U(x′) are strictly convex. These sets are called upper level sets. A set X in R^n is strictly convex if αx + (1 − α)x′ ∈ interior X for all α ∈ (0, 1) and all x ≠ x′ with x, x′ ∈ X.

Curvature conditions and the SOC

If the lower level sets ({x : f(x) ≤ f(x′)} = L(x′)) of a function are strictly convex sets, then the function is said to be strictly quasi-convex. When the relevant constraint set is convex and the objective function is strictly quasi-convex (strictly quasi-concave), then the minimum (maximum) is unique.

An operational check

If a function is defined on the nonnegative orthant of R^n (R^n_+ = {x ∈ R^n : x_i ≥ 0 for all i}) and is twice continuously differentiable, then there is an easy operational check for quasi-concavity or quasi-convexity. Let f(x_1, …, x_n) be such a function.

A Bordered Hessian Condition

The function f is strictly quasi-concave if

B = \begin{bmatrix} 0 & f_1 & f_2 & \cdots & f_n \\ f_1 & f_{11} & f_{12} & \cdots & f_{1n} \\ f_2 & f_{21} & f_{22} & \cdots & f_{2n} \\ \vdots & \vdots & \vdots & & \vdots \\ f_n & f_{n1} & f_{n2} & \cdots & f_{nn} \end{bmatrix}

has PM_i with sign (−1)^{i+1} for i ≥ 2. For strict quasi-convexity, the condition is that the PM_i are all negative for i ≥ 2.

A Bordered Hessian Condition

It is important to note that the condition pertains only to functions defined on R^n_+ = {x ∈ R^n : x_i ≥ 0 for all i}. The condition on B is equivalent to the statement that d²f is negative definite subject to df = 0.
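A sketch of the operational check on an illustrative Cobb-Douglas function f = √x √y over the positive orthant, using sympy; the condition requires PM_2 < 0 and PM_3 > 0:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = sp.sqrt(x) * sp.sqrt(y)

f1, f2 = sp.diff(f, x), sp.diff(f, y)
B = sp.Matrix([[0,  f1,               f2],
               [f1, sp.diff(f, x, 2), sp.diff(f, x, y)],
               [f2, sp.diff(f, x, y), sp.diff(f, y, 2)]])

PM2 = B[:2, :2].det()    # must be negative for i = 2
PM3 = B.det()            # must be positive for i = 3
print(sp.simplify(PM2))  # -y/(4*x) < 0 on the positive orthant
print(sp.simplify(PM3))  # 1/(4*sqrt(x)*sqrt(y)) > 0
```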

Example

Display the 2 × 2 condition and show that it is equivalent to an iso-surface being strictly convex.

Existence

The conditions for existence of a constrained optimum are the same as those for the unconstrained problem, except that the set X ∩ C is the relevant set for which we assume boundedness and closedness (compactness). Further, the objective function is assumed to be continuous over this set. Under these conditions, a constrained optimum will exist.

Examples

Max f(L, K) subject to C − wL − rK = 0.
Min wL + rK subject to q − f(L, K) = 0.

Extension to Multiple Constraints

Suppose that there are m < n equality constraints g^j(x) = 0, j = 1, …, m. The Lagrangian is written as

L(λ_1, …, λ_m, x_1, …, x_n) = f(x) + Σ_j λ_j g^j(x).

The FOC are that the derivatives of L in x_i and λ_j vanish:

f_i + Σ_j λ_j ∂g^j/∂x_i = 0 for all i,
g^j(x) = 0 for all j.

Multiple Constraints

The bordered Hessian becomes

\bar{H} = \begin{bmatrix} 0_{m \times m} & J_g \\ J_g' & [L_{ij}] \end{bmatrix},

where J_g is the Jacobian of the constraint system in x and [L_ij] is the Hessian of the function L in x.

Remark

The bordered Hessian is the Jacobian of the FOC system

∂L/∂λ_j = 0
∂L/∂x_i = 0

in the variables (λ_j, x_i).

SOC

For a maximum, the condition is

(SOC) (max): PM_i of \bar{H} of order i > 2m has sign (−1)^r, where r is the order of the largest square [L_ij] block embedded in PM_i.

SOC

For a minimum, the condition is

(SOC) (min): PM_i of \bar{H} of order i > 2m has sign (−1)^m.

Example presented in class.

Inequality Constrained Problems

In many problems the side constraints are best represented as inequality constraints: g^j(x) ≥ 0. We wish to characterize the FOC for this extended problem:

Max_{x} f(x) subject to g^j(x) ≥ 0, j = 1, …, m.

Inequality Constrained Problems

The first step is to form the Lagrangian

L(λ, x) = f(x) + \sum_{j=1}^{m} λ_j g^j(x).

FOC

(1) ∂L/∂x_i = f_i + \sum_{j=1}^{m} λ_j ∂g^j(x)/∂x_i = 0 for all i,
(2) ∂L/∂λ_j = g^j ≥ 0, λ_j ≥ 0 and λ_j g^j = 0, j = 1, …, m, and
(3) the Constraint Qualification holds.
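A numerical sketch of these conditions (assuming scipy; the objective and constraint data are illustrative): maximize f(x) = x_1 x_2 subject to g(x) = 24 − 2x_1 − 3x_2 ≥ 0 and x ≥ 0. Since scipy minimizes, we pass −f.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda v: -(v[0] * v[1])
cons = [{'type': 'ineq', 'fun': lambda v: 24 - 2*v[0] - 3*v[1]}]  # g(x) >= 0
bounds = [(0, None), (0, None)]                                   # x_i >= 0

res = minimize(f, x0=[1.0, 1.0], bounds=bounds, constraints=cons)
print(np.round(res.x, 4))   # [6. 4.]: the constraint binds at the optimum
```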

A Min

These same conditions are used for a minimum. If one were to solve Min f(x) subject to g^j(x) ≥ 0, then to use the above conditions one would rewrite the problem as Max −f(x) subject to g^j(x) ≥ 0.

Constraint qualification

The FOC (1) and (2) are necessary conditions only if the Constraint Qualification holds. This rules out particular irregularities by imposing restrictions on the boundary of the feasible set; these irregularities would invalidate the FOC (1) and (2) should the solution occur there.

Constraint qualification

Let x^o be the point at which (1) and (2) hold, and let the index set k = 1, …, K represent the set of constraints g^j which are satisfied with equality at x^o.

Constraint qualification

The matrix

J = \begin{bmatrix} \partial g^1(x^o)/\partial x_1 & \cdots & \partial g^1(x^o)/\partial x_n \\ \vdots & & \vdots \\ \partial g^K(x^o)/\partial x_1 & \cdots & \partial g^K(x^o)/\partial x_n \end{bmatrix}

has rank K ≤ n. That is, the gradient vectors of the set of binding constraints are linearly independent.
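A minimal rank check of the binding-constraint Jacobian (numpy; the two gradients are illustrative, say for g^1 = x_1 + x_2 − 2 and g^2 = x_1 − x_2 at a candidate point):

```python
import numpy as np

J = np.array([[1.0,  1.0],   # gradient of g^1 at x^o
              [1.0, -1.0]])  # gradient of g^2 at x^o
K = J.shape[0]
print(np.linalg.matrix_rank(J) == K)  # True: gradients linearly independent
```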

Example: #1 Nonnegativity constraints on x_i

L = f + λ_1 x_1 + λ_2 x_2. The FOC are

f_i + λ_i = 0,
x_i ≥ 0, λ_i ≥ 0 and λ_i x_i = 0.

If all x_i > 0, then λ_i = 0 and we have the previous conditions. If λ_i > 0, then it must be true that x_i = 0 and f_i < 0 at the optimum. It can also be true that λ_i = 0 and x_i = 0.

Examples

#2 Consider the problem where we seek to Max f(x), x ∈ R², subject to p_1 x_1 + p_2 x_2 ≤ c and x_i ≥ 0, where c is a constant. Assume that c, p_i > 0. We set up the Lagrangian as

L = f(x) + λ[c − p_1 x_1 − p_2 x_2] + Σ_i λ_i x_i.

The relevant first order conditions are as follows:

f_i − λp_i + λ_i = 0, i = 1, 2,
c − p_1 x_1 − p_2 x_2 ≥ 0, λ ≥ 0, λ[c − p_1 x_1 − p_2 x_2] = 0, and
x_i ≥ 0, λ_i ≥ 0, λ_i x_i = 0, i = 1, 2.

Examples

#2 Note that if the nonnegativity constraints are nonbinding and, at the optimum, it is true that c − p_1 x_1 − p_2 x_2 > 0 and x_i > 0, then the optimum is a free optimum with f_i = 0.

Examples

#2 Next, suppose that the constraint p_1 x_1 + p_2 x_2 ≤ c is binding, λ_2 > 0, and x_1 > 0. In this case x_2 = 0, λ_1 = 0, and p_1 x_1 + p_2 x_2 − c = 0, and the first order conditions read

f_1 = λp_1,
f_2 = λp_2 − λ_2,

which implies that f_2 < λp_2. Thus, we have f_1/f_2 > p_1/p_2, c = p_1 x_1, so x_1 = c/p_1 and x_2 = 0.
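A numerical illustration of this corner case under assumed data (none from the notes): with f = 2x_1 + x_2, p = (1, 1) and c = 10, we have f_1/f_2 = 2 > p_1/p_2 = 1, so the solution should be the corner x_1 = c/p_1 = 10, x_2 = 0.

```python
import numpy as np
from scipy.optimize import minimize

res = minimize(lambda v: -(2*v[0] + v[1]),            # maximize f
               x0=[1.0, 1.0],
               bounds=[(0, None), (0, None)],          # x_i >= 0
               constraints=[{'type': 'ineq',
                             'fun': lambda v: 10 - v[0] - v[1]}])
print(np.round(res.x, 4))   # [10.  0.]: the corner solution
```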

Illustration: corner solution. The budget line, with intercepts c/p_2 and c/p_1 and slope −p_1/p_2, is everywhere flatter than the level curves of f (slope −f_1/f_2), so the optimum is at the corner (c/p_1, 0).

#2 If instead λ_i = 0 and λ > 0, then f_1/f_2 = p_1/p_2, c = p_1 x_1 + p_2 x_2, and x_1, x_2 > 0. This is an interior optimum. The illustration is as follows:

Illustration: interior optimum. At the solution the budget line (slope −p_1/p_2) is tangent to a level curve of f (slope −f_1/f_2), with both x_1 and x_2 strictly positive.