Lecture 7 Envelope Theorems, Bordered Hessians and Kuhn-Tucker Conditions

Eivind Eriksen
BI Norwegian School of Management, Department of Economics
October 15, 2010

Envelope theorems

In economic optimization problems, the objective functions that we try to maximize/minimize often depend on parameters, like prices. We want to find out how the optimal value is affected by changes in the parameters.

Example. Let f(x; a) = -x^2 + 2ax + 4a^2 be a function in one variable x that depends on a parameter a. For a given value of a, the stationary points of f are given by

    ∂f/∂x = -2x + 2a = 0  ⟺  x = a,

and this is a (local and global) maximum point since f(x; a) is concave considered as a function in x. We write x^*(a) = a for the maximum point. The optimal value function

    f^*(a) = f(x^*(a); a) = -a^2 + 2a^2 + 4a^2 = 5a^2

gives the corresponding maximum value.

Envelope theorems: An example

Example (continued). The derivative of the value function is given by

    d/da f^*(a) = d/da (5a^2) = 10a.

On the other hand, f(x; a) = -x^2 + 2ax + 4a^2 gives

    (∂f/∂a)|_{x = x^*(a)} = (2x + 8a)|_{x = a} = 2a + 8a = 10a

since x^*(a) = a. The fact that these computations give the same result is not a coincidence, but a consequence of the envelope theorem for unconstrained optimization problems:
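
This check is easy to script. Below is a minimal sympy sketch (my own illustration, not part of the original slides) that reproduces both computations and confirms they agree:

    import sympy as sp

    x, a = sp.symbols('x a', real=True)
    f = -x**2 + 2*a*x + 4*a**2

    # Stationary point in x for fixed a: -2x + 2a = 0  =>  x*(a) = a
    x_star = sp.solve(sp.diff(f, x), x)[0]

    # Optimal value function f*(a) = f(x*(a); a) = 5a^2, then differentiate
    f_star = f.subs(x, x_star)
    lhs = sp.diff(f_star, a)                 # d/da f*(a) = 10a

    # Envelope theorem: differentiate in a first, then plug in x = x*(a)
    rhs = sp.diff(f, a).subs(x, x_star)      # (2x + 8a)|_{x=a} = 10a

    assert sp.simplify(lhs - rhs) == 0       # both equal 10a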

Envelope theorem for unconstrained maxima

Theorem. Let f(x; a) be a function in n variables x_1, ..., x_n that depends on a parameter a. For each value of a, let x^*(a) be a maximum or minimum point for f(x; a). Then

    d/da f(x^*(a); a) = (∂f/∂a)|_{x = x^*(a)}.

The following example is a modification of Problem 3.1.2 in [FMEA]:

Example. A firm produces goods A and B. The price of A is 13, and the price of B is p. The profit function is π(x, y) = 13x + py - C(x, y), where

    C(x, y) = 0.04x^2 - 0.01xy + 0.01y^2 + 4x + 2y + 500.

Determine the optimal value function π^*(p). Verify the envelope theorem.

Envelope theorems: Another example

Solution. The profit function is π(x, y) = 13x + py - C(x, y), hence we compute

    π(x, y) = -0.04x^2 + 0.01xy - 0.01y^2 + 9x + (p - 2)y - 500.

The first order conditions are

    π_x = -0.08x + 0.01y + 9 = 0      ⟺  8x - y = 900
    π_y = 0.01x - 0.02y + p - 2 = 0   ⟺  x - 2y = 200 - 100p

This is a linear system with unique solution x^* = (1600 + 100p)/15 and y^* = (-700 + 800p)/15. The Hessian

    π'' = [ -0.08   0.01 ]
          [  0.01  -0.02 ]

is negative definite since D_1 = -0.08 < 0 and D_2 = 0.0015 > 0. We conclude that (x^*, y^*) is a (local and global) maximum for π.

Envelope theorems: Another example

Solution (continued). Hence the optimal value function π^*(p) = π(x^*, y^*) is given by

    π^*(p) = π((1600 + 100p)/15, (-700 + 800p)/15) = (80p^2 - 140p + 80)/3,

and its derivative is therefore

    d/dp π^*(p) = (160p - 140)/3.

On the other hand, the envelope theorem says that we can compute the derivative of the optimal value function as

    (∂π/∂p)|_{(x,y) = (x^*, y^*)} = y^* = (-700 + 800p)/15 = (160p - 140)/3.
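
As a cross-check, here is a sympy sketch (variable names are mine) that solves the first order conditions symbolically and verifies that dπ^*/dp equals y^*:

    import sympy as sp

    x, y, p = sp.symbols('x y p', real=True)

    # Cost and profit from the example (exact rationals avoid float noise)
    C = (sp.Rational(4, 100)*x**2 - sp.Rational(1, 100)*x*y
         + sp.Rational(1, 100)*y**2 + 4*x + 2*y + 500)
    pi = 13*x + p*y - C

    # Solve the first order conditions pi_x = pi_y = 0
    sol = sp.solve([sp.diff(pi, x), sp.diff(pi, y)], [x, y])
    x_s, y_s = sol[x], sol[y]        # (1600 + 100p)/15 and (-700 + 800p)/15

    # Optimal value function and the envelope theorem check
    pi_star = sp.simplify(pi.subs({x: x_s, y: y_s}))    # (80p^2 - 140p + 80)/3
    assert sp.simplify(sp.diff(pi_star, p) - y_s) == 0  # d(pi*)/dp = y*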

Envelope theorem for constrained maxima

Theorem. Let f(x; a), g_1(x; a), ..., g_m(x; a) be functions in n variables x_1, ..., x_n that depend on the parameter a. For a fixed value of a, consider the following Lagrange problem: Maximize/minimize f(x; a) subject to the constraints g_1(x; a) = ... = g_m(x; a) = 0. Let x^*(a) be a solution to the Lagrange problem, and let λ^*(a) = (λ_1^*(a), ..., λ_m^*(a)) be the corresponding Lagrange multipliers. If the NDCQ condition holds, then we have

    d/da f(x^*(a); a) = (∂L/∂a)|_{x = x^*(a), λ = λ^*(a)}.

Notice that any equality constraint can be re-written in the form used in the theorem, since

    g_j(x; a) = b_j  ⟺  g_j(x; a) - b_j = 0.

Interpretation of Lagrange multipliers

In a Lagrange problem, the Lagrange function has the form

    L(x, λ) = f(x) - λ_1(g_1(x) - b_1) - ... - λ_m(g_m(x) - b_m).

Hence we see that the partial derivative with respect to the parameter a = b_j is given by

    ∂L/∂b_j = λ_j.

By the envelope theorem for constrained maxima, this gives

    ∂f(x^*)/∂b_j = λ_j^*,

where x^* is the solution to the Lagrange problem, λ_1^*, ..., λ_m^* are the corresponding Lagrange multipliers, and f(x^*) is the optimal value function.
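
For intuition, here is a tiny sketch of this shadow-price interpretation, anticipating the example on the next slide: for max x + 3y subject to x^2 + y^2 = b, one can show f^*(b) = √(10b) (this closed form is my own hand computation, not from the slides), so df^*/db at b = 10 should equal λ^* = 1/2:

    import sympy as sp

    b = sp.symbols('b', positive=True)
    f_star = sp.sqrt(10*b)                   # optimal value as a function of b
    print(sp.diff(f_star, b).subs(b, 10))    # 1/2, the Lagrange multiplier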

Envelope theorems: A constrained example

Example. Consider the following Lagrange problem: Maximize f(x, y) = x + 3y subject to g(x, y) = x^2 + ay^2 = 10. When a = 1, we found earlier that x^*(1) = (1, 3) is a solution, with Lagrange multiplier λ^*(1) = 1/2 and maximum value f^*(1) = f(x^*(1)) = f(1, 3) = 10. Use the envelope theorem to estimate the maximum value f^*(1.01) when a = 1.01, and check this by computing the optimal value function f^*(a).

Solution. The NDCQ condition is satisfied when a ≠ 0, and the Lagrangian is given by

    L = x + 3y - λ(x^2 + ay^2 - 10).

Envelope theorems: A constrained example

Solution (continued). By the envelope theorem, we have that

    (d/da f^*(a))|_{a=1} = (∂L/∂a)|_{x = (1,3), λ = 1/2} = (-λy^2)|_{x = (1,3), λ = 1/2} = -9/2.

An estimate for f^*(1.01) is therefore given by

    f^*(1.01) ≈ f^*(1) + 0.01 · (d/da f^*(a))|_{a=1} = 10 - 0.045 = 9.955.

To find an exact expression for f^*(a), we solve the first order conditions:

    L_x = 1 - 2λx = 0   ⟺  x = 1/(2λ)
    L_y = 3 - 2aλy = 0  ⟺  y = 3/(2aλ)

Envelope theorems: A constrained example

Solution (continued). We substitute these values into the constraint x^2 + ay^2 = 10, and get

    (1/(2λ))^2 + a(3/(2aλ))^2 = 10  ⟺  (a + 9)/(4aλ^2) = 10.

This gives λ = ±√((a + 9)/(40a)) when a > 0 or a < -9. Substitution gives solutions for x^*(a), y^*(a) and f^*(a) (see Lecture Notes for details). For a = 1.01, this gives x^*(1.01) ≈ 1.0045, y^*(1.01) ≈ 2.9836 and f^*(1.01) ≈ 9.9553.
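
A quick numerical check of these numbers, using the positive root for λ (which corresponds to the maximum):

    from math import sqrt

    a = 1.01
    lam = sqrt((a + 9) / (40 * a))    # λ*(1.01) ≈ 0.4978
    x = 1 / (2 * lam)                 # x*(1.01) ≈ 1.0045
    y = 3 / (2 * a * lam)             # y*(1.01) ≈ 2.9836
    print(x + 3*y)                    # f*(1.01) ≈ 9.9553, close to the estimate 9.955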

Bordered Hessians

The bordered Hessian is a second-order condition for local maxima and minima in Lagrange problems. We consider the simplest case, where the objective function f(x) is a function in two variables and there is one constraint of the form g(x) = b. In this case, the bordered Hessian is the determinant

    B = | 0    g_1   g_2  |
        | g_1  L_11  L_12 |
        | g_2  L_21  L_22 |

where g_i = ∂g/∂x_i and L_ij = ∂²L/∂x_i∂x_j.

Example. Find the bordered Hessian for the following local Lagrange problem: Find local maxima/minima for f(x_1, x_2) = x_1 + 3x_2 subject to the constraint g(x_1, x_2) = x_1^2 + x_2^2 = 10.

Bordered Hessians: An example

Solution. The Lagrangian is L = x_1 + 3x_2 - λ(x_1^2 + x_2^2 - 10). We compute the bordered Hessian

    B = | 0     2x_1  2x_2 |
        | 2x_1  -2λ   0    |  = -2x_1(-4x_1λ) + 2x_2(4x_2λ) = 8λ(x_1^2 + x_2^2),
        | 2x_2   0   -2λ   |

and since x_1^2 + x_2^2 = 10 by the constraint, we get B = 80λ. We solved the first order conditions and the constraint earlier, and found the two solutions (x_1, x_2, λ) = (1, 3, 1/2) and (x_1, x_2, λ) = (-1, -3, -1/2). So the bordered Hessian is B = 40 at x = (1, 3), and B = -40 at x = (-1, -3). Using the following theorem, we see that (1, 3) is a local maximum and that (-1, -3) is a local minimum for f(x_1, x_2) subject to x_1^2 + x_2^2 = 10.
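
The determinant computation can be verified symbolically. A sympy sketch (illustrative, not from the slides):

    import sympy as sp

    x1, x2, lam = sp.symbols('x1 x2 lam', real=True)
    L = x1 + 3*x2 - lam*(x1**2 + x2**2 - 10)
    g = x1**2 + x2**2 - 10

    # Bordered Hessian: the constraint gradient borders the Hessian of L
    B = sp.Matrix([
        [0,              sp.diff(g, x1),     sp.diff(g, x2)],
        [sp.diff(g, x1), sp.diff(L, x1, 2),  sp.diff(L, x1, x2)],
        [sp.diff(g, x2), sp.diff(L, x2, x1), sp.diff(L, x2, 2)],
    ]).det()

    print(sp.factor(B))                                       # 8*lam*(x1**2 + x2**2)
    print(B.subs({x1: 1, x2: 3, lam: sp.Rational(1, 2)}))     # 40  -> local maximum
    print(B.subs({x1: -1, x2: -3, lam: -sp.Rational(1, 2)}))  # -40 -> local minimum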

Bordered Hessian Theorem

Theorem. Consider the following local Lagrange problem: Find local maxima/minima for f(x_1, x_2) subject to g(x_1, x_2) = b. Assume that x^* = (x_1^*, x_2^*) satisfies the constraint g(x_1^*, x_2^*) = b and that (x_1^*, x_2^*, λ^*) satisfies the first order conditions for some Lagrange multiplier λ^*. Then we have:

1. If the bordered Hessian B(x_1^*, x_2^*, λ^*) < 0, then (x_1^*, x_2^*) is a local minimum for f(x) subject to g(x) = b.
2. If the bordered Hessian B(x_1^*, x_2^*, λ^*) > 0, then (x_1^*, x_2^*) is a local maximum for f(x) subject to g(x) = b.

Optimization problems with inequality constraints

We consider the following optimization problem with inequality constraints: Maximize/minimize f(x) subject to the inequality constraints g_1(x) ≤ b_1, g_2(x) ≤ b_2, ..., g_m(x) ≤ b_m. In this problem, f and g_1, ..., g_m are functions in n variables x_1, x_2, ..., x_n, and b_1, b_2, ..., b_m are constants.

Example (Problem 8.9). Maximize the function f(x_1, x_2) = x_1^2 + x_2^2 + x_2 - 1 subject to g(x_1, x_2) = x_1^2 + x_2^2 ≤ 1.

To solve this constrained optimization problem with inequality constraints, we must use a variation of the Lagrange method.

Kuhn-Tucker conditions

Definition. Just as in the case of equality constraints, the Lagrangian is given by

    L(x, λ) = f(x) - λ_1(g_1(x) - b_1) - λ_2(g_2(x) - b_2) - ... - λ_m(g_m(x) - b_m).

In the case of inequality constraints, we solve the Kuhn-Tucker conditions in addition to the inequalities g_1(x) ≤ b_1, ..., g_m(x) ≤ b_m. The Kuhn-Tucker conditions for maximum consist of the first order conditions

    ∂L/∂x_1 = 0, ∂L/∂x_2 = 0, ..., ∂L/∂x_n = 0

and the complementary slackness conditions given by

    λ_j ≥ 0 for j = 1, 2, ..., m, and λ_j = 0 whenever g_j(x) < b_j.

When we solve the Kuhn-Tucker conditions together with the inequality constraints g_1(x) ≤ b_1, ..., g_m(x) ≤ b_m, we obtain candidates for maximum.

Necessary condition

Theorem. Assume that x^* = (x_1^*, ..., x_n^*) solves the optimization problem with inequality constraints. If the NDCQ condition holds at x^*, then there are unique Lagrange multipliers λ_1^*, ..., λ_m^* such that (x_1^*, ..., x_n^*, λ_1^*, ..., λ_m^*) satisfy the Kuhn-Tucker conditions.

Given a point x^* satisfying the constraints, the NDCQ condition holds if the rows in the matrix

    [ ∂g_1/∂x_1(x^*)  ∂g_1/∂x_2(x^*)  ...  ∂g_1/∂x_n(x^*) ]
    [ ∂g_2/∂x_1(x^*)  ∂g_2/∂x_2(x^*)  ...  ∂g_2/∂x_n(x^*) ]
    [      ...             ...        ...       ...        ]
    [ ∂g_m/∂x_1(x^*)  ∂g_m/∂x_2(x^*)  ...  ∂g_m/∂x_n(x^*) ]

corresponding to constraints where g_j(x^*) = b_j are linearly independent.
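
In code, the NDCQ check at a candidate point is just a rank test on the rows of this matrix that correspond to binding constraints. A small numpy sketch (the helper name is mine):

    import numpy as np

    def ndcq_holds(binding_grads):
        """True if the gradients of the binding constraints are linearly independent."""
        if not binding_grads:
            return True                # no binding constraints: nothing to check
        J = np.array(binding_grads, dtype=float)
        return np.linalg.matrix_rank(J) == len(binding_grads)

    # Problem 8.9 at the boundary point (0, 1): the only gradient is (2x_1, 2x_2) = (0, 2)
    print(ndcq_holds([[0.0, 2.0]]))    # True, so NDCQ holds there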

Kuhn-Tucker conditions: An example

Solution (Problem 8.9). The Lagrangian is L = x_1^2 + x_2^2 + x_2 - 1 - λ(x_1^2 + x_2^2 - 1), so the first order conditions are

    2x_1 - λ(2x_1) = 0      ⟺  2x_1(1 - λ) = 0
    2x_2 + 1 - λ(2x_2) = 0  ⟺  2x_2(1 - λ) = -1

From the first equation, we get x_1 = 0 or λ = 1. But λ = 1 is not possible by the second equation, so x_1 = 0. The second equation gives x_2 = -1/(2(1 - λ)) since λ ≠ 1. The complementary slackness conditions are λ ≥ 0, and λ = 0 if x_1^2 + x_2^2 < 1. We get two cases to consider.

Case 1: x_1^2 + x_2^2 < 1, λ = 0. In this case, x_2 = -1/2 by the equation above, and this satisfies the inequality. So the point (x_1, x_2, λ) = (0, -1/2, 0) is a candidate for maximality.

Case 2: x_1^2 + x_2^2 = 1, λ ≥ 0. Since x_1 = 0, we get x_2 = ±1.

Kuhn-Tucker conditions: An example

Solution (Problem 8.9, continued). We solve for λ in each case, and check that λ ≥ 0. We get two more candidates for maximality, (x_1, x_2, λ) = (0, 1, 3/2) and (x_1, x_2, λ) = (0, -1, 1/2). We compute the values, and get

    f(0, -1/2) = -1.25,  f(0, 1) = 1,  f(0, -1) = -1.

We must check that the NDCQ condition holds. The matrix is (2x_1  2x_2). If x_1^2 + x_2^2 < 1, the NDCQ condition is empty. If x_1^2 + x_2^2 = 1, the NDCQ condition is that (2x_1  2x_2) has rank one, and this is satisfied. By the extreme value theorem, the function f has a maximum on the closed and bounded set given by x_1^2 + x_2^2 ≤ 1 (a circular disk with radius one), and therefore (x_1, x_2) = (0, 1) is a maximum point.
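
As a sanity check, the same problem can be handed to a numerical solver; a hedged scipy sketch (the starting point is chosen arbitrarily):

    import numpy as np
    from scipy.optimize import minimize

    f = lambda z: z[0]**2 + z[1]**2 + z[1] - 1
    disk = {'type': 'ineq', 'fun': lambda z: 1 - z[0]**2 - z[1]**2}  # g(x) <= 1

    # Maximize f by minimizing -f over the unit disk
    res = minimize(lambda z: -f(z), x0=np.array([0.5, 0.5]),
                   method='SLSQP', constraints=[disk])
    print(res.x, f(res.x))    # ≈ (0, 1) with value ≈ 1, matching the KKT analysis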

Kuhn-Tucker conditions for minima

General principle: a minimum for f(x) is a maximum for -f(x). Using this principle, we can write down Kuhn-Tucker conditions for minima. They are stated in a similar way as for maxima; the only difference is that the complementary slackness conditions are

    λ_j ≤ 0 for j = 1, 2, ..., m, and λ_j = 0 whenever g_j(x) < b_j

in the case of minima.
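
A matching sketch for the minimum (my own quick check, not from the slides): minimizing the Problem 8.9 objective directly lands on the interior candidate (0, -1/2), where the constraint is slack and λ = 0:

    import numpy as np
    from scipy.optimize import minimize

    f = lambda z: z[0]**2 + z[1]**2 + z[1] - 1
    disk = {'type': 'ineq', 'fun': lambda z: 1 - z[0]**2 - z[1]**2}

    res = minimize(f, x0=np.array([0.3, 0.3]), method='SLSQP', constraints=[disk])
    print(res.x, res.fun)     # ≈ (0, -0.5) with value -1.25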