14.451 Lecture Notes 10

Guido Lorenzoni

Fall 2009

1 Continuous time: finite horizon

Time goes from 0 to T. Instantaneous payoff:

$$ f(t, x(t), y(t)), $$

(the time dependence includes discounting), where $x(t) \in X$ and $y(t) \in Y$. Constraint:

$$ \dot{x}(t) = g(t, x(t), y(t)). \qquad (1) $$

Relative to discrete time, we are now using the state variable/control variable approach ($x$ is the state, $y$ is the control). We assume throughout that $f$ and $g$ are continuously differentiable functions.

The problem is to choose the functions $x(\cdot)$ and $y(\cdot)$ to maximize

$$ \int_0^T f(t, x(t), y(t))\, dt $$

subject to constraint (1) for each $t \in [0, T]$ and the initial condition: $x(0)$ given.

We will use two approaches, a variational approach and a dynamic programming approach. Both approaches will be used as heuristic arguments for the Principle of Optimality.
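For concreteness, it is useful to keep a simple example in mind. Consider, for instance, the linear-quadratic problem with $X = Y = \mathbb{R}$,

$$ f(t, x, y) = -\left( x^2 + y^2 \right), \qquad g(t, x, y) = y, $$

so the objective is to maximize $\int_0^T -\left( x(t)^2 + y(t)^2 \right) dt$ subject to $\dot{x}(t) = y(t)$ and $x(0) = x_0$ given. We will return to this example below to illustrate the optimality conditions.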

1.1 Variational approach

1.1.1 Necessity argument

Suppose you know that optimal $x^*(t)$ and $y^*(t)$ exist and are interior for each $t$. We use a variational approach to derive necessary conditions for optimality. Take any continuous function $h(t)$ and let

$$ y(t, \varepsilon) = y^*(t) + \varepsilon h(t). $$

To obtain the perturbed version of $x$, solve the differential equation

$$ \dot{x}(t, \varepsilon) = g(t, x(t, \varepsilon), y(t, \varepsilon)) \qquad (2) $$

with initial condition $x(0, \varepsilon) = x(0)$. For $|\varepsilon|$ small enough both $y(t, \varepsilon)$ and $x(t, \varepsilon)$ are, respectively, in $Y$ and $X$. Optimality implies that

$$ W(\varepsilon) \equiv \int_0^T f(t, x(t, \varepsilon), y(t, \varepsilon))\, dt \le \int_0^T f(t, x^*(t), y^*(t))\, dt = W(0) $$

for all $\varepsilon$ in a neighborhood of 0, and, supposing $W$ is differentiable, this implies

$$ W'(0) = 0. $$

Moreover, (2) implies that

$$ W(\varepsilon) = \int_0^T f(t, x(t, \varepsilon), y(t, \varepsilon))\, dt + \int_0^T \lambda(t) \left[ g(t, x(t, \varepsilon), y(t, \varepsilon)) - \dot{x}(t, \varepsilon) \right] dt $$

for any continuous function $\lambda(t)$. Integrating by parts, this can be rewritten as

$$ W(\varepsilon) = \int_0^T \left[ f(t, x(t, \varepsilon), y(t, \varepsilon)) + \lambda(t)\, g(t, x(t, \varepsilon), y(t, \varepsilon)) \right] dt - \lambda(T)\, x(T, \varepsilon) + \lambda(0)\, x(0, \varepsilon) + \int_0^T \dot{\lambda}(t)\, x(t, \varepsilon)\, dt. $$

Supposing we can differentiate $x(t, \varepsilon)$ and $y(t, \varepsilon)$ with respect to $\varepsilon$, we have that

$$ W'(\varepsilon) = \int_0^T \left[ f_x(t, x(t, \varepsilon), y(t, \varepsilon)) + \lambda(t)\, g_x(t, x(t, \varepsilon), y(t, \varepsilon)) + \dot{\lambda}(t) \right] x_\varepsilon(t, \varepsilon)\, dt + \int_0^T \left[ f_y(t, x(t, \varepsilon), y(t, \varepsilon)) + \lambda(t)\, g_y(t, x(t, \varepsilon), y(t, \varepsilon)) \right] y_\varepsilon(t, \varepsilon)\, dt - \lambda(T)\, x_\varepsilon(T, \varepsilon) $$

(given that $x(0, \varepsilon) = x(0)$, so $x_\varepsilon(0, \varepsilon) = 0$). Now we will choose the function $\lambda$ so that all the terms involving the derivatives $x_\varepsilon(t, \varepsilon)$ (which are hard to compute, since they involve the differential equation (2)) vanish. In particular, set $\lambda(T) = 0$ and let $\lambda(t)$ solve the differential equation

$$ \dot{\lambda}(t) = -f_x(t, x^*(t), y^*(t)) - \lambda(t)\, g_x(t, x^*(t), y^*(t)). $$

Moreover, $y_\varepsilon(t, \varepsilon) = h(t)$, so putting together our results we have

$$ W'(0) = \int_0^T \left[ f_y(t, x^*(t), y^*(t)) + \lambda(t)\, g_y(t, x^*(t), y^*(t)) \right] h(t)\, dt = 0. $$

Since this must be true for any choice of $h(t)$, it follows that a necessary condition for optimality is

$$ f_y(t, x^*(t), y^*(t)) + \lambda(t)\, g_y(t, x^*(t), y^*(t)) = 0. $$
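In the linear-quadratic example above, for instance, this condition reads

$$ -2\, y^*(t) + \lambda(t) = 0, \qquad \text{i.e.,} \qquad y^*(t) = \frac{\lambda(t)}{2}, $$

so the candidate control is pinned down by the multiplier $\lambda(t)$, which in turn is determined jointly with $x^*(t)$ by the conditions collected in the next theorem.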

We can summarize this result by defining the Hamiltonian

$$ H(t, x(t), y(t), \lambda(t)) = f(t, x(t), y(t)) + \lambda(t)\, g(t, x(t), y(t)), $$

and we have the following:

Theorem 1. If $x^*$ and $y^*$ are optimal, continuous and interior, then there exists a continuously differentiable function $\lambda(t)$ such that

$$ H_y(t, x^*(t), y^*(t), \lambda(t)) = 0, \qquad (3) $$

$$ \dot{\lambda}(t) = -H_x(t, x^*(t), y^*(t), \lambda(t)), \qquad (4) $$

$$ \dot{x}^*(t) = H_\lambda(t, x^*(t), y^*(t), \lambda(t)), \qquad (5) $$

and $\lambda(T) = 0$.

1.1.2 Sufficiency argument

Now suppose we have found candidate solutions $x^*(t)$ and $y^*(t)$ which satisfy (3)-(5) for some continuous function $\lambda(\cdot)$. We want to check whether these conditions are sufficient for an optimum. For this, we need an additional concavity assumption. Let

$$ M(t, x(t), \lambda(t)) = \max_y H(t, x(t), y, \lambda(t)) \qquad (6) $$

and suppose that $M$ is concave in $x$. Consider any other feasible path $x(t)$ and $y(t)$. Then

$$ f(t, x(t), y(t)) + \lambda(t)\, g(t, x(t), y(t)) \le M(t, x(t), \lambda(t)) \le M(t, x^*(t), \lambda(t)) + M_x(t, x^*(t), \lambda(t)) \left( x(t) - x^*(t) \right), $$

or, since $y^*(t)$ attains the maximum in (6) at $x^*(t)$ so that $M(t, x^*(t), \lambda(t)) = f(t, x^*(t), y^*(t)) + \lambda(t)\, g(t, x^*(t), y^*(t))$,

$$ f(t, x(t), y(t)) + \lambda(t)\, g(t, x(t), y(t)) \le f(t, x^*(t), y^*(t)) + \lambda(t)\, g(t, x^*(t), y^*(t)) + M_x(t, x^*(t), \lambda(t)) \left( x(t) - x^*(t) \right). $$

Now

$$ M_x(t, x^*(t), \lambda(t)) = H_x(t, x^*(t), y^*(t), \lambda(t)) = -\dot{\lambda}(t), $$

the first equality from an envelope argument, and the second from (4). So we have

$$ f(t, x(t), y(t)) \le f(t, x^*(t), y^*(t)) + \lambda(t) \left( \dot{x}^*(t) - \dot{x}(t) \right) + \dot{\lambda}(t) \left( x^*(t) - x(t) \right). \qquad (7) $$

Integrating (7) over $[0, T]$, integrating by parts, and using $\lambda(T) = 0$ and $x(0) = x^*(0)$, we have

$$ \int_0^T f(t, x(t), y(t))\, dt \le \int_0^T f(t, x^*(t), y^*(t))\, dt. $$

We have thus proved a converse to Theorem 1:

Theorem 2. If $x^*$ and $y^*$ are two continuous functions that satisfy (3)-(5) for some continuous function $\lambda(\cdot)$ with $\lambda(T) = 0$, $X$ is a convex set and $M(t, x, \lambda(t))$ is concave in $x$ for all $t \in [0, T]$, then $x^*$ and $y^*$ are optimal.
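To see Theorems 1 and 2 at work, go back to the linear-quadratic example. There $H = -(x^2 + y^2) + \lambda y$, so conditions (3)-(5) together with $\lambda(T) = 0$ read

$$ y^*(t) = \frac{\lambda(t)}{2}, \qquad \dot{\lambda}(t) = 2\, x^*(t), \qquad \dot{x}^*(t) = \frac{\lambda(t)}{2}, \qquad \lambda(T) = 0. $$

Eliminating $\lambda$ gives $\ddot{x}^*(t) = x^*(t)$ with $x^*(0) = x_0$ and $\dot{x}^*(T) = 0$, so

$$ x^*(t) = x_0\, \frac{\cosh(T - t)}{\cosh T}, \qquad \lambda(t) = 2\, \dot{x}^*(t) = -2 x_0\, \frac{\sinh(T - t)}{\cosh T}. $$

Since $M(t, x, \lambda) = \max_y \left[ -(x^2 + y^2) + \lambda y \right] = -x^2 + \lambda^2/4$ is concave in $x$, Theorem 2 confirms that this candidate path is optimal.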

1.2 Dynamic programming

We now use a different approach to characterize the same problem. Define the value function (sequence problem):

$$ V(t, x(t)) = \max_{x, y} \int_t^T f(\tau, x(\tau), y(\tau))\, d\tau $$

$$ \text{s.t. } \dot{x}(\tau) = g(\tau, x(\tau), y(\tau)), \quad x(t) \text{ given.} $$

The principle of optimality can be applied to obtain

$$ V(t, x(t)) = \max_{x, y} \int_t^s f(\tau, x(\tau), y(\tau))\, d\tau + V(s, x(s)) $$

$$ \text{s.t. } \dot{x}(\tau) = g(\tau, x(\tau), y(\tau)) \text{ and } x(t) \text{ given.} $$

We can rewrite the maximization problem as

$$ \max_{x, y} \int_t^s f(\tau, x(\tau), y(\tau))\, d\tau + V(s, x(s)) - V(t, x(t)) = 0, $$

and, dividing by $s - t$ and taking limits, we have

$$ \lim_{s \to t} \frac{1}{s - t} \left[ \max_{x, y} \int_t^s f(\tau, x(\tau), y(\tau))\, d\tau + V(s, x(s)) - V(t, x(t)) \right] = 0. $$

Suppose now that: (1) we can exchange the order of taking the limit and taking the maximum, and (2) $V$ is differentiable with respect to both arguments. Then we obtain

$$ \max_y \left\{ f(t, x(t), y(t)) + \dot{V}(t, x(t)) + V_x(t, x(t))\, \dot{x}(t) \right\} = 0 $$

and

$$ \max_y \left\{ f(t, x(t), y(t)) + V_x(t, x(t))\, g(t, x(t), y(t)) \right\} + \dot{V}(t, x(t)) = 0, \qquad (8) $$

where $\dot{V}$ denotes the partial derivative of $V$ with respect to its first argument (time). This is the Hamilton-Jacobi-Bellman equation. We can use it to derive the Maximum Principle conditions once more. Given an optimal solution $x^*, y^*$, just identify

$$ \lambda(t) = V_x(t, x^*(t)) \qquad (9) $$

and we immediately have that $y^*(t)$ must maximize the Hamiltonian:

$$ \max_y \left\{ f(t, x^*(t), y) + \lambda(t)\, g(t, x^*(t), y) \right\}. $$
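In the linear-quadratic example the HJB equation (8) can be solved explicitly by guessing $V(t, x) = -q(t)\, x^2$. The inner maximization gives $y = V_x/2 = -q(t)\, x$, and substituting back into (8), together with $V(T, x) = 0$, yields the Riccati equation

$$ \dot{q}(t) = q(t)^2 - 1, \qquad q(T) = 0, $$

with solution $q(t) = \tanh(T - t)$. The identification (9) then gives

$$ \lambda(t) = V_x(t, x^*(t)) = -2 \tanh(T - t)\, x^*(t) = -2 x_0\, \frac{\sinh(T - t)}{\cosh T}, $$

exactly the multiplier obtained with the variational approach.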

We can also get the condition (4) for $\lambda(t)$, assuming $V$ is twice differentiable. Differentiating both sides of (9) yields

$$ \dot{\lambda}(t) = V_{x,t}(t, x^*(t)) + V_{x,x}(t, x^*(t))\, \dot{x}^*(t). $$

Differentiating (8) with respect to $x(t)$ (which can be done because the condition holds for all possible $x(t)$, not just the optimal one: we are using a value function!) and using an envelope argument, we get

$$ f_x(t, x^*(t), y^*(t)) + V_x(t, x^*(t))\, g_x(t, x^*(t), y^*(t)) + V_{x,x}(t, x^*(t))\, g(t, x^*(t), y^*(t)) + V_{t,x}(t, x^*(t)) = 0. $$

Combining the two (at the optimal path) yields

$$ f_x(t, x^*(t), y^*(t)) + \lambda(t)\, g_x(t, x^*(t), y^*(t)) + \dot{\lambda}(t) = 0, $$

which implies (4).

2 Infinite Horizon

The problem now is to maximize

$$ \int_0^\infty f(t, x(t), y(t))\, dt $$

subject to

$$ \dot{x}(t) = g(t, x(t), y(t)), $$

an initial condition $x(0)$ and a terminal condition

$$ \lim_{t \to \infty} b(t)\, x(t) = 0, $$

where $b(t)$ is some given function $b: \mathbb{R}_+ \to \mathbb{R}_+$. Going to infinite horizon, the Maximum Principle still works, but we need to add some conditions at $\infty$ (to replace the condition $\lambda(T) = 0$ we were using above).

2.1 Sufficiency argument

Let us go in reverse now and look first for sufficient conditions for an optimum. Suppose we have a candidate path $x^*(t), y^*(t)$ that satisfies conditions (3)-(5) for some continuous function $\lambda$ and the function $M$ defined in (6) is concave. We can compare the candidate path with any other feasible path as in the proof of Theorem 2; however, at the moment of integrating (7) by parts we are left with the following expression:

$$ \int_0^\infty f(t, x(t), y(t))\, dt \le \int_0^\infty f(t, x^*(t), y^*(t))\, dt + \lim_{t \to \infty} \lambda(t) \left( x^*(t) - x(t) \right). $$

If we assume that $\lim_{t \to \infty} \lambda(t)\, x^*(t) = 0$ and $\lim_{t \to \infty} \lambda(t)\, x(t) \ge 0$ for all feasible paths $x(t)$, we then have

$$ \int_0^\infty f(t, x(t), y(t))\, dt \le \int_0^\infty f(t, x^*(t), y^*(t))\, dt. $$

This proves the following.

Theorem 3. Suppose $x^*$ and $y^*$ are two continuous functions that satisfy (3)-(5) for some continuous function $\lambda(\cdot)$, $X$ is a convex set, $M(t, x, \lambda(t))$ is concave in $x$ for all $t$, and the following two conditions are satisfied:

$$ \lim_{t \to \infty} \lambda(t)\, x^*(t) = 0 \quad \text{(transversality condition)} \qquad (10) $$

and

$$ \lim_{t \to \infty} \lambda(t)\, x(t) \ge 0 \quad \text{for all feasible paths } x(t). \qquad (11) $$

Then $x^*$ and $y^*$ are optimal. If $M(t, x, \lambda(t))$ is strictly concave in $x$, then $x^*$ and $y^*$ are the unique optimal solution.

We have thus established that the transversality condition (10) (together with concavity and our usual optimality conditions) is sufficient.

2.2 Necessity

Proving that some transversality condition is necessary is more cumbersome and requires additional assumptions (although, clearly, when we deal with necessary conditions we do not need concavity). The general idea is the following. Use the HJB equation (8) to show that

$$ \dot{V}(t, x^*(t)) = -H(t, x^*(t), y^*(t), \lambda(t)). $$

If you can argue that $\lim_{t \to \infty} \dot{V}(t, x^*(t)) = 0$, then you have the transversality condition

$$ \lim_{t \to \infty} H(t, x^*(t), y^*(t), \lambda(t)) = 0. $$

Under additional conditions (e.g., if $x^*(t)$ converges to a steady state level) this condition implies (10). (See Theorems 7.12 and 7.13 in Acemoglu (2009).) However, in applications it is often more useful to use the transversality condition as a sufficient condition (as in Theorem 3 and in Theorem 7.14 in Acemoglu (2009)).
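To illustrate Theorem 3, consider the infinite-horizon version of the linear-quadratic example: maximize $\int_0^\infty -\left( x(t)^2 + y(t)^2 \right) dt$ subject to $\dot{x}(t) = y(t)$, $x(0) = x_0$, with terminal condition $\lim_{t \to \infty} e^{-t} x(t) = 0$ (i.e., $b(t) = e^{-t}$). The candidate path

$$ x^*(t) = x_0 e^{-t}, \qquad y^*(t) = -x_0 e^{-t}, \qquad \lambda(t) = -2 x_0 e^{-t} $$

satisfies (3)-(5), since $-2 y^*(t) + \lambda(t) = 0$, $\dot{\lambda}(t) = 2 x^*(t)$ and $\dot{x}^*(t) = \lambda(t)/2$. Moreover, $\lambda(t)\, x^*(t) = -2 x_0^2 e^{-2t} \to 0$, so (10) holds, and for any feasible path $\lambda(t)\, x(t) = -2 x_0\, e^{-t} x(t) \to 0$ by the terminal condition, so (11) holds as well. Since $M(t, x, \lambda) = -x^2 + \lambda^2/4$ is strictly concave in $x$, Theorem 3 implies that this is the unique optimum.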

3 Discounting

The problem now is to maximize

$$ \int_0^\infty e^{-\rho t} f(x(t), y(t))\, dt $$

subject to

$$ \dot{x}(t) = g(t, x(t), y(t)), $$

an initial condition $x(0)$ and a terminal condition

$$ \lim_{t \to \infty} b(t)\, x(t) = 0, $$

where $b(t)$ is some given function $b: \mathbb{R}_+ \to \mathbb{R}_+$. First, notice that the Hamiltonian would be

$$ H(t, x(t), y(t), \lambda(t)) = e^{-\rho t} f(x(t), y(t)) + \lambda(t)\, g(t, x(t), y(t)), $$

but, defining $\mu(t) = e^{\rho t} \lambda(t)$, we can define the current-value Hamiltonian

$$ \tilde{H}(x(t), y(t), \mu(t)) = f(x(t), y(t)) + \mu(t)\, g(t, x(t), y(t)) $$

(so that $H = e^{-\rho t} \tilde{H}$; $H$ is also called the present-value Hamiltonian). Therefore, conditions (3)-(5) become

$$ \tilde{H}_y(t, x(t), y(t), \mu(t)) = 0, $$

$$ \dot{\mu}(t) = \rho\, \mu(t) - \tilde{H}_x(t, x(t), y(t), \mu(t)), $$

$$ \dot{x}(t) = \tilde{H}_\mu(t, x(t), y(t), \mu(t)). $$

The limiting conditions (10) and (11) become

$$ \lim_{t \to \infty} e^{-\rho t} \mu(t)\, x^*(t) = 0 \quad \text{(transversality condition)} $$

and

$$ \lim_{t \to \infty} e^{-\rho t} \mu(t)\, x(t) \ge 0 \quad \text{for all feasible paths } x(t). $$

All the results above go through; we just express them using different variables.
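A standard example is the consumption-savings problem: maximize $\int_0^\infty e^{-\rho t} \ln c(t)\, dt$ subject to $\dot{a}(t) = r\, a(t) - c(t)$, where $a(t)$ denotes assets (the state), $c(t)$ consumption (the control), $r$ is a constant interest rate, and $a(0)$ is given. The current-value Hamiltonian is

$$ \tilde{H} = \ln c + \mu \left( r a - c \right), $$

and the conditions above give

$$ \frac{1}{c(t)} = \mu(t), \qquad \dot{\mu}(t) = \rho\, \mu(t) - r\, \mu(t), \qquad \dot{a}(t) = r\, a(t) - c(t). $$

The first two imply the Euler equation $\dot{c}(t)/c(t) = r - \rho$, and, since $\mu(t) = \mu(0)\, e^{(\rho - r) t}$, the transversality condition becomes $\lim_{t \to \infty} e^{-\rho t} \mu(t)\, a(t) = \mu(0) \lim_{t \to \infty} e^{-r t} a(t) = 0$.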

MIT OpenCourseWare
http://ocw.mit.edu

14.451 Dynamic Optimization Methods with Applications
Fall 2009

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.