Basic models: Intertemporal optimization and resource extraction
Anton Bondarev, Department of Economics, Basel University
14.10.2014
Plan of the lecture
Idea of intergenerational optimization: Ramsey leads to Hotelling;
Optimal control theory;
Optimal extraction without regeneration: Hotelling-type problems.
Introductory remarks: Birth of dynamics in Economics
In the early XX century the vast majority of economic models were static;
They rested upon the Walrasian and Marshallian concepts of efficient market allocation;
Even the newly born Keynesian economics operated mainly in static terms;
The question of intergenerational efficiency or optimization had not been discussed;
Ramsey (1928) was the first to draw attention to this problem.
Introductory remarks: Ramsey
Idea of optimization of consumption over infinite time;
Introduction of discount rates;
Dynamic choice between consumption c and accumulation k;
Goal of maximizing utility over time:
\max_c \sum_{t=0}^{\infty} e^{-\rho t} U(c_t);  (1)
While consumption is taken from output:
c_t = Y_t - k_t;  (2)
Output grows through investments, which are necessary for growth:
k_{t+1} = Y_t(1 - c_t) - \delta k_t.  (3)
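The accumulation dynamics behind Eq. (3) can be simulated directly. The sketch below is illustrative only: it uses the standard accumulation identity k_{t+1} = (1-δ)k_t + s·Y_t with Cobb-Douglas output Y_t = k_t^α and a fixed savings rate s (the Ramsey problem chooses consumption optimally; fixing s is a simplification, and all parameter values are assumptions, not from the lecture).

```python
# Illustrative parameters (assumptions, not from the lecture):
# Cobb-Douglas output Y = k^alpha, fixed savings rate s, depreciation delta.
alpha, s, delta = 0.3, 0.3, 0.1

def output(k):
    """Cobb-Douglas production function Y = k^alpha."""
    return k ** alpha

def simulate(k0, periods):
    """Iterate the standard accumulation identity k_{t+1} = (1-delta)*k_t + s*Y_t."""
    k = k0
    for _ in range(periods):
        k = (1 - delta) * k + s * output(k)
    return k

# With a fixed savings rate, capital converges to the steady state
# k* = (s/delta)^(1/(1-alpha)), where investment exactly covers depreciation.
k_star = (s / delta) ** (1 / (1 - alpha))
k_final = simulate(k0=1.0, periods=1000)
```

Starting from any positive k0, the iteration converges monotonically to k*, which illustrates why the interesting question is not whether capital settles down but which consumption path is optimal along the way.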
Introductory remarks: Modern formulation (Ramsey-type problems)
J := \int_0^{\infty} e^{-rt} U(c)\,dt \to \max_c
s.t. \dot{k} = f(k) - c - (n + \delta)k;  k(0) = k_0;  0 \le c \le f(k).  (4)
This is an optimal control problem with one state variable and one control variable.
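Anticipating the optimality conditions derived later in the lecture, the steady state of problem (4) satisfies the modified golden rule f'(k*) = r + n + δ. The sketch below (illustrative parameter values, bisection instead of the closed form) checks the numerical root against the analytic solution for f(k) = k^α; all numbers are assumptions for illustration.

```python
alpha, r, n, delta = 0.3, 0.03, 0.01, 0.05   # illustrative values (assumptions)

def f_prime(k):
    """Marginal product of capital for f(k) = k^alpha."""
    return alpha * k ** (alpha - 1)

def golden_rule_bisect(lo=1e-6, hi=1e6, tol=1e-12):
    """Solve f'(k) = r + n + delta by bisection; f' is strictly decreasing in k."""
    target = r + n + delta
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f_prime(mid) > target:
            lo = mid   # marginal product still too high, so k* lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k_star_numeric = golden_rule_bisect()
k_star_exact = (alpha / (r + n + delta)) ** (1 / (1 - alpha))
```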
Introductory remarks: From Ramsey to Hotelling, or why we need optimal control
We need to optimize investments and consumption over many time periods;
Exhaustible resources enter the production function, not only capital;
Dynamic problem of the optimal rate of resource exploitation;
The sustainability concept was born;
Hotelling (1931) was the first to apply Ramsey-type analysis to exhaustible resource management.
Main features
Multi-stage decision-making;
Optimization of a dynamic process in time;
Optimization is carried out over functions, not variables;
The planning horizon of the optimizing agent is taken into account (finite or infinite);
The problem includes an objective and a dynamical system;
Some initial and/or terminal conditions are given.
Continuous-time problems
Assume there is a continuum of stages (real time);
The state is described by a continuous-time function x(t);
Initial and terminal states are fixed: x(0) = x_0, x(T) = x_T;
Find a function x(t) minimizing the cost of going from x_0 to x_T;
What gives the costs? The concept of an objective functional:
\min \int_0^T \{ x(t) + u^2(t) \}\,dt  (5)
Ingredients of a dynamic optimization problem
Every dynamic optimization problem should include:
Some set of boundary conditions: fixed starting and/or terminal points;
A set of admissible paths from the initial point to the terminal one;
A set of costs associated with different paths;
An objective: what to maximize or minimize.
Functionals
Definition. A functional J is a mapping from the set of paths x(t) into the real numbers (the value of the functional): J := J(x(t)).
A functional is NOT a function of t;
x(t) is the unknown function, which has to be found;
It is defined on some functional space H;
Hence formally J : H \to \mathbb{R}.
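The point that J maps a whole path into a single number can be made concrete. The sketch below evaluates the cost functional of Eq. (5), taking u = \dot{x} (an assumption made for illustration), for two candidate paths between the same endpoints by the trapezoid rule: different paths yield different values of the same functional.

```python
def J(x, x_dot, T=1.0, steps=10_000):
    """Approximate J = integral_0^T (x(t) + x'(t)^2) dt by the trapezoid rule.
    x and x_dot are Python functions of t describing a candidate path."""
    h = T / steps
    total = 0.0
    for i in range(steps):
        t0, t1 = i * h, (i + 1) * h
        g0 = x(t0) + x_dot(t0) ** 2
        g1 = x(t1) + x_dot(t1) ** 2
        total += 0.5 * h * (g0 + g1)
    return total

# Two admissible paths from x(0) = 0 to x(1) = 1:
J_line = J(lambda t: t, lambda t: 1.0)            # straight line: J = 3/2
J_parabola = J(lambda t: t ** 2, lambda t: 2 * t)  # parabola: J = 5/3
```

Both paths satisfy the same boundary conditions, yet the straight line is cheaper here; finding the cheapest admissible path is exactly the calculus-of-variations / optimal control problem.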
Types of boundary conditions
Fixed-time problem: x(0) = x_0, the time length is fixed, t \in [0, T], the terminal state is not fixed;
Example: optimal price setting over a fixed planning horizon;
Fixed-endpoint problem: x(0) = x_0, x(T) = x_T, but the terminal time is not fixed;
Example: production cost minimization without time constraints;
Time-optimal problem: x(0) = x_0, x(T) = x_T, T \to \min;
Example: producing a product as soon as possible regardless of the costs;
Terminal-surface problem: x(0) = x_0, and at the terminal time f(T) = x(T).
Transversality
In variable-endpoint problems as above, the given boundary conditions are not sufficient to find the optimal path;
The additional condition on trajectories is called the transversality condition;
It defines how the trajectory crosses the boundary line;
The vast majority of economic problems use this type of condition;
Example: the shadow cost of investments at the terminal time should be zero.
Problem
The subject of optimal control is: maximize (minimize) some objective functional
J = \int_0^T F(x(t), u(t), t)\,dt
with conditions on:
Initial and terminal states and times: x(0) = x_0; x(T) = x_T, t \in [0, T];
Dynamic constraints (define the dynamics of the states): \dot{x}(t) = f(x, u, t);
Static constraints on states and controls (nonnegativity, etc.): x(t) \ge 0, u(t) \ge 0.
Hamiltonian
To solve an optimal control problem one needs the Hamiltonian function;
This is an analog of the Lagrangian for static problems;
It includes the objective and the dynamic constraints;
If static constraints are present, the augmented Hamiltonian is used;
First-order conditions on the Hamiltonian provide the optimality criteria.
Construction
Let the optimal control problem be:
J := \int_0^T F(x, u, t)\,dt \to \max_u  s.t. \dot{x} = f(x, u, t).  (6)
Then the associated Hamiltonian is given by:
H(\lambda, x, u, t) = F(x, u, t) + \lambda(t) f(x, u, t).  (7)
Comments
In the Hamiltonian, \lambda(t) is called the costate variable;
It usually represents the shadow cost of investments;
The investments are the control, u(t);
The control has to be only piecewise continuous, not continuous;
The Hamiltonian includes as many costate variables as the system has dynamic constraints;
Unlike Lagrange multipliers, the costate variable changes over time;
The optimal dynamics is then defined by a pair of ODEs: one for the state x(t) and one for the costate \lambda(t).
Example
Consider the problem:
\max_{u(\cdot)} \int_0^T e^{-rt} \bigl[ x(t) - \tfrac{\alpha}{2} u(t)^2 \bigr]\,dt
s.t. \dot{x}(t) = \beta(t) u(t) - x(t), \quad u(t) \ge 0, \quad x(0) = x_0,  (8)
where \beta(t) is an arbitrary positive-valued function and \alpha, r, T are constants.
The Hamiltonian of the problem (8) in current-value form is:
H^{CV}(\lambda, x, u, t) = x - \tfrac{\alpha}{2} u^2 + \lambda^{CV} \bigl[ \beta(t) u - x \bigr],  (9)
where the admissible set of controls includes all nonnegative values (u(t) \ge 0).
QUESTION: where is the discount term e^{-rt}?
The transformation \lambda^{CV}(t) = e^{rt} \lambda(t) yields the current-value Hamiltonian;
It is used throughout economic problems.
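The answer to the question above: the discount term is absorbed into the costate. A short derivation, starting from the present-value Hamiltonian of problem (8):

```latex
\begin{align*}
H &= e^{-rt}\Bigl[x - \tfrac{\alpha}{2}u^{2}\Bigr]
     + \lambda(t)\bigl[\beta(t)u - x\bigr],
  &\qquad \lambda^{CV}(t) &:= e^{rt}\lambda(t),\\
H^{CV} &:= e^{rt}H
   = x - \tfrac{\alpha}{2}u^{2}
     + \lambda^{CV}(t)\bigl[\beta(t)u - x\bigr],
  &\qquad \dot{\lambda}^{CV} &= r\lambda^{CV} - \frac{\partial H^{CV}}{\partial x}.
\end{align*}
```

Multiplying by e^{rt} removes the explicit discount factor and hides it inside \lambda^{CV}; the price is the extra r\lambda^{CV} term that appears in the current-value costate equation.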
Optimality
To obtain first-order conditions of optimality, Pontryagin's Maximum Principle is used. Let
J := \int_0^T F(x, u, t)\,dt \to \max_u
s.t. \dot{x} = f(x, u, t); \quad u(t) \ge 0, \quad x(0) = x_0  (10)
be the optimal control problem and
H(\lambda, x, u, t) = F(x, u, t) + \lambda(t) f(x, u, t)  (11)
the associated Hamiltonian.
Optimality conditions
The optimal control u(t) is such that it maximizes the Hamiltonian (11), and this must hold for almost all t. This is the maximum condition. Along the optimal trajectory:
\frac{\partial H}{\partial u}(\lambda, x, u, t) = 0; \quad \max_u H(\lambda, x, u, t) = H^{*}(\lambda, x, t),  (12)
\dot{\lambda}(t) = r\lambda(t) - H^{*}_x(\lambda, x, t),  (13)
which is the adjoint or costate equation;
\lambda(T) = 0  (14)
is the transversality condition.
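For the example problem (8), conditions (12)-(14) can be solved in closed form when \beta is constant (a simplification; the constant \beta and all numbers below are illustrative assumptions). The maximum condition gives u*(t) = \lambda^{CV}(t)\beta/\alpha, the current-value costate equation reduces to \dot{\lambda}^{CV} = (r+1)\lambda^{CV} - 1, and transversality \lambda^{CV}(T) = 0 pins down \lambda^{CV}(t) = (1 - e^{(r+1)(t-T)})/(r+1). The sketch verifies the closed form against numerical integration of the costate equation backward from T:

```python
import math

r, alpha, beta, T = 0.05, 2.0, 1.0, 1.0   # illustrative constants (assumptions)

def lam_exact(t):
    """Closed-form current-value costate; lam(T) = 0 by transversality (14)."""
    return (1.0 - math.exp((r + 1.0) * (t - T))) / (r + 1.0)

def lam_numeric(steps=200_000):
    """Integrate lam' = (r+1)*lam - 1 by explicit Euler, marching from T down to 0."""
    h = T / steps
    lam = 0.0                                # terminal condition lam(T) = 0
    for _ in range(steps):
        lam -= h * ((r + 1.0) * lam - 1.0)   # step backward in time
    return lam

lam0_numeric = lam_numeric()
lam0_exact = lam_exact(0.0)
u0_optimal = beta * lam0_exact / alpha       # maximum condition: u* = beta*lam/alpha
```

Note the typical structure: the state runs forward from x(0) = x_0 while the costate runs backward from \lambda(T) = 0, which is why canonical systems are two-point boundary value problems.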
Theorem (Pontryagin's Maximum Principle)
Consider the optimal control problem (10) with Hamiltonian (11) and maximized Hamiltonian as above. Assume the state space X is a convex set and there is no scrap value. Let u(\cdot) be a feasible control path with associated state trajectory x(t). If there exists an absolutely continuous function \lambda : [0, T] \to \mathbb{R}^n such that the maximum condition (12), the adjoint equation (13) and the transversality condition (14) are satisfied, and such that the function x \mapsto H^{*}(\lambda, x, t) is concave and continuously differentiable, then u(\cdot) is an optimal control.
Sufficiency
The maximum principle alone provides only necessary, not sufficient, criteria of optimality;
The sufficient condition is given by the concavity of the maximized Hamiltonian H^{*} w.r.t. x(t);
In our example the Hamiltonian is linear in the state and quadratic in the control, and hence concave in the state;
The sufficient condition is satisfied;
This is always true for linear-quadratic problems.
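For the example, the claim can be verified directly: substituting the maximum condition u* = \lambda^{CV}\beta(t)/\alpha into (9) gives the maximized Hamiltonian, which is linear, and therefore concave, in x:

```latex
H^{*}(\lambda^{CV}, x, t)
  = x - \frac{\alpha}{2}\Bigl(\frac{\lambda^{CV}\beta(t)}{\alpha}\Bigr)^{2}
    + \lambda^{CV}\Bigl[\beta(t)\,\frac{\lambda^{CV}\beta(t)}{\alpha} - x\Bigr]
  = \bigl(1 - \lambda^{CV}\bigr)x + \frac{\bigl(\lambda^{CV}\beta(t)\bigr)^{2}}{2\alpha}.
```

The state enters only through the linear term (1 - \lambda^{CV})x, so the concavity requirement of the sufficiency theorem holds for every \lambda^{CV}.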
Main points on optimal control
To solve an optimal control problem:
Write down the Hamiltonian of the problem;
Derive the first-order condition on the control;
Derive the costate equation;
Substitute the optimal control candidate into the state and costate equations;
Solve the canonical system of equations;
Define the optimal control candidate as a function of time;
Check the concavity of the maximized Hamiltonian.
Hotelling model: Social optimum
Assume utility depends on the resource only;
The social planner maximizes utility subject to the finiteness of the resource:
\max_{R(\cdot)} \int_0^T e^{-rt} \{ U(R(t)) - bR(t) \}\,dt  (15)
s.t. \int_0^T R(t)\,dt = S_0.  (16)
After some manipulation this constitutes a standard optimal control problem.
Hotelling model: Canonical form
We introduce the remaining stock S(t) as a new, artificial state variable:
\dot{S} = -R(t);  (17)
Adjoin the new dynamic constraint to the optimization problem;
This is a transformation from rates to stocks.
Hotelling model: Hotelling in optimal control form
The transformation above yields:
\max_{R(\cdot)} \int_0^T e^{-rt} \{ U(R(t)) - bR(t) \}\,dt  (18)
s.t. \dot{S} = -R(t),  (19)
giving Hotelling's rule:
U'(R(t)) - b = \lambda(t) e^{rt}  (20)
OR
\frac{\dot{p}(t)}{p(t)} = \frac{\dot{\lambda}(t)}{\lambda(t)} = r.  (21)
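Hotelling's rule (20)-(21) can be illustrated numerically. Assume, for illustration only, U(R) = ln R, so U'(R) = 1/R and (20) gives the extraction path R(t) = 1/(b + \lambda_0 e^{rt}); the constant \lambda_0 is pinned down by the resource constraint (16), \int_0^T R(t)\,dt = S_0, found here by bisection. All parameter values are assumptions.

```python
import math

b, r, T, S0 = 1.0, 0.05, 10.0, 5.0   # illustrative parameters (assumptions)

def extraction(t, lam0):
    """Extraction path implied by Hotelling's rule with U(R) = ln R."""
    return 1.0 / (b + lam0 * math.exp(r * t))

def total_extracted(lam0, steps=5_000):
    """Trapezoid-rule approximation of cumulative extraction over [0, T]."""
    h = T / steps
    total = 0.0
    for i in range(steps):
        total += 0.5 * h * (extraction(i * h, lam0) + extraction((i + 1) * h, lam0))
    return total

# Bisection on lam0: cumulative extraction is strictly decreasing in lam0.
lo, hi = 1e-9, 1e3
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if total_extracted(mid) > S0:
        lo = mid
    else:
        hi = mid
lam0 = 0.5 * (lo + hi)

# Net price p(t) = U'(R(t)) - b = lam0 * e^{rt} grows at the discount rate r.
p0, p1 = lam0 * math.exp(0.0), lam0 * math.exp(r * 1.0)
```

By construction the net price grows at exactly the rate r, so extraction falls over time: the scarce resource is shifted toward the present at a pace governed by the discount rate, which is the content of Hotelling's rule.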
Hotelling model: Competitive problem
There exist n identical resource-extracting firms;
The optimization problem is then:
\max_{R(\cdot)} \int_0^T e^{-rt} \Bigl\{ \int_0^{nR(t)} p(x)\,dx - n\,b(R(t), S(t)) \Bigr\}\,dt
s.t. n\dot{S} = -nR(t), \quad S(0) = S_0, \quad S(T) = S_T (= 0).  (22)
Hotelling model: Monopolistic problem
There exists a monopoly on the resource;
The objective is profit maximization:
\max_{R(\cdot)} \int_0^T e^{-rt} \bigl\{ p(R)R(t) - b^{MONO}(R(t), S(t)) \bigr\}\,dt
s.t. \dot{S} = -R(t).  (23)
Hotelling model: Time to resource depletion
The parameter T above is free;
It is treated as an additional choice variable in the optimal control problem;
The condition for its optimality is that the optimized Hamiltonian is zero at the terminal time:
H(R^{*}(T), S^{*}(T), \lambda^{*}(T), T) = 0.  (24)
Hotelling model: Conclusion
Resource optimization is essentially intertemporal, i.e. dynamic;
A finite resource requires social planning in most cases;
For some parameter values the market can follow the optimal path (less frequently than not);
The intertemporal discount rate heavily influences the outcome;
Optimal control is a versatile tool for solving intertemporal resource extraction problems.
Hotelling model: Next time
Application of Hotelling's rule;
Paper: Ludwig (2014), "Hotelling rent, national oil companies and oil supply in the long run".