5 Kuhn-Tucker conditions

Consider a version of the consumer problem in which the quasilinear utility $\sqrt{x_1} + \frac{1}{4}x_2$ is maximised subject to $x_1 + x_2 = 1$. Mechanically applying the Lagrange multiplier/common-slopes technique produces $x_1 = 4$ and $x_2 = -3$. A negative quantity!

[Figure: the budget line and the tangency point, which lies at a negative value of $x_2$.]

The tangency solution violates an unspoken economic requirement, $x_2 \ge 0$. There is a systematic approach to inequality-constrained maximisation, a.k.a. concave programming or nonlinear programming. To be explicit, the full set of restrictions for the 2-good consumer problem is not $p_1 x_1 + p_2 x_2 = m$ but

$p_1 x_1 + p_2 x_2 - m \le 0, \quad x_1 \ge 0, \quad x_2 \ge 0.$

The aspect I emphasise is that the constraints are inequalities.

We have seen how the Lagrange multiplier method for dealing with one equality constraint extends naturally to the case of several constraints. If we wish to maximise $f(x)$ subject to $g(x) = b$ and $h(x) = c$, we work with the Lagrangian

$L(x, \lambda, \mu) = f(x) - \lambda[g(x) - b] - \mu[h(x) - c],$

with a multiplier for each constraint. (See Dixit 24ff.)

The inequality-constrained optimisation problem (SH 682) is:
Maximise the objective function $f(x)$, where $f: \mathbb{R}^n \to \mathbb{R}$, subject to the constraints $g_j(x) \le 0$, where $g_j: \mathbb{R}^n \to \mathbb{R}$ and $j = 1, \ldots, m$.

Terminology: The set of points satisfying the constraints is called the constraint set, admissible set or feasible set. If at the optimum $x^*$, $g_j(x^*) = 0$, then the $j$-th constraint is binding; if not, it is slack. If at least one constraint is binding then $x^*$ is on the boundary of the feasible set; if none are binding, $x^*$ is an interior point.

Example 5.1 In the consumer problems we have seen, the budget constraint is binding (all income is spent) because the consumer is never satiated. The constraint $x_2 \ge 0$ may be binding, as in the situation pictured at the beginning of this section.

The method of Kuhn-Tucker multipliers is a variation on the Lagrange multiplier method. If all the constraints are binding then the Lagrange method will produce the same results as Kuhn-Tucker. The Kuhn-Tucker approach involves forming the Lagrangean in more or less the usual way,

$L = f(x) - \sum_{j=1}^{m} \lambda_j g_j(x),$

with the same conditions on the derivatives with respect to the choice variables,

$\frac{\partial L}{\partial x_i} = 0, \quad i = 1, \ldots, n, \quad \text{or} \quad \nabla L = 0.$

However, the further conditions specify the interaction between the multipliers and the constraints. The complementary slackness conditions state

$\lambda_j \ge 0 \text{ for all } j; \qquad \lambda_j = 0 \text{ whenever } g_j(x^*) < 0.$

If a constraint is slack, the corresponding multiplier is zero.
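The stationarity, feasibility, nonnegativity and complementary slackness requirements can be checked mechanically at a candidate point. A minimal sketch in Python (the helper `kt_conditions_hold` is my own, not a standard routine):

```python
def kt_conditions_hold(grad_L, g_vals, lambdas, tol=1e-8):
    """Check the Kuhn-Tucker conditions at a candidate point.

    grad_L  -- gradient of the Lagrangian w.r.t. the choice variables
    g_vals  -- constraint values g_j(x*) (feasibility requires g_j <= 0)
    lambdas -- candidate multipliers lambda_j
    """
    stationarity = all(abs(d) < tol for d in grad_L)       # dL/dx_i = 0
    feasibility = all(g <= tol for g in g_vals)            # g_j(x*) <= 0
    nonneg = all(lam >= -tol for lam in lambdas)           # lambda_j >= 0
    # complementary slackness: lambda_j = 0 whenever g_j(x*) < 0
    slackness = all(abs(lam * g) < tol for lam, g in zip(lambdas, g_vals))
    return stationarity and feasibility and nonneg and slackness
```

For instance, for maximising a function with a single constraint that is slack at the candidate point, a strictly positive multiplier violates complementary slackness and the check fails.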
Solving this assortment of equalities and inequalities in the $n + m$ unknowns (choice variables and multipliers) is messier than the Lagrange method for equality-constrained problems. To see how it works, consider a transparent case.

Example 5.2 Maximise the (strictly concave) function $y = 1 - x^2$ subject to $x \le c$. The optimal $x$ can be either interior ($x < c$) or on the boundary ($x = c$) of the feasible set, depending on the value of $c$. The pictures show $c = 2, 0, -1$.

[Figures: $c = 2$: $x^* = 0$; $c = 0$: $x^* = 0$; $c = -1$: $x^* = -1$.]

To do the Kuhn-Tucker analysis, form the Lagrangean $L = 1 - x^2 - \lambda(x - c)$. The K-T conditions are

$\frac{\partial L}{\partial x} = -2x - \lambda = 0, \quad \text{i.e. } \lambda = -2x;$

$\lambda \ge 0;$

$\lambda = 0 \text{ if } x - c < 0.$
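These conditions can be solved for any value of $c$ by comparing the unconstrained stationary point with the boundary. A sketch (the helper name is mine) for the problem of maximising $1 - x^2$ subject to $x \le c$:

```python
def solve_example(c):
    """Maximise y = 1 - x^2 subject to x <= c.

    Returns (x_star, lambda_star). If the unconstrained maximiser x = 0
    is feasible, the constraint is slack (or just binding) and lambda = 0;
    otherwise the constraint binds, x = c, and lambda = -2c >= 0 follows
    from the stationarity condition -2x - lambda = 0.
    """
    if c >= 0:              # x = 0 is feasible
        return 0.0, 0.0
    return c, -2.0 * c      # binding constraint: x* = c < 0, lambda* = -2c > 0
```

The three pictured cases come out as $c = 2$: $x^* = 0$; $c = 0$: $x^* = 0$; $c = -1$: $x^* = -1$ with $\lambda = 2$.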
Remark 5.1 The conditions $\frac{\partial L}{\partial x} = \frac{d(1 - x^2)}{dx} - \lambda = 0$ and $\lambda \ge 0$ imply that the derivative of the objective at the maximum cannot be negative. It is obvious that the derivative cannot be negative at a maximum, because a reduction in $x$ (this is always feasible) would then raise the value of the objective.

Remark 5.2 Often the Kuhn-Tucker conditions are used not for finding a solution but for providing information about a solution. For example, in the general problem of maximising a strictly concave function subject to $x \le c$, the conditions imply that at a maximum the slope cannot be negative.

Now for the three examples, $c = -1, 0$ and $2$.

$c = -1$: there are 2 possibilities: $x = -1$ or $x < -1$. The latter is impossible, for it would imply that $\lambda = 0$ and hence $x = 0$, a contradiction. So $x = -1$.

$c = 0$: there are 2 possibilities: $x = 0$ or $x < 0$. As before, the latter is impossible. All the conditions are satisfied when $x = 0$.

$c = 2$: there are 2 possibilities: $x = 2$ or $x < 2$. The former is impossible, for it makes $-2x$ and hence $\lambda$ negative. So the constraint is slack, $\lambda = 0$ and $x = 0$.

6 Kuhn-Tucker theorem

There are lots of propositions linking the Kuhn-Tucker conditions to the existence of a maximum. The conditions can be interpreted as necessary conditions for a maximum (compare the treatment of Lagrange multipliers in 8.2). Or, making strong assumptions about $f$ and the $g_j$, as sufficient conditions. That line is taken in the next theorem.
Theorem 6.1 (Kuhn-Tucker sufficiency) Consider the inequality-constrained optimisation problem with concave objective and convex constraints: i.e. maximise $f(x)$ (where $f: \mathbb{R}^n \to \mathbb{R}$) subject to the constraints $g_j(x) \le 0$, where $g_j: \mathbb{R}^n \to \mathbb{R}$ and $j = 1, \ldots, m$. Define $L = f(x) - \sum_{j=1}^{m} \lambda_j g_j(x)$ and let $x^*$ be a feasible point. Suppose we can find numbers $\lambda_j$ such that $\nabla L(x^*) = 0$, $\lambda_j \ge 0$ for all $j$, and $\lambda_j = 0$ whenever $g_j(x^*) < 0$. Then $x^*$ solves the maximisation problem.

Proof. Since $f$ is concave, the supporting hyperplane theorem takes the form

$f(x) \le f(x^*) + \nabla f(x^*)(x - x^*).$

Using $\nabla L(x^*) = 0$, we can write this as

$f(x) \le f(x^*) + \sum_j \lambda_j \nabla g_j(x^*)(x - x^*).$

The aim is to show that the sum term on the right is not positive. The multipliers associated with slack constraints are zero, so we need only attend to the binding constraints, $g_j(x^*) = 0$. In such cases, since $g_j$ is convex, for any feasible $x$ we have

$0 \ge g_j(x) \ge 0 + \nabla g_j(x^*)(x - x^*).$

Because the $\lambda_j$ are nonnegative, $\sum_j \lambda_j \nabla g_j(x^*)(x - x^*)$ is not positive, as required.

Remark 6.1 Like Lagrange multipliers, these Kuhn-Tucker multipliers can be interpreted as measures of the sensitivity of the maximum value to changes in the constraint (10.2), but we won't go into the details. See SH 696.

Remark 6.2 This theorem can be extended to apply to quasi-concave objective functions. Dixit 97ff discusses the extension.
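The inequality chain in the proof can be spot-checked numerically. A sketch for the one-dimensional problem of maximising $1 - x^2$ subject to $x \le c$ with $c = -1$ (the KT point $x^* = -1$, $\lambda = 2$ and the evaluation grid are my own choices):

```python
# f(x) = 1 - x^2 (concave), g(x) = x - c <= 0 with c = -1.
# KT point: x_star = -1 with multiplier lam = 2 (binding constraint).
c, x_star, lam = -1.0, -1.0, 2.0

def f(x):
    return 1.0 - x * x

grad_g = 1.0  # g(x) = x - c, so g'(x) = 1 everywhere

# On the feasible set x <= c the proof asserts
#   f(x) <= f(x_star) + lam * g'(x_star) * (x - x_star)   (concavity + KT)
#   lam * g'(x_star) * (x - x_star) <= 0                  (convexity of g)
feasible = [c - 0.01 * k for k in range(500)]
ok = all(
    f(x) <= f(x_star) + lam * grad_g * (x - x_star) + 1e-12
    and lam * grad_g * (x - x_star) <= 1e-12
    for x in feasible
)
```

Both inequalities hold on the grid, so in particular $f(x) \le f(x^*)$ for all tested feasible points, as the theorem asserts.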
7 Quasi-linear utility again

Return to the quasi-linear utility case, now incorporating all the inequality constraints and including prices and income:

Maximise $u(x) = \sqrt{x_1} + \alpha x_2$, $\alpha > 0$, s.t. $p_1 x_1 + p_2 x_2 - m \le 0$, $x_1 \ge 0$, $x_2 \ge 0$.

The Lagrangean $L$ is

$\sqrt{x_1} + \alpha x_2 - \lambda_0(p_1 x_1 + p_2 x_2 - m) - \lambda_1(-x_1) - \lambda_2(-x_2).$

The Kuhn-Tucker conditions are

$\frac{\partial L}{\partial x_1} = \frac{1}{2\sqrt{x_1}} - \lambda_0 p_1 + \lambda_1 = 0$

$\frac{\partial L}{\partial x_2} = \alpha - \lambda_0 p_2 + \lambda_2 = 0$

$\lambda_0, \lambda_1, \lambda_2 \ge 0$

$\lambda_0(p_1 x_1 + p_2 x_2 - m) = 0$

$\lambda_i(-x_i) = 0, \quad i = 1, 2.$

1. Because the objective function is strictly increasing in $x_1$ and $x_2$, the budget constraint is binding, so $\lambda_0 > 0$.

2. The constraint $x_1 \ge 0$ cannot bind, for then $\frac{1}{2\sqrt{x_1}}$ would be infinitely large. So $\lambda_1 = 0$.

3. The other constraint may or may not bind.

Putting this information about the budget constraint and $\lambda_1 = 0$ into the Kuhn-Tucker conditions:

$\frac{\partial L}{\partial x_1} = \frac{1}{2\sqrt{x_1}} - \lambda_0 p_1 = 0,$

$\frac{\partial L}{\partial x_2} = \alpha - \lambda_0 p_2 + \lambda_2 = 0,$

$p_1 x_1 + p_2 x_2 - m = 0,$

$\lambda_2(-x_2) = 0.$

Consider the possibility $x_2 = 0$: from the budget constraint we get $x_1 = m/p_1$
and so

$\frac{\partial L}{\partial x_1} = \frac{1}{2\sqrt{m/p_1}} - \lambda_0 p_1 = 0 \quad \Rightarrow \quad \lambda_0 = \frac{1}{2p_1}\left(\frac{p_1}{m}\right)^{1/2} = \frac{1}{2\sqrt{p_1 m}}.$

Putting this value into $\partial L/\partial x_2 = 0$,

$\frac{\partial L}{\partial x_2} = \alpha - \frac{p_2}{2\sqrt{p_1 m}} + \lambda_2 = 0.$

But as $\lambda_2 \ge 0$, it must be the case that when $x_2 = 0$, $\alpha$ satisfies

$\alpha \le \frac{p_2}{2\sqrt{p_1 m}}.$

So small values of $\alpha$ produce a corner solution. (The interior solution $x_1, x_2 > 0$ is associated with larger values of $\alpha$ and corresponds to the case $\lambda_1 = \lambda_2 = 0$.) It is reasonable that the consumer always consumes some of the first good, because marginal utility w.r.t. it approaches infinity as $x_1 \to 0$, while marginal utility w.r.t. the other good is constant at $\alpha$.

8 Dynamic optimisation

In dynamic optimisation a time-path is chosen. Simple dynamic optimisation problems can be treated by the same methods as static optimisation problems. However, dynamic problems have special features which often suggest a different treatment.

Example 8.1 Consider a simple $T$-period problem where a given stock $b_0$ is consumed over $T$ periods (formally
a variation on the consumer problem with logarithmic utility):

$\max_x U(x) = \sum_{t=1}^{T} \delta^t \ln x_t \quad \text{s.t.} \quad \sum_{t=1}^{T} x_t = b_0 \text{ (fixed)} \qquad (*)$

Form the Lagrangean

$L(x, \lambda) = \sum_{t=1}^{T} \delta^t \ln x_t - \lambda\left(\sum_{t=1}^{T} x_t - b_0\right)$

and go through the usual steps to obtain a solution that involves

$x_t = \delta x_{t-1} \quad \text{for } t = 2, \ldots, T.$

This kind of dynamic equation is called a difference equation and is characteristic of the discrete-time formulation. If the problem were formulated in continuous time, a differential equation would appear at this point. In more complicated problems diagonalising methods are used for investigating the properties of the solution. One difference between static and dynamic optimisation is that dynamic equations appear naturally in the latter. A second difference is that multiple constraints (one for each time period) are routine.

Example 8.2 (continues Ex. 8.1) In complicated problems it is usually convenient to specify a budget constraint for each time period. Thus the constraint (*) would appear as:

$b_t = b_{t-1} - x_t, \quad t = 1, \ldots, T; \quad b_0 \text{ fixed} \qquad (**)$

This law of motion describes how the available chocolate stock evolves: the bars of chocolate left over at the end of period $t$ equal the bars available at the end of period $t-1$ less what has been consumed in period $t$. (*) collapses these dynamic equations into one constraint, $\sum_{t=1}^{T} x_t = b_0$, eliminating $b_1, b_2, \ldots, b_T$.
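The difference equation $x_t = \delta x_{t-1}$, combined with the resource constraint $\sum_t x_t = b_0$, pins down the whole path in closed form as a geometric sequence. A sketch, with illustrative parameter values of my own, that also applies the period-by-period law of motion:

```python
def chocolate_path(b0, delta, T):
    """Optimal path for max sum_t delta^t ln(x_t) s.t. sum_t x_t = b0.

    The first-order condition gives x_t = delta * x_(t-1); summing the
    resulting geometric sequence against the resource constraint pins
    down the first-period level x_1 = b0 * (1 - delta) / (1 - delta^T).
    """
    x1 = b0 * (1.0 - delta) / (1.0 - delta ** T)
    return [x1 * delta ** (t - 1) for t in range(1, T + 1)]

def stock_path(b0, xs):
    """Apply the law of motion b_t = b_(t-1) - x_t period by period."""
    bs = [b0]
    for x in xs:
        bs.append(bs[-1] - x)
    return bs

xs = chocolate_path(b0=10.0, delta=0.9, T=5)
bs = stock_path(10.0, xs)
```

The path exhausts the stock exactly ($b_T = 0$), and each period's consumption is $\delta$ times the previous period's, as the first-order condition requires.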
The Lagrange method extends to multiple constraints by introducing a multiplier for each constraint. Thus here

$L(x, \lambda) = \sum_{t=1}^{T} \delta^t \ln x_t - \sum_{t=1}^{T} \lambda_t (x_t + b_t - b_{t-1}).$

There are $2T$ equations to solve: $T$ of the form $\partial L/\partial x_t = 0$ and $T$ making up (**). Just as there is a sequence $\{x_1, \ldots, x_T\}$, there is a sequence of multipliers $\{\lambda_1, \ldots, \lambda_T\}$. The usual algebra produces conditions like

$\frac{x_t}{x_{t-1}} = \delta \frac{\lambda_{t-1}}{\lambda_t}.$

We already know that $x_t = \delta x_{t-1}$, and it turns out that $\lambda_t$ is the same for all time periods.

A third difference between static and dynamic optimisation is the existence of specialised techniques for treating the latter, including (Pontryagin's) maximum principle and dynamic programming.

8.1 Maximum principle

The maximum principle is widely used in macroeconomics, usually in its continuous-time form. I will go through a discrete-time version to suggest where the continuous-time forms come from. A fairly general formulation, covering the chocolate stock example and extensions to include production and investment, involves the choice variables $c(1), \ldots, c(T)$; these symbols are easier on the eye than $c_1$ etc. The notation reflects the terminology of control theory. There is a state variable $s$ governed by an equation of motion or state equation. The problem is to choose a control variable sequence $c$ to maximise a value function, which may involve one or both of the state variable and the control variable:

$\max_c V(s, c) = \sum_{t=1}^{T} v(s(t), c(t))$
s.t.

$s(t+1) - s(t) = f(s(t), c(t)) \qquad (***)$

for $t = 1, \ldots, T$, and with $s(1)$ and $s(T+1)$ fixed at $s_1$ and $s_{T+1}$ respectively. (Other end conditions are possible.) The Lagrangian is

$L = \sum_{t=1}^{T} v(s(t), c(t)) - \sum_{t=1}^{T} \lambda(t)\left[s(t+1) - s(t) - f(s(t), c(t))\right].$

In optimal control the $\lambda$'s are called co-state variables. Differentiating w.r.t. $c(t)$ and $s(t)$ (writing partial derivatives using subscripts), the first-order conditions are

$v_{c(t)} + \lambda(t) f_{c(t)} = 0, \quad t = 1, \ldots, T$

$v_{s(t)} - \lambda(t-1) + \lambda(t) + \lambda(t) f_{s(t)} = 0, \quad t = 2, \ldots, T.$

These conditions can be obtained as first-order conditions involving a new function $H$ (the Hamiltonian), defined for all $t$ by

$H(s(t), c(t), \lambda(t)) \equiv v(s(t), c(t)) + \lambda(t) f(s(t), c(t)).$

Differentiating w.r.t. $c(t)$ and $s(t)$:

$\frac{\partial H}{\partial c(t)} = 0, \quad t = 1, \ldots, T$

$\lambda(t) - \lambda(t-1) = -\frac{\partial H}{\partial s(t)}, \quad t = 2, \ldots, T.$

8.1.1 In continuous time

In the more usual continuous-time formulation, the problem is to choose the time path of consumption $c(t)$ to maximise

$V = \int_0^T v(s(t), c(t)) \, dt$

s.t.

$\frac{ds}{dt} = f(s(t), c(t)).$
The first-order conditions for a maximum are conditions on the partial derivatives of $H$,

$H(t, s(t), c(t), \lambda(t)) = v(t, s(t), c(t)) + \lambda(t) f(t, s(t), c(t)).$

The first-order conditions are

$\frac{\partial H}{\partial c} = 0, \qquad \frac{d\lambda}{dt} = -\frac{\partial H}{\partial s}.$

Example 8.3 Logarithmic chocolate in continuous time. Choose a consumption path $x(t)$ to maximise

$U(x(t)) = \int_0^T \ln x(t) \, e^{-\rho t} \, dt$

subject to (writing $k$ for the stock)

$\dot{k} = -x, \quad k(0) \text{ given}, \quad k(T) \text{ free}.$

In this case the chocolate stock is the state variable (its derivative appears in the constraint) and consumption is the control variable. The choice of labels may not seem very natural: you control the chocolate stock by consuming chocolate. In this example the state variable does not appear in the objective function. The Hamiltonian is

$H(t, k(t), x(t), \lambda(t)) = \ln x(t) \, e^{-\rho t} - \lambda(t) x(t).$

The first-order conditions are

$\dot{k} = \frac{\partial H}{\partial \lambda} = -x(t)$

$\dot{\lambda} = -\frac{\partial H}{\partial k} = 0$

$\frac{\partial H}{\partial x} = \frac{e^{-\rho t}}{x(t)} - \lambda(t) = 0.$

The second condition, $\dot{\lambda} = 0$, is so simple because $k$ does not appear in the Hamiltonian; it implies that $\lambda(t)$ is constant. So from the third condition,

$x(t) \propto e^{-\rho t}.$

The time path of consumption is exponentially declining, and so is the chocolate stock.
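If one adds the terminal condition that the stock is exhausted, $k(T) = 0$ (an assumption of mine; the conditions above leave the constant $\lambda$ undetermined), the level of the path is pinned down: $x(t) = \rho\, k(0)\, e^{-\rho t} / (1 - e^{-\rho T})$. A numeric sketch with illustrative parameter values:

```python
import math

def consumption_path(k0, rho, T, n=2000):
    """Evaluate x(t) = rho*k0*exp(-rho*t)/(1 - exp(-rho*T)) on a grid
    over [0, T]: the exponentially declining consumption path with the
    constant lambda chosen so that the stock is exhausted, k(T) = 0."""
    scale = rho * k0 / (1.0 - math.exp(-rho * T))
    ts = [T * i / n for i in range(n + 1)]
    return ts, [scale * math.exp(-rho * t) for t in ts]

k0, rho, T = 10.0, 0.05, 20.0
ts, xs = consumption_path(k0, rho, T)
# total consumption (trapezoidal rule) should exhaust the initial stock
total = sum(0.5 * (xs[i] + xs[i + 1]) * (ts[i + 1] - ts[i])
            for i in range(len(ts) - 1))
```

The numerically integrated consumption equals the initial stock (up to discretisation error), and the path declines monotonically, matching $x(t) \propto e^{-\rho t}$.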
THE END