Appl Math Optim (2009) 59:99–146
DOI 10.1007/s00245-008-9050-

Optimal Compensation with Hidden Action and Lump-Sum Payment in a Continuous-Time Model

Jakša Cvitanić · Xuhu Wan · Jianfeng Zhang

Published online: 6 June 2008
© Springer Science+Business Media, LLC 2008

Abstract We consider a problem of finding optimal contracts in continuous time, when the agent's actions are unobservable by the principal, who pays the agent with a one-time payoff at the end of the contract. We fully solve the case of quadratic cost and separable utility, for general utility functions. The optimal contract is, in general, a nonlinear function of the final outcome only, while in the previously solved cases, for exponential and linear utility functions, the optimal contract is linear in the final output value. In a specific example we compute, the first-best principal's utility is infinite, while it becomes finite with hidden action, and it is increasing in the value of the output. In the second part of the paper we formulate a general mathematical theory for the problem. We apply the stochastic maximum principle to give necessary conditions for optimal contracts. Sufficient conditions are hard to establish, but we suggest a way to check sufficiency using non-convex optimization.

Keywords Hidden action · Moral hazard · Second-best optimal contracts and incentives · Principal-agent problems · Stochastic maximum principle · Forward-backward SDEs

J. Cvitanić (✉) Caltech, M/C 228-77, 1200 E. California Blvd., Pasadena, CA 91125, USA. e-mail: cvitanic@hss.caltech.edu
X. Wan, Department of Information and Systems Management, HKUST Business School, Clear Water Bay, Kowloon, Hong Kong. e-mail: imwan@ust.hk
J. Zhang, Department of Mathematics, USC, 3620 S. Vermont Ave., KAP 108, Los Angeles, CA 90089-2532, USA. e-mail: jianfenz@usc.edu
1 Introduction

This paper builds a general theory of optimal contracts between two parties in continuous time, when the action of one party is not observable. In most existing continuous-time models of this type it is assumed that the parties have exponential utility functions, while we allow general utility functions. In the economics literature, these types of problems are known as Principal-Agent problems, with a principal who hires an agent to perform a certain task by controlling a given stochastic process. For example, shareholders hire a company executive whose effort has an effect on the company's stock price, or investors hire a portfolio manager to manage their wealth from investments. In a previous paper [5], we study the case in which the actions of the agent are observed by the principal, the so-called first-best case. Here, we consider the second-best case of hidden actions, or moral hazard, in which the agent's control of the drift of the process is unobserved by the principal. Thus, it is harder (more expensive) for the principal to provide incentives to the agent in order to make her apply high effort.

The seminal paper on this topic in the continuous-time framework is Holmstrom and Milgrom [3], which showed that if both the principal and the agent have exponential utilities, then the optimal contract is linear. Schättler and Sung [4] generalized those results using a dynamic programming and martingales approach of Stochastic Control Theory, and Sung [6] showed that the linearity of the optimal contract still holds even if the agent can control the volatility, too. A nice survey of the literature is provided by Sung [7]. Our model is similar to those papers, but we obtain further results also for non-exponential utility functions. We use a so-called weak formulation, meaning that the agent, with her actions, influences the distribution of the outcome process, or, more precisely, its rate of return. This approach was first suggested by Mirrlees [6, 7], as explained nicely in [, p. 77]. We will illustrate this approach first in a simple single-period model.
Different variations and applications of the problem are considered in [3, 7, 8, 4, , 3, 8, 9]. See also [8, 9], and []. The paper closest to ours is Williams [9]. That paper uses the stochastic maximum principle to characterize the optimal contract in principal-agent problems with hidden information, in the case without volatility control. It focuses on the case of a continuously paid reward to the agent, while we study the case when the reward is paid once, at the end of the contract. Moreover, we prove our results from scratch, thus getting them under weaker conditions. (Williams [9] also deals with the so-called hidden states case, which we do not discuss here.) A very nice paper with a setting in which the payment to the agent is continuous is Sannikov [3]. That paper has a risk-neutral principal, and the agent has an arbitrary utility function.

While working on the general theory, we have been able to identify a special framework in which the problem is tractable even with general utility functions: under the assumption of a cost function which is quadratic in the agent's effort and under a separable utility, we find an explicit solution for the contract payoff. To the best of our knowledge, this is the first time that the optimal second-best lump-sum contract payoff is explicitly described in a continuous-time contracting problem with hidden action, other than for exponential and linear utilities. The solution depends only on
the final outcome (usually in a nonlinear way), and not on the history of the controlled process, a fact which was known before for exponential and linear utilities. The contract is determined from an equation which extends the classical Borch rule for marginal utilities of the first-best contract. It is an increasing function of the final value of the output, thus in the spirit of real-life contracts, such as call option contracts in executive compensation. While the optimal payoff is explicitly given as a solution to a nonlinear equation, the agent's optimal effort is obtained as a part of a solution to a simple Backward Stochastic Differential Equation (BSDE), which, in a Markovian framework, boils down to solving a linear parabolic PDE, a standard heat equation. In a concrete example that we compute, with a risk-neutral principal (linear utility) and a risk-averse agent (logarithmic utility), the effort is an increasing function of the current value of the output. Interestingly, the first-best case for that example leads to infinite utility for the principal.

For the general theory, we provide a detailed discussion on how to check whether the necessary conditions we find are also sufficient. In particular, this is true for the separable utility case. Our method is based on studying the agent's value function, that is, her remaining expected utility process. In the continuous-time stochastic control literature this method has been known at least since Davis and Varaiya [6]. In dynamic Principal-Agent problems in discrete time, it is used, among others, in [,, 5]. The theory for general non-separable utility functions is quite hard. If the necessary conditions determine a unique control process, then, if we proved existence of the optimal control, we would know that the necessary conditions are also sufficient. The existence of an optimal control is hard to establish because, in general, the problem is not concave. It is related to the existence of a solution to Forward-Backward Stochastic Differential Equations (FBSDEs), possibly fully coupled. However, it is not known under which general conditions these equations have a solution.
The FBSDE theory is presented in the monograph [5]. The method of the stochastic maximum principle that we use is covered in the book [3]. For other applications of the stochastic maximum principle in finance, see the recent book by Oksendal and Sulem [].

The paper is organized as follows: In Sec. 2 we set up the model. In Sec. 3 we analyze the tractable case of quadratic cost and separable utility. In Sec. 4 we find necessary conditions for the agent's problem and the principal's problem in the general case. In Sec. 5 we discuss how to establish sufficiency, and illustrate with the Holmstrom-Milgrom example. We conclude in Sec. 6, and provide longer proofs in the Appendix.

2 The Setting

2.1 Model with Symmetric Information

We first describe the model appropriate when considering the full information, first-best case, in which the agent's actions are observed. Let $\{W_t\}_{0 \le t \le T}$ be a standard Brownian motion on a probability space $(\Omega, \mathcal{F}, P)$, and denote by $\mathbb{F}^W = \{\mathcal{F}^W_t\}_{0 \le t \le T}$ its augmented filtration on the interval $[0, T]$. The
controlled state process, or output process, is denoted $X = X^{u,v}$, and its dynamics are given by

$dX_t = u_t v_t\,dt + v_t\,dW_t, \qquad X_0 = x.$  (2.1)

Here, for simplicity, we assume all the processes are one-dimensional. The $\mathbb{F}^W$-adapted processes $u$ and/or $v$ may be controlled by an agent, who is hired by a principal to control the output process $X$. For example, if $u$ is fixed and $v$ is controlled, the process $X$ corresponds exactly to the value process of a portfolio which invests in a stock and a bank account, whose manager decides which amount $v_t$ of money to hold in the stock at time $t$, and keeps the rest in the bank account. The value $u$ then corresponds to the expected return rate of the stock above the risk-free rate. This is a well-known and standard model in Financial Mathematics. If the manager can affect the mean return through her effort, for example by carefully choosing the assets in which to invest, then we can assume that $u$ is also chosen by the manager. A more general model would be the standard general model of Stochastic Control Theory, given by

$dX_t = b(t, X_t, u_t, v_t)\,dt + \sigma(t, X_t, v_t)\,dW_t.$  (2.2)

When $\sigma$ is nondegenerate, one can always set $\tilde v_t = \sigma(t, X_t, v_t)$, $\tilde u_t = b(t, X_t, u_t, v_t)\sigma^{-1}(t, X_t, v_t)$. Then (2.2) becomes (2.1). Moreover, under some monotonicity conditions on $b, \sigma$, one can write $u, v$ as functions of $(X, \tilde u, \tilde v)$. In this sense, (2.1) and (2.2) are equivalent. We always consider models of type (2.1).

The full information case, in which the principal observes $X, u, v$, and thus also $W$, was studied in Cvitanić et al. [5]. In the so-called hidden action case, the principal can only observe the controlled process $X$, but not the underlying Brownian motion or the agent's control $u$ (so the agent's action $u$ is hidden to the principal). We present the appropriate model for this, second-best case, in the following section. At time $T$, the principal gives the agent compensation in the form of a payoff $C_T = F(X_\cdot)$, where $F : C[0, T] \to \mathbb{R}$ is a (deterministic) mapping.
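As a quick illustration of the output dynamics (2.1), the following sketch (not from the paper; all parameter values are made up) simulates the process by an Euler scheme with constant controls $u, v$, for which the scheme is exact: $X_T = x + uvT + vW_T$.

```python
import math
import random

# Illustrative Euler scheme for dX_t = u_t v_t dt + v_t dW_t, X_0 = x,
# with constant controls u and v, for which Euler is exact:
#   X_T = x + u*v*T + v*W_T.
# All parameter values below are made up for illustration.

def simulate_output(x, u, v, T=1.0, n_steps=252, rng=None):
    rng = rng or random.Random(0)
    dt = T / n_steps
    X, W = x, 0.0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
        X += u * v * dt + v * dW
        W += dW
    return X, W

x, u, v, T = 1.0, 0.3, 0.2, 1.0
X_T, W_T = simulate_output(x, u, v, T)
closed_form = x + u * v * T + v * W_T  # exact for constant u, v
print(abs(X_T - closed_form) < 1e-10)  # True
```

With state-dependent controls the same loop applies, with $u, v$ re-evaluated at each step; the scheme is then only first-order accurate.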
We note that since the principal is assumed to observe the process $X$ continuously, the volatility control $v$ can also be observed by the principal, through the quadratic variation of $X$, under the assumption $v \neq 0$. Because he can verify what volatility has been used, for a given process $v$ the principal can design the payoff $F$ in order to induce the agent to implement it (or to force her to do so by harsh penalization). In this sense, we may consider $v$ as a control chosen by the principal instead of by the agent, as is usual in the literature when the action is observed. We say that the pair $(F, v)$ is a contract. The agent's problem is that, given a contract $(F, v)$, she needs to choose the control $u$ (over some admissible set which will be specified later) in order to maximize her utility

$V_1(F, v) = \sup_u V_1(u; F, v) = \sup_u E[U_1(F(X^{u,v}_\cdot), G^{u,v}_T)].$
Here,

$G^{u,v}_t = \int_0^t g(s, X_s, u_s, v_s)\,ds$  (2.3)

is the accumulated cost of the agent, and, with a slight abuse of notation, we use $V_1$ both for the objective function and for its maximum. We say a contract $(F, v)$ is implementable if there exists an effort process $u^{F,v}$ which maximizes the agent's utility given the contract, that is, is such that

$V_1(u^{F,v}; F, v) = V_1(F, v).$  (2.4)

The principal maximizes her utility

$V_2 = \max E[U_2(X^{u^{F,v},v}_T - F(X^{u^{F,v},v}_\cdot))],$  (2.5)

where the maximum is over all implementable contracts $(F, v)$ and corresponding agent's optimal efforts $u^{F,v}$, such that the following participation constraint, or individual rationality (IR) constraint, holds:

$V_1(F, v) \ge R.$  (2.6)

Note that typically, for a given contract $(F, v)$, the corresponding optimal effort $u^{F,v}$ of the agent will be unique, in which case the principal only maximizes over $(F, v)$. The constant $R$ is the reservation utility of the agent and represents the value of the agent's outside opportunities, the minimum value she requires to accept the job. Functions $U_1$ and $U_2$ are the utility functions of the agent and the principal. The typical cases studied in the literature are the separable utility case, with $U_1(x, y) = U_1(x) - y$, and the non-separable case, with $U_1(x, y) = U_1(x - y)$, where, with a slight abuse of notation, we use the same notation $U_1$ also for the function of one argument only. We could also have the same generality for $U_2$, but this makes less sense from the economics point of view.

Remark 2.1 A standard way to write the principal's problem is the one of looking for a so-called constrained Pareto optimal solution, which is such that no other contract satisfying the required constraints could make both parties better off. More precisely, the problem is written as

$\sup_{F, v, u} \big[ E[U_2(X_T - F)] + \lambda V_1(u; F, v) \big]$

subject to the constraint (2.4).
When we restrict ourselves to implementable contracts, this formulation is equivalent to our formulation above, because the choice of the Lagrange multiplier $\lambda$ is equivalent to the choice of the minimal agent's utility $R$, while the constraint (2.4) is absorbed into the principal's problem by setting the effort $u$ in the principal's problem (2.5) equal to $u^{F,v}$, that is, equal to an effort which the agent will choose optimally given a contract $(F, v)$. While our formulation is less
standard for symmetric information problems, it is consistent with the approach we will use for solving the problem in the case of hidden action: we will first characterize the optimal effort of the agent for a given contract, and then solve the principal's problem analogous to (2.5).

Remark 2.2 Our approach also works for the framework in which the output process $X = (X^1, \ldots, X^d)$, $d \ge 1$, is a $d$-dimensional vector. In this case the effort $u$ and the Brownian motion $W$ would also be $d$-dimensional vectors, while $v$ would be a $d \times d$ matrix. The principal's utility may take the general form $U_2(X_T, C_T)$, or a special form $U_2(\sum_i X^i_T - C_T)$, as in [3]. By observing $X$ continuously, the principal also observes its quadratic variation matrix $v v^\top$, where $v^\top$ denotes the transpose of $v$. Notice that the principal cannot observe $v$ directly. If we assume further that, given $v v^\top$, the particular choice of $v$ does not change the value of the cost $G_T$, then the principal essentially observes $v$, and one can extend all the results in this paper without substantial difficulty.

2.2 Model with Hidden Action

For the origins, importance and numerous applications of the discrete-time Principal-Agent theory with hidden action (and more), we refer the reader to the excellent book [4]. The original motivation behind continuous-time models in the seminal paper [3] was to show that if the agent has a rich strategy space, then she can undermine complicated contracts, and the principal is forced to use simple contracts, as is frequently the case in practice. Before their paper, there was a gap between the theory, in which complex contracts were optimal, and practice, in which often relatively simple contracts are observed. They also show that their continuous-time model, in which the drift is controlled but not the volatility, can be obtained as a limit of a model in which the outcomes have a multinomial distribution, and in which the agent chooses the probabilities of the outcomes, while the possible outcome values are fixed.
We essentially adopt the Holmstrom and Milgrom continuous-time model for the underlying dynamics, as do most other existing papers in continuous time.

2.2.1 A Single-Period Model with Hidden Action

In order to make it easier to understand our continuous-time model, we first present a simple problem in a single-period model. This is a familiar model, and can be found, for example, in the book [4]. This modeling approach, of the action determining the distribution of the output, was originally suggested by Mirrlees [6, 7] and extended in []. It was driven partly by tractability, as Mirrlees [6, 7] has shown that without this approach even simple problems may be hard, or may not have a solution. In this model we describe the contract which is optimal among linear contracts, while in the analogous continuous-time model the same linear contract turns out to be optimal even if we allow general contracts. This is an advantage of continuous time, as discovered first in [3].

The agent picks the distribution of the final output $X_T$ by her action, unobserved by the principal. We assume that under a fixed probability $P = P^0$, we have

$X_T = X_0 + vB,$
where $v$ is fixed and $B$ is a random variable with a standard normal distribution. By applying action $u$, the probability $P^0$ changes to $P^u$, under which the distribution of $B$ is normal with mean $u$ and variance one. Therefore, under $P^u$, $X_T$ has mean $X_0 + uv$. We consider only contracts which are linear in $X_T$, or, equivalently, in $B$:

$C_T = k_0 + k_1 B.$

We assume a quadratic cost function, $g(u) = u^2/2$, and exponential utility functions. Denoting by $E^u$ the expectation operator under the probability $P^u$, we define the agent's problem to be the minimization of

$E^u\big[e^{-\gamma_1(k_0 + k_1 B - u^2/2)}\big] = e^{-\gamma_1(k_0 - u^2/2 + k_1 u - k_1^2 \gamma_1/2)},$

where we used the fact that

$E^u[e^{kB}] = e^{ku + \frac{1}{2}k^2}.$  (2.7)

Hence, the optimal action $u$ is

$u = k_1.$  (2.8)

We now describe a method which we will also use in the continuous-time case. We suppose that the principal decides to provide the agent with a contract payoff $C_T$ which results in (optimal) expected utility of $R$ for the agent. This means that, using (2.7) and (2.8),

$R = -\frac{1}{\gamma_1} E^u\big[e^{-\gamma_1(C_T - u^2/2)}\big] = -\frac{1}{\gamma_1} e^{-\gamma_1(k_0 + u^2/2 - \frac{\gamma_1}{2}u^2)}.$  (2.9)

Computing $e^{-\gamma_1 k_0}$ from this, and using $C_T = k_0 + uB$, we can write

$-\frac{1}{\gamma_1} e^{-\gamma_1 C_T} = R\, e^{-\gamma_1(\frac{1}{2}[\gamma_1 - 1]u^2 + uB)}.$  (2.10)

Thus, we get a representation of the contract payoff in terms of the agent's promised utility $R$ and the source of uncertainty $B$, something we will find helpful later on, too. Denoting the principal's risk aversion by $\gamma_2$, using $e^{-\gamma_2 C_T} = (e^{-\gamma_1 C_T})^{\gamma_2/\gamma_1}$, $X_T = X_0 + vB$ and (2.10), we can write the principal's expected utility as

$E^u[U_P(X_T - C_T)] = -\frac{1}{\gamma_2}(-\gamma_1 R)^{\gamma_2/\gamma_1}\, E^u\big[e^{-\gamma_2(X_0 + vB - \frac{1}{2}[\gamma_1 - 1]u^2 - uB)}\big],$

which can be computed as

$-\frac{1}{\gamma_2}(-\gamma_1 R)^{\gamma_2/\gamma_1}\, e^{-\gamma_2(X_0 - \frac{1}{2}\gamma_2 v^2 + (1+\gamma_2)uv - \frac{1}{2}u^2(1+\gamma_1+\gamma_2))}.$

Maximizing over $u$, we get the optimal $u$ as

$u = \frac{1+\gamma_2}{1+\gamma_1+\gamma_2}\,v.$  (2.11)
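The closed form for the optimal action just derived can be checked numerically. The sketch below (illustrative; the parameter values are assumptions, not from the paper) maximizes the exponent of the principal's expected utility, $X_0 - \gamma_2 v^2/2 + (1+\gamma_2)uv - u^2(1+\gamma_1+\gamma_2)/2$, over a grid of $u$, and compares the maximizer with $(1+\gamma_2)v/(1+\gamma_1+\gamma_2)$.

```python
# Grid-search check of the second-best action in the single-period model.
# principal_exponent(u) is the exponent computed in the text; the values
# of x0, v, gamma1 (agent) and gamma2 (principal) are illustrative only.

def principal_exponent(u, x0, v, g1, g2):
    return x0 - g2 * v**2 / 2 + (1 + g2) * u * v - u**2 * (1 + g1 + g2) / 2

def argmax_on_grid(f, lo, hi, n=200_001):
    best_u, best_val = lo, f(lo)
    for i in range(1, n):
        u = lo + (hi - lo) * i / (n - 1)
        val = f(u)
        if val > best_val:
            best_u, best_val = u, val
    return best_u

x0, v, g1, g2 = 1.0, 0.5, 2.0, 1.0
u_grid = argmax_on_grid(lambda u: principal_exponent(u, x0, v, g1, g2), 0.0, 2.0)
u_closed = (1 + g2) * v / (1 + g1 + g2)  # = 0.25 for these values
print(abs(u_grid - u_closed) < 1e-4)  # True
```

Since the exponent is strictly concave in $u$, the grid maximizer is unique and converges to the closed-form value as the grid is refined.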
If the principal could choose $v$, he would optimize the above expression over $v$, too. If there were symmetric information, and the model were $X_T = X_0 + uv + vB$ (under a single, fixed probability $P$), in a similar way it can be computed that the optimal action is $u = v$, and that the contract is of the form

$C_T = c + \frac{\gamma_2}{\gamma_1 + \gamma_2}\, X_T.$

We see that the sensitivity $\frac{\gamma_2}{\gamma_1+\gamma_2}$ of the first-best contract is less than the sensitivity $k_1/v = \frac{1+\gamma_2}{1+\gamma_1+\gamma_2}$ of the second-best contract. This is not surprising: when the action is unobservable, the principal is forced to try to induce more effort by offering higher incentives. As mentioned above, when we illustrate our theory with the Holmstrom-Milgrom [3] problem in continuous time, we will see that the above second-best contract is actually optimal among all contracts, linear or not.

2.2.2 A Continuous-Time Model with Hidden Action

Similarly to our single-period model, Holmstrom-Milgrom [3] assume that the agent is choosing a probability measure under which the output process evolves (as can be seen in particular in their proof of Theorem 6 in the appendix of that paper). Schättler and Sung [4] make this approach precise and rigorous, based on the so-called weak formulation and a martingale approach to stochastic control. We adopt the same formulation, and describe it in more detail next.

Let $B$ be a standard Brownian motion under some probability space with probability measure $Q$, and let $\mathbb{F}^B = \{\mathcal{F}^B_t\}_{0 \le t \le T}$ be the filtration on $[0, T]$ generated by $B$. For any $\mathbb{F}^B$-adapted square integrable process $v > 0$, let

$X_t = x + \int_0^t v_s\,dB_s.$  (2.12)

Then $v$ is a functional of $X$, $v_t = \tilde v_t(X_\cdot)$, and it obviously holds that $\mathcal{F}^X_t = \mathcal{F}^B_t$. Moreover, the effort process $u$ is assumed to be a functional $\tilde u$ of $X$. Given such a $\tilde u$, we define

$u_t = \tilde u_t(X_\cdot); \quad B^u_t = B_t - \int_0^t u_s\,ds; \quad M^u_t = \exp\Big(\int_0^t u_s\,dB_s - \frac{1}{2}\int_0^t u_s^2\,ds\Big);$  (2.13)

and a new probability measure $Q^u$ by

$\frac{dQ^u}{dQ} = M^u_T.$
Then we know, by the Girsanov Theorem, that under certain conditions $B^u$ is a $Q^u$-Brownian motion and

$dX_t = v_t\,dB_t = (\tilde u_t \tilde v_t)(X_\cdot)\,dt + \tilde v_t(X_\cdot)\,dB^u_t.$

That is, in the language of Stochastic Analysis, the triple $(X, B^u, P^u)$ is a weak solution of the SDE

$dX_t = (\tilde u_t \tilde v_t)(X_\cdot)\,dt + \tilde v_t(X_\cdot)\,dW_t.$

Compared to (2.1), we note that in the weak formulation we consider the functionals $(\tilde u, \tilde v)$ as controls. Accordingly, we consider $(F, \tilde v)$ as a contract offered by the principal. The choice of $\tilde u$ corresponds to the choice of the probability measure $P^u$, and thus to the choice of the distribution of the process $X$. It is also well known that this is the only way to vary probability measures in Brownian models while keeping them equivalent (i.e., having the same null sets).

For any contract payoff $C_T \in \mathcal{F}^B_T$, there exists some functional $F$ such that $C_T = F(X_\cdot)$. Thus, a contract $(F, \tilde v)$ is equivalent to a random variable $C_T \in \mathcal{F}^B_T$ and a process $v \in \mathbb{F}^B$. Also, an action $\tilde u$ is equivalent to a process $u \in \mathbb{F}^B$. For simplicity, in the following we abuse the notation by writing $u_t = \tilde u_t(X_\cdot)$ and $v_t = \tilde v_t(X_\cdot)$ when there is no danger of confusion. Now, given a contract $C_T \in \mathcal{F}^B_T$ and $v \in \mathbb{F}^B$, the agent's problem is to find an optimal control $u^{C_T,v} \in \mathbb{F}^B$ such that

$V_1(u^{C_T,v}; C_T, v) = V_1(C_T, v) = \sup_u V_1(u; C_T, v),$

where, recalling (2.3),

$V_1(u; C_T, v) = E^{Q^u}\{U_1(C_T, G_T)\} = E^{Q}\{M^u_T U_1(C_T, G_T)\}.$  (2.14)

For simplicity, from now on we denote $E = E^{Q}$ and $E^u = E^{Q^u}$. The principal's problem is to find an optimal $(C^*_T, v^*)$ such that

$V_2(C^*_T, v^*) = V_2 = \sup_{C_T,\, v,\, u^{C_T,v}} V_2(u^{C_T,v}; C_T, v),$

where

$V_2(u; C_T, v) = E^u\{U_2(X_T - C_T)\} = E\{M^u_T U_2(X_T - C_T)\}.$  (2.15)

We see from this last expression that, indeed, the choice of $u$ is really the choice of $M^u_T$, and hence the choice of the probability measure, that is, the choice of the distribution of the output $X$. As usual in contract theory, we assume that when the agent is indifferent between two actions, she will choose the one better for the principal.
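The change of measure through the density $M^u_T$ can be sanity-checked by simulation. The sketch below (illustrative, not from the paper; sample size and parameters are made up) uses a constant effort $u$, for which $M^u_T = \exp(uB_T - u^2T/2)$ with $B_T \sim N(0, T)$ under $Q$, and verifies that $E[M^u_T] \approx 1$ and $E[M^u_T B_T] \approx uT$, i.e. that $B$ acquires drift $u$ under $Q^u$.

```python
import math
import random

# Monte Carlo sanity check of the Girsanov change of measure for a
# constant effort u on [0, T]: sampling B_T ~ N(0, T) under Q and setting
#   M_T = exp(u*B_T - u**2*T/2),
# we expect E[M_T] = 1 and E[M_T * B_T] = u*T (drift u under Q^u).
# Sample size and parameter values are illustrative.

rng = random.Random(42)
u, T, n = 0.5, 1.0, 100_000

mean_M, mean_MB = 0.0, 0.0
for _ in range(n):
    B_T = rng.gauss(0.0, math.sqrt(T))
    M_T = math.exp(u * B_T - u**2 * T / 2)
    mean_M += M_T / n
    mean_MB += M_T * B_T / n

print(mean_M, mean_MB)  # close to 1.0 and u*T = 0.5
```

For a path-dependent effort $u_t$ one would instead simulate the stochastic integrals in the definition of $M^u_T$ on a time grid; the constant case suffices to illustrate the mechanism.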
8 Appl Mah Opim (9) 59: 99 46 Remark.3 (i) The agen chooses her acion based on he oupu value X which is observable o he principal. However, alhough u is F X -adaped, he principal does no know ũ, and hence does no know he value of B u eiher. (ii) Mahemaically, he srong formulaion we used in Sec.. and he weak formulaion of his secion are in general no equivalen, due o he differen requiremens on he measurabiliy of he agen s conrol u. In Sec.., u is an F W -adaped process, and hus F Xu may be smaller han F W. In conras, in he weak formulaion here, u is F X -adaped, and hus F Bu may be smaller han F X. (iii) If we resric u o a smaller admissible se, say o hose such ha F Xu = F W in he srong formulaion and hose such ha F X = F Bu in he weak formulaion, hen he wo formulaions are equivalen. This consrain, however, may make he calculaions in he general framework very difficul. (iv) However, in he solvable examples exising in he lieraure, i urns ou ha, for he opimal u, he wo filraions are equal (e.g., he soluion u in he srong formulaion is a feedback conrol, or even deerminisic); see below for more commens. (v) In he srong formulaion, one has o disinguish he opimal conrac F and is realized opimal value C T = F(X.)(see, e.g. [5]). In he weak formulaion, since, given he oucome ω, he oupu value of X is fixed (independen of u), he random variable C T can be undersood as he conrac, and we do so in his paper. We noe ha ofen in he lieraure, for racabiliy reasons he weak formulaion is used for he agen s problem and he srong formulaion for he principal s problem. However, here is a suble measurabiliy issue, as poined ou in par (ii) of he above remark. More precisely, on one hand, he opimal acion u F,v obained from he agen s problem by using he weak formulaion may no be in he admissible se under he srong formulaion (if F Bu is sricly smaller han F X ); on he oher hand, given a principal s arge acion u (see Sec. 4..) 
in the strong formulation, it is not always possible to obtain it as an optimal solution of the agent's problem in the weak formulation, as it may not even be implementable. In this paper we are able to develop a general theory using the weak formulation for both the agent's problem and the principal's problem, avoiding potential inconsistencies. On the other hand, as mentioned in part (iv) of the above remark, in all the solvable examples in the literature it turns out that the effort $u$ which is optimal for the strong formulation is a functional of the output $X$ only (and not of the Brownian motion $W$). If that is the case, one may use the strong formulation for the principal's problem without loss of generality.

We now present a special case which can be solved more or less straightforwardly, and which is probably the most interesting part of the paper for economic applications. Later, we present a more fully developed mathematical theory, which, unfortunately, does not lead to general existence results or additional solvable examples, but does provide necessary conditions for optimality.

3 Special Case: Separable Utility with Fixed Volatility $v$ and Quadratic Cost $g$

The model we present here is quite general in the choice of the utility functions, and thus could be of use in many economic applications. The solution is explicit in
the characterization of the optimal contract payoff $C_T$, via a nonlinear deterministic equation. It is in general semi-explicit in terms of the optimal effort $u$, as it boils down to solving a linear BSDE (not an FBSDE!). To the best of our knowledge, this is the first explicit description of a solution to a continuous-time Principal-Agent problem with hidden action and lump-sum payment, other than the case of exponential and linear utility functions. Moreover, as in those two cases, the optimal contract is still a function only of the final outcome $X_T$, and not of the history of the output process $X$; but, unlike those two cases, the dependence on $X_T$ is nonlinear in general.

We assume that the process $v$ is fixed; the agent has separable utility $U_1(C_T) - G_T$; and the cost function is $g(t, x, u, v) = u^2/2$. First we adopt standard assumptions on the utility functions, which are assumed throughout this section.

Assumption 3.1 $U_1, U_2$ are twice differentiable, and such that $U_i' > 0$, $U_i'' \le 0$, $i = 1, 2$.

We now specify the technical conditions $u$ and $C_T$ should satisfy. Roughly speaking, we need enough integrability so that the calculations in the remainder of the section can go through.

Definition 3.1 The set $\mathcal{A}_1$ of admissible effort processes $u$ is the space of $\mathbb{F}^B$-adapted processes $u$ such that
(i) $P(\int_0^T u_t^2\,dt < \infty) = 1$;
(ii) $E\{|M^u_T|^4\} < \infty$.

We note that any $u \in \mathcal{A}_1$ satisfies the Novikov condition, and thus the Girsanov Theorem can be applied; see (7.) below.

Definition 3.2 The set $\mathcal{A}_2$ of admissible contracts is the space of $\mathcal{F}^B_T$-measurable $C_T$ such that
(i) $E\{|U_1(C_T)|^4 + e^{4U_1(C_T)}\} < \infty$;
(ii) $E\{|U_2(X_T - C_T)|^2 + e^{2U_1(C_T)}|U_2(X_T - C_T)|^2\} < \infty$.

3.1 Agent's Problem

First we establish a simple technical lemma for a linear BSDE.

Lemma 3.1 Assume $u \in \mathbb{F}^B$ is such that the Girsanov Theorem holds true for $(B^u, Q^u)$, and $E\{|M^u_T|^2\} < \infty$. Then for any $\xi \in \mathcal{F}^B_T$ such that $E^u\{|\xi|^2\} < \infty$, there exists a unique $Q^u$-square-integrable pair $(Y, Z) \in \mathbb{F}^B$ such that

$Y_t = \xi - \int_t^T Z_s\,dB^u_s.$  (3.1)
Obviously $Y_t = E^u_t\{\xi\}$, and uniqueness also follows immediately. But in general $\mathcal{F}^{B^u}_t \ne \mathcal{F}^B_t$, so we cannot apply the standard Martingale Representation Theorem directly to obtain $Z$. Nevertheless, the result follows from an extended Martingale Representation Theorem, see [], and for completeness we provide a proof in the Appendix. We now solve the agent's problem.

Theorem 3.1 For any contract $C_T \in \mathcal{A}_2$, the optimal effort $\hat u \in \mathcal{A}_1$ for the agent is obtained by solving the BSDE

$\bar Y_t = E_t[e^{U_1(C_T)}] = e^{U_1(C_T)} - \int_t^T \hat u_s \bar Y_s\,dB_s.$  (3.2)

Moreover, the agent's optimal expected utility is given by

$V_1 = \log \bar Y_0 = \log E[e^{U_1(C_T)}].$  (3.3)

Remark 3.1 (i) We can see from the proof below that

$\log \bar Y_t = U_1(C_T) - \int_t^T \frac{1}{2}\hat u_s^2\,ds - \int_t^T \hat u_s\,dB^{\hat u}_s$  (3.4)

represents the remaining expected utility of the agent when she behaves optimally. Thus, the optimal $\hat u$ is chosen so that the remaining expected utility is increasing by the reduction in the remaining cost until it reaches $U_1(C_T)$, and there is an additional mean-zero term $-\int_t^T \hat u_s\,dB^{\hat u}_s$. Actually, the fact that the integrand of this term is equal to $\hat u$ is what distinguishes the optimal effort from non-optimal efforts.

(ii) One of the main insights of Holmstrom-Milgrom [3] and Schättler and Sung [4] is the representation of the optimal payoff $C_T$ in terms of the agent's optimal effort $\hat u$ and her utility certainty equivalent. That representation corresponds, in our model of this section, to (3.4), after a transformation (the certainty equivalent is $U_1^{-1}(\log \bar Y_t)$). For our case of separable utilities it is more convenient to work with the remaining utility as in (3.4) than with the certainty equivalent.

(iii) In the language of option pricing theory, finding $\hat u$ is equivalent to finding a replicating portfolio for the option with payoff $e^{U_1(C_T)}$. Numerous methods have been developed to compute such an object, sometimes analytically, otherwise numerically. Let us comment on this in more detail. As we will see below, the optimal contract (when it exists) is going to be a deterministic function of $X_T$, say $C_T = c(X_T)$. Assume $v_t = v(t, X_t)$ is a deterministic function of $t$ and $X_t$, so that $X$ is Markovian. Then the solution to the BSDE (3.2)
is a funcion of X, ha is, Ȳ = F(,X ) and df(,x ) = [ F + F xxv ] d + F x v db. Since Ȳ is a maringale, he d erm has o be zero, so ha F saisfies he PDE { F (, x) + v (, x)f xx (, x) = ; F(T,x)= e U (c(x)) ; (3.5)
and

$\hat u_t = u(t, X_t) = \frac{v(t, X_t) F_x(t, X_t)}{F(t, X_t)}.$  (3.6)

Proof of Theorem 3.1 We first claim that the BSDE (3.2) is well-posed and that the $\hat u$ determined by (3.2) is indeed in $\mathcal{A}_1$. The proof is technical and thus is postponed to the Appendix. We now show that $\hat u$ is optimal for the agent's problem. For any $u \in \mathcal{A}_1$, as is standard in this type of stochastic control problems (see [4], for example) and, in particular, in dynamic principal-agent problems, we consider the remaining utility of the agent at time $t$:

$W^{A,u}_t = E^u_t\Big[U_1(C_T) - \int_t^T \frac{1}{2} u_s^2\,ds\Big].$

Then $W^{A,u}_t - \int_0^t \frac{1}{2} u_s^2\,ds$ is a $Q^u$-martingale, so by Lemma 3.1 there exists an $\mathbb{F}^B$-adapted process $Z^{A,u}$ such that

$W^{A,u}_t - \int_0^t \frac{1}{2} u_s^2\,ds = U_1(C_T) - \int_0^T \frac{1}{2} u_s^2\,ds - \int_t^T Z^{A,u}_s\,dB^u_s.$

Then, switching from $B^u$ to $B$, we have

$W^{A,u}_t = U_1(C_T) + \int_t^T \Big[u_s Z^{A,u}_s - \frac{1}{2} u_s^2\Big]ds - \int_t^T Z^{A,u}_s\,dB_s.$  (3.7)

Note that $W^{A,u}_0 = E^u[U_1(C_T) - \int_0^T \frac{1}{2} u_s^2\,ds]$ is the agent's utility, given the action $u$. On the other hand, for the $\hat u$ from the theorem, using Itô's rule and (3.2), we get

$\log \bar Y_t = U_1(C_T) + \int_t^T \frac{1}{2} \hat u_s^2\,ds - \int_t^T \hat u_s\,dB_s.$

Thus, $\log \bar Y_0 = E[\log \bar Y_0] = W^{A,\hat u}_0$ is the agent's utility if she chooses the action $\hat u$. Notice that

$W^{A,\hat u}_t - W^{A,u}_t = \int_t^T \Big[\frac{1}{2}\hat u_s^2 + \frac{1}{2}u_s^2 - u_s Z^{A,u}_s\Big]ds + \int_t^T [Z^{A,u}_s - \hat u_s]\,dB_s$

$= \int_t^T \frac{1}{2}[\hat u_s - u_s]^2\,ds + \int_t^T [Z^{A,u}_s - \hat u_s]\,dB^u_s.$  (3.8)

The equality holds if and only if $u = \hat u$. Note that $E^u\{\int_0^T |Z^{A,u}_t|^2\,dt\} < \infty$, and

$E^u\Big\{\int_0^T |\hat u_t|^2\,dt\Big\} = E\Big\{M^u_T \int_0^T |\hat u_t|^2\,dt\Big\} \le C E\Big\{|M^u_T|^2 + \Big(\int_0^T |\hat u_t|^2\,dt\Big)^2\Big\} < \infty,$
thanks to (7.) below. Then $E^u\{\int_0^T |Z^{A,u}_t - \hat u_t|^2\,dt\} < \infty$. Taking expected values under $Q^u$ in (3.8), we get $W^{A,\hat u}_0 \ge W^{A,u}_0$, with equality if and only if $u = \hat u$.

3.2 Principal's Problem

Since, given a contract $C_T \in \mathcal{A}_2$, the agent's optimal effort $\hat u = \hat u(C_T)$ from Theorem 3.1 is unique, the principal's optimization problem can be written as

$\sup_{C_T \in \mathcal{A}_2} E^{\hat u(C_T)}[U_2(X_T - C_T)]$  (3.9)

under the constraint (2.6). Note now that, by solving the linear equation (3.2), we get, denoting henceforth $\hat u = u$,

$\bar Y_t = \bar Y_0\, e^{\int_0^t u_s\,dB_s - \frac{1}{2}\int_0^t u_s^2\,ds},$

which, together with (2.13) and (3.3), gives the following crucial fact:

$U_1(C_T) = \log \bar Y_T = V_1 + \log(M^u_T).$  (3.10)

This turns out to be exactly the reason why this problem is tractable: the fact that, for the optimal agent's effort $u$, we have

$M^u_T = e^{-V_1} e^{U_1(C_T)}.$  (3.11)

In other words, the choice of the probability measure corresponding to the optimal action $u$ has an explicit functional relation with the promised payoff $C_T$. When we use this expression, and recall that $E^u[X] = E[M^u_T X]$, we can rewrite the principal's problem as

$\sup_{C_T, V_1} e^{-V_1} E\{e^{U_1(C_T)}[U_2(X_T - C_T) + \lambda]\},$  (3.12)

where $\lambda$ is a Lagrange multiplier for the IR constraint $E[e^{U_1(C_T)}] = e^{V_1}$ (see (3.3)). As usual in hidden action problems, we see that the principal will give the smallest possible utility to the agent, $V_1 = R$. Moreover, we get:

Proposition 3.1 Assume that the contract $C_T$ is required to satisfy $L \le C_T \le H$
for some $\mathcal{F}_T$-measurable random variables $L, H$, which may take infinite values. If, with probability one, there exists a finite value $\hat C^\lambda_T(\omega) \in [L(\omega), H(\omega)]$ that maximizes

$e^{U_1(C_T)}[U_2(X_T - C_T) + \lambda],$  (3.13)

and $\lambda$ can be found so that

$E[e^{U_1(\hat C^\lambda_T)}] = e^R,$

then $\hat C^\lambda_T$ is the optimal contract.

Since (3.13) is considered $\omega$ by $\omega$, we have reduced the problem to a one-variable deterministic optimization problem.

Remark 3.2 In this remark we assume that there is an interior solution when maximizing (3.13).

(i) The first-order condition for maximizing (3.13) can be written as

$\frac{U_2'(X_T - C_T)}{U_1'(C_T)} = \lambda + U_2(X_T - C_T).$  (3.14)

This is a generalization of the classical Borch rule for risk-sharing in the first-best (full information) case:

$\frac{U_2'(X_T - C_T)}{U_1'(C_T)} = \lambda.$  (3.15)

The difference is the last term in (3.14): the ratio of the marginal utilities of the agent and the principal is no longer constant, but a constant plus the utility of the principal. An increase in the global utility of the principal also makes him happier at the margin, relative to the agent, and a decrease in global utility makes him less happy at the margin. This will tend to make the contract more nonlinear than in the first-best case. For example, if both utility functions are exponential, and we require $C_T \ge L > -\infty$ (for technical reasons), it is easy to check from (3.15) that the first-best contract $C_T$ will be linear in $X_T$ for $C_T > L$. On the other hand, as can be seen from (3.14), the second-best contract will be nonlinear.

(ii) Note that the optimal contract is a function of the final value $X_T$ only. In the previous continuous-time literature, only the cases of exponential (non-separable) utility functions and linear utility functions have been solved explicitly, leading to linear contracts. It has been stated that in the case of general utilities the optimal contract may depend on the history of the process $X$. However, this is not the case in the special framework of this section.
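To see the first-order condition above at work pointwise in $\omega$, the sketch below (an illustration; the bracketing bounds and parameter values are assumptions) solves it for $C_T$ by bisection, using that its left-hand side minus its right-hand side is increasing in $C_T$ under Assumption 3.1. With a risk-neutral principal $U_2(x) = x$ and a log-utility agent $U_1(x) = \log x$ (the utilities used in the example of Sect. 3.3), the condition reduces to $C_T = \lambda + X_T - C_T$, so the solver should return $(\lambda + X_T)/2$.

```python
# Solve U2'(X - C)/U1'(C) = lam + U2(X - C) for C by bisection,
# pointwise in X. Under Assumption 3.1 (U' > 0, U'' <= 0),
# h(C) below is increasing in C, so a sign change brackets the root.
# Utility choices and parameter values are illustrative.

def solve_foc(X, lam, U1p, U2, U2p, lo, hi, tol=1e-12):
    def h(C):
        return U2p(X - C) / U1p(C) - lam - U2(X - C)
    a, b = lo, hi
    assert h(a) < 0 < h(b), "root not bracketed"
    while b - a > tol:
        m = (a + b) / 2
        if h(m) < 0:
            a = m
        else:
            b = m
    return (a + b) / 2

U1p = lambda c: 1.0 / c   # U1(x) = log x  =>  U1'(x) = 1/x
U2  = lambda x: x         # risk-neutral principal
U2p = lambda x: 1.0

X_T, lam = 2.0, 1.0
C = solve_foc(X_T, lam, U1p, U2, U2p, lo=1e-6, hi=X_T + lam)
print(abs(C - (lam + X_T) / 2) < 1e-9)  # True: C_T = (lam + X_T)/2
```

The same routine applies to any utilities satisfying Assumption 3.1, provided the bracketing interval is adapted to keep $C$ in the domain of $U_1$.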
(iii) The first order condition can be solved for the optimal contract $C_T = C_T(X_T)$ as a function of $X_T$, and we can find, omitting the functions' arguments, that

$$\frac{\partial}{\partial X_T}(X_T - C_T) = \frac{U_1''U_2'}{U_1''U_2' + U_1'U_2'' - U_2'(U_1')^2}.$$

Thus, the contract is a non-decreasing function of $X_T$, and its slope with respect to $X_T$ is no higher than one. In the first-best case of (3.15), we have

$$\frac{\partial}{\partial X_T}(X_T - C_T) = \frac{U_1''U_2'}{U_1''U_2' + U_1'U_2''}.$$

We see that the sensitivity of the contract is higher in the second-best case, because more incentives are needed to induce the agent to provide optimal effort when the effort is hidden. The term which causes the increase in the slope of the contract is $-U_2'(U_1')^2$ in the denominator. We see that this term is dominated by the agent's marginal utility, but it also depends on the principal's marginal utility. Higher marginal utility for either party causes the slope of the contract to increase relative to the first-best case.

There is also an alternative way to formulate the principal's problem, as optimizing over effort $u$, or probability measures $Q^u$, which we present next. From (3.12), with $V_1 = R$, the principal's problem is

$$\sup_{u \in \mathcal A} E[G(M_T^u)] = \sup_u E\big[M_T^u\,U_2\big(X_T - J_1(R + \log M_T^u)\big)\big].$$

Here, $J_1 = U_1^{-1}$ and $G$ is a random function on positive real numbers, defined by

$$G(x) = x\,U_2\big(X_T - J_1(R + \log x)\big).$$

It is straightforward to compute that

$$G''(x) < 0,$$

so that $G$ is a strictly concave function, for every fixed $X_T(\omega)$. We define the dual function, for $y > 0$,

$$\widetilde G(y) = \max_{x > 0}\,[G(x) - xy].$$

The maximum is attained at

$$\hat x = \big[(G')^{-1}(y)\big]^+.$$

Thus, we get the following upper bound on the principal's problem, for any constant $\lambda > 0$:

$$E[G(M_T^u)] \le E[\widetilde G(\lambda)] + \lambda E[M_T^u] = E[\widetilde G(\lambda)] + \lambda.$$

The upper bound will be attained if

$$M_T^u = (G')^{-1}(\lambda)$$
and $\lambda$ is chosen such that

$$E[(G')^{-1}(\lambda)] = 1.$$

This leads to the following alternative to Proposition 3.1.

Proposition 3.2 Assume (for simplicity) that there are no required bounds on the contract $C_T$. If, with probability one, $(G')^{-1}(\lambda) > 0$ for the $\lambda$ for which $E[(G')^{-1}(\lambda)] = 1$, then the optimal effort $u$ satisfies the Backward SDE

$$M_t^u = (G')^{-1}(\lambda) - \int_t^T u_s M_s^u\,dB_s$$

(assuming a solution exists). Moreover, we have

$$(G')^{-1}(\lambda) = e^{-R} e^{U_1(\zeta(X_T,\lambda))},$$

where $C_T = \zeta(X_T,\lambda)$ is a solution to (3.14), and it is the optimal payoff.

Proof By the above upper bound, $M_T^u = (G')^{-1}(\lambda)$ is optimal for the principal's problem, and the BSDE from the proposition is the consequence of the dynamics of the process $M$. Moreover, we have

$$G'\big(e^{-R}e^{U_1(\zeta(X_T,\lambda))}\big) = U_2\big(X_T - J_1(U_1(\zeta(X_T,\lambda)))\big) - U_2'\big(X_T - J_1(U_1(\zeta(X_T,\lambda)))\big)\,J_1'\big(U_1(\zeta(X_T,\lambda))\big) = \lambda,$$

where the last equality comes from the definition of $\zeta(X_T,\lambda)$. Thus the BSDE for $u$ becomes

$$M_t^u = e^{-R}e^{U_1(\zeta(X_T,\lambda))} - \int_t^T u_s M_s^u\,dB_s,$$

where, since $E[M_T^u] = 1$, we have $E[e^{U_1(\zeta(X_T,\lambda))}] = e^R$, so that the IR constraint is satisfied. $\square$

3.3 Examples

Example 3.1 Suppose the principal is risk-neutral while the agent is risk averse, with

$$U_2(x) = x, \qquad U_1(x) = \log x.$$

Also assume $\sigma_t > 0$ is a given process and $v_t = \sigma_t X_t$, $X_0 = x > 0$, so that $X_t > 0$ for all $t$. Moreover, assume that

$$\lambda_0 := 2e^R - x > 0.$$
The first order condition (3.14) gives

$$C_T = \tfrac12(X_T + \lambda),$$

and in order to satisfy the IR constraint

$$e^R = E[C_T] = \tfrac12(x + \lambda)$$

we need to take $\lambda = \lambda_0$. By the assumptions, we have $C_T > 0$, and $C_T$ is then the optimal contract, and it is linear. The optimal $u$ is obtained by solving the BSDE

$$\bar Y_t = E_t[C_T] = e^R + \int_0^t \bar Y_s u_s\,dB_s.$$

Noting that

$$E_t[C_T] = \tfrac12(X_t + \lambda) = e^R + \int_0^t \tfrac12\sigma_s X_s\,dB_s,$$

we get

$$u_t = \frac{\sigma_t X_t}{X_t + \lambda}.$$

Since $\lambda > 0$, we see that the effort goes down as the output decreases, and goes up when the output goes up. Thus, the incentive effect coming from the fact that the agent is paid an increasing function of the output at the end translates into earlier times, so that when the promise of the future payment gets higher, the agent works harder. Also notice that the effort is bounded in this example (by $\sigma_t$).

Assume now that $\sigma$ is deterministic. The principal's expected utility can be computed to be equal to

$$E^u[X_T - C_T] = e^{-R}E[C_T(X_T - C_T)] = x - e^R + \frac{e^{-R}x^2}{4}\Big[e^{\int_0^T \sigma_s^2\,ds} - 1\Big].$$

The first term, $x - e^R$, is what the principal can get if he pays a constant payoff $C_T$, in which case the agent would choose $u \equiv 0$. The second term is the extra benefit of inducing the agent to apply non-zero effort. The extra benefit increases quadratically with the initial output, increases exponentially with the volatility squared, and decreases exponentially with the agent's reservation utility. While the principal would like best to have the agent with the lowest $R$, the cost of hiring expensive agents is somewhat offset when the volatility is high (which is not surprising, given that the principal is risk-neutral).

For comparison, we look now at the first-best case in this example. Interestingly, we have

Remark 3.3 Assume that $\sigma_t > 0$ is deterministic and bounded. Then the principal's first-best optimal utility is $+\infty$.
Proof We see from (3.15) that, whenever the principal is risk-neutral, a candidate for an optimal contract is a constant contract $C_T$. With log-utility for the agent, we set $C_T = \lambda$, where $\lambda$ is obtained from the IR constraint, and the optimal utility of the principal is obtained from

$$\sup_u E[X_T - \lambda] = \sup_u\Big( E\Big\{x\,e^{\int_0^T [u_t\sigma_t - \sigma_t^2/2]\,dt + \int_0^T \sigma_t\,dB_t}\Big\} - e^R e^{E\{\int_0^T u_t^2/2\,dt\}}\Big). \qquad (3.16)$$

Under the assumption that $\sigma$ is deterministic and bounded, we show now that the right side of (3.16) is $+\infty$. In fact, for any $n$, set

$$A_n = \Big\{\int_0^T \sigma_t\,dB_t > n\Big\} \in \mathcal F_T; \qquad \alpha_n = P(A_n);$$

and

$$u_t^n(\omega) = \begin{cases} (2\alpha_n T)^{-1/2}, & 0 \le t \le T,\ \omega \in A_n;\\ 0, & \text{otherwise.}\end{cases}$$

Then the cost is finite:

$$E\Big\{\int_0^T \frac{|u_t^n|^2}{2}\,dt\Big\} = \frac14.$$

However, for a generic constant $c > 0$,

$$\begin{aligned}
E\Big\{x\exp\Big(\int_0^T \big[u_t^n\sigma_t - \tfrac12\sigma_t^2\big]\,dt + \int_0^T \sigma_t\,dB_t\Big)\Big\}
&\ge E\Big\{x\exp\Big((2\alpha_nT)^{-1/2}\int_0^T \sigma_t\,dt - \int_0^T \tfrac12\sigma_t^2\,dt + \int_0^T \sigma_t\,dB_t\Big);\,A_n\Big\}\\
&\ge c\,E\Big\{x\exp\Big((2\alpha_nT)^{-1/2}\int_0^T \sigma_t\,dt + n\Big);\,A_n\Big\}\\
&= c\,x\exp\Big((2\alpha_nT)^{-1/2}\int_0^T \sigma_t\,dt + n\Big)P(A_n)\\
&\ge c\,x\,\alpha_n\, e^{c/\sqrt{\alpha_n}}, \qquad (3.17)
\end{aligned}$$
which obviously diverges to $+\infty$ as $\alpha_n \to 0$. $\square$

We note that another completely solvable example in this special framework is the case of both the principal and the agent having linear utilities. But in that case it is easily shown that the first-best and the second-best are the same, so there is no need to consider the second-best.

The property that $u_t$ is increasing in the output values $X_t$ holds true in more general cases. We have the following result, whose proof is postponed to the Appendix.

Proposition 3.3 Suppose that $U_2(x) = x$ and $v_t = \sigma_t X_t$ for deterministic $\sigma_t$. Assume $\lambda$ is non-negative and (3.13) admits an interior solution. Assume further that, for any $x$ in the domain of $U_1$,

$$\max(H_1(x), H_2(x)) \ge 0, \qquad (3.18)$$

where

$$H_1 = U_1'U_1''' - 2(U_1'')^2; \qquad H_2 = (U_1')^4 + U_1''U_1'''(U_1')^2 - U_1'''(U_1'')^2 + xU_1''H_1.$$

Then $u_t = u(t, X_t)$ for some deterministic function $u$, and $u_x \ge 0$.

Remark 3.4 The following examples satisfy the condition (3.18).

(i) $U_1(x) = x$. Then

$$U_1' = 1,\quad U_1'' = U_1''' = 0.$$

Thus $H_1(x) = 0$.

(ii) $U_1(x) = \log(x)$. Then

$$U_1' = x^{-1},\quad U_1'' = -x^{-2},\quad U_1''' = 2x^{-3}.$$

Thus $H_1(x) = 0$.

(iii) $U_1(x) = \frac1\gamma x^\gamma$, where $0 < \gamma < 1$. Then

$$U_1' = x^{\gamma-1},\quad U_1'' = (\gamma-1)x^{\gamma-2},\quad U_1''' = (\gamma-1)(\gamma-2)x^{\gamma-3}.$$

Thus $H_1(x) = \gamma(1-\gamma)x^{2\gamma-4} \ge 0$, and $H_2(x) = \gamma x^{2\gamma-4}[x^{2\gamma} + (2-\gamma)x^{\gamma} + (1-\gamma)^2]$.

Exponential utility $U_1(x) = -\frac1\gamma e^{-\gamma x}$ does not satisfy (3.18).
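Before turning to the general theory, the closed-form expressions of Example 3.1 can be checked by simulation. The sketch below is our illustration, not part of the paper: it samples $X_T$ under $Q$ (where $X$ is a driftless geometric Brownian motion, so $X_T$ is lognormal), forms the optimal contract $C_T = (X_T+\lambda)/2$, and compares the Monte Carlo estimates of the IR constraint $E[C_T] = e^R$ and of the principal's utility $e^{-R}E[C_T(X_T-C_T)]$ with the closed form. All parameter values are arbitrary choices.

```python
import math, random

# Illustrative parameters; lambda = 2*exp(R) - x must be positive (assumption of Example 3.1)
x, sigma, T, R = 1.0, 0.5, 1.0, -0.5
lam = 2.0 * math.exp(R) - x

random.seed(2)
n = 400_000
sum_c, sum_gain = 0.0, 0.0
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    # under Q, dX = sigma * X dB, so X_T = x * exp(-sigma^2 T/2 + sigma sqrt(T) Z)
    x_T = x * math.exp(-0.5 * sigma**2 * T + sigma * math.sqrt(T) * z)
    c_T = 0.5 * (x_T + lam)           # optimal contract C_T = (X_T + lambda)/2
    sum_c += c_T
    sum_gain += c_T * (x_T - c_T)     # E^u[X_T - C_T] = exp(-R) E[C_T (X_T - C_T)]

ir = sum_c / n                        # should approximate exp(R)
util = math.exp(-R) * sum_gain / n    # principal's expected utility
closed = x - math.exp(R) + math.exp(-R) * x**2 / 4 * (math.exp(sigma**2 * T) - 1)
print(ir, util, closed)
```

With these values the closed-form utility is about $0.51$, and both Monte Carlo estimates match their targets to within sampling error.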
4 General Case: Necessary Conditions

In the rest of the paper we find necessary conditions, and discuss a possibility of finding sufficient conditions, for optimal contracts in a very general framework. This material is technical, and much less intuitive than the special case discussed in the previous section. We use the method of the so-called Stochastic Maximum Principle, as described in the book [3]. It is an extension of the Pontryagin maximum principle to the stochastic case. We believe this is the right method to use for the general theory. For example, it naturally leads to the use of the remaining expected utility of the agent as the variable on which the solution heavily depends, and it also shows immediately which other variables are important, without having to guess what they should be, as in other approaches.

4.1 Standing Assumptions

Recall the setup from Sect. 2. We need the following assumptions.

(A1) The function $g: [0,T]\times\mathbb R\times\mathbb R\times\mathbb R \to \mathbb R$ is continuously differentiable with respect to $x, u, v$; $g_x$ is uniformly bounded; and $g_u, g_v$ have uniform linear growth in $x, u, v$. In addition, $g$ is jointly convex in $(x,u,v)$, $g_u > 0$ and $g_{uu} > 0$.

(A2) (i) The functions $U_1: \mathbb R\times\mathbb R \to \mathbb R$, $U_2: \mathbb R \to \mathbb R$ are differentiable, with $\partial_C U_1 > 0$, $\partial_G U_1 < 0$, $U_2' > 0$; $U_1$ is jointly concave and $U_2$ is concave.

(ii) Sometimes we will also need $U_1 \le K$ for some constant $K$.

For any $p \ge 1$, denote

$$L_T^p(Q^u) = \{\xi \in \mathcal F_T^B: E^u\{|\xi|^p\} < \infty\}; \qquad L^p(Q^u) = \Big\{\eta \in \mathcal F^B: E^u\Big\{\int_0^T |\eta_t|^p\,dt\Big\} < \infty\Big\},$$

and define $L_T^p(Q)$, $L^p(Q)$ in a similar way. We next define the admissible set for the agent's controls.

(A3) Given a contract $(C_T, v)$, the admissible set $\mathcal A(C_T, v)$ of the agent's controls associated with this contract is the set of all those $u \in \mathcal F^B$ such that

(i) the Girsanov Theorem holds true for $(B^u, Q^u)$;

(ii) $U_1(C_T, G_T),\ \partial_G U_1(C_T, G_T) \in L_T^2(Q^u)$;

(iii) for any bounded $\Delta u \in \mathcal F^B$, there exists $\varepsilon_0 > 0$ such that for any $\varepsilon \in [0, \varepsilon_0)$, $u^\varepsilon$ satisfies (i) and (ii) above, and $|u^\varepsilon|^4$, $|g^\varepsilon|^4$, $|g_u^\varepsilon|^4$, $|M_T^\varepsilon|^4$, $|U_1(C_T, G_T^\varepsilon)|^2$, $|\partial_G U_1(C_T, G_T^\varepsilon)|^2$ are uniformly integrable in $L^1(Q)$ or $L_T^1(Q)$, where

$$u^\varepsilon = u + \varepsilon\Delta u,\quad V_1^\varepsilon = V_1(u^\varepsilon),\quad G_T^\varepsilon = \int_0^T g^\varepsilon(t)\,dt,\quad M^\varepsilon = M^{u^\varepsilon},$$
and $g^\varepsilon(t) = g(t, X_t, u_t^\varepsilon, v_t)$, $g_u^\varepsilon(t) = g_u(t, X_t, u_t^\varepsilon, v_t)$. When $\varepsilon = 0$ we omit the superscript.

We note that, for any $u \in \mathcal A(C_T, v)$ and $\Delta u, \varepsilon_0$ satisfying (A3)(iii), we have $u^\varepsilon \in \mathcal A(C_T, v)$ for any $\varepsilon \in [0, \varepsilon_0)$. We note also that, under mild assumptions on $(C_T, v)$, all bounded $u$ belong to $\mathcal A(C_T, v)$. The admissible set for the contracts $(C_T, v)$ is more involved. We postpone its description until later.

4.2 Necessary Conditions

4.2.1 The Agent's Problem

We fix now a contract $(C_T, v)$, $u \in \mathcal A(C_T, v)$, and a bounded $\Delta u \in \mathcal F^B$. Denote, omitting the arguments of $U_1$, $\partial_G U_1$, the variations

$$\begin{aligned}
\Delta g(t) &= g_u(t, X_t, u_t, v_t)\Delta u_t; \qquad \Delta G_t = \int_0^t \Delta g(s)\,ds;\\
\Delta M_T &= M_T^u\Big[\int_0^T \Delta u_s\,dB_s - \int_0^T u_s\Delta u_s\,ds\Big] = M_T^u\int_0^T \Delta u_s\,dB_s^u;\\
\Delta V_1 &= E\big\{\Delta M_T\,U_1 + M_T\,\partial_G U_1\,\Delta G_T\big\}.
\end{aligned}$$

Moreover, for any bounded $\Delta u \in \mathcal F^B$ and $\varepsilon \in (0, \varepsilon_0)$ as in (A3)(iii), denote

$$\nabla g^\varepsilon(t) = \frac{g^\varepsilon(t) - g(t)}{\varepsilon};\quad \nabla G_T^\varepsilon = \frac{G_T^\varepsilon - G_T}{\varepsilon};\quad \nabla M_T^\varepsilon = \frac{M_T^\varepsilon - M_T}{\varepsilon};\quad \nabla V_1^\varepsilon = \frac{V_1^\varepsilon - V_1}{\varepsilon}.$$

For a given payoff $C_T$ and cost variable $G_T$, introduce the so-called adjoint processes

$$\begin{aligned}
W_t^A &= E_t^u[U_1(C_T, G_T)] = U_1(C_T, G_T) - \int_t^T Z_s^{A,1}\,dB_s^u;\\
Y_t^A &= E_t^u[\partial_G U_1(C_T, G_T)] = \partial_G U_1(C_T, G_T) - \int_t^T Z_s^{A,2}\,dB_s^u,
\end{aligned}\qquad(4.1)$$

where $Z^{A,i}$ are obtained from Lemma 3.1. The first one represents the agent's remaining expected utility, and the second one the agent's remaining expected marginal cost. The latter becomes unimportant when the utility is separable (when $U_1(C, G) = U_1(C) - G$, then $Y^A \equiv -1$).

Theorem 4.1 Under our standing assumptions, we have the following differentiation result for the value function of the agent:

$$\lim_{\varepsilon\downarrow 0} \nabla V_1^\varepsilon = \Delta V_1 = E^u\Big\{\int_0^T \Gamma_t^A\,\Delta u_t\,dt\Big\}, \qquad (4.2)$$

where

$$\Gamma_t^A = Z_t^{A,1} + g_u(t, X_t, u_t, v_t)Y_t^A. \qquad (4.3)$$

In particular, the necessary condition for $u$ to be an optimal control is:

$$Z_t^{A,1} + g_u(t, X_t, u_t, v_t)Y_t^A \equiv 0. \qquad (4.4)$$

Proof See Appendix.

Remark 4.1 In the separable case the necessary condition becomes $g_u = Z^{A,1}$ (since $Y^A \equiv -1$). We can interpret $Z^{A,1}$ as a derivative of the agent's remaining utility with respect to the Brownian motion (actually, it is equal to what is called a Malliavin derivative). Thus, at the optimum, in the separable utility case the agent's local marginal cost of effort has to be equal to the sensitivity of the agent's remaining utility with respect to the underlying uncertainty. In the non-separable case, this has to be adjusted by normalizing with the global marginal cost $Y^A$.

We now provide a FBSDE characterization of the necessary condition. We see that, given $(C_T, v)$ (and thus also $X$), the optimal $u$ should satisfy the FBSDE

$$\begin{aligned}
G_t &= \int_0^t g(s, X_s, u_s, v_s)\,ds;\\
W_t^A &= U_1(C_T, G_T) - \int_t^T Z_s^{A,1}\,dB_s^u;\\
Y_t^A &= \partial_G U_1(C_T, G_T) - \int_t^T Z_s^{A,2}\,dB_s^u;
\end{aligned}\qquad(4.5)$$

with the maximum condition (4.4). Moreover, since $g_{uu} > 0$, we may assume there exists a function $h(t, x, v, z)$ such that

$$g_u(t, x, h(t, x, v, z), v) = z. \qquad (4.6)$$

Note that $\partial_G U_1 < 0$, so $Y^A < 0$. Thus, (4.4) is equivalent to

$$u_t = h(t, X_t, v_t, -Z_t^{A,1}/Y_t^A). \qquad (4.7)$$

That is, given $(C_T, v)$ and $X$, one may solve the following (self-contained) FBSDE:

$$\begin{aligned}
G_t &= \int_0^t g\big(s, X_s, h(s, X_s, v_s, -Z_s^{A,1}/Y_s^A), v_s\big)\,ds;\\
W_t^A &= U_1(C_T, G_T) + \int_t^T Z_s^{A,1}h(s, X_s, v_s, -Z_s^{A,1}/Y_s^A)\,ds - \int_t^T Z_s^{A,1}\,dB_s;\\
Y_t^A &= \partial_G U_1(C_T, G_T) + \int_t^T Z_s^{A,2}h(s, X_s, v_s, -Z_s^{A,1}/Y_s^A)\,ds - \int_t^T Z_s^{A,2}\,dB_s.
\end{aligned}\qquad(4.8)$$

Then, as a necessary condition, the optimal control $u^{C_T, v}$ should be defined by (4.7).

4.2.2 The Principal's Problem

We now characterize the admissible set $\mathcal A$ of contracts $(C_T, v)$. Our first requirement is:
(A4) $(C_T, v)$ is implementable. That is, (4.8) has a unique solution, and $u^{C_T, v}$ defined by (4.7) is in $\mathcal A(C_T, v)$.

Note that we found only necessary conditions for the agent. Later we will have some discussion on when the above $u^{C_T, v}$ is indeed the agent's optimal control. Thus, solving the principal's problem with $u^{C_T, v}$ instead of general $u$ is not quite the same as satisfying the constraint (2.4), but it represents solving a relaxed problem over efforts which satisfy the agent's necessary condition of optimality. This is traditionally called the first-order approach. The approach will produce the optimal solution if the necessary conditions for the agent are also sufficient.

Now, an implementable contract $(C_T, v)$ uniquely determines $u^{C_T, v}$. In fact, for fixed $v$, the correspondence between $C_T$ and $u^{C_T, v}$ is one to one, up to a constant. To see this, we fix some $(u, v)$ and want to find some $C_T$ such that $u^{C_T, v} = u$. For notational convenience, we denote $Z^A = Z^{A,2}$. If $u = u^{C_T, v}$ for some $C_T$, then (4.4) holds true for $u$; that is, $Z_t^{A,1} = -g_u(t, X_t, u_t, v_t)Y_t^A$. Denote by $\bar R$ the agent's expected utility, that is, $\bar R = W_0^A$. Then (4.5) becomes

$$\begin{aligned}
G_t &= \int_0^t g(s, X_s, u_s, v_s)\,ds;\\
W_t^A &= \bar R - \int_0^t g_u(s, X_s, u_s, v_s)Y_s^A\,dB_s^u;\\
Y_t^A &= \partial_G U_1(C_T, G_T) - \int_t^T Z_s^A\,dB_s^u;
\end{aligned}\qquad(4.9)$$

where

$$W_T^A = U_1(C_T, G_T). \qquad (4.10)$$

Since $\partial_C U_1 > 0$, we may assume there exists a function $H(x, y)$ such that

$$U_1(H(x, y), y) = x. \qquad (4.11)$$

Then (4.10) leads to

$$C_T = H(W_T^A, G_T). \qquad (4.12)$$

Plugging this into (4.9), we get

$$\begin{aligned}
X_t &= x + \int_0^t v_s\,dB_s; \qquad G_t = \int_0^t g(s, X_s, u_s, v_s)\,ds;\\
W_t^A &= \bar R - \int_0^t g_u(s, X_s, u_s, v_s)Y_s^A\,dB_s^u;\\
Y_t^A &= \partial_G U_1\big(H(W_T^A, G_T), G_T\big) - \int_t^T Z_s^A\,dB_s^u.
\end{aligned}\qquad(4.13)$$

Now fix $(\bar R, u, v)$. If FBSDE (4.13) is well-posed, we may define $C_T$ by (4.12), and we can easily see that $u^{C_T, v} = u$. In this sense, for technical convenience, from now on we consider $(\bar R, u, v)$ (instead of $(C_T, v)$) as a contract, or, say, as the principal's control, and we call $u$ the principal's target action. Then (A4) should be rewritten as

(A4') We assume that $(\bar R, u, v)$ is an implementable contract, by which we mean: (i) FBSDE (4.13) is well-posed; (ii) for $C_T$ defined by (4.12), $(C_T, v)$ is implementable in the sense of (A4).

We note that the theory of FBSDEs is far from complete. The well-posedness of (4.13) is in general unclear (unless we impose strict conditions). In fact, even for linear FBSDEs there is no general result like Lemma 3.1. Instead of adopting too strong technical conditions, in this paper we assume the well-posedness of the involved FBSDEs directly and leave the general FBSDE theory for future research. However, in the separable utility case, the corresponding FBSDEs become decoupled FBSDEs, and thus we can use Lemma 3.1 to establish their well-posedness, as we will see later.

Now, for any $(u, v)$ and any bounded $(\Delta u, \Delta v)$, denote

$$\begin{aligned}
u^\varepsilon &= u + \varepsilon\Delta u; \qquad v^\varepsilon = v + \varepsilon\Delta v; \qquad X_t^\varepsilon = x + \int_0^t v_s^\varepsilon\,dB_s; \qquad G_T^\varepsilon = \int_0^T g(t, X_t^\varepsilon, u_t^\varepsilon, v_t^\varepsilon)\,dt;\\
\nabla X^\varepsilon &= \frac{X^\varepsilon - X}{\varepsilon}; \qquad \nabla G_T^\varepsilon = \frac{G_T^\varepsilon - G_T}{\varepsilon}; \qquad \nabla V_2^\varepsilon = \frac{V_2^\varepsilon - V_2}{\varepsilon}.
\end{aligned}\qquad(4.14)$$

Denote also with superscript $\varepsilon$ all corresponding quantities.

(A5) The principal's admissible set $\mathcal A$ of controls is the set of all those contracts $(\bar R, u, v)$ such that, for any bounded $(\Delta u, \Delta v)$, there exists a constant $\varepsilon_0 > 0$ such that for any $\varepsilon \in [0, \varepsilon_0)$:

(i) (A4') holds true for $(\bar R, u^\varepsilon, v^\varepsilon)$;

(ii) the FBSDEs (4.17) and (4.19) below are well-posed for $(\bar R, u^\varepsilon, v^\varepsilon)$;

(iii) $\lim_{\varepsilon\downarrow 0}\nabla V_2^\varepsilon = \Delta Y_0^P$ for $\Delta Y^P$ defined in (4.17) below.

Note again that we will specify sufficient conditions for (A5) in the separable utility case later on. We also assume that $\mathcal A$ is not empty.

We now derive the necessary conditions for the principal's problem. Since $\bar R = E^u\{W_T^A\} = E^u\{U_1(C_T, G_T)\}$ is the optimal utility of the agent, the condition (2.6) becomes equivalent to $\bar R \ge R$.
Intuitively, it is obvious that the principal would choose $\bar R = R$ in order to maximize her utility. Again, due to the lack of a satisfactory theory of FBSDEs, here we simply assume that the optimal $\bar R$ is equal to $R$, and we will prove it rigorously in the separable utility case by using the comparison theorem of BSDEs, as we did in the special case.
Given $(u, v)$, let $(X, G, W^A, Y^A, Z^A)$ be the solution to (4.13) with $\bar R = R$. Define $C_T$ by (4.12). This will guarantee that the agent's necessary condition is satisfied. Introduce the principal's remaining expected utility

$$Y_t^P = E_t^u[U_2(X_T - C_T)] = U_2(X_T - C_T) - \int_t^T Z_s^P\,dB_s^u. \qquad (4.15)$$

By Lemma 3.1, (4.15) is well-posed. Then the principal's problem is to choose an optimal $(u, v)$ in order to maximize

$$V_2(u, v) = E^u\{Y_T^P\} = Y_0^P. \qquad (4.16)$$

Similarly as before, denote, omitting the functions' arguments,

$$\Delta X_t = \int_0^t \Delta v_s\,dB_s; \qquad \Delta g_u(t) = g_{uu}\Delta u_t + g_{uv}\Delta v_t + g_{ux}\Delta X_t; \qquad \Delta G_T = \int_0^T [g_x\Delta X_t + g_u\Delta u_t + g_v\Delta v_t]\,dt.$$

Moreover, consider the following FBSDE system:

$$\begin{aligned}
\Delta W_t^A &= \int_0^t g_uY_s^A\Delta u_s\,ds - \int_0^t [\Delta g_u\,Y_s^A + \Delta Y_s^A\,g_u]\,dB_s^u;\\
\Delta Y_t^A &= \partial_{GC}U_1\,\Delta C_T + \partial_{GG}U_1\,\Delta G_T + \int_t^T Z_s^A\Delta u_s\,ds - \int_t^T \Delta Z_s^A\,dB_s^u;\\
\Delta Y_t^P &= U_2'\,[\Delta X_T - \Delta C_T] + \int_t^T Z_s^P\Delta u_s\,ds - \int_t^T \Delta Z_s^P\,dB_s^u,
\end{aligned}\qquad(4.17)$$

where $\Delta C_T$ is defined by

$$\Delta W_T^A = \partial_C U_1\,\Delta C_T + \partial_G U_1\,\Delta G_T. \qquad (4.18)$$

For the general framework we have here, we need to introduce the following adjoint processes $X^i, Y^i, Z^i$, which, unfortunately, do not all have a direct economic interpretation:

$$\begin{aligned}
X_t^1 &= \int_0^t g_uZ_s^A\,ds; \qquad X_t^2 = \int_0^t [g_{ux}Z_s^1Y_s^A + g_xY_s^2]\,ds;\\
Y_t^1 &= (\partial_C U_1)^{-1}\big[U_2'(X_T - C_T) - \partial_G U_1\,X_T^1\big] - \int_t^T Z_s^1\,dB_s^u;\\
Y_t^2 &= \partial_G U_1(\partial_C U_1)^{-1}\big[U_2'(X_T - C_T) - \partial_G U_1\,X_T^1\big] + X_T^1\,\partial_{GG}U_1 - \int_t^T Z_s^2\,dB_s^u;\\
Y_t^3 &= X_T^2 + U_2'(X_T - C_T) - \int_t^T Z_s^3\,dB_s^u.
\end{aligned}\qquad(4.19)$$

Theorem 4.2 Under (A5), we have

$$\Delta Y_0^P = E^u\Big\{\int_0^T \Gamma_t^{P,1}\Delta u_t\,dt + \int_0^T \Gamma_t^{P,2}\Delta v_t\,dt\Big\}, \qquad (4.20)$$

where

$$\begin{aligned}
\Gamma_t^{P,1} &= Z_t^P - g_uY_t^1Y_t^A + X_t^1Z_t^A + g_{uu}Z_t^1Y_t^A + g_uY_t^2;\\
\Gamma_t^{P,2} &= g_{uv}Z_t^1Y_t^A + g_vY_t^2 + Z_t^3 + u_t(Y_t^3 - X_t^2).
\end{aligned}\qquad(4.21)$$

In particular, the necessary condition for $(u, v)$ to be an optimal control is:

$$\Gamma_t^{P,1} = \Gamma_t^{P,2} = 0. \qquad (4.22)$$

Proof See Appendix.

In summary, we have the following system of necessary conditions for the principal:

$$\begin{aligned}
X_t &= x + \int_0^t v_s\,dB_s; \qquad G_t = \int_0^t g(s, X_s, u_s, v_s)\,ds; \qquad W_t^A = R - \int_0^t g_uY_s^A\,dB_s^u;\\
X_t^1 &= \int_0^t g_uZ_s^A\,ds; \qquad X_t^2 = \int_0^t [g_{ux}Z_s^1Y_s^A + g_xY_s^2]\,ds;\\
Y_t^A &= \partial_G U_1\big(H(W_T^A, G_T), G_T\big) - \int_t^T Z_s^A\,dB_s^u;\\
Y_t^P &= U_2\big(X_T - H(W_T^A, G_T)\big) - \int_t^T Z_s^P\,dB_s^u;\\
Y_t^1 &= (\partial_C U_1)^{-1}\big[U_2'(X_T - C_T) - \partial_G U_1\,X_T^1\big] - \int_t^T Z_s^1\,dB_s^u;\\
Y_t^2 &= \partial_G U_1(\partial_C U_1)^{-1}\big[U_2'(X_T - C_T) - \partial_G U_1\,X_T^1\big] + X_T^1\,\partial_{GG}U_1 - \int_t^T Z_s^2\,dB_s^u;\\
Y_t^3 &= X_T^2 + U_2'(X_T - C_T) - \int_t^T Z_s^3\,dB_s;
\end{aligned}\qquad(4.23)$$

with maximum condition (4.22). In particular, if (4.22) has a unique solution

$$u_t = h_1\big(t, X_t, Y_t^1Y_t^A, Y_t^2, Z_t^P + X_t^1Z_t^A, Z_t^1Y_t^A, Z_t^3\big); \qquad v_t = h_2\big(t, X_t, Y_t^1Y_t^A, Y_t^2, Z_t^P + X_t^1Z_t^A, Z_t^1Y_t^A, Z_t^3\big),$$

then, by plugging $(h_1, h_2)$ into (4.23), we obtain a self-contained FBSDE.
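The backward components of systems such as (4.5) and (4.23) are BSDEs of the type treated in Lemma 3.1, where the pair $(Y, Z)$ is characterized by $Y_t = E_t^u[\xi]$ plus a martingale-representation term. As a purely illustrative aside (ours, not from the paper), such a pair can be computed on a binomial-tree approximation of the Brownian motion: $Y$ by backward averaging, $Z$ from the first martingale increment. We take the terminal condition $\xi = e^{B_T - T/2}$, for which the exact solution is $Y_t = e^{B_t - t/2}$ and $Z_t = Y_t$, so $Y_0 = Z_0 = 1$.

```python
import math

# BSDE Y_t = xi - int_t^T Z_s dB_s with xi = exp(B_T - T/2).
# Exact solution: Y_t = exp(B_t - t/2), Z_t = Y_t, hence Y_0 = Z_0 = 1.
T, N = 1.0, 100
dt = T / N
sqdt = math.sqrt(dt)

# terminal values on the recombining binomial tree: node k at level N has B_T = (2k - N)*sqdt
vals = [math.exp((2 * k - N) * sqdt - T / 2) for k in range(N + 1)]

# backward induction: Y at a node is the average of its two successors
levels = [vals]
for n in range(N, 0, -1):
    vals = [0.5 * (vals[k] + vals[k + 1]) for k in range(n)]
    levels.append(vals)

Y0 = vals[0]
# Z_0 approximated by the discrete martingale increment (Y_up - Y_down) / (2*sqrt(dt))
down, up = levels[-2][0], levels[-2][1]
Z0 = (up - down) / (2 * sqdt)
print(Y0, Z0)
```

With $N = 100$ steps, both $Y_0$ and $Z_0$ come out within a fraction of a percent of the exact value $1$; the same backward-averaging pattern underlies numerical schemes for the coupled systems above, though their forward-backward coupling makes the general case much harder, as the paper stresses.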
4.2.3 Fixed Volatility Case

If the principal has no control over $v$, then both $v$ and $X$ are fixed. In this case, along the variation one can only choose $\Delta v = 0$. Then (4.23) can be simplified as

$$\begin{aligned}
G_t &= \int_0^t g(s, X_s, u_s, v_s)\,ds; \qquad W_t^A = R - \int_0^t g_uY_s^A\,dB_s^u; \qquad X_t^1 = \int_0^t g_uZ_s^A\,ds;\\
Y_t^A &= \partial_G U_1\big(H(W_T^A, G_T), G_T\big) - \int_t^T Z_s^A\,dB_s^u;\\
Y_t^P &= U_2\big(X_T - H(W_T^A, G_T)\big) - \int_t^T Z_s^P\,dB_s^u;\\
Y_t^1 &= (\partial_C U_1)^{-1}\big[U_2'(X_T - C_T) - \partial_G U_1\,X_T^1\big] - \int_t^T Z_s^1\,dB_s^u;\\
Y_t^2 &= \partial_G U_1(\partial_C U_1)^{-1}\big[U_2'(X_T - C_T) - \partial_G U_1\,X_T^1\big] + X_T^1\,\partial_{GG}U_1 - \int_t^T Z_s^2\,dB_s^u;
\end{aligned}\qquad(4.24)$$

with maximum condition

$$\Gamma_t^{P,1} = Z_t^P - g_uY_t^1Y_t^A + X_t^1Z_t^A + g_{uu}Z_t^1Y_t^A + g_uY_t^2 = 0. \qquad (4.25)$$

4.3 Separable Utilities

In this subsection we assume the agent has a separable utility function, namely,

$$U_1(C_T, G_T) = U_1(C_T) - G_T. \qquad (4.26)$$

Here we abuse the notation $U_1$. We note that if $U_1' > 0$ and $U_1$ is concave, then Assumption (A2)(i) still holds true.

4.3.1 The Agent's Problem

In this case obviously we have

$$Y^A \equiv -1; \qquad Z^{A,2} \equiv 0.$$

Then (4.3) becomes

$$\Gamma_t^A = Z_t^{A,1} - g_u(t, X_t, u_t, v_t). \qquad (4.27)$$

Denote $\widetilde W_t^A = W_t^A + \int_0^t g\,ds$. Then (4.5) and (4.8) become

$$\widetilde W_t^A = U_1(C_T) + \int_t^T [u_sZ_s^{A,1} - g]\,ds - \int_t^T Z_s^{A,1}\,dB_s; \qquad (4.28)$$
and

$$\widetilde W_t^A = U_1(C_T) + \int_t^T \big[Z_s^{A,1}h(s, X_s, v_s, Z_s^{A,1}) - g\big(s, X_s, h(s, X_s, v_s, Z_s^{A,1}), v_s\big)\big]\,ds - \int_t^T Z_s^{A,1}\,dB_s, \qquad (4.29)$$

respectively.

4.3.2 The Principal's Problem

First, one can check straightforwardly that

$$Y^A \equiv -1; \qquad Z^A \equiv 0; \qquad Y^2 = -Y^1; \qquad Z^2 = -Z^1. \qquad (4.30)$$

Denote

$$J_1 = U_1^{-1}; \qquad \widetilde W_t^A = W_t^A + G_t; \qquad \widetilde Y^3 = Y^3 - X^2. \qquad (4.31)$$

Then (4.12) and (4.21) become, respectively,

$$C_T = J_1(\widetilde W_T^A); \qquad \Gamma_t^{P,1} = Z_t^P - g_{uu}Z_t^1; \qquad \Gamma_t^{P,2} = \widetilde Z_t^3 + u_t\widetilde Y_t^3 - g_vY_t^1 - g_{uv}Z_t^1. \qquad (4.32)$$

Therefore, (4.23) becomes

$$\begin{aligned}
X_t &= x + \int_0^t v_s\,dB_s; \qquad \widetilde W_t^A = R + \int_0^t g\,ds + \int_0^t g_u\,dB_s^u;\\
Y_t^P &= U_2\big(X_T - J_1(\widetilde W_T^A)\big) - \int_t^T Z_s^P\,dB_s^u;\\
Y_t^1 &= \frac{U_2'\big(X_T - J_1(\widetilde W_T^A)\big)}{U_1'\big(J_1(\widetilde W_T^A)\big)} - \int_t^T Z_s^1\,dB_s^u;\\
\widetilde Y_t^3 &= U_2'\big(X_T - J_1(\widetilde W_T^A)\big) - \int_t^T [g_xY_s^1 + g_{ux}Z_s^1]\,ds - \int_t^T \widetilde Z_s^3\,dB_s^u;
\end{aligned}\qquad(4.33)$$

with maximum conditions $\Gamma^{P,1} = \Gamma^{P,2} = 0$.

As mentioned in Sect. 4.2, we shall specify some sufficient conditions for the well-posedness of the FBSDEs in this case. First, under the integrability conditions in (A5') below, $X$ and $\widetilde W^A$ are well defined. Applying Lemma 3.1 to $(Y^P, Z^P)$, $(Y^1, Z^1)$ and then to $(\widetilde Y^3, \widetilde Z^3)$, we see that (4.33) is well-posed. Therefore, FBSDEs (4.9), (4.17), and (4.19) are well-posed in this case.

Recall (4.14) and define the other $\varepsilon$-terms similarly. We now modify $\mathcal A$ as follows.

(A5') The principal's admissible set $\mathcal A$ of controls is redefined as the set of all those contracts $(\bar R, u, v)$ such that, for any bounded $(\Delta u, \Delta v)$, there exists a constant $\varepsilon_0 > 0$ such that for any $\varepsilon \in [0, \varepsilon_0)$:

(i) $u^\varepsilon$, $v^\varepsilon$, $M_T^\varepsilon$, $[M_T^\varepsilon]^{-1}$, $g^\varepsilon$, $g_u^\varepsilon$, $g_v^\varepsilon$, $g_x^\varepsilon$, $g_{uu}^\varepsilon$, $g_{uv}^\varepsilon$, $g_{ux}^\varepsilon$, $U_1^\varepsilon$, $U_2^\varepsilon$, $[U_2']^\varepsilon$, and $[J_1']^\varepsilon$ are uniformly integrable in $L^p(Q)$ or $L_T^p(Q)$, for some $p$ large enough (where $J_1 = U_1^{-1}$);

(ii) $u \in \mathcal A(C_T, v)$ and $(C_T, v)$ is implementable in the sense of (A4), where $C_T$ is defined in (4.32).

Note that we may specify $p$ as in (A5). But in order to simplify the presentation and to focus on the main ideas, we assume $p$ is as large as we want.

Theorem 4.3 Assume (A5'). Then (A5) holds true, and the optimal $\bar R$ is equal to $R$.

Proof We first show that the principal's optimal control $\bar R$ is $R$. In fact, for fixed $(u, v)$, let the superscript $\bar R$ denote the processes corresponding to $\bar R$. Then obviously $\widetilde W_t^{A,\bar R} \ge \widetilde W_t^{A,R}$ for any $\bar R \ge R$. Since

$$J_1'(x) = \frac{1}{U_1'(J_1(x))} > 0, \qquad U_2' > 0,$$

we get

$$Y_T^{P,\bar R} = U_2(X_T - C_T^{\bar R}) = U_2\big(X_T - J_1(\widetilde W_T^{A,\bar R})\big) \le U_2\big(X_T - J_1(\widetilde W_T^{A,R})\big) = Y_T^{P,R}.$$

Therefore,

$$Y_0^{P,\bar R} = E^u\{Y_T^{P,\bar R}\} \le E^u\{Y_T^{P,R}\} = Y_0^{P,R}.$$

Thus, the optimal $\bar R$ is equal to $R$. It remains to prove

$$\lim_{\varepsilon\downarrow 0}\nabla V_2^\varepsilon = \Delta Y_0^P. \qquad (4.34)$$

We postpone the proof to the Appendix. $\square$

To end this subsection, for future use we note that (4.12) becomes

$$C_T = J_1\Big(R + \int_0^T g_u(t, X_t, u_t, v_t)\,dB_t^u + \int_0^T g(t, X_t, u_t, v_t)\,dt\Big).$$

This means that the principal's problem is

$$\sup_{u,v} E^u\Big\{U_2\Big(x + \int_0^T u_tv_t\,dt + \int_0^T v_t\,dB_t^u - J_1\Big(R + \int_0^T g_u(t, X_t, u_t, v_t)\,dB_t^u + \int_0^T g(t, X_t, u_t, v_t)\,dt\Big)\Big)\Big\}. \qquad (4.35)$$
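For concrete specifications the objective (4.35) is straightforward to evaluate by Monte Carlo. The sketch below is ours, not from the paper, and every specification in it is an illustrative assumption: log agent utility $U_1 = \log$ (so $J_1 = \exp$), risk-neutral principal $U_2(x) = x$, quadratic cost $g = u^2/2$ (so $g_u = u$), and constant controls $u, v$, for which $E[e^{uB_T^u + u^2T/2}] = e^{u^2T}$ gives a closed form to compare against.

```python
import math, random

# Illustrative, arbitrary parameter values
x, u, v, T, R = 1.0, 0.3, 1.0, 1.0, -1.0

# Closed form of (4.35) for this specification:
# E[X_T - C_T] = x + u*v*T - exp(R) * exp(u^2 * T)
closed = x + u * v * T - math.exp(R) * math.exp(u * u * T)

random.seed(1)
n = 200_000
total = 0.0
for _ in range(n):
    b = random.gauss(0.0, math.sqrt(T))              # B_T^u under Q^u
    x_T = x + u * v * T + v * b                      # X_T
    c_T = math.exp(R + u * b + 0.5 * u * u * T)      # C_T = J_1(R + g_u B_T^u + int g dt)
    total += x_T - c_T
mc = total / n
print(mc, closed)
```

The Monte Carlo estimate agrees with the closed form to within sampling error; in this specification the principal could then search over $(u, v)$ on the closed form directly.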
4.3.3 Fixed Volatility Case

If we also assume that $v$ (hence $X$) is fixed, then (4.33) becomes

$$\begin{aligned}
\widetilde W_t^A &= R + \int_0^t g\,ds + \int_0^t g_u\,dB_s^u;\\
Y_t^P &= U_2\big(X_T - J_1(\widetilde W_T^A)\big) - \int_t^T Z_s^P\,dB_s^u;\\
Y_t^1 &= \frac{U_2'\big(X_T - J_1(\widetilde W_T^A)\big)}{U_1'\big(J_1(\widetilde W_T^A)\big)} - \int_t^T Z_s^1\,dB_s^u;
\end{aligned}\qquad(4.36)$$

with maximum condition $\Gamma^{P,1} = 0$.

5 General Case: Sufficient Conditions

5.1 A General Result

If the necessary condition uniquely determines a candidate for the optimal solution, it is also a sufficient condition, provided an optimal solution exists. We here discuss the existence of an optimal solution. In general, our maximization problems are non-concave, so we have to use infinite dimensional non-convex optimization methods.

Let $H$ be a Hilbert space with norm $\|\cdot\|$ and inner product $\langle\cdot,\cdot\rangle$. Let $F: H \to \mathbb R$ be a functional with Fréchet derivative $f: H \to H$. That is, for any $h, \Delta h \in H$,

$$\lim_{\varepsilon\to 0}\frac1\varepsilon\big[F(h + \varepsilon\Delta h) - F(h)\big] = \langle f(h), \Delta h\rangle.$$

The following theorem is a direct consequence of the so-called Ekeland variational principle; see [9].

Theorem 5.1 Assume

(A1) $F$ is continuous;

(A2) there exists a unique $h^* \in H$ such that $f(h^*) = 0$;

(A3) for any $\varepsilon > 0$, there exists $\delta > 0$ such that $F(h) \le F(h^*) + \varepsilon$ whenever $\|f(h)\| \le \delta$;

(A4) $V := \sup_{h\in H}F(h) < \infty$.

Then $h^*$ is the maximum argument of $F$. That is, $F(h^*) = V$.

Remark 5.1 (1) A sufficient condition for (A3) is that $f$ is invertible and $f^{-1}$ is continuous at $0$.

(2) If $H = \mathbb R$ and $f$ is continuous and invertible, then $F$ is either convex or concave, and thus the result obviously holds true.

(3) If (A4) is replaced by $\inf_{h\in H}F(h) > -\infty$, then $h^*$ is the minimum argument of $F$.
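Condition (A3) of Theorem 5.1 can be made concrete in a one-dimensional example of our own (not from the paper): for a concave quadratic $F$ on $H = \mathbb R$, the optimality gap is controlled explicitly by $|f(h)|$, which is exactly the quantitative content of (A3).

```python
# F(h) = h - (1+g)*h^2/2 with Frechet (here ordinary) derivative f(h) = 1 - (1+g)*h.
# f has the unique zero h* = 1/(1+g), and F(h*) - F(h) = |f(h)|^2 / (2*(1+g)),
# so |f(h)| <= delta implies F(h) >= F(h*) - delta^2/(2*(1+g)): condition (A3).
g = 3.0

def F(h):
    return h - 0.5 * (1.0 + g) * h * h

def f(h):
    return 1.0 - (1.0 + g) * h

h_star = 1.0 / (1.0 + g)
delta = 1e-3
# the points with |f(h)| = delta are h* +/- delta/(1+g); record the worst optimality gap there
worst = max(F(h_star) - F(h_star + s * delta / (1.0 + g)) for s in (-1.0, 1.0))
bound = delta ** 2 / (2.0 * (1.0 + g))
print(worst, bound)
```

Here the gap equals the bound exactly; for the non-concave functionals of the next subsection no such clean identity exists, which is why conditions like (5.3) below have to be imposed.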
5.2 Sufficient Conditions for the Principal-Agent Problem

5.2.1 The Agent's Problem: Separable Utility

Assume that the utility is separable, $U_1(C, G) = U_1(C) - G$. We have

$$W_t^A = E_t^{Q^u}\Big[U_1(C_T) - \int_0^T g(s, X_s, u_s, v_s)\,ds\Big] + \int_0^t g(s, X_s, u_s, v_s)\,ds - \int_0^t g(s, X_s, u_s, v_s)\,ds.$$

We define $\widetilde W_t^A = W_t^A + \int_0^t g(s, X_s, u_s, v_s)\,ds$. Then, switching to the Brownian motion $B$, we have

$$\widetilde W_t^A = U_1(C_T) + \int_t^T \big[u_sZ_s^{A,1} - g(s, X_s, u_s, v_s)\big]\,ds - \int_t^T Z_s^{A,1}\,dB_s. \qquad (5.1)$$

Note that $\widetilde W_0^A = E^{Q^u}[U_1(C_T) - \int_0^T g(s, X_s, u_s, v_s)\,ds]$, so the agent's problem is to maximize $\widetilde W_0^A$ over $u$. By the comparison theorem for BSDEs, since $U_1(C_T)$ is fixed, from (5.1) we see that $\widetilde W_0^A$ will be maximized if the drift $u_sZ_s^{A,1} - g(s, X_s, u_s, v_s)$ is maximized, which is the case if $Z_s^{A,1} = g_u(s, X_s, u_s, v_s)$. This is exactly the necessary condition (4.4), taking into account (4.7). Thus, (4.4) is also a sufficient condition for optimality for the agent's problem.

5.2.2 The Agent's Problem: General Case

We now discuss what the conditions of Theorem 5.1 boil down to. Fix $(C_T, v)$, and let $H$ be the set of all admissible $u$, with

$$\langle u^1, u^2\rangle = E\Big\{\int_0^T u_t^1u_t^2\,dt\Big\}. \qquad (5.2)$$

For the functional $V_1: H \to \mathbb R$ defined by $V_1(u) = V_1(u; C_T, v)$, by (4.2) $V_1$ is Fréchet differentiable with Fréchet derivative $\Gamma^A$. More precisely, for any $u$, we solve (4.5) (without assuming (4.4)) and then define $\Gamma^A(u) = \Gamma^A$ by (4.3). Under technical assumptions, $\Gamma^A$ is a mapping from $H$ to $H$. Moreover, we need enough assumptions to guarantee that FBSDE (4.5) is well-posed, meaning, in particular, that $V_1$ is continuous in $u$, so that (A1) would be true. We further need that FBSDE (4.5) together with the maximum condition (4.4), or equivalently FBSDE (4.8), has a unique solution, and we denote by $u^*$ the control corresponding to this solution. Then (A2) would be true. To ensure (A3), we need to have, for any sequence of $u$,

$$\|\Gamma^A(u)\| \to 0 \quad\Longrightarrow\quad W_0^{A,u} \to W_0^{A,u^*}. \qquad (5.3)$$

Recalling Remark 5.1(1), we note that (5.3) can be replaced by the following stronger conditions. Assume that for any $\Gamma$, FBSDE (4.5) together with the condition $\Gamma^A = \Gamma$ has a unique solution, and let $u^\Gamma$ denote the corresponding $u$. That is, recalling (4.6) and noting again that $\Gamma_t^A = Z_t^{A,1} + g_uY_t^A$,

$$u_t^\Gamma = h\big(t, X_t, v_t, [\Gamma_t - Z_t^{A,1,\Gamma}]/Y_t^{A,\Gamma}\big),$$

where $(Y^{A,\Gamma}, Z^{A,1,\Gamma})$ is part of the solution of the following FBSDE:

$$\begin{aligned}
G_t^\Gamma &= \int_0^t g\big(s, X_s, h(s, X_s, v_s, [\Gamma_s - Z_s^{A,1,\Gamma}]/Y_s^{A,\Gamma}), v_s\big)\,ds;\\
W_t^{A,\Gamma} &= U_1(C_T, G_T^\Gamma) + \int_t^T Z_s^{A,1,\Gamma}h\big(s, X_s, v_s, [\Gamma_s - Z_s^{A,1,\Gamma}]/Y_s^{A,\Gamma}\big)\,ds - \int_t^T Z_s^{A,1,\Gamma}\,dB_s;\\
Y_t^{A,\Gamma} &= \partial_G U_1(C_T, G_T^\Gamma) + \int_t^T Z_s^{A,2,\Gamma}h\big(s, X_s, v_s, [\Gamma_s - Z_s^{A,1,\Gamma}]/Y_s^{A,\Gamma}\big)\,ds - \int_t^T Z_s^{A,2,\Gamma}\,dB_s.
\end{aligned}\qquad(5.4)$$

We need that the above FBSDE is well-posed; in particular,

$$V_1(u^\Gamma) = W_0^{A,\Gamma} \to W_0^{A,0} = V_1(u^*), \quad\text{as } \|\Gamma\| \to 0. \qquad (5.5)$$

Then (A3) holds.

5.2.3 The Principal's Problem

Here, $H$ is the admissible set of $(u, v)$, with

$$\langle (u^1, v^1), (u^2, v^2)\rangle = E\Big\{\int_0^T [u_t^1u_t^2 + v_t^1v_t^2]\,dt\Big\}.$$

The functional is $V_2(u, v)$ defined in (4.16). By (4.20), $V_2$ is Fréchet differentiable with Fréchet derivative $(\Gamma^{P,1}(u, v), \Gamma^{P,2}(u, v))$. As in Sect. 5.2.2, we need the following:

(i) Considering $(u, v)$ as parameters, FBSDE (4.23) (without assuming (4.22)) is well-posed;

(ii) FBSDE (4.23) together with (4.22) has a unique solution $(u^*, v^*)$;

(iii) For any sequence of $(u, v)$, $\|(\Gamma^{P,1}(u, v), \Gamma^{P,2}(u, v))\| \to 0$ implies $Y_0^{P,u,v} \to Y_0^{P,u^*,v^*}$.

Then $(u^*, v^*)$ is the optimal control for the principal's problem. Similarly, (iii) can be replaced by the following stronger condition:

(iii') For any $(\Gamma_1, \Gamma_2)$, FBSDE (4.23) together with the condition $(\Gamma^{P,1}, \Gamma^{P,2}) = (\Gamma_1, \Gamma_2)$ is well-posed. In particular,

$$V_2(u^{\Gamma_1,\Gamma_2}, v^{\Gamma_1,\Gamma_2}) \to V_2(u^{0,0}, v^{0,0}), \quad\text{as } \|(\Gamma_1, \Gamma_2)\| \to 0.$$
5.2.4 Fixed Volatility Case

In this case $v$ is fixed. Set $H$ to be the admissible set of $u$, with inner product defined by (5.2). The functional is $V_2(u)$, with Fréchet derivative $\Gamma^{P,1}(u)$. We need the following:

(i) Considering $u$ as a parameter, FBSDE (4.24) (without assuming (4.25)) is well-posed;

(ii) FBSDE (4.24) together with (4.25) has a unique solution $u^*$;

(iii) For any sequence of $u$, $\|\Gamma^{P,1}(u)\| \to 0$ implies $Y_0^{P,u} \to Y_0^{P,u^*}$.

Then $u^*$ is the optimal control of the principal's problem. Similarly, (iii) can be replaced by the following stronger condition:

(iii') For any $\Gamma$, FBSDE (4.24) together with the condition $\Gamma^{P,1} = \Gamma$ is well-posed. In particular, $V_2(u^\Gamma) \to V_2(u^*)$ as $\|\Gamma\| \to 0$.

5.3 Extension of the Original Holmstrom-Milgrom Problem

We now illustrate the approach above on a well known example. Assume exponential utilities,

$$U_1(x, y) = U_1(x - y) = -e^{-\gamma_1(x-y)} \quad\text{and}\quad U_2(x) = -e^{-\gamma_2 x}.$$

Here we abuse the notation $U_1$ again. In the original [3] paper, it was also the case that $v$ was fixed as a given constant, but we do not assume that. The results below are obtained in [4, 6] using a different approach.

For the exponential utility we have that $\partial_G U_1(x, c) = \gamma_1 U_1(x, c)$. Therefore, we get, for the agent's adjoint processes (4.1), that

$$Y_t^A = \gamma_1 W_t^A, \qquad Z_t^{A,2} = \gamma_1 Z_t^{A,1}.$$

Thus, the necessary condition (4.4) becomes

$$Z_t^{A,1} = -\gamma_1 g_u(t, X_t, u_t, v_t)W_t^A. \qquad (5.6)$$

Therefore, the agent's remaining utility is

$$W_t^A = U_1(C_T - G_T) + \int_t^T W_s^A\gamma_1 g_u(s, X_s, u_s, v_s)\,dB_s^u.$$

For the optimal contract we will have $W_0^A = R$. Therefore, we get

$$W_t^A = R\exp\Big(-\frac12\int_0^t \gamma_1^2 g_u^2(s, X_s, u_s, v_s)\,ds - \int_0^t \gamma_1 g_u(s, X_s, u_s, v_s)\,dB_s^u\Big). \qquad (5.7)$$
Let us first show that the condition (5.6) is also sufficient. For any given control $u$, since $-W_t^A$ is positive, we can write

$$W_t^A = -e^{-\gamma_1(C_T - G_T)} - \int_t^T \widetilde Z_s^{A,1}W_s^A\,dB_s^u,$$

where $\widetilde Z^{A,1} = Z^{A,1}/W^A$. Denote

$$\widetilde Y_t = W_t^A e^{-\gamma_1 G_t}.$$

We get, by switching to the Brownian motion $B$,

$$d\widetilde Y_t = \widetilde Z_t^{A,1}\widetilde Y_t\,dB_t - \big[\widetilde Z_t^{A,1}\widetilde Y_t u_t + \gamma_1 g(t, X_t, u_t, v_t)\widetilde Y_t\big]\,dt, \qquad \widetilde Y_T = -e^{-\gamma_1 C_T}. \qquad (5.8)$$

Note that the agent wants to maximize $W_0^A = \widetilde Y_0$. By the BSDE Comparison Theorem, the latter is maximized if the drift in (5.8) is maximized. We see that this will be true if condition (5.6) is satisfied, which is then a sufficient condition.

Denote $J_1(y) = U_1^{-1}(y) = -\log(-y)/\gamma_1$. The principal's problem is then to maximize

$$E^{Q^u}\big[U_2\big(X_T - J_1(W_T^A(u)) - G_T\big)\big]. \qquad (5.9)$$

We now impose the assumption (with a slight abuse of notation) that

$$g(t, x, u, v) = \mu_t x + g(t, u, v), \qquad (5.10)$$

for some deterministic function $\mu$. Doing integration by parts, we get the following representation for the first part of the cost $G_T$:

$$\int_0^T \mu_s X_s\,ds = X_T\int_0^T \mu_s\,ds - \int_0^T\Big(\int_0^s \mu_r\,dr\Big)\big[u_sv_s\,ds + v_s\,dB_s^u\big]. \qquad (5.11)$$

If we substitute this into $G_T = \int_0^T \mu_s X_s\,ds + \int_0^T g(s, u_s, v_s)\,ds$, and plug the expression for $X_T$ and the expression (5.7) for $W_T^A$ into (5.9), with $U_2(x) = -e^{-\gamma_2 x}$, we get that we need to minimize (up to a positive constant factor)

$$E\Big[\exp\Big(-\gamma_2\Big\{\Big[1 - \int_0^T\mu_s\,ds\Big]x + \int_0^T\Big[\Big(1 - \int_s^T\mu_r\,dr\Big)u_sv_s - g(s, u_s, v_s) - \frac{\gamma_1}{2}g_u^2(s, u_s, v_s)\Big]ds + \int_0^T\Big[\Big(1 - \int_s^T\mu_r\,dr\Big)v_s - g_u(s, u_s, v_s)\Big]dB_s^u\Big\}\Big)\Big]. \qquad (5.12)$$

This is a standard stochastic control problem, for which the solution turns out to be given by deterministic processes $\hat u, \hat v$ (as can be verified, once the solution is found, by verifying the corresponding Hamilton-Jacobi-Bellman equation). Assuming that $u, v$ are deterministic, the expectation above can be computed by using the fact that

$$E\Big[\exp\Big(\int_0^T f_s\,dB_s^u\Big)\Big] = \exp\Big(\frac12\int_0^T f_s^2\,ds\Big)$$

for a given square-integrable deterministic function $f$. Then the minimization can be done inside the integral in the exponent, and boils down to minimizing over $(u_t, v_t)$ the expression

$$-\Big[1 - \int_t^T\mu_s\,ds\Big]u_tv_t + g(t, u_t, v_t) + \frac{\gamma_1}{2}g_u^2(t, u_t, v_t) + \frac{\gamma_2}{2}\Big\{\Big[1 - \int_t^T\mu_s\,ds\Big]v_t - g_u(t, u_t, v_t)\Big\}^2. \qquad (5.13)$$

The optimal contract is found from (4.12), as:

$$C_T = G_T - \frac{1}{\gamma_1}\log(-W_T^A),$$

where $W^A$ should be written not in terms of the Brownian motion $B^u$, but in terms of the process $X$. Since we have

$$W_T^A = R\exp\Big(-\frac{\gamma_1^2}{2}\int_0^T g_u^2(s, u_s, v_s)\,ds + \gamma_1\int_0^T u_sg_u(s, u_s, v_s)\,ds - \gamma_1\int_0^T g_u(s, u_s, v_s)\,dB_s\Big), \qquad (5.14)$$

we get that the optimal contract can be written as (assuming the optimal $v$ is strictly positive)

$$C_T = c + \int_0^T \mu_s X_s\,ds + \int_0^T \frac{g_u(s, u_s, v_s)}{v_s}\,dX_s$$

for some constant $c$. If $\mu \equiv 0$ and $\frac{g_u(s, u_s, v_s)}{v_s}$ is a constant, then we get a linear contract.

Let us consider the special case of Holmstrom-Milgrom [3], with

$$v \equiv 1, \qquad g(t, x, u, v) = u^2/2.$$

Then (5.13) becomes

$$-u + \frac{\gamma_1 u^2}{2} + \frac{u^2}{2} + \frac{\gamma_2}{2}(1 - u)^2.$$

Minimizing this, we get the constant optimal $u$ of Holmstrom-Milgrom [3], given by

$$\hat u = \frac{1 + \gamma_2}{1 + \gamma_1 + \gamma_2}.$$
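This one-dimensional minimization is elementary and can be checked numerically. The sketch below is our illustration (the risk aversions $\gamma_1 = 2$, $\gamma_2 = 1$ are arbitrary choices): it minimizes the expression (5.13) specialized as above by brute force and compares the minimizer with the closed form $\hat u$.

```python
# gamma1, gamma2: agent's and principal's risk aversions (arbitrary illustrative values)
gamma1, gamma2 = 2.0, 1.0

def h(u):
    # (5.13) with mu = 0, v = 1, g = u^2/2, g_u = u:
    # -u + gamma1*u^2/2 + u^2/2 + gamma2*(1-u)^2/2
    return -u + 0.5 * gamma1 * u * u + 0.5 * u * u + 0.5 * gamma2 * (1.0 - u) ** 2

# brute-force minimization over a fine grid on [0, 1]
grid = [i / 100000.0 for i in range(0, 100001)]
u_hat = min(grid, key=h)

closed = (1.0 + gamma2) / (1.0 + gamma1 + gamma2)
print(u_hat, closed)
```

With these values the closed form gives $\hat u = 2/4 = 0.5$, and the grid minimizer agrees; since $h$ is a strictly convex quadratic, the first order condition pins down $\hat u$ uniquely.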
The optimal contract is linear, and given by $C_T = a + bX_T$, where $b = \hat u$ and $a$ is such that the IR constraint is satisfied,

$$a = -\frac{1}{\gamma_1}\log(-R) - bx + \frac{b^2T(\gamma_1 - 1)}{2}. \qquad (5.15)$$

Note that in the limit when $\gamma_i \to 0$ we get the case corresponding to the linear utility function $U_i(x) = x$.

6 Conclusion

We provide a new solvable case of Principal-Agent problems with general utility functions and hidden action in models driven by Brownian Motion, which leads to nonlinear contracts. We then formulate a general theory leading to the necessary conditions for the optimal solution. However, the question of the existence of an optimal solution remains open. We analyze both the agent's and the principal's problem in weak formulation, thus having a consistent framework.

Acknowledgements Research supported in part by NSF grants DMS 4-3575, DMS 6-398 and DMS 6-3366, and through the Programme GUEST of the National Foundation For Science, Higher Education and Technological Development of the Republic of Croatia. We are solely responsible for any remaining errors, and the opinions, findings and conclusions or suggestions in this article do not necessarily reflect anyone's opinions but the authors'. We are grateful to the editor and the anonymous referees for helpful suggestions that improved the exposition of the paper.

Appendix

In this appendix we provide the remaining proofs.

Proof of Lemma 3.1 We first assume $\xi$ is bounded. Then $M_T^u\xi \in L_T^2(Q)$. Let $(\widetilde Y, \widetilde Z)$ be the unique solution to the BSDE

$$\widetilde Y_t = M_T^u\xi - \int_t^T \widetilde Z_s\,dB_s.$$

Define

$$Y_t = \widetilde Y_t[M_t^u]^{-1}, \qquad Z_t = [\widetilde Z_t - u_t\widetilde Y_t][M_t^u]^{-1}.$$

One can check directly that

$$dY_t = Z_t\,dB_t^u, \qquad Y_T = \xi.$$

Moreover,

$$Y_t = E_t\{M_T^u\xi\}[M_t^u]^{-1} = E_t^u\{\xi\},$$

which implies that

$$E^u\Big\{\sup_{0\le t\le T}|Y_t|^2\Big\} \le CE^u\{|\xi|^2\} < \infty.$$

Then one can easily get $Z \in L^2(Q^u)$. In general, assume $\xi_n$ are bounded and $E^u\{|\xi_n - \xi|^2\} \to 0$. Let $(Y^n, Z^n)$ be the solution to BSDE (3.1) with terminal condition $\xi_n$. Then

$$E^u\Big\{\sup_{0\le t\le T}|Y_t^n - Y_t^m|^2 + \int_0^T |Z_t^n - Z_t^m|^2\,dt\Big\} \le CE^u\{|\xi_n - \xi_m|^2\}.$$

Therefore, $(Y^n, Z^n)$ converges to some $(Y, Z)$ which satisfies (3.1). $\square$
Proof of Theorem 3.1 We first show that:

Lemma 7.1 For any $u \in \mathcal A$, we have

$$E\Big\{e^{\frac12\int_0^T |u_t|^2\,dt}\Big\} < \infty; \qquad (7.1)$$

and thus the Girsanov Theorem holds for $u$.

Proof Denote

$$\tau_n := \inf\Big\{t: \int_0^t |u_s|^2\,ds + \Big|\int_0^t u_s\,dB_s\Big| > n\Big\} \wedge T.$$

Then $\tau_n \uparrow T$. Moreover,

$$e^{\int_0^{\tau_n} u_t\,dB_t} = M_{\tau_n}^u\, e^{\frac12\int_0^{\tau_n}|u_t|^2\,dt}.$$

Squaring both sides, taking expectations, and applying the Cauchy-Schwarz inequality, we obtain

$$E\Big\{e^{\frac12\int_0^{\tau_n}|u_t|^2\,dt}\Big\} \le E\{|M_{\tau_n}^u|^4\} \le E\{|M_T^u|^4\} < \infty.$$

Letting $n \to \infty$, we get (7.1). $\square$

We now show that BSDE (3.7) is well-posed and $\hat u \in \mathcal A$. First, by Definition 3.1(i), we can solve the following linear BSDE:

$$\bar Y_t = e^{U_1(C_T)} - \int_t^T \bar Z_s\,dB_s.$$

Define

$$\hat u_t = \bar Z_t/\bar Y_t.$$

Then $(\bar Y, \hat u)$ satisfies (3.7).
Appl Mah Opim (9) 59: 99 46 37 Since Ȳ > is coninuous, and E{ Z d <, we know û saisfies Definiion 3.(i). Moreover, by sraighforward calculaion (or recall (3.)) we have Thus, by Definiion 3.(i), MûT =[Ȳ ] e U (C T ). E{ MûT 4 =[Ȳ ] 4 E{e 4U (C T ) <. Therefore, û A. Proof of Proposiion 3.3 Under our assumpions (3.4) becomes C T + U (C T ) = X T + λ. We know C T = c(x T ) for some deerminisic funcion c. Then by Remark 3. we know u = u(, X ) and (3.5) (3.6) hold rue wih v(,x) = σ x. Noe ha u x (, x) = σ [ F F x F + x(ff xx Fx ](, ) x). Wihou loss of generaliy, we prove he resul only a =. Denoe X x = x + σ s Xs x db s; X = + σ s X s db s. Then X T = X T x ; and F(,x)= E{e U (c(xt x )) ; F x (,x)= E { e U (c(xt x )) U (c(xx T ))c (XT x ) X T ; { F xx (,x)= E e U (c(xt x ))[ U (c(xx T ))c (XT x ) X T + U (c(xx T )) c (X x T ) X T + U (c(xx T ))c (X x T ) X T ]. Thus, by suppressing he variables, u x (,x)= σ [ F FF x + x[ff xx Fx ](,x) ] σ F [ FE = σ xf E { e U { {e U U c X T + xfe e U [U c + U c ] X T [ U c X T +[U c + U c ] X T ].
Note that

c'(x) = \frac{|U_2'|^2}{|U_2'|^2 - U_2''} > 0; \qquad c''(x) = \frac{|U_2'|^2U_2''' - 2U_2'|U_2''|^2}{(|U_2'|^2 - U_2'')^2}\,c'(x),

and

U_2''|c'|^2 + U_2'c'' = \frac{|U_2'|^4}{(|U_2'|^2 - U_2'')^3}\big[|U_2'|^2U_2'' + U_2'U_2''' - 3|U_2''|^2\big] = \frac{|U_2'|^4}{(|U_2'|^2 - U_2'')^3}H(c(X_T)).

Denote y := c(X_T). If H(y) \ge 0 then, combining the fact that \nabla X_T > 0,

U_2'c'\nabla X_T + x[U_2''|c'|^2 + U_2'c'']|\nabla X_T|^2 \ge 0.

We now assume H(y) < 0; then, by the assumptions of the proposition, H_1(y) := (|U_2'|^2 - U_2'')^2 + U_2'H(y)\big(y + \frac{1}{U_2'(y)}\big) \ge 0. Since \lambda \ge 0, we have

0 < X_T \le X_T + \lambda = y + \frac{1}{U_2'(y)}.

Thus, using x|\nabla X_T|^2 = \nabla X_T X_T,

U_2'c'\nabla X_T + x[U_2''|c'|^2 + U_2'c'']|\nabla X_T|^2
= \nabla X_T\Big[U_2'c' + \frac{|U_2'|^4H(y)}{(|U_2'|^2 - U_2'')^3}X_T\Big]
\ge \nabla X_T\Big[U_2'c' + \frac{|U_2'|^4H(y)}{(|U_2'|^2 - U_2'')^3}(X_T + \lambda)\Big]
= \nabla X_T\Big[U_2'c' + \frac{|U_2'|^4H(y)}{(|U_2'|^2 - U_2'')^3}\Big(y + \frac{1}{U_2'(y)}\Big)\Big]
= \nabla X_T\frac{|U_2'|^3}{(|U_2'|^2 - U_2'')^3}H_1(y) \ge 0.

So in both cases we have

U_2'c'\nabla X_T + x[U_2''|c'|^2 + U_2'c'']|\nabla X_T|^2 \ge 0.

Thus u_x(0,x) \ge 0.

Proof of Theorem 4.1 By (A3)(iii), one can easily show that

\sup_{\varepsilon\in[0,\varepsilon_0)} E\Big\{\int_0^T[|g^\varepsilon(t)|^4 + |g^\varepsilon_u(t)|^4]\,dt + |G^\varepsilon_T|^4 + |M^\varepsilon_T|^4\Big\} < \infty, \qquad (7.2)
and

\lim_{\varepsilon\to 0} E\Big\{\int_0^T[|g^\varepsilon(t) - g(t)|^4 + |g^\varepsilon_u(t) - g_u(t)|^4]\,dt + |G^\varepsilon_T - G_T|^4 + |M^\varepsilon_T - M_T|^4\Big\} = 0. \qquad (7.3)

Lemma 7.2 We have

\lim_{\varepsilon\to 0} E\Big\{\int_0^T|\nabla g^\varepsilon(t) - \nabla g(t)|^4\,dt + |\nabla G^\varepsilon_T - \nabla G_T|^4 + |\nabla M^\varepsilon_T - \nabla M_T|^2\Big\} = 0.

Proof First, note that \nabla g^\varepsilon(t) = \int_0^1 g^{\delta\varepsilon}_u(t)\,d\delta\,\Delta u_t. Then

|\nabla g^\varepsilon(t) - \nabla g(t)| \le |\Delta u_t|\int_0^1|g^{\delta\varepsilon}_u(t) - g_u(t)|\,d\delta.

By (7.3) we get

\lim_{\varepsilon\to 0} E\Big\{\int_0^T|\nabla g^\varepsilon(t) - \nabla g(t)|^4\,dt\Big\} = 0. \qquad (7.4)

Similarly,

\lim_{\varepsilon\to 0} E\{|\nabla G^\varepsilon_T - \nabla G_T|^4\} = 0.

Second, noting that M^\varepsilon_T = \exp\big(\int_0^T u^\varepsilon_t\,dB_t - \frac12\int_0^T|u^\varepsilon_t|^2\,dt\big), we have

\nabla M^\varepsilon_T = \int_0^1 M^{\delta\varepsilon}_T\Big[\int_0^T\Delta u_t\,dB_t - \int_0^T(u_t + \delta\varepsilon\Delta u_t)\Delta u_t\,dt\Big]d\delta.

Then

E\{|\nabla M^\varepsilon_T - \nabla M_T|^2\}
= E\Big\{\Big|\int_0^1\Big[(M^{\delta\varepsilon}_T - M_T)\Big(\int_0^T\Delta u_t\,dB_t - \int_0^T(u_t + \delta\varepsilon\Delta u_t)\Delta u_t\,dt\Big) - \delta\varepsilon M_T\int_0^T|\Delta u_t|^2\,dt\Big]d\delta\Big|^2\Big\}
\le C\int_0^1\Big[\big[E\{|M^{\delta\varepsilon}_T - M_T|^4\}\big]^{1/2}
\big[E\big\{\big|\int_0^T\Delta u_t\,dB_t - \int_0^T(u_t + \delta\varepsilon\Delta u_t)\Delta u_t\,dt\big|^4\big\}\big]^{1/2} + \varepsilon^2 E\Big\{|M_T|^2\Big(\int_0^T|\Delta u_t|^2\,dt\Big)^2\Big\}\Big]d\delta.

Then by (7.3) and Assumption A3(iii) we prove the result.

To prove the theorem, we also need the following simple lemma (see, e.g., [15]):

Lemma 7.3 Assume W_t = \int_0^t\alpha_s\,dB_s + A_t is a continuous semimartingale, where B is a Brownian motion. Suppose that

(1) \int_0^T|\alpha_t|^2\,dt < \infty, a.s.;
(2) both W and A are uniformly (in t) integrable.

Then E[W_T] = E[A_T].

We now show (4.). The first equality can be written as

\lim_{\varepsilon\to 0}\nabla V^\varepsilon = E\{\nabla M_T U_1 + M_T\partial_G U_1\nabla G_T\}. \qquad (7.5)

Note that we have

\nabla V^\varepsilon = \frac{V^\varepsilon - V}{\varepsilon} = E\Big\{\nabla M^\varepsilon_T U^\varepsilon_1 + M_T\frac{U^\varepsilon_1 - U_1}{\varepsilon}\Big\}. \qquad (7.6)

As for the limit of the first term on the right-hand side, we can write

\nabla M^\varepsilon_T U^\varepsilon_1 - \nabla M_T U_1 = [\nabla M^\varepsilon_T - \nabla M_T]U_1 + \nabla M^\varepsilon_T[U^\varepsilon_1 - U_1].

By Assumption A3(iii) and the above L^2 bounds on \nabla M^\varepsilon_T, this is integrable uniformly with respect to \varepsilon, so the expected value (under Q) converges to zero, which is what we need.

As for the limit of the second term on the right side of (7.6), notice that we have

\lim_{\varepsilon\to 0} M_T\frac{U^\varepsilon_1 - U_1}{\varepsilon} = M_T\partial_G U_1\nabla G_T. \qquad (7.7)

We want to prove the uniform integrability again. We note that

\frac{U^\varepsilon_1 - U_1}{\varepsilon} = \int_0^1\partial_G U_1(C_T, G_T + \theta(G^\varepsilon_T - G_T))\,d\theta\;\nabla G^\varepsilon_T \le \{|\partial_G U_1(C_T,G_T)| + |\partial_G U_1(C_T,G^\varepsilon_T)|\}|\nabla G^\varepsilon_T|,

where the last inequality is due to the monotonicity of \partial_G U_1. Therefore, we get

\Big|M_T\frac{U^\varepsilon_1 - U_1}{\varepsilon}\Big| \le C\{|\partial_G U_1(C_T,G_T)|^2 + |\partial_G U_1(C_T,G^\varepsilon_T)|^2 + |\nabla G^\varepsilon_T|^4 + |M_T|^4\}.
Thus, from Assumption A3(iii), the left-hand side is uniformly integrable, the expectations of the terms in (7.7) also converge, and we finish the proof of (7.5).

We now want to prove the second equality of (4.). We have

\nabla V = E\{\nabla M_T U_1 + M_T\partial_G U_1\nabla G_T\}
= E\Big\{M_T U_1\int_0^T\Delta u_t\,dB^u_t + M_T\partial_G U_1\nabla G_T\Big\}
= E^u\Big\{W^A_T\int_0^T\Delta u_t\,dB^u_t + Y^{A,2}_T\nabla G_T\Big\}
= E^u\Big\{\int_0^T\Gamma^A_t\Delta u_t\,dt + \int_0^T\Gamma^B_t\,dB^u_t\Big\}, \qquad (7.8)

where \nabla G_t := \int_0^t g_u(s)\Delta u_s\,ds,

\Gamma^A_t := Z^{A,1}_t + g_u(t, X_t, u_t, v_t)Y^{A,2}_t, \qquad \Gamma^B_t := W^A_t\Delta u_t + Z^{A,1}_t\int_0^t\Delta u_s\,dB^u_s + Z^{A,2}_t\nabla G_t,

and the last equality is obtained from Itô's rule and the definitions of Y^{A,i}, Z^{A,i}. We need to show that

E^u\Big\{\int_0^T\Gamma^B_t\,dB^u_t\Big\} = 0.

We want to use Lemma 7.3 in the last two lines of (7.8), with \alpha = \Gamma^B and

W_t = W^A_t\int_0^t\Delta u_s\,dB^u_s + Y^{A,2}_t\nabla G_t, \qquad A_t = \int_0^t\Gamma^A_s\Delta u_s\,ds.

From the BSDE theory and our assumptions we have

E^u\Big\{\sup_{0\le t\le T}(|W^A_t|^2 + |Y^{A,2}_t|^2) + \int_0^T(|Z^{A,1}_t|^2 + |Z^{A,2}_t|^2 + |g_u(t)\Delta u_t|^2)\,dt\Big\} < \infty. \qquad (7.9)

From this it is easily verified that \int_0^T|\Gamma^B_t|^2\,dt < \infty, a.s., so that condition (1) of the lemma is satisfied. Next, we have

E^u\Big\{\sup_{0\le t\le T}|W_t|\Big\} \le CE^u\Big\{\sup_{0\le t\le T}[|W^A_t|^2 + |Y^{A,2}_t|^2] + 1 + \int_0^T|g_u(t)\Delta u_t|^2\,dt\Big\} \le C + CE\Big\{|M_T|^2 + \int_0^T|g_u(t)|^4\,dt\Big\} < \infty,
thanks to (7.9) and (7.2). Moreover,

E^u\Big\{\sup_{0\le t\le T}|A_t|\Big\} = E^u\Big\{\sup_{0\le t\le T}\Big|\int_0^t[Z^{A,1}_s + g_u(s)Y^{A,2}_s]\Delta u_s\,ds\Big|\Big\} \le CE\Big\{M_T\int_0^T(|Z^{A,1}_t| + |g_u(t)Y^{A,2}_t|)\,dt\Big\}
\le CE\Big\{|M_T|^4 + \int_0^T[|Z^{A,1}_t|^2 + |g_u(t)|^4 + |Y^{A,2}_t|^4]\,dt\Big\} < \infty.

The last two bounds ensure that condition (2) of the lemma is satisfied, so that the last term in (7.8) is zero, and we finish the proof of (4.). Finally, (4.4) follows directly from (4.) if u is optimal, as \Delta u is arbitrary.

Proof of Theorem 4.2 First, the necessity of (4.) is obvious because (\Delta u, \Delta v) is arbitrary. Note that

\nabla X_t = \int_0^t\Delta v_s\,dB^u_s + \int_0^t u_s\Delta v_s\,ds.

Applying Lemma 7.3 repeatedly, we have

E\{[\partial_C U_1\nabla C_T + \partial_G U_1\nabla G_T]Y^1_T\} = E\{\nabla W^A_T Y^1_T\}
= E\Big\{\int_0^T Y^1_t g_u Y^A_t\Delta u_t\,dt - \int_0^T Z^1_t[g_u\nabla Y^A_t + Y^A_t\nabla g_u(t)]\,dt\Big\}
= E\Big\{\int_0^T Y^1_t g_u Y^A_t\Delta u_t\,dt - \int_0^T Z^1_t Y^A_t\nabla g_u(t)\,dt - \int_0^T\nabla X_t Z^A_t\Delta u_t\,dt - \nabla X_T[\partial_C U_1\nabla C_T + \partial_G U_1\nabla G_T]\Big\}.

Note that

\partial_C U_1\nabla Y^1_T + \nabla X_T\partial_C U_1 = \nabla U_1.

Therefore,

E\{\partial_C U_1\nabla C_T\} = E\Big\{\int_0^T Y^1_t g_u Y^A_t\Delta u_t\,dt - \int_0^T Z^1_t Y^A_t\nabla g_u(t)\,dt - \int_0^T\nabla X_t Z^A_t\Delta u_t\,dt - [\nabla X_T\partial_C U_1 + \nabla Y^1_T\partial_G U_1]\nabla G_T\Big\}.
\nabla V_P = E\Big\{U_2'[\nabla X_T - \nabla C_T] + \int_0^T Z^P_t\Delta u_t\,dt\Big\}
= E\Big\{U_2'\nabla X_T + \int_0^T Z^P_t\Delta u_t\,dt + [\nabla X_T\partial_C U_1 + \nabla Y^1_T\partial_G U_1]\nabla G_T - \int_0^T\big[[g_u Y^1_t Y^A_t - \nabla X_t Z^A_t]\Delta u_t - Z^1_t Y^A_t\nabla g_u(t)\big]dt\Big\}
= E\Big\{\int_0^T[Z^P_t - g_u Y^1_t Y^A_t + \nabla X_t Z^A_t + g_{uu}Z^1_t Y^A_t + g_u Y^1_t]\Delta u_t\,dt + \int_0^T[g_{uv}Z^1_t Y^A_t + g_v Y^1_t]\Delta v_t\,dt - \int_0^T[g_{ux}Z^1_t Y^A_t + g_x Y^1_t]\nabla X_t\,dt + U_2'\nabla X_T\Big\}
= E\Big\{\int_0^T[Z^P_t - g_u Y^1_t Y^A_t + \nabla X_t Z^A_t + g_{uu}Z^1_t Y^A_t + g_u Y^1_t]\Delta u_t\,dt + \int_0^T[g_{uv}Z^1_t Y^A_t + g_v Y^1_t]\Delta v_t\,dt + Y^3_T\nabla X_T\Big\}
= E\Big\{\int_0^T[Z^P_t - g_u Y^1_t Y^A_t + \nabla X_t Z^A_t + g_{uu}Z^1_t Y^A_t + g_u Y^1_t]\Delta u_t\,dt + \int_0^T[g_{uv}Z^1_t Y^A_t + g_v Y^1_t + Z^3_t + Y^3_t u_t]\Delta v_t\,dt\Big\}
= E\Big\{\int_0^T\Gamma^{P,1}_t\Delta u_t\,dt + \int_0^T\Gamma^{P,2}_t\Delta v_t\,dt\Big\}.

The proof is complete.

Proof of Theorem 4.3 We have already shown that we can set \tilde R = R. Recall that

X_t = x + \int_0^t v_s\,dB_s; \qquad M_t = \exp\Big(\int_0^t u_s\,dB_s - \frac12\int_0^t|u_s|^2\,ds\Big);
W^A_t = R + \int_0^t g\,ds - \int_0^t u_s g_u\,ds + \int_0^t g_u\,dB_s;
Y^P_t = U_2(X_T - J(W^A_T)) + \int_t^T u_s Z^P_s\,ds - \int_t^T Z^P_s\,dB_s;
and that

\nabla X_t = \int_0^t\Delta v_s\,dB_s; \qquad \nabla M_t = M_t\Big[\int_0^t\Delta u_s\,dB_s - \int_0^t u_s\Delta u_s\,ds\Big];
\nabla\varphi_t = \varphi_u\Delta u_t + \varphi_v\Delta v_t + \varphi_x\nabla X_t, \qquad \varphi = g, g_u;
\nabla W^A_t = \int_0^t\nabla g\,ds - \int_0^t[g_u\Delta u_s + u_s\nabla g_u]\,ds + \int_0^t\nabla g_u\,dB_s;
\nabla Y^P_t = U_2'(X_T - J(W^A_T))[\nabla X_T - \nabla W^A_T J'(W^A_T)] + \int_t^T[Z^P_s\Delta u_s + u_s\nabla Z^P_s]\,ds - \int_t^T\nabla Z^P_s\,dB_s.

To prove (4.34), we need the following result: for any random variable \xi and any p > 1,

E^{u^\varepsilon}\{|\xi|^p\} = E\{M^{u^\varepsilon}_T|\xi|^p\} \le [E\{|M^{u^\varepsilon}_T|^2\}]^{1/2}[E\{|\xi|^{2p}\}]^{1/2} \le C[E\{|\xi|^{2p}\}]^{1/2};
E\{|\xi|^p\} = E^{u^\varepsilon}\{[M^{u^\varepsilon}_T]^{-1}|\xi|^p\} \le [E^{u^\varepsilon}\{[M^{u^\varepsilon}_T]^{-2}\}]^{1/2}[E^{u^\varepsilon}\{|\xi|^{2p}\}]^{1/2} \le C[E^{u^\varepsilon}\{|\xi|^{2p}\}]^{1/2}. \qquad (7.10)

Proof of (4.34) In this proof we use a generic constant p to denote the powers, which may vary from line to line. We assume all the involved powers are always less than or equal to the p in (A5′). First, one can easily show that

\lim_{\varepsilon\to 0}E\Big\{\sup_{0\le t\le T}[|X^\varepsilon_t - X_t|^p + |M^\varepsilon_t - M_t|^p + |W^{A,\varepsilon}_t - W^A_t|^p] + \int_0^T[|g^\varepsilon - g|^p + |g^\varepsilon_u - g_u|^p]\,dt\Big\} = 0.

Using the arguments in Lemma 3. we have

E^{u^\varepsilon}\Big\{\Big[\int_0^T|Z^{P,\varepsilon}_t|^2\,dt\Big]^p\Big\} \le C < \infty,

which, by applying (7.10) twice, implies that

E^u\Big\{\Big[\int_0^T|Z^{P,\varepsilon}_t|^2\,dt\Big]^p\Big\} \le C < \infty.

Note that

Y^{P,\varepsilon}_t - Y^P_t = U^\varepsilon_2 - U_2 + \int_t^T\big[\varepsilon\Delta u_s Z^{P,\varepsilon}_s + u_s[Z^{P,\varepsilon}_s - Z^P_s]\big]ds - \int_t^T[Z^{P,\varepsilon}_s - Z^P_s]\,dB_s
= U^\varepsilon_2 - U_2 + \int_t^T\varepsilon\Delta u_s Z^{P,\varepsilon}_s\,ds - \int_t^T[Z^{P,\varepsilon}_s - Z^P_s]\,dB^u_s.

Using the arguments in Lemma 3. again we get

\lim_{\varepsilon\to 0}E^u\Big\{\sup_{0\le t\le T}|Y^{P,\varepsilon}_t - Y^P_t|^p + \Big[\int_0^T|Z^{P,\varepsilon}_t - Z^P_t|^2\,dt\Big]^p\Big\} = 0,

which, together with (7.10), implies that

\lim_{\varepsilon\to 0}E\Big\{\sup_{0\le t\le T}|Y^{P,\varepsilon}_t - Y^P_t|^p + \Big[\int_0^T|Z^{P,\varepsilon}_t - Z^P_t|^2\,dt\Big]^p\Big\} = 0. \qquad (7.11)

Next, recalling (4.4), one can easily show that

\lim_{\varepsilon\to 0}E\Big\{\sup_{0\le t\le T}[|\nabla X^\varepsilon_t - \nabla X_t|^p + |\nabla M^\varepsilon_t - \nabla M_t|^p + |\nabla W^{A,\varepsilon}_t - \nabla W^A_t|^p] + \int_0^T[|\nabla g^\varepsilon - \nabla g|^p + |\nabla g^\varepsilon_u - \nabla g_u|^p]\,dt\Big\} = 0.

Then, similar to (7.11), one can prove that

\lim_{\varepsilon\to 0}E\Big\{\sup_{0\le t\le T}|\nabla Y^{P,\varepsilon}_t - \nabla Y^P_t|^p\Big\} = 0.

In particular,

\lim_{\varepsilon\to 0}\nabla V^\varepsilon = \lim_{\varepsilon\to 0}\nabla Y^{P,\varepsilon}_0 = \nabla Y^P_0.

The proof is complete.

References

1. Abreu, D., Pearce, D., Stacchetti, E.: Optimal cartel equilibria with imperfect monitoring. J. Econ. Theory 39, 251–269 (1986)
2. Abreu, D., Pearce, D., Stacchetti, E.: Toward a theory of discounted repeated games with imperfect monitoring. Econometrica 58, 1041–1063 (1990)
3. Biais, B., Mariotti, T., Plantin, G., Rochet, J.C.: Dynamic security design: convergence to continuous time and asset pricing implications. Rev. Econ. Stud. 74, 345–390 (2007)
4. Bolton, P., Dewatripont, M.: Contract Theory. MIT Press, Cambridge (2005)
5. Cvitanić, J., Wan, X., Zhang, J.: First-best contracts for continuous-time principal-agent problems. J. Appl. Math. Stoch. Anal. 2006, Article ID 95203, 27 pages
6. Davis, M.H.A., Varaiya, P.P.: Dynamic programming conditions for partially-observable stochastic systems. SIAM J. Control 11, 226–261 (1973)
7. DeMarzo, P., Sannikov, Y.: Optimal security design and dynamic capital structure in a continuous-time agency model. J. Finance 61, 2681–2724 (2006)
8. Detemple, J., Govindaraj, S., Loewenstein, M.: Optimal contracts and intertemporal incentives with hidden actions. Working paper, Boston University
9. Ekeland, I.: On the variational principle. J. Math. Anal. Appl. 47, 324–353 (1974)
10. Fujisaki, M., Kallianpur, G., Kunita, H.: Stochastic differential equations for the nonlinear filtering problem. Osaka J. Math. 9, 19–40 (1972)
11. Hellwig, M., Schmidt, K.M.: Discrete-time approximations of the Holmström–Milgrom Brownian-motion model of intertemporal incentive provision. Econometrica 70, 2225–2264 (2002)
12. Holmstrom, B.: Moral hazard and observability. Bell J. Econ. 10, 74–91 (1979)
13. Holmstrom, B., Milgrom, P.: Aggregation and linearity in the provision of intertemporal incentives. Econometrica 55, 303–328 (1987)
14. Hugonnier, J., Kaniel, R.: Mutual fund portfolio choice in the presence of dynamic flows. Math. Finance (2008, to appear)
15. Ma, J., Yong, J.: Forward-Backward Stochastic Differential Equations and Their Applications. Lecture Notes in Math., vol. 1702. Springer, Berlin (1999)
16. Mirrlees, J.: Notes on welfare economics, information, and uncertainty. In: Balch, McFadden, Wu (eds.) Essays on Economic Behavior under Uncertainty. North-Holland, Amsterdam (1974)
17. Mirrlees, J.: The optimal structure of incentives and authority within an organization. Bell J. Econ. 7, 105–131 (1976)
18. Müller, H.: The first-best sharing rule in the continuous-time principal-agent problem with exponential utility. J. Econ. Theory 79, 276–280 (1998)
19. Müller, H.: Asymptotic efficiency in dynamic principal-agent problems. J. Econ. Theory 91, 292–301 (2000)
20. Oksendal, B., Sulem, A.: Applied Stochastic Control of Jump Diffusions. Springer, Berlin (2004)
21. Ou-Yang, H.: Optimal contracts in a continuous-time delegated portfolio management problem. Rev. Financ. Stud. 16, 173–208 (2003)
22. Phelan, C., Townsend, R.: Computing multi-period, information-constrained optima. Rev. Econ. Stud. 58, 853–881 (1991)
23. Sannikov, Y.: A continuous-time version of the principal-agent problem. Rev. Econ. Stud. (2007, forthcoming)
24. Schättler, H., Sung, J.: The first-order approach to the continuous-time principal-agent problem with exponential utility. J. Econ. Theory 61, 331–371 (1993)
25. Spear, S.E., Srivastava, S.: On repeated moral hazard with discounting. Rev. Econ. Stud. 54(4), 599–617 (1987)
26. Sung, J.: Linearity with project selection and controllable diffusion rate in continuous-time principal-agent problems. RAND J. Econ. 26, 720–743 (1995)
27. Sung, J.: Lectures on the theory of contracts in corporate finance. Preprint, University of Illinois at Chicago
28. Sung, J.: Optimal contracts under adverse selection and moral hazard: a continuous-time approach. Rev. Financ. Stud. 18, 1021–1073 (2005)
29. Williams, N.: On dynamic principal-agent problems in continuous time. Working paper, Princeton University (2004)
30. Yong, J., Zhou, X.Y.: Stochastic Controls: Hamiltonian Systems and HJB Equations. Springer, New York (1999)