MEAN-VARIANCE HEDGING WITH RANDOM VOLATILITY JUMPS


MEAN-VARIANCE HEDGING WITH RANDOM VOLATILITY JUMPS

PAOLO GUASONI AND FRANCESCA BIAGINI

Abstract. We introduce a general framework for stochastic volatility models, with the risky asset dynamics given by:

    $dX_t(\omega,\eta) = \mu_t(\eta) X_t(\omega,\eta)\,dt + \sigma_t(\eta) X_t(\omega,\eta)\,dW_t(\omega)$

where $(\omega,\eta) \in \Omega \times H$, $(\mathcal{F}^\Omega \otimes \mathcal{F}^H, P^\Omega \otimes P^H)$. In particular, we allow for random discontinuities in the volatility $\sigma$ and the drift $\mu$. First we characterize the set of equivalent martingale measures, then compute the mean-variance optimal measure $\tilde P$, using some results of Schweizer on the existence of an adjustment process $\beta$. We show examples where the risk premium $\lambda = (\mu - r)/\sigma$ follows a discontinuous process, and make explicit calculations for $\tilde P$.

Introduction

The introduction of the mean-variance approach for pricing options under incomplete information is due to Föllmer and Sondermann [7], who first proposed the minimization of quadratic risk. Their work, as well as that of Bouleau and Lamberton, focused on the case when the price of the underlying asset is a martingale. The more general semimartingale case was considered by Duffie and Richardson [6], Schweizer [18, 19, 20, 21], Hipp [10], Monat and Stricker [13], Schäl [17], and a definitive solution was provided by Rheinländer and Schweizer [16], and Gouriéroux, Laurent, and Pham [14], with different methods. In the meantime, the random behavior of volatility turned out to be a major issue in applied option pricing, and Hull and White [11], Stein and Stein [22] and Heston [9] proposed different models with stochastic volatility. In fact, such models are special cases of incomplete information, and can be effectively embedded in the theoretical framework developed by mathematicians. A particularly appealing feature of mean-variance hedging is that European option prices are calculated as the expectations of their respective payoffs under a (possibly signed) martingale measure $\tilde P$, introduced by Schweizer [21].
The optimal strategy can also be found in terms of this measure; therefore it is not surprising that considerable effort has been devoted to its explicit calculation.
In this paper, we address the problem of calculating $\tilde P$ in the presence of volatility jumps or, more generally, when the so-called market price of risk $\lambda = (\mu - r)/\sigma$ follows a possibly discontinuous process. We consider the following market model, where each state of nature $(\omega,\eta)$ belongs to the product space $\Omega \times H$, endowed with the product measure $P^\Omega \otimes P^H$:

(1) $dS_t(\omega,\eta) = \mu_t(\eta) S_t(\omega,\eta)\,dt + \sigma_t(\eta) S_t(\omega,\eta)\,dW_t(\omega)$, with $B_t = \exp\left(\int_0^t r_s\,ds\right)$

and $r$ is a constant. We assume the existence on $H$ of a set $A$ of martingales with the representation property: this somewhat technical condition is in fact satisfied in most models present in the literature. First we characterize martingale measures in terms of $A$; then, exploiting some results of Schweizer, we isolate the density of the mean-variance optimal martingale measure. When volatility follows a diffusion process, such as in the Heston or Hull and White models, to mention only two, this result was already obtained by Laurent and Pham with stochastic control arguments. It turns out that the jump size distribution is a critical issue: in fact, finite distributions are easily handled by $n$ martingales, where $n$ is the cardinality of the jump size support. On the contrary, an infinite distribution for the jump size requires a more general approach involving random measures. Moreover, we show calculations for sample models where volatility jumps are random both in size and in time of occurrence. For all of them, we calculate the density of the mean-variance optimal measure, and the law of the jumps under $\tilde P$. In the last section, we calculate the mean-variance hedging strategy for a call option, exploiting the change of numéraire technique of El Karoui, Geman and Rochet [8], as well as the general formula in feedback form of Rheinländer and Schweizer [16].

1. Preliminaries

For all standard definitions on stochastic processes, we refer to Dellacherie and Meyer [4].

1.1. Stopping Times.
We recall here some definitions and properties of stopping times, which will come useful in the sequel. In this subsection, we assume that all random variables are defined on some probability space $(\Omega, \mathcal{F}, P)$.
Proposition 1.1. Let $\tau$ be a real-valued, Borel random variable, and $\mathcal{F}^\tau$ the smallest filtration under which $\tau$ is a stopping time. We have that:

(2) $\mathcal{F}^\tau_t = \sigma\left(\tau^{-1}(\mathcal{B}([0,t])),\ \tau^{-1}((t,\infty))\right)$

where $\mathcal{B}(A)$ is the family of Borel subsets of $A$.

Proof. By definition, $\mathcal{F}^\tau_t = \sigma(\{\tau \le s\},\ s \le t)$. Since $\mathcal{F}^\tau_t$ is a $\sigma$-field and contains all sublevels of $\tau$ within $[0,t]$, it necessarily contains the inverse images of all Borel sets of $[0,t]$. The set $\tau^{-1}((t,\infty))$ is the complement of $\tau^{-1}([0,t])$, and belongs to the $\sigma$-field. The reverse inclusion is trivial. Finally, the right-hand side in (2) is easily seen to be a $\sigma$-field.

Remark 1.2. An immediate consequence of Proposition 1.1 is the right-continuity of $\mathcal{F}^\tau_t$. Moreover, $\mathcal{F}^\tau_{t^-} = \sigma\left(\tau^{-1}(\mathcal{B}([0,t))),\ \tau^{-1}([t,\infty))\right)$. This means that the augmentation of $\mathcal{F}^\tau_t$ is continuous if and only if the law of $\tau$ is diffuse.

Corollary 1.3. The filtration generated by the random variable $\tau \wedge t$ coincides with $\mathcal{F}^\tau$ if and only if $\tau$ is an optional time, that is, if $\{\tau < t\} \in \mathcal{F}_t$ for all $t$.

Definition 1.4. A stopping time $\tau$ is totally inaccessible if it is strictly positive and for every increasing sequence of stopping times $\tau_1, \dots, \tau_n, \dots$ such that $\tau_n < \tau$ for all $n$, $P\left(\lim_{n\to\infty} \tau_n = \tau,\ \tau < \infty\right) = 0$.

Totally inaccessible stopping times can be characterized as follows:

Proposition 1.5. Let $\tau$ be a strictly positive stopping time. The following properties are equivalent:
(i) $\tau$ is totally inaccessible;
(ii) there exists a uniformly integrable martingale $M_t$, continuous outside the graph of $\tau$, such that $M_0 = 0$ and $\Delta M_\tau = 1$ on $\{\tau < \infty\}$.

Proof. See Dellacherie and Meyer [5], VI.78.

Proposition 1.6. If the law of a stopping time is diffuse, then it is totally inaccessible with respect to $\mathcal{F}^\tau$.

Proof. See Dellacherie and Meyer [4], IV.17.

Definition 1.7. Let $A$ be an adapted process, with $A_0 = 0$ and locally integrable variation. The compensator of $A$ is defined as the unique predictable process $\tilde A$ such that $A - \tilde A$ is a local martingale.

Remark 1.8. In particular, the compensator of an increasing process is itself an increasing predictable process.

Proposition 1.9.
Let $\tau$ be a stopping time with a diffuse law. Then the compensator of the process $1_{\{\tau \le t\}}$ with respect to $\mathcal{F}^\tau$ is given by:

    $\tilde A_t = -\log\left(1 - F(t \wedge \tau)\right)$

where $F(x) = P(\tau \le x)$.
Proof. By assumption, $M_t = 1_{\{\tau \le t\}} - \tilde A_t$ is a local martingale, and by Remark 1.8, $\tilde A_t$ is an increasing process. Therefore we have:

    $\sup_{s \le t} |M_s| \le 1 + \tilde A_t$

and $M_t$ is in fact a martingale (see for instance Protter, page 35, Theorem 47). We look for a compensator of the form $\tilde A_t = a(t \wedge \tau)$, with $a : \mathbb{R}_+ \to \mathbb{R}_+$. Hence:

    $E\left[1_{\{\tau \le s\}} - 1_{\{\tau \le t\}} \mid \mathcal{F}_t\right] = E\left[a(s \wedge \tau) - a(t \wedge \tau) \mid \mathcal{F}_t\right]$

It follows that:

    $\frac{P(t < \tau \le s)}{P(t < \tau)} = \frac{E\left[1_{\{t<\tau\le s\}}(a(\tau) - a(t)) + 1_{\{s<\tau\}}(a(s) - a(t))\right]}{P(t < \tau)}$

This equality can be rewritten as an integral equation:

    $F(s) - F(t) = \int_t^s (a(x) - a(t))\,dF(x) + (a(s) - a(t))\int_s^\infty dF(x)$

It is easy to check by substitution that the unique solution to this integral equation is given by $a(x) = -\log(1 - F(x))$. By uniqueness, this is the only compensator of $1_{\{\tau \le t\}}$.

1.2. Martingale Representation. We denote by $\mathcal{M}^2$ the set of square-integrable martingales on a probability space $(\Omega, \mathcal{F}, P)$ endowed with a filtration $\mathcal{F}_t$ satisfying the usual hypotheses.

Definition 1.10. A set $A = \{X^1, \dots, X^n\} \subset \mathcal{M}^2$ is strongly orthogonal if $X^i X^j \in \mathcal{M}^2$ for $i \ne j$, $i, j \in \{1, \dots, n\}$.

Definition 1.11. A set $A$ of strongly orthogonal martingales $X^1, \dots, X^n$ has the representation property if $I = \mathcal{M}^2$, where

    $I = \left\{ X : X_t = \sum_{i=1}^n \int_0^t H^i_s\,dX^i_s \right\}$

for all $H$ predictable such that $E\left[\int_0^t (H^i_s)^2\,d[X^i]_s\right] < \infty$ for $1 \le i \le n$, $t \ge 0$.

In this paper we will need the following result:

Proposition 1.12. Let $A = \{X^1, \dots, X^n\}$ be a set of strongly orthogonal martingales. If there is only one martingale measure for $A$, then $A$ has the representation property.

Proof. See Dellacherie and Meyer [5], VIII.58 or Protter [15], IV.37.
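Before moving on, the compensator formula of Proposition 1.9 can be checked numerically. For an exponentially distributed $\tau$ with rate $\lambda$ one has $\tilde A_t = -\log(1 - F(t \wedge \tau)) = \lambda (t \wedge \tau)$, and the martingale property of $1_{\{\tau \le t\}} - \tilde A_t$ forces $E[1_{\{\tau \le t\}}] = E[\tilde A_t]$. A minimal Monte Carlo sketch (illustrative only; the parameter values are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t = 1.3, 0.7
tau = rng.exponential(1 / lam, size=200_000)  # samples of the stopping time

# Indicator part: E[1_{tau <= t}] equals F(t) = 1 - exp(-lam * t)
indicator_mean = (tau <= t).mean()

# Compensator part: A_t = -log(1 - F(t ^ tau)) = lam * min(t, tau)
compensator_mean = (lam * np.minimum(t, tau)).mean()

print(indicator_mean, compensator_mean)  # both close to 1 - exp(-lam * t)
```

Both averages estimate the same quantity, which is exactly the martingale (zero-drift) property of the compensated indicator.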
1.3. Random measures. For completeness, we provide some basic definitions on random measures; we refer to [12] for a complete treatment of the subject. Let $(H, \mathcal{F}, P)$ be a probability space.

Definition 1.13. A random measure on $\mathbb{R}_+ \times \mathbb{R}$ is a family $\nu = (\nu(\eta; dt, dx) : \eta \in H)$ of nonnegative measures on $(\mathbb{R}_+ \times \mathbb{R},\ \mathcal{B}(\mathbb{R}_+) \otimes \mathcal{B}(\mathbb{R}))$, satisfying $\nu(\eta; \{0\} \times \mathbb{R}) = 0$ identically.

We set $\tilde H = H \times \mathbb{R}_+ \times \mathbb{R}$, with $\sigma$-fields $\tilde{\mathcal{O}} = \mathcal{O} \otimes \mathcal{B}(\mathbb{R})$ and $\tilde{\mathcal{P}} = \mathcal{P} \otimes \mathcal{B}(\mathbb{R})$, where $\mathcal{O}$ and $\mathcal{P}$ are respectively the optional and predictable $\sigma$-fields on $H \times \mathbb{R}_+$.

Definition 1.14. For any $\tilde{\mathcal{O}}$-measurable function $h(\eta; t, x)$ we define the integral process $h * \nu$ as:

    $h * \nu_t(\eta) = \int_{[0,t]\times\mathbb{R}} h(\eta, s, x)\,\nu(\eta; ds, dx)$ if $\int_{[0,t]\times\mathbb{R}} |h(\eta, s, x)|\,\nu(\eta; ds, dx) < \infty$, and $+\infty$ otherwise.

Analogously, a random measure $\nu$ is optional (resp. predictable) if the process $h * \nu$ is optional (resp. predictable) for every optional (resp. predictable) function $h$.

Definition 1.15. An integer-valued random measure is a random measure that satisfies:
(i) $\nu(\eta; \{t\} \times \mathbb{R}) \le 1$ identically;
(ii) for each $A \in \mathcal{B}(\mathbb{R}_+) \otimes \mathcal{B}(\mathbb{R})$, $\nu(\cdot, A)$ takes its values in $\mathbb{N}$;
(iii) $\nu$ is optional;
(iv) $\nu$ is $\tilde{\mathcal{P}}$-$\sigma$-finite, i.e. there exists a $\tilde{\mathcal{P}}$-measurable partition $(A_n)$ of $\tilde H$ such that $1_{A_n} * \nu$ is integrable.

By Proposition 1.14 of [12] we obtain the following characterization of integer-valued random measures.

Proposition 1.16. If $\nu$ is an integer-valued random measure, there exists a random set $D = \bigcup_n [\![T_n]\!]$, where $([\![T_n]\!])_{n\in\mathbb{N}}$ is a sequence of graphs of stopping times, and a real optional process $\beta$ such that:

    $\nu(\eta; dt, dx) = \sum_s 1_D(\eta, s)\,\epsilon_{(s,\,\beta_s(\eta))}(dt, dx)$

If $(T_n)_{n\in\mathbb{N}}$ satisfies $[\![T_n]\!] \cap [\![T_m]\!] = \emptyset$ for $n \ne m$, it is called an exhausting sequence for $D$.

Proposition 1.17. Let $X$ be an adapted càdlàg $\mathbb{R}^d$-valued process. Then:

    $\nu^X(\eta; dt, dx) = \sum_s 1_{\{\Delta X_s \ne 0\}}\,\epsilon_{(s,\,\Delta X_s(\eta))}(dt, dx)$

defines an integer-valued random measure on $\mathbb{R}_+ \times \mathbb{R}^d$.
Proof. See [12], Chapter II.

Definition 1.18. A multivariate point process is an integer-valued random measure $\nu$ on $\mathbb{R}_+ \times \mathbb{R}$ such that $\nu(\eta; [0,t] \times \mathbb{R}) < \infty$ for all $\eta$ and $t \in \mathbb{R}_+$.

Proposition 1.19. Let $\nu$ be an optional $\tilde{\mathcal{P}}$-$\sigma$-finite random measure. There exists a random measure $\nu^p$, called the compensator of $\nu$, unique up to a $P$-null set, such that:
(i) $\nu^p$ is a predictable random measure;
(ii) $E[h * \nu^p] = E[h * \nu]$ for every nonnegative $\tilde{\mathcal{P}}$-measurable function $h$ on $\tilde H$.

Also, if $|h| * \nu$ is increasing and locally integrable, then $h * \nu - h * \nu^p$ is a local martingale.

Proof. See [12], Chapter II, Theorem 1.8.

Let $\nu = \sum_s 1_D(\eta, s)\,\epsilon_{(s,\,\beta_s(\eta))}(dt, du)$ be an integer-valued random measure and $\nu^p$ its compensator.

Definition 1.20. (1) We denote by $G_{loc}(\nu)$ the set of all $\tilde{\mathcal{P}}$-measurable real-valued functions $h$ such that the process

    $\tilde h_t(\eta) = h(\eta, t, \beta_t(\eta))\,1_D(\eta, t) - \int_{\mathbb{R}} h(\eta, t, x)\,\nu^p(\eta; \{t\} \times dx)$

has the property that $\left[\sum_{s \le \cdot} |\tilde h_s|^2\right]^{1/2}$ is locally integrable and increasing.
(2) If $h \in G_{loc}(\nu)$, we call stochastic integral with respect to $\nu - \nu^p$, and denote by $h * (\nu - \nu^p)$, any purely discontinuous local martingale $X$ such that $\tilde h$ and $\Delta X$ are indistinguishable.

Definition 1.21. A compensated random measure $\nu - \nu^p$ has the representation property on $H$ iff every square-integrable martingale $M_t$ on $(H, \mathcal{F}^H, P^H)$ can be written as:

    $M_t = M_0 + k * (\nu - \nu^p)_t$

for some $\tilde{\mathcal{P}}$-measurable process $k$.

We recall that from Proposition 1.28, first chapter of [12], it follows that $h * (\nu - \nu^p)_t = h * \nu_t - h * \nu^p_t$ for every $\tilde{\mathcal{P}}$-measurable process $h$ such that $|h| * \nu_t$ is a locally integrable increasing process. Consequently, the integral $h * (\nu - \nu^p)$, which is a priori stochastic, coincides with the Stieltjes integral $h * \nu - h * \nu^p$. It follows that every local martingale on $H$ has finite variation.

Theorem 1.22. Assume that $\nu$ is a multivariate point process. Then $\nu - \nu^p$ has the representation property with respect to the smallest filtration under which $\nu$ is optional.

Proof. See Theorem 4.37 of [12].
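The jump measure $\nu^X$ of Proposition 1.17 is very concrete: for a piecewise-constant path it places a unit atom at each (jump time, jump size) pair. A small illustrative sketch (the path below is ours, not from the paper):

```python
import numpy as np

# A cadlag piecewise-constant path sampled on a grid: X jumps by +2.0 at
# t = 1.0 and by -1.0 at t = 2.5.
times = np.array([0.0, 1.0, 2.5])
values = np.array([0.0, 2.0, 1.0])

# The integer-valued random measure nu^X is the collection of atoms
# (s, Delta X_s) over the jump times s with Delta X_s != 0.
jumps = np.diff(values)
atoms = [(t, dx) for t, dx in zip(times[1:], jumps) if dx != 0.0]

# nu^X([0, t] x [lo, hi]) counts the jumps up to time t with size in [lo, hi].
def nu(t, lo, hi):
    return sum(1 for s, dx in atoms if s <= t and lo <= dx <= hi)

print(atoms)          # [(1.0, 2.0), (2.5, -1.0)]
print(nu(2.0, 0, 5))  # 1
```

Conditions (i)-(iv) of Definition 1.15 are visible here: at most one atom per time, integer counts, and finitely many atoms on any bounded interval.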
2. The Market Model

We introduce here a simple model for a market with incomplete information. We have two complete filtered probability spaces: $(\Omega, \mathcal{F}^\Omega, \mathcal{F}^\Omega_t, P^\Omega)$ and $(H, \mathcal{F}^H, \mathcal{F}^H_t, P^H)$. We denote by $W_t$ a standard Brownian motion on $\Omega$, and assume that $\mathcal{F}^\Omega_t$ is the $P^\Omega$-augmentation of the filtration generated by $W$. Our set of states of nature is given by the product space $(\Omega \times H,\ \mathcal{F}^\Omega \otimes \mathcal{F}^H,\ P^\Omega \otimes P^H)$. We have a risk-free asset $B_t$ and a discounted risky asset $X_t = S_t / B_t$, with the following dynamics:

(3) $dX_t(\omega,\eta) = (\mu_t(\eta) - r_t) X_t(\omega,\eta)\,dt + \sigma_t(\eta) X_t(\omega,\eta)\,dW_t(\omega)$, with $B_t = \exp\left(\int_0^t r_s\,ds\right)$

where $r$ is a deterministic function of time. We assume that the equation for $X$ admits, $P^H$-a.e., a unique strong solution with respect to the filtration $\mathcal{F}^W$ or, equivalently, that there exists a unique strong solution with respect to the filtration $\mathcal{F}_t = \mathcal{F}^W_t \otimes \mathcal{F}^H$. This is satisfied under fairly weak assumptions: for example, it is sufficient that $\mu$ and $\sigma$ are $P^H$-a.e. bounded. Denoting by $\mathcal{F}^X$ the filtration generated by $X$, we also assume that $\mathcal{F}^X_t = \mathcal{F}^W_t \otimes \mathcal{F}^H_t$. In particular, at time $T$ all information is revealed through the observation of the process $X$. The following proposition helps checking whether this condition is satisfied:

Proposition 2.1. Let $X_t$ be defined as in (3). Then, if
(i) $\mu_t$ is $\mathcal{F}^X_t$-measurable,
(ii) $\mathcal{F}^H_t \subset \mathcal{F}^X_t$,
then $\mathcal{F}^X_t = \mathcal{F}^W_t \otimes \mathcal{F}^H_t$.

Proof. By definition of $X_t$, we immediately have that $\mathcal{F}^X_t \subset \mathcal{F}^\Omega_t \otimes \mathcal{F}^H_t$. To see that the reverse inclusion holds, observe first that by (3):

(4) $W_t = -\int_0^t \frac{\mu_s - r_s}{\sigma_s}\,ds + \int_0^t \frac{dX_s}{\sigma_s X_s}$

By (i), the first term above is $\mathcal{F}^X_t$-measurable. For the second, note that:

    $\int_0^t \sigma_s(\eta)^2 X_s^2\,ds = \lim \sum_{i=1}^n \left(X_{t_{i+1}} - X_{t_i}\right)^2$

where the limit holds in probability, uniformly in $t$, as $\sup_i |t_{i+1} - t_i|$ tends to zero. This proves that $\mathcal{F}^\Omega_t \subset \mathcal{F}^X_t$, and by (ii) the proof is complete.

A simpler version of this model was introduced by Delbaen and Schachermayer [3], while it can be found in the above form in Pham, Rheinländer, and Schweizer [14].
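To make the dynamics (3) concrete, here is a minimal Euler scheme for the discounted price $X$ when $(\mu, \sigma)$ jumps once at an exponentially distributed time, a simple instance of the models studied below. All parameter values are illustrative, not from the paper:

```python
import numpy as np

def simulate_X(x0=1.0, mu1=0.08, mu2=0.03, s1=0.2, s2=0.4, r=0.02,
               T=1.0, n=1000, seed=0):
    """Euler scheme for dX = (mu_t - r) X dt + sigma_t X dW, where
    (mu_t, sigma_t) jumps from (mu1, s1) to (mu2, s2) at an exponential time tau."""
    rng = np.random.default_rng(seed)
    dt = T / n
    tau = rng.exponential(1.0)          # jump time of drift and volatility
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        t = i * dt
        mu, sig = (mu1, s1) if t < tau else (mu2, s2)
        dW = rng.normal(0.0, np.sqrt(dt))
        x[i + 1] = x[i] + (mu - r) * x[i] * dt + sig * x[i] * dW
    return x, tau

path, tau = simulate_X()
print(len(path), path[0])
```

Note how the randomness splits exactly as in the product-space setup: the Gaussian increments play the role of $\omega$, and the jump time $\tau$ the role of $\eta$.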
In this framework, we study the problem of an agent wishing to hedge a certain European option $H(X)$ expiring at a fixed time $T$. Hedging performance is defined as the $L^2$ norm of the difference, at expiration, between the liability and the hedging portfolio. More precisely, we look for a solution to the minimization problem:

(5) $\min_{c \in \mathbb{R},\ \theta \in \Theta} E\left[\left(H(X) - c - G_T(\theta)\right)^2\right]$

where $G_t(\theta) = \int_0^t \theta_s\,dX_s$ and $\Theta = \{\theta \in L(X) : G_t(\theta) \in \mathcal{S}^2(P)\}$.

Here $L(X)$ denotes the space of $X$-integrable predictable processes, and $\mathcal{S}^2$ the space of semimartingales $Y$ decomposable as $Y = Y_0 + M + A$, where $M$ is a square-integrable martingale, and $A$ is a process of square-integrable variation. This problem is generally nontrivial, since the agent does not have access to the filtration $\mathcal{F}$, but only to $\mathcal{F}^X$. Indeed, Rheinländer and Schweizer [16] and, independently, Gouriéroux, Laurent and Pham [14], proved that problem (5) admits a unique solution for all $H \in L^2(P)$, under the standing hypothesis:

(CL) $G_T(\Theta)$ is closed.

Moreover, in [21] it is shown that the option price, i.e. the optimal value for $c$, and the optimal hedging strategy $\theta$ can be computed in terms of $\tilde P$, the variance-optimal martingale measure. In fact, the option price is given by $c = \tilde E[H]$.

Definition 2.2. We define the sets of signed martingale measures $\mathcal{M}^2_s$ and of equivalent martingale measures $\mathcal{M}^2_e$:

    $\mathcal{M}^2_s = \left\{ Q \ll P : \frac{dQ}{dP} \in L^2(P),\ X_t \text{ is a } Q\text{-local martingale} \right\}$
    $\mathcal{M}^2_e = \left\{ Q \in \mathcal{M}^2_s : Q \sim P \right\}$

Definition 2.3. The variance-optimal martingale measure is the unique solution $\tilde P$ (if it exists) to the minimum problem:

(6) $\min_{Q \in \mathcal{M}^2_s} E\left[\left(\frac{dQ}{dP}\right)^2\right]$

If $\mathcal{M}^2_s$ is nonempty, then $\tilde P$ always exists, as it is the minimizer of the norm in a convex set: the problem is that it may fail to be a positive measure, thereby leading to possibly negative option prices. However, if $X_t$ has continuous paths, and under the standard assumption

(NA) $\mathcal{M}^2_e \ne \emptyset$
Delbaen and Schachermayer [3] have shown that $\tilde P \in \mathcal{M}^2_e$. Since we are dealing with continuous processes, and we will always assume (NA), we need not worry about this issue. In this paper, using a representation formula from [1], we compute explicitly $\tilde P$ for some sample models.

3. Representation of Martingale Measures

Here we summarize some of the main results of [1] in order to show that the set $\mathcal{M}^2_e$ of equivalent martingale measures admits a convenient representation. We denote by $\lambda_t = \frac{\mu_t - r_t}{\sigma_t}$ the so-called market price of risk.

Theorem 3.1. If there exists an $n$-dimensional martingale $M$ on $H$ such that:
(i) $[M^i, M^j] = 0$ for all $i \ne j$;
(ii) $M$ has the representation property for $\mathcal{F}^H_t$;
then we have for every $Q \in \mathcal{M}^2_e$:

    $\frac{dQ}{dP} = \mathcal{E}\left(-\int \lambda_t(\eta)\,dW_t\right)_T\,\mathcal{E}\left(\int k_t(\omega,\eta)\,dM_t\right)_T$

where $k_t$ is such that $\mathcal{E}\left(-\int \lambda_t(\eta)\,dW_t\right)_t\,\mathcal{E}\left(\int k_t(\omega,\eta)\,dM_t\right)_t$ is a square-integrable martingale and $k_t \Delta M_t > -1$.

By Theorem 3.1, a martingale measure is uniquely determined by the process $k$ which appears in its representation. In particular, $k = 0$ corresponds to the minimal martingale measure $\hat P$ introduced by Föllmer and Schweizer [7]. The next proposition identifies $\tilde P$.

Proposition 3.2. Under the same assumptions as Theorem 3.1, we have:

    $\frac{d\tilde P}{dP} = \mathcal{E}\left(-\int \lambda_t\,dW_t\right)_T\,\mathcal{E}\left(\int \tilde k_t(\eta)\,dM_t\right)_T$

where $\tilde k_t$ is a solution of the following equation:

    $\mathcal{E}\left(\int \tilde k_t(\eta)\,dM_t\right)_T = \frac{\exp\left(-\int_0^T \lambda^2_t(\eta)\,dt\right)}{E\left[\exp\left(-\int_0^T \lambda^2_t(\eta)\,dt\right)\right]}$

such that $\mathcal{E}\left(-\int \lambda_t\,dW_t\right)_t\,\mathcal{E}\left(\int \tilde k_t(\eta)\,dM_t\right)_t$ is a square-integrable martingale.

Remark 3.3. Proposition 3.2 shows how the change of measure works on $\Omega$ and $H$. In fact, provided that $\hat P$ exists, we can write:

    $\frac{d\tilde P}{dP} = \frac{d\tilde P}{d\hat P}\,\frac{d\hat P}{dP}$
where

    $\frac{d\hat P}{dP} = \mathcal{E}\left(-\int \lambda_t\,dW_t\right)_T$ and $\frac{d\tilde P}{d\hat P} = \frac{\exp\left(-\int_0^T \lambda^2_t(\eta)\,dt\right)}{E\left[\exp\left(-\int_0^T \lambda^2_t(\eta)\,dt\right)\right]}$

Since in our model $\frac{d\tilde P}{d\hat P}$ does not depend on $\omega$, we have:

    $\frac{d\tilde P^H}{dP^H} = E\left[\frac{d\tilde P}{dP} \,\Big|\, \mathcal{F}^H\right] = E\left[\frac{d\tilde P}{d\hat P}\frac{d\hat P}{dP} \,\Big|\, \mathcal{F}^H\right] = \frac{d\tilde P}{d\hat P}\,E\left[\frac{d\hat P}{dP} \,\Big|\, \mathcal{F}^H\right] = \frac{d\tilde P}{d\hat P}$

This provides a rule of thumb for changing measure from $P$ to $\tilde P$ via $\hat P$. First change $P$ to $\hat P$ by a direct use of the Girsanov theorem: this amounts to replacing $\mu$ with $r$ in (3), and is the key of risk-neutral valuation. $P^H$ is not affected by this step. In principle, one could repeat the same argument from $\hat P$ to $\tilde P$, but this involves calculating the $\tilde k_t(\eta)$. As we show with an example in the last section, this task may prove hard even in simple cases. A more viable alternative is calculating $\tilde P^H$ with the above formula. This avoids dealing with $\tilde k$ directly, although its existence is still needed through Proposition 3.2. A sufficient condition for the existence of $\hat P$ is the Novikov condition, namely:

    $E\left[\exp\left(\frac{1}{2}\int_0^T \lambda^2_t\,dt\right)\right] < \infty$

which is satisfied by all the examples in the last section.

4. Volatility Jumps and Random Measures

Although most market models considered in the literature can be embedded in a framework consistent with Theorem 3.1, there are some remarkable exceptions. For example, continuously distributed jumps in volatility can generate filtrations where no finite set of martingales has the representation property (see the examples in the next section). In these cases, we can still represent martingales in terms of integrals with respect to a compensated random measure $\nu - \nu^p$, thereby obtaining an analogue of Theorem 3.1 and Proposition 3.2. Note that the following results are complementary to those in the previous section, but do not directly generalize them: in fact any model with volatility following a diffusion process is covered in the previous section, and not in the present one.
Theorem 4.1. If there exists a compensated, integer-valued random measure $\nu - \nu^p$ on $\mathbb{R}_+ \times \mathbb{R}$ such that:
(i) $\mathcal{F}^H$ coincides with the smallest filtration under which $\nu$ is optional;
(ii) $\nu - \nu^p$ has the representation property on $(H, \mathcal{F}^H, P^H)$;
then we have for every $Q \in \mathcal{M}^2_e$:

    $\frac{dQ}{dP} = \mathcal{E}\left(-\int \lambda_t\,dW_t\right)_T\,\mathcal{E}\left(k * (\nu - \nu^p)\right)_T$

where $k$ is such that $\Delta\left(k * (\nu - \nu^p)\right)_t > -1$ and $\mathcal{E}\left(-\int \lambda_t\,dW_t\right)_t\,\mathcal{E}\left(k * (\nu - \nu^p)\right)_t$ is a square-integrable martingale.

As in the previous section, we need these lemmas:

Lemma 4.2. Under the same assumptions as Theorem 4.1, every square-integrable martingale $M_t$ on the space $(\Omega \times H,\ \mathcal{F}^\Omega \otimes \mathcal{F}^H,\ P^\Omega \otimes P^H)$ with respect to $\mathcal{F}_t$ can be written as:

    $M_t = M_0 + \int_0^t h_s\,dW_s + k * (\nu - \nu^p)_t$

Proof. Let us denote by $\mathcal{M}$ the set of martingales for which the thesis holds. We want to show that $\mathcal{M} = \mathcal{M}^2(\Omega \times H)$. By the representation property, every square-integrable martingale $M_t(\omega)$ on $\Omega \times H$ depending only on $\omega$ belongs to $\mathcal{M}$, since it can be written as $M_t = M_0 + \int_0^t h_s\,dW_s$. Analogously, every $N_t(\eta)$ belongs to $\mathcal{M}$, since it is of the form $N_t = N_0 + k * (\nu - \nu^p)_t$, where $k$ is a $\tilde{\mathcal{P}}$-measurable process and $|k| * \nu_t$ is locally integrable. Denoting $P_t(\omega,\eta) = N_t(\eta) M_t(\omega)$, we have that $P_t$ is a square-integrable martingale on $\Omega \times H$. Setting $P_t = P^d_t + P^c_t$, where $P^c_t$ and $P^d_t$ are the continuous and purely discontinuous parts of $P$, we have that $\Delta P_t = \Delta P^d_t = M_t\,\Delta N_t$. By Definition 1.20, it follows that $P^d_t = P^d_0 + (Mk) * (\nu - \nu^p)_t$. Also, by Itô's formula, $P^c_t = P^c_0 + \int_0^t N_{s^-} h_s\,dW_s$. This shows that any linear combination $\sum_i M_i(\omega) N_i(\eta)$ belongs to $\mathcal{M}$ and, by a monotone class argument, it is easy to see that $\mathcal{M}$ is dense in $\mathcal{M}^2$. Hence, for every $Z_t \in \mathcal{M}^2$ there exists a sequence $X^n_t$ of square-integrable martingales such that $X^n_t = X_0 + \int_0^t h^n_s\,dW_s + k^n * (\nu - \nu^p)_t$. By the identity:

    $E\left[(X^n_T - X_0)^2\right] = E\left[\int_0^T (h^n_s)^2\,ds\right] + E\left[(k^n)^2 * \nu^p_T\right]$

it follows that $h^n$ and $k^n$ are Cauchy sequences respectively in

    $\left\{h_t(\eta) \text{ predictable} : E\left[\int_0^T h^2_s\,ds\right] < \infty\right\}$ and $\left\{k_t(\eta, x) \text{ predictable} : E\left[k^2 * \nu^p_T\right] < \infty\right\}$

Since these spaces are complete, the proof is finished.
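The stochastic exponentials $\mathcal{E}(k * (\nu - \nu^p))$ appearing above can be illustrated in the simplest possible case: a compensated Poisson process with constant integrand $\theta$, for which the Doléans-Dade exponential has the closed form $(1+\theta)^{N_t}\,e^{-\theta\lambda t}$ and expectation 1. A Monte Carlo sanity check (illustrative, not from the paper; parameter values are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, theta, t, n = 1.0, 0.5, 1.0, 200_000

# N_t ~ Poisson(lam * t). The Doleans-Dade exponential of the compensated
# Poisson martingale theta * (N_t - lam * t) is (1 + theta)^N_t * exp(-theta*lam*t),
# a positive martingale with unit expectation.
N = rng.poisson(lam * t, size=n)
E = (1.0 + theta) ** N * np.exp(-theta * lam * t)

print(E.mean())  # close to 1
```

This unit-expectation property is exactly what makes such exponentials usable as Radon-Nikodym densities in the representation of martingale measures.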
Lemma 4.3. Let $Z$ be a strictly positive, square-integrable random variable and denote $Z_t = E[Z \mid \mathcal{F}_t]$. Then, if $Z_t = Z_0 + H * (\nu - \nu^p)_t$, we have:

    $Z_t = Z_0\,\mathcal{E}\left(\frac{H}{Z_-} * (\nu - \nu^p)\right)_t$

Proof. If there exists a martingale $M_t = M_0 + K * (\nu - \nu^p)_t$ such that $Z_t = Z_0\,\mathcal{E}(M)_t$, then it is unique. In fact, if $N_t = N_0 + H' * (\nu - \nu^p)_t$ and $Z_t = Z_0\,\mathcal{E}(N)_t$, we immediately have $\Delta M_t = \Delta N_t$. Since $M_t$ and $N_t$ are purely discontinuous martingales by Definition 1.20, they must coincide up to evanescent sets. In particular, we have that $\Delta M_t = \Delta Z_t / Z_{t^-}$. For all $t > 0$, $\Delta Z_t / Z_{t^-}$ coincides with the jumps of the purely discontinuous martingale $\frac{H}{Z_-} * (\nu - \nu^p)_t$; therefore $M_t$ exists and is given by $M_t = \frac{H}{Z_-} * (\nu - \nu^p)_t$.

Proposition 4.4. Under the same assumptions as Theorem 4.1, we have:

    $\frac{d\tilde P}{dP} = \mathcal{E}\left(-\int \lambda_t\,dW_t\right)_T\,\mathcal{E}\left(\tilde k * (\nu - \nu^p)\right)_T$

where $\tilde k$ is a solution of the following equation:

    $\mathcal{E}\left(\tilde k * (\nu - \nu^p)\right)_T = \frac{\exp\left(-\int_0^T \lambda^2_t(\eta)\,dt\right)}{E\left[\exp\left(-\int_0^T \lambda^2_t(\eta)\,dt\right)\right]}$

such that $\mathcal{E}\left(-\int \lambda_t\,dW_t\right)_t\,\mathcal{E}\left(\tilde k * (\nu - \nu^p)\right)_t$ is a square-integrable martingale.

Proof. The proof is formally analogous to that of Theorem 1.16 of [1], by the previous results and the representation property of the compensated random measure $\nu - \nu^p$.

5. Examples

We now show how the results in the previous sections provide convenient tools for calculating $\tilde P$, and thus pricing options, in models where volatility jumps. We start with a simple model where jumps occur at fixed times, and can take only two values. We then discuss the more general cases of jumps occurring at stopping times, and with arbitrary distributions.

5.1. Deterministic volatility jumps. In discrete-time fashion, the following model was introduced in the RiskMetrics Monitor [23] as an improvement of the standard lognormal model for calculating Value at Risk. We set $H = \{0,1\}^n$ and denote $\eta = (a_1, \dots, a_n)$, where $a_1, \dots, a_n$ are Bernoulli i.i.d. random variables, so that $H$ is endowed with the product
measure from $\{0,1\}$. $\mathcal{F}^H_t$ contains all information on jumps up to time $t$; therefore it is equal to the $\sigma$-field generated by $\{a_i\}_{t_i \le t}$. Setting $t_i = \frac{iT}{n+1}$, the dynamics of $\mu$ and $\sigma$ is given by:

    $\mu_t = 1_{\{0 \le t < t_1\}}\,\mu + \sum_{i=1}^n 1_{\{t_i \le t < t_{i+1}\}}\,\mu_{a_i}$
    $\sigma_t = 1_{\{0 \le t < t_1\}}\,\sigma + \sum_{i=1}^n 1_{\{t_i \le t < t_{i+1}\}}\,\sigma_{a_i}$

In fact, all we need for mean-variance hedging is the dynamics for $\lambda$:

    $\lambda^2_t = \lambda^2\,1_{\{0 \le t < t_1\}} + \sum_{i=1}^n 1_{\{t_i \le t < t_{i+1}\}}\,\lambda^2_{a_i}$

It is easy to check that a martingale with the representation property on $H$ is given by:

    $M_t = \sum_{t_i \le t} \left(1_{\{a_i = 0\}} - p\right)$, where $p = P(a_1 = 0)$.

We are now ready to see how the change of measure works: in fact, by Theorem 3.1, we have that:

    $\frac{d\tilde P}{d\hat P} = \frac{d\tilde P^H}{dP^H} = \frac{\exp\left(-\lambda^2_1 \frac{T}{n+1}(a_1 + \dots + a_n) - \lambda^2_0 \frac{T}{n+1}\left(n - (a_1 + \dots + a_n)\right)\right)}{\left(p\,\exp\left(-\lambda^2_0 \frac{T}{n+1}\right) + (1-p)\,\exp\left(-\lambda^2_1 \frac{T}{n+1}\right)\right)^n}$

Since the density above can be written as:

    $\frac{d\tilde P^H}{dP^H} = \prod_{i=1}^n \frac{\exp\left(-\lambda^2_{a_i} \frac{T}{n+1}\right)}{p\,\exp\left(-\lambda^2_0 \frac{T}{n+1}\right) + (1-p)\,\exp\left(-\lambda^2_1 \frac{T}{n+1}\right)}$

it follows that under $\tilde P$ the variables $a_1, \dots, a_n$ are still independent, and $p$ is replaced by:

    $\tilde p = \frac{p\,\exp\left(-\lambda^2_0 \frac{T}{n+1}\right)}{p\,\exp\left(-\lambda^2_0 \frac{T}{n+1}\right) + (1-p)\,\exp\left(-\lambda^2_1 \frac{T}{n+1}\right)}$

5.2. Random volatility jumps. Consider the following model, where $\mu$ and $\sigma$ are constant until some unexpected event occurs. In other words:

    $\mu_t = \mu_1\,1_{\{t < \tau\}} + \mu_2\,1_{\{t \ge \tau\}}$
    $\sigma_t = \sigma_1\,1_{\{t < \tau\}} + \sigma_2\,1_{\{t \ge \tau\}}$

In fact, all we need is the dynamics for $\lambda$:

    $\lambda^2_t = \lambda^2_1 + \alpha\,1_{\{t \ge \tau\}}$, where $\alpha = \lambda^2_2 - \lambda^2_1$.

The event $\tau$ which triggers the jump is a totally inaccessible stopping time. That is to say, any attempt to predict it by means of previous information is doomed to failure. $\alpha$ represents the jump size, and it may be deterministic or random. We now solve the problem in three cases: $\alpha$ deterministic, $\alpha$ Bernoulli, and $\alpha$ continuously distributed.
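Before turning to random jump times, note that the tilted probability $\tilde p$ of Section 5.1 is a one-line computation: with $\lambda_{a_i} \in \{\lambda_0, \lambda_1\}$ and interval length $T/(n+1)$, each Bernoulli weight is exponentially tilted by $e^{-\lambda^2_{a_i} T/(n+1)}$ and renormalized (this is our reading of the formula above). A short numerical sketch with illustrative parameters:

```python
import numpy as np

def tilted_p(p, lam0, lam1, T, n):
    """Probability of a_i = 0 under the variance-optimal measure in the
    deterministic-jump model: exponential tilting of the Bernoulli weights."""
    w0 = p * np.exp(-lam0**2 * T / (n + 1))
    w1 = (1 - p) * np.exp(-lam1**2 * T / (n + 1))
    return w0 / (w0 + w1)

print(tilted_p(0.4, 0.1, 0.3, 1.0, 10))  # above 0.4: low risk-premium states gain weight
print(tilted_p(0.4, 0.2, 0.2, 1.0, 10))  # equal lambdas: the tilt disappears
```

Two sanity checks fall out immediately: when $\lambda_0 = \lambda_1$ the tilt disappears and $\tilde p = p$, and $\tilde p$ always remains a genuine probability in $(0,1)$.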
5.2.1. $\alpha$ deterministic. Since our goal is to find the variance-optimal martingale measure, we start by exhibiting a martingale with the representation property for $\mathcal{F}^H$, which in this case is the filtration generated by $\tau$.

Proposition 5.1. Let $\tau$ be a stopping time with a diffuse law, and $\tilde A$ the compensator of $1_{\{\tau \le t\}}$. Then the martingale $M_t = 1_{\{\tau \le t\}} - \tilde A_t$ has the representation property.

Proof. Let $Q$ be a martingale measure for $M$, and $\tilde A^Q$ the compensator of $1_{\{\tau \le t\}}$ under $Q$. Both $M_t$ and $1_{\{\tau \le t\}} - \tilde A^Q_t$ are $Q$-martingales; therefore their difference $\tilde A^Q - \tilde A$ is also a martingale. However, it is also a finite variation process; therefore it must be identically zero. Since $\tilde A^Q = \tilde A$, by Proposition 1.9 the c.d.f.'s of $\tau$ under $P$ and $Q$ are equal. This implies that $Q = P$, $\mathcal{F}^\tau$-a.e. Proposition 1.12 concludes the proof.

We now compute $\tilde P^H$, that is, the law of $\tau$ under $\tilde P$. For simplicity, assume that $\tau$ has a density, and denote it by $f(t)$. We have:

    $\tilde f(t) = \frac{d\tilde P}{d\hat P}\,f(t) = \frac{1}{c}\exp\left(-\int_0^T \lambda^2_s(\eta)\,ds\right) f(t) = \begin{cases} \frac{1}{c}\exp\left(-\lambda^2_1 T - \alpha(T - t)\right) f(t) & \text{if } t < T \\ \frac{1}{c}\exp\left(-\lambda^2_1 T\right) f(t) & \text{if } t \ge T \end{cases}$

where $c = E\left[\exp\left(-\int_0^T \lambda^2_s(\eta)\,ds\right)\right]$.

In this simple example we also calculate $\tilde k_t$ explicitly, although the computational effort required suggests that in more complex situations it may not be a good idea to do so. First, we see how stochastic integrals with respect to $M$ look. Recall that, by Proposition 1.9, $\tilde A_t = a(t \wedge \tau)$, where $a : \mathbb{R}_+ \to \mathbb{R}_+$.

Lemma 5.2. Let $k_t$ be an $\mathcal{F}^\tau$-measurable process. Then we have:

    $\int_0^T k_t\,dM_t = k_\tau(\tau)\,1_{\{\tau \le T\}} - \int_0^{T \wedge \tau} k_t(t)\,da(t)$

Proof. By Corollary 1.3, we have that any $\mathcal{F}^\tau$-measurable process can be written as $k_t(t \wedge \tau)$. Hence:

    $\int_0^T k_t(t \wedge \tau)\,dM_t = k_\tau(\tau)\,1_{\{\tau \le T\}} - \int_0^{T \wedge \tau} k_t(t \wedge \tau)\,da(t \wedge \tau) = k_\tau(\tau)\,1_{\{\tau \le T\}} - \int_0^{T \wedge \tau} k_t(t)\,da(t)$
The above lemma shows that $k_t(s)$ needs only be defined for $s = t$, so from now on we shall unambiguously write $k_t$ instead of $k_t(t)$. Now we can compute $\tilde k_t$:

Proposition 5.3. $\tilde k_t$ is the unique solution of the following ODE:

(7) $\tilde k'_t = \alpha + (\alpha + a'_t)\,\tilde k_t + a'_t\,\tilde k^2_t$, with $\tilde k_0 = \frac{1}{c}\exp\left(-\lambda^2_2 T\right) - 1$

where $c = \exp\left(-\lambda^2_2 T\right)\int_0^T e^{\alpha t}\,dF(t) + \exp\left(-\lambda^2_1 T\right)\left(1 - F(T)\right)$ and $F(t) = P(\tau \le t)$.

Proof. By Lemma 5.2, and the generalization of Itô's formula for processes with jumps, we have:

    $\mathcal{E}\left(\int \tilde k_t(\eta)\,dM_t\right)_T = \left(1 + \tilde k_\tau\,1_{\{\tau \le T\}}\right)\exp\left(-\int_0^{T\wedge\tau} \tilde k_t\,da_t\right)$

Hence, by Proposition 3.2, we have:

    $\left(1 + \tilde k_\tau\,1_{\{\tau \le T\}}\right)\exp\left(-\int_0^{T\wedge\tau} \tilde k_t\,da_t\right) = \frac{\exp\left(-\lambda^2_1 T - \alpha(T - \tau)^+\right)}{E\left[\exp\left(-\int_0^T \lambda^2_t\,dt\right)\right]}$

Taking logarithms of both sides, and setting $c = E\left[\exp\left(-\int_0^T \lambda^2_t\,dt\right)\right]$, we get:

    $\ln\left(1 + \tilde k_\tau\,1_{\{\tau \le T\}}\right) - \int_0^{T\wedge\tau} \tilde k_t\,da_t = -\lambda^2_1 T - \alpha(T - \tau)^+ - \ln c$

Differentiating with respect to $\tau$, for $\tau \le T$ we obtain equation (7).

Remark 5.4. Equation (7) is a Riccati ODE, and can be solved in terms of the function $a_t$. Depending on the form of $a_t$, explicit solutions may or may not be available.

5.2.2. $\alpha$ Bernoulli. In this case, $\alpha$ is a Bernoulli random variable, independent of $\tau$, with values $\{\alpha_0, \alpha_1\}$. We also set $A = \{\alpha = \alpha_0\}$, $B = \{\alpha = \alpha_1\}$, and $p = P(B)$. Since the support of $\alpha$ is no longer a single point, one martingale will not be sufficient for representation purposes. In fact, two martingales do the job, as we prove in the following:

Proposition 5.5. Let $N_t = 1_{\{\tau \le t\}}\left(1_B - p\right)$. Then the set of two martingales $\{M, N\}$ has the representation property.
Proof. First we check that $M$ and $N$ are orthogonal. This is easily seen, since $M_t N_t = N_t\left(1 - a(t \wedge \tau)\right)$, which is a martingale. We now prove that the martingale measure is unique. Let $Q$ be a martingale measure for $\{M, N\}$. As shown in the proof of Proposition 5.1, the distribution of $\tau$ under $Q$ must be the same as under $P$. However, we also need that $Q(B) = P(B)$, otherwise $N$ would not be a $Q$-martingale.

The change from $P$ to $\tilde P$ is a change in the joint law of $(\tau, \alpha)$. Under $P$ this is a product measure, since $\tau$ and $\alpha$ are independent. However, we cannot expect that the same holds under $\tilde P$. For $t \le T$ we have:

    $\frac{d\tilde P^H}{dP^H} = \frac{1}{c}\left(1_A\,\exp\left(-\lambda^2_1 T - (T - \tau)^+\alpha_0\right) + 1_B\,\exp\left(-\lambda^2_1 T - (T - \tau)^+\alpha_1\right)\right)$

Therefore the law of $\tau$ under $\tilde P$ is given by:

    $\tilde f(t) = \frac{1}{c}\left(p\,\exp\left(-\lambda^2_1 T - (T - t)^+\alpha_1\right) + (1-p)\,\exp\left(-\lambda^2_1 T - (T - t)^+\alpha_0\right)\right) f(t)$

And the conditional law of $\alpha$ with respect to $\tau$ is given by:

    $\tilde P(B \mid \tau = t) = \frac{p\,\exp\left(-(T - t)^+\alpha_1\right)}{p\,\exp\left(-(T - t)^+\alpha_1\right) + (1-p)\,\exp\left(-(T - t)^+\alpha_0\right)}$

In particular, it is immediately seen that $\alpha$ is independent of $\tau$ if and only if it degenerates, as in the previous case. When $\alpha$ is Bernoulli, calculating $\tilde k$ involves solving a system of two Riccati ODEs, which is somewhat cumbersome. More generally, if the support of $\alpha$ is made of $n$ points, it is reasonable that $n$ martingales are required for representation purposes. As a result, the values of $\tilde k$ would be the solutions of a system of $n$ ODEs.

5.2.3. $\alpha$ continuously distributed. In this case the support of $\alpha$ is an infinite set; therefore Theorem 3.1 is no longer applicable. In fact we need its random measure analogue, given by Theorem 4.1. If the filtration $\mathcal{F}^\lambda$ generated by $\lambda_t$ coincides with the one generated by $\mu_t$ and $\sigma_t$, we can assume $\mathcal{F}^H = \mathcal{F}^\lambda$. By Proposition 1.17, there exists a random measure $\nu$ associated to $\lambda$, given by:

    $\nu(\eta; dt, dx) = \epsilon_{(\tau,\,\alpha(\eta))}(dt, dx)$

Since this is a multivariate point process, and $\mathcal{F}^H$ coincides with the smallest filtration under which $\nu$ is optional, by Theorem 1.22 the compensated measure $\nu - \nu^p$ has the representation property on $H$. Also, $k * (\nu - \nu^p)$ is a purely discontinuous martingale for all $\tilde{\mathcal{P}}$-measurable processes $k$; therefore $[W, k * (\nu - \nu^p)] = 0$.
This means that the assumptions of Theorem 4.1 are satisfied, and $\tilde P$ is given by Proposition 4.4.
Suppose that $\alpha$ has a density, say $g(x)$. We have, for $t \le T$:

    $\frac{d\tilde P^H}{dP^H} = h(t, x) = \frac{1}{c}\exp\left(-\lambda^2_1 T - x(T - t)\right)$

Denoting by $j(t, x)$ and $\tilde j(t, x)$ the joint densities of $(\tau, \alpha)$ under $P$ and $\tilde P$ respectively, we have:

    $\tilde j(t, x) = h(t, x)\,j(t, x) = h(t, x)\,f(t)\,g(x)$
    $\tilde f(t) = f(t)\int h(t, x)\,g(x)\,dx$
    $\tilde g(x \mid t) = \frac{\tilde j(t, x)}{\tilde f(t)} = \frac{h(t, x)\,g(x)}{\int h(t, x)\,g(x)\,dx}$

If $\alpha \sim N(\delta, v)$, the density of $\tau$ under $\tilde P$ is given by:

    $\tilde f(t) = f(t)\,\frac{1}{c}\int_{-\infty}^{+\infty} \exp\left(-\lambda^2_1 T - x(T - t)\right)\frac{1}{\sqrt{v}}\,n\!\left(\frac{x - \delta}{\sqrt{v}}\right) dx = \frac{1}{c}\exp\left(-\lambda^2_1 T - \delta(T - t) + \frac{(T - t)^2 v}{2}\right) f(t)$

where $n(x)$ is the standard normal density function. It is easy to check that the conditional density of $\alpha$ given $\tau$ is of the form:

    $\tilde g(x \mid t) = \frac{1}{\sqrt{2\pi v}}\exp\left(-\frac{\left(x - \delta + (T - t)v\right)^2}{2v}\right)$

Therefore $\alpha$ is conditionally normal under $\tilde P$, with distribution $N(\delta - (T - t)v,\ v)$.

Remark 5.6. In the specific case of $\lambda_t$ being normally distributed, it can be shown that $G_T(\Theta)$ is not closed; therefore Proposition 3.2 does not hold. However, in [1] the above formula is proved relaxing this assumption. It is also shown that in this example $G_T(\Theta)$ is closed if and only if the support of $\alpha$ is bounded from above.

5.3. Multiple random jumps. Leaving $\alpha$ deterministic for simplicity, we now study the following model:

    $\lambda^2_t = \lambda^2_1 + \sum_{i=1}^n 1_{\{t \ge \tau_i\}}\,\alpha_i$

We assume that $\tau_{i+1} - \tau_i$ are i.i.d. random variables, with common density $f(x)$. Denote by $H$ the space $[0, \infty]^n$, endowed with the image measure of $(\tau_1, \dots, \tau_n)$, and with the natural filtration $\mathcal{F}_t$ generated by $\{\tau_1 \wedge t, \dots, \tau_n \wedge t\}$. A martingale with the predictable representation property is given by $M_t = \sum_{i=1}^n 1_{\{\tau_i \le t\}} - \tilde A_t$, where $\tilde A$ is the compensator of the counting process. In this case, the density of $\tilde P$ is given by:

    $\frac{d\tilde P^H}{dP^H} = \frac{1}{c}\exp\left(-\lambda^2_1 T - \sum_{i=1}^n \alpha_i (T - \tau_i)^+\right)$
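The conditional law of $\alpha$ in the normal case of Section 5.2.3 is an instance of a standard exponential-tilting fact: multiplying an $N(\delta, v)$ density by $e^{\theta x}$ and renormalizing yields $N(\delta + \theta v, v)$; here $\theta = -(T - t)$, which shifts the mean down to $\delta - (T - t)v$ while leaving the variance unchanged. A numerical check by quadrature (illustrative values, not from the paper):

```python
import numpy as np

delta, v, theta = 0.5, 0.04, -0.7   # theta plays the role of -(T - t)

# Tilt the N(delta, v) density by exp(theta * x) and renormalize on a fine grid.
x = np.linspace(delta - 10 * np.sqrt(v), delta + 10 * np.sqrt(v), 20001)
dx = x[1] - x[0]
g = np.exp(-(x - delta) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
tilted = np.exp(theta * x) * g
tilted /= tilted.sum() * dx         # renormalize to a probability density

mean = (x * tilted).sum() * dx
var = ((x - mean) ** 2 * tilted).sum() * dx
print(mean, var)  # approx delta + theta*v and v: the tilted law is N(delta + theta*v, v)
```

The same mechanism drives all the examples of this section: the density $d\tilde P^H / dP^H$ is an exponential tilt of the law of the jumps, by a factor determined by how long each jump affects $\lambda^2$.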
More informationThe Statistical Analysis of SelfExciting Point Processes
Statistica Sinica (2011): Preprint 1 The Statistical Analysis of SelfExciting Point Processes KA KOPPERSCHMDT and WNFRED STUTE The Nielsen Company (Germany) GmbH, Frankfurt and Mathematical nstitute,
More informationSubspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity
Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity Wei Dai and Olgica Milenkovic Department of Electrical and Computer Engineering University of Illinois at UrbanaChampaign
More informationAn Introduction to Mathematical Optimal Control Theory Version 0.2
An Introduction to Mathematical Optimal Control Theory Version.2 By Lawrence C. Evans Department of Mathematics University of California, Berkeley Chapter 1: Introduction Chapter 2: Controllability, bangbang
More informationDecoding by Linear Programming
Decoding by Linear Programming Emmanuel Candes and Terence Tao Applied and Computational Mathematics, Caltech, Pasadena, CA 91125 Department of Mathematics, University of California, Los Angeles, CA 90095
More informationErgodicity and Energy Distributions for Some Boundary Driven Integrable Hamiltonian Chains
Ergodicity and Energy Distributions for Some Boundary Driven Integrable Hamiltonian Chains Peter Balint 1, Kevin K. Lin 2, and LaiSang Young 3 Abstract. We consider systems of moving particles in 1dimensional
More informationSampling 50 Years After Shannon
Sampling 50 Years After Shannon MICHAEL UNSER, FELLOW, IEEE This paper presents an account of the current state of sampling, 50 years after Shannon s formulation of the sampling theorem. The emphasis is
More informationPrice discrimination through communication
Price discrimination through communication Itai Sher Rakesh Vohra May 26, 2014 Abstract We study a seller s optimal mechanism for maximizing revenue when a buyer may present evidence relevant to her value.
More informationFast Greeks by algorithmic differentiation
The Journal of Computational Finance (3 35) Volume 14/Number 3, Spring 2011 Fast Greeks by algorithmic differentiation Luca Capriotti Quantitative Strategies, Investment Banking Division, Credit Suisse
More informationNotes on Mean Field Games
Notes on Mean Field Games (from P.L. Lions lectures at Collège de France) Pierre Cardaliaguet January 5, 202 Contents Introduction 2 2 Nash equilibria in games with a large number of players 4 2. Symmetric
More informationRegret in the OnLine Decision Problem
Games and Economic Behavior 29, 7 35 (1999) Article ID game.1999.0740, available online at http://www.idealibrary.com on Regret in the OnLine Decision Problem Dean P. Foster Department of Statistics,
More informationRmax A General Polynomial Time Algorithm for NearOptimal Reinforcement Learning
Journal of achine Learning Research 3 (2002) 213231 Submitted 11/01; Published 10/02 Rmax A General Polynomial Time Algorithm for NearOptimal Reinforcement Learning Ronen I. Brafman Computer Science
More informationQuantitative Strategies Research Notes
Quantitative Strategies Research Notes March 999 More Than You Ever Wanted To Know * About Volatility Swaps Kresimir Demeterfi Emanuel Derman Michael Kamal Joseph Zou * But Less Than Can Be Said Copyright
More informationLecture notes on the Stefan problem
Lecture notes on the Stefan problem Daniele Andreucci Dipartimento di Metodi e Modelli Matematici Università di Roma La Sapienza via Antonio Scarpa 16 161 Roma, Italy andreucci@dmmm.uniroma1.it Introduction
More informationThe Backpropagation Algorithm
7 The Backpropagation Algorithm 7. Learning as gradient descent We saw in the last chapter that multilayered networks are capable of computing a wider range of Boolean functions than networks with a single
More informationQUASICONFORMAL MAPPINGS WITH SOBOLEV BOUNDARY VALUES
QUASICONFORMAL MAPPINGS WITH SOBOLEV BOUNDARY VALUES KARI ASTALA, MARIO BONK, AND JUHA HEINONEN Abstract. We consider quasiconformal mappings in the upper half space R n+1 + of R n+1, n 2, whose almost
More information1 Sets and Set Notation.
LINEAR ALGEBRA MATH 27.6 SPRING 23 (COHEN) LECTURE NOTES Sets and Set Notation. Definition (Naive Definition of a Set). A set is any collection of objects, called the elements of that set. We will most
More informationNot Only What but also When: A Theory of Dynamic Voluntary Disclosure
Not Only What but also When: A Theory of Dynamic Voluntary Disclosure By ILAN GUTTMAN, ILAN KREMER, AND ANDRZEJ SKRZYPACZ We examine a dynamic model of voluntary disclosure of multiple pieces of private
More informationThe Set Covering Machine
Journal of Machine Learning Research 3 (2002) 723746 Submitted 12/01; Published 12/02 The Set Covering Machine Mario Marchand School of Information Technology and Engineering University of Ottawa Ottawa,
More information