Jump-adapted discretization schemes for Lévy-driven SDEs




Arturo Kohatsu-Higa
Osaka University, Graduate School of Engineering Sciences
Machikaneyama-cho 1-3, Osaka 560-8531, Japan
E-mail: kohatsu@sigmath.es.osaka-u.ac.jp

Peter Tankov
Ecole Polytechnique, Centre de Mathématiques Appliquées
91128 Palaiseau Cedex, France
E-mail: peter.tankov@polytechnique.org

Abstract

We present new algorithms for weak approximation of stochastic differential equations driven by pure jump Lévy processes. The method is built upon adaptive non-uniform discretization based on the jump times of the driving process, coupled with suitable approximations of the solutions between these jump times. Our technique avoids the costly simulation of the increments of the Lévy process and in many cases achieves better convergence rates than the traditional schemes with equal time steps.  To illustrate the method, we consider applications to simulation of portfolio strategies and option pricing in the Libor market model with jumps.

Key words: Lévy-driven stochastic differential equation, Euler scheme, jump-adapted discretization, weak approximation, Libor market model with jumps.

2010 Mathematics Subject Classification: Primary 60H35, Secondary 65C05, 60G51.

1 Introduction

Let $Z$ be a $d$-dimensional Lévy process without diffusion component, that is,
\[ Z_t = \gamma t + \int_0^t\int_{|y|\le 1} y\,\tilde N(dy,ds) + \int_0^t\int_{|y|>1} y\,N(dy,ds), \qquad t\in[0,1]. \]

Here $\gamma\in\mathbb{R}^d$, $N$ is a Poisson random measure on $\mathbb{R}^d\times[0,\infty)$ with intensity $\nu(dy)\,ds$ satisfying $\int_{|y|\le 1}|y|^2\,\nu(dy)<\infty$, and $\tilde N(dy,ds) = N(dy,ds) - \nu(dy)\,ds$ denotes the compensated version of $N$. Further, let $X$ be an $\mathbb{R}^n$-valued adapted stochastic process, the unique solution of the stochastic differential equation
\[ X_t = X_0 + \int_0^t h(X_{s-})\,dZ_s, \qquad t\in[0,1], \tag{1} \]
where $h$ takes values in the $n\times d$ matrices.

The traditional uniformly-spaced discretization schemes for (1) [5] suffer from two difficulties: first, in general, we do not know how to simulate the increments of the driving Lévy process, and second, a large jump of $Z$ occurring between two discretization points can lead to an important discretization error. On the other hand, in many cases of interest, the Lévy measure of $Z$ is known and one can easily simulate the jumps of $Z$ bigger in absolute value than a certain $\varepsilon>0$. As our benchmark example, consider the one-dimensional tempered stable Lévy process [5], which has the Lévy density
\[ \nu(x) = \frac{C_-\,e^{-\lambda_-|x|}}{|x|^{1+\alpha_-}}\,1_{x<0} + \frac{C_+\,e^{-\lambda_+ x}}{x^{1+\alpha_+}}\,1_{x>0}. \tag{2} \]
This process, also known as the CGMY model [3] when $C_-=C_+$ and $\alpha_-=\alpha_+$, is a popular representation for the logarithm of the stock price in financial markets. No easy algorithm is available for simulating the increments of this process (cf. [2, 4]); however, it is straightforward to simulate random variables with density
\[ \frac{\nu(x)\,1_{|x|>\varepsilon}}{\int_{|x|>\varepsilon}\nu(x)\,dx} \tag{3} \]
using the rejection method [5, Example 6.9]. Sometimes, other approximations of $Z$ by a compound Poisson process, such as the series representations [6], lead to simpler simulation algorithms.

A natural idea, due to Rubenthaler [7] (in the context of finite-intensity jump processes this idea appears also in [2, 3]), is to replace the process $Z$ with a suitable compound Poisson approximation and place the discretization points at the jump times of the compound Poisson process. When the jumps of $Z$ are highly concentrated around zero, however, this approximation is too rough and can lead to convergence rates which are arbitrarily slow.

In this paper we are interested in approximating the solution of (1) in the weak sense. We use the idea of Asmussen and Rosiński (see also [4]) and replace the small jumps of $Z$ with a Brownian motion between the jump times of the compound Poisson approximation. We then replace the solution of the continuous SDE between the jump times by a suitable approximation which is explicitly computable. Combined, these two steps lead to a much lower discretization error than in [7].
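As a concrete illustration of the rejection step mentioned above, the following sketch samples jumps from the truncated tempered stable density (3), using a Pareto-tail proposal thinned by the exponential taper. It is our own minimal sketch: the parameter values and function names are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical parameters of the tempered stable (CGMY-type) Levy density (2).
C_m, C_p = 1.0, 1.0      # C_-, C_+
lam_m, lam_p = 2.0, 1.0  # lambda_-, lambda_+
al_m, al_p = 0.5, 0.5    # alpha_-, alpha_+
eps = 0.01               # truncation level: only jumps with |x| > eps are simulated

def nu_plus(x):  return C_p * np.exp(-lam_p * x) / x ** (1 + al_p)
def nu_minus(x): return C_m * np.exp(-lam_m * x) / x ** (1 + al_m)

# Tail masses: the integral of nu over (eps, inf) on each side of the origin.
tail_p, _ = quad(nu_plus, eps, np.inf)
tail_m, _ = quad(nu_minus, eps, np.inf)
lam_eps = tail_p + tail_m   # total intensity of the compound Poisson approximation

def sample_big_jump(rng):
    """One draw from the density (3), i.e. nu(x) 1_{|x|>eps} / lam_eps, by rejection.

    Proposal on each side: Pareto tail ~ 1/|x|^{1+alpha} restricted to |x| > eps,
    sampled by inversion; the acceptance probability is exp(-lambda (|x| - eps)).
    """
    positive = rng.random() < tail_p / lam_eps
    lam, al = (lam_p, al_p) if positive else (lam_m, al_m)
    while True:
        x = eps * rng.random() ** (-1.0 / al)          # Pareto(alpha) proposal, x > eps
        if rng.random() <= np.exp(-lam * (x - eps)):   # thin by the exponential taper
            return x if positive else -x
```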

If $N$ is the average number of operations required to simulate a single trajectory, the discretization error of our method converges to zero faster than $N^{-1/2}$ in all cases and faster than $N^{-1}$ if the Lévy measure of $Z$ is locally symmetric near zero (for the process (3) this corresponds to having $\alpha_+=\alpha_-$ and $C_+=C_-$). In practice, the convergence rates are always better than these worst-case bounds: for example, if $\alpha_-=\alpha_+=1.8$ in (2), our method has the theoretical convergence rate $N^{-1.22}$ (compared to only $N^{-0.11}$ without the Brownian approximation), and when $\alpha_-=\alpha_+=0.5$, the rate of our method is $N^{-7}$.

Consider a family of measurable functions $(\chi_\varepsilon)_{\varepsilon>0}\colon\mathbb{R}^d\to[0,1]$ such that $\int_{\mathbb{R}^d}\chi_\varepsilon(y)\,\nu(dy)<\infty$ and $\int_{|y|>1}|y|^2(1-\chi_\varepsilon)(y)\,\nu(dy)<\infty$ for all $\varepsilon>0$, and $\lim_{\varepsilon\to 0}\chi_\varepsilon(y)=1$ for all $y$. The Lévy measure $\nu$ will be approximated by the finite measures $\chi_\varepsilon\nu$. The simplest such approximation is $\chi_\varepsilon(y):=1_{|y|>\varepsilon}$, but others can also be useful (see Example 7 below). We denote by $N^\varepsilon$ a Poisson random measure with intensity $\chi_\varepsilon\nu\,ds$ and by $\hat N^\varepsilon$ its compensated version. Similarly, we denote by $\tilde N_\varepsilon$ the compensated Poisson random measure with intensity $\bar\chi_\varepsilon\nu\,ds$, where $\bar\chi_\varepsilon := 1-\chi_\varepsilon$. The process $Z$ can then be represented in law as follows:
\[ Z_t \stackrel{d}{=} \gamma_\varepsilon t + Z^\varepsilon_t + R^\varepsilon_t, \]
\[ \gamma_\varepsilon = \gamma - \int_{|y|\le 1} y\,\chi_\varepsilon(y)\,\nu(dy) + \int_{|y|>1} y\,\bar\chi_\varepsilon(y)\,\nu(dy), \]
\[ Z^\varepsilon_t = \int_0^t\int_{\mathbb{R}^d} y\,N^\varepsilon(dy,ds), \qquad R^\varepsilon_t = \int_0^t\int_{\mathbb{R}^d} y\,\tilde N_\varepsilon(dy,ds). \]

We denote by $\lambda_\varepsilon = \int_{\mathbb{R}^d}\chi_\varepsilon(y)\,\nu(dy)$ the jump intensity of $Z^\varepsilon$, by $(N^\varepsilon_t)$ the Poisson process which has the same jump times as $Z^\varepsilon$, and by $T^\varepsilon_i$, $i\in\mathbb{N}$, the jump times of $Z^\varepsilon$. Furthermore, we denote by $\Sigma_\varepsilon$ the variance-covariance matrix of $R^\varepsilon_1$:
\[ (\Sigma_\varepsilon)_{ij} = \int_{\mathbb{R}^d} y_i\,y_j\,\bar\chi_\varepsilon(y)\,\nu(dy), \]
and in the one-dimensional case we set $\sigma_\varepsilon^2 := \Sigma_\varepsilon$.

Sometimes we shall use the following technical assumption on $(\chi_\varepsilon)$:

(A) For every $n\ge 2$ there exists $C$ such that
\[ \int_{\mathbb{R}^d}|z|^{n+1}\,\bar\chi_\varepsilon(z)\,\nu(dz) \le C\varepsilon \int_{\mathbb{R}^d}|z|^{n}\,\bar\chi_\varepsilon(z)\,\nu(dz) \]
for all sufficiently small $\varepsilon$.

This assumption roughly means that the approximation $(\chi_\varepsilon)$ does not remove small jumps faster than big jumps, and it is clearly satisfied by $\chi_\varepsilon(y)=1_{|y|>\varepsilon}$.
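For the truncation $\chi_\varepsilon(y)=1_{|y|>\varepsilon}$ and the tempered stable density (2), the quantities $\lambda_\varepsilon$, $\sigma_\varepsilon^2$ and $\gamma_\varepsilon$ introduced above can be evaluated by quadrature. The short sketch below is our own illustration (made-up parameter values); it also checks numerically that $\varepsilon^2\lambda_\varepsilon\to 0$, a fact used later in the proof of Corollary 5.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative tempered stable parameters (not taken from the paper).
C_m = C_p = 1.0
lam_m, lam_p = 2.0, 1.0
al_m = al_p = 1.2

def nu(x):
    """Levy density (2)."""
    if x > 0:
        return C_p * np.exp(-lam_p * x) / x ** (1 + al_p)
    return C_m * np.exp(-lam_m * abs(x)) / abs(x) ** (1 + al_m)

def cpp_quantities(eps):
    """lambda_eps, sigma_eps^2 and the drift shift for chi_eps(y) = 1_{|y|>eps}."""
    lam_eps = quad(nu, eps, np.inf)[0] + quad(nu, -np.inf, -eps)[0]
    sig2_eps = quad(lambda y: y * y * nu(y), 0, eps)[0] + quad(lambda y: y * y * nu(y), -eps, 0)[0]
    # gamma_eps = gamma - int_{eps<|y|<=1} y nu(dy); for this chi_eps the |y|>1 term vanishes.
    shift = quad(lambda y: y * nu(y), eps, 1.0)[0] + quad(lambda y: y * nu(y), -1.0, -eps)[0]
    return lam_eps, sig2_eps, shift

for eps in (0.1, 0.01, 0.001):
    lam_eps, sig2, shift = cpp_quantities(eps)
    print(eps, lam_eps, sig2, eps ** 2 * lam_eps)   # the last column should decrease to 0
```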

Supposing that no further discretization is done between the jump times of $Z^\varepsilon$, the computational complexity of simulating a single approximate trajectory is proportional to the random number $N^\varepsilon_1$. To compare our method to the traditional equally-spaced discretizations, we measure instead the computational complexity by the average number of jumps of $Z^\varepsilon$ on $[0,1]$, equal to $\lambda_\varepsilon$. We say that the approximation $\hat X$ converges weakly with order $\alpha>0$ if, for a sufficiently smooth function $f$,
\[ \bigl|Ef(X_1) - Ef(\hat X_1)\bigr| \le K\lambda_\varepsilon^{-\alpha} \]
for some $K>0$ and all sufficiently small $\varepsilon$.

In this paper, we provide two versions of the jump-adapted discretization scheme for pure jump Lévy processes. The scheme presented in Section 2 is easier to use and implement but works only in the case $d=n=1$. A fully general scheme is then presented in Section 3. Section 4 presents two detailed examples where our method is applied to the problems of long-term portfolio simulation and of option pricing in the Libor market model with jumps.

2 One-dimensional SDE

For our first scheme we take $d=n=1$ and consider the ordinary differential equation
\[ dX_t = h(X_t)\,dt, \qquad X_0 = x. \tag{4} \]
In the one-dimensional case, the solution of this equation can always be written as
\[ X_t := \theta(t;x) = F^{-1}\bigl(t + F(x)\bigr), \]
where $F$ is a primitive of $1/h(x)$. Alternatively, a high-order discretization scheme (such as Runge-Kutta) can be used and is easy to construct (see also Remark 4). We define inductively $\hat X(0) = X_0$ and, for $i\ge 0$,
\[ \hat X(T^\varepsilon_{i+1}-) = \theta\Bigl(\gamma_\varepsilon(T^\varepsilon_{i+1}-T^\varepsilon_i) + \sigma_\varepsilon\bigl(W(T^\varepsilon_{i+1})-W(T^\varepsilon_i)\bigr) - \tfrac{1}{2}h'(\hat X(T^\varepsilon_i))\,\sigma_\varepsilon^2\,(T^\varepsilon_{i+1}-T^\varepsilon_i);\ \hat X(T^\varepsilon_i)\Bigr) \tag{5} \]
\[ \hat X(T^\varepsilon_i) = \hat X(T^\varepsilon_i-) + h\bigl(\hat X(T^\varepsilon_i-)\bigr)\,\Delta Z(T^\varepsilon_i). \tag{6} \]
Similarly, for any other point $t\in(T^\varepsilon_i, T^\varepsilon_{i+1})$, we define
\[ \hat X(t) = \theta\Bigl(\gamma_\varepsilon(t-\eta_t) + \sigma_\varepsilon\bigl(W(t)-W(\eta_t)\bigr) - \tfrac{1}{2}h'(\hat X(\eta_t))\,\sigma_\varepsilon^2\,(t-\eta_t);\ \hat X(\eta_t)\Bigr), \tag{7} \]
where we set $\eta_t := \sup\{T^\varepsilon_i : T^\varepsilon_i \le t\}$. Here $W$ denotes a one-dimensional Brownian motion.
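A minimal sketch of the scheme (5)-(6), assuming the flow $\theta$ and a jump sampler are supplied by the user; the function and parameter names below are ours, not the paper's.

```python
import numpy as np

def jump_adapted_path(theta, h, h_prime, gamma_eps, sigma_eps, lam_eps,
                      sample_jump, x0, T=1.0, rng=None):
    """Simulate one trajectory of the scheme (5)-(6) at the jump times of Z^eps.

    theta(s, x)  : flow of the ODE dX = h(X) dt, i.e. theta(s, x) = F^{-1}(s + F(x));
    h, h_prime   : coefficient of the SDE and its derivative;
    gamma_eps, sigma_eps, lam_eps : drift, small-jump volatility and intensity of Z^eps;
    sample_jump  : callable rng -> one jump of Z^eps (e.g. the rejection sampler above).
    """
    rng = rng or np.random.default_rng()
    t, x = 0.0, x0
    times, values = [t], [x]
    while t < T:
        dt = min(rng.exponential(1.0 / lam_eps), T - t)   # time to the next big jump, capped at T
        dW = np.sqrt(dt) * rng.standard_normal()
        # Between jumps: flow driven by the Asmussen-Rosinski type approximation, cf. (5)/(7).
        x = theta(gamma_eps * dt + sigma_eps * dW - 0.5 * h_prime(x) * sigma_eps ** 2 * dt, x)
        t += dt
        if t < T:                                         # a jump of Z^eps occurs at time t, cf. (6)
            x = x + h(x) * sample_jump(rng)
        times.append(t)
        values.append(x)
    return np.array(times), np.array(values)

# Example: affine coefficient h(x) = a*x + b, for which the flow theta is exact.
a, b = 0.5, 0.2
h_affine = lambda x: a * x + b
hp_affine = lambda x: a
theta_affine = lambda s, x: ((a * x + b) * np.exp(a * s) - b) / a
```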

Therefore, the idea is to replace the original equation with an Asmussen-Rosiński type approximation which is explicitly solvable between the times of big jumps, is exact for all $h$ if the driving process is deterministic, and is exact for affine $h$ in all cases. The purpose of this construction becomes clear from the following lemma.

Lemma 1. Let $h\in C^1$. The process $(\hat X(t))$ defined by (5)–(7) is the solution of the stochastic differential equation
\[ d\hat X_t = h(\hat X_{t-})\Bigl\{ dZ^\varepsilon_t + \sigma_\varepsilon\,dW_t + \gamma_\varepsilon\,dt + \tfrac{1}{2}\bigl(h'(\hat X_t) - h'(\hat X_{\eta(t)})\bigr)\sigma_\varepsilon^2\,dt \Bigr\}. \]

Proof. It is enough to show that the process
\[ Y_t := \theta\bigl(\gamma_\varepsilon t + \sigma_\varepsilon W_t - \tfrac{1}{2}h'(x)\sigma_\varepsilon^2 t;\ x\bigr) \tag{8} \]
is the solution of the continuous SDE
\[ dY_t = h(Y_t)\Bigl\{ \sigma_\varepsilon\,dW_t + \gamma_\varepsilon\,dt + \tfrac{1}{2}\bigl(h'(Y_t) - h'(x)\bigr)\sigma_\varepsilon^2\,dt \Bigr\}. \]
This follows by an application of the Itô formula to (8), using
\[ \frac{\partial\theta(t,z)}{\partial t} = h\bigl(\theta(t,z)\bigr), \qquad \frac{\partial^2\theta(t,z)}{\partial t^2} = h'\bigl(\theta(t,z)\bigr)\,h\bigl(\theta(t,z)\bigr). \]

For the convergence analysis, we introduce two sets of conditions (parameterized by an integer $n$):

(H$_n$) $f\in C^n$, $h\in C^n$, $f^{(k)}$ and $h^{(k)}$ are bounded for $0\le k\le n$, and $\int|z|^{2n}\,\nu(dz)<\infty$.

(H$'_n$) $f\in C^n$, $h\in C^n$, $h^{(k)}$ are bounded for $0\le k\le n$, $f^{(k)}$ have at most polynomial growth for $0\le k\le n$, and $\int|z|^{k}\,\nu(dz)<\infty$ for all $k$.

Theorem 2. (i) Assume (H$_3$) or (H$'_3$) + (A). Then
\[ \bigl|Ef(\hat X_1) - Ef(X_1)\bigr| \le C\Bigl( \frac{\sigma_\varepsilon^2}{\lambda_\varepsilon}\bigl(\sigma_\varepsilon^2 + |\gamma_\varepsilon|\bigr) + \int |y|^3\,\bar\chi_\varepsilon(y)\,\nu(dy) \Bigr). \]
(ii) Assume (H$_4$) or (H$'_4$) + (A), let $\gamma_\varepsilon$ be bounded and let the measure $\nu$ satisfy
\[ \int |y|^3\,\bar\chi_\varepsilon(y)\,\nu(dy) \le \int |y|^4\,\bar\chi_\varepsilon(y)\,\nu'(dy) \tag{9} \]

for some measure $\nu'$. Then
\[ \bigl|Ef(\hat X_1) - Ef(X_1)\bigr| \le C\Bigl( \frac{\sigma_\varepsilon^2}{\lambda_\varepsilon}\bigl(\sigma_\varepsilon^2 + |\gamma_\varepsilon|\bigr) + \int |y|^4\,\bar\chi_\varepsilon(y)\,(\nu+\nu')(dy) \Bigr), \]
and in particular
\[ \bigl|Ef(\hat X_1) - Ef(X_1)\bigr| \le C\Bigl( \frac{\sigma_\varepsilon^2}{\lambda_\varepsilon} + \int |y|^4\,\bar\chi_\varepsilon(y)\,(\nu+\nu')(dy) \Bigr). \]

Remark 3. (i) Under the polynomial growth assumptions (H$'_3$) or (H$'_4$), the constants in the above theorem may depend on the initial value $x$.
(ii) Condition (9) is satisfied, for example, if $\chi_\varepsilon(y)=\chi_\varepsilon(-y)$ for all $y$ and $\varepsilon$ and if $\nu$ is locally symmetric near zero, that is, $\nu(dy) = (1+\xi(y))\,\nu_0(dy)$, where $\nu_0$ is a symmetric measure and $\xi(y)=O(y)$ as $y\to 0$.

Remark 4. The convergence rates given in Theorem 2 are obtained under the assumption that equation (4) is solved explicitly. If a discretization scheme is used for this equation as well, the total error will be bounded from below by the error of this scheme. However, this constraint is usually not very important, since it is easy to construct a discretization scheme for (4) with a high convergence order; for example, the classical Runge-Kutta scheme has convergence order 4.

Corollary 5 (Worst-case convergence rates). Let $\chi_\varepsilon(x) = 1_{|x|>\varepsilon}$. Then, under the conditions of part (i) of Theorem 2,
\[ \bigl|Ef(\hat X_1) - Ef(X_1)\bigr| = o\bigl(\lambda_\varepsilon^{-1/2}\bigr), \]
and under the conditions of part (ii) of Theorem 2,
\[ \bigl|Ef(\hat X_1) - Ef(X_1)\bigr| = o\bigl(\lambda_\varepsilon^{-1}\bigr). \]

Proof. These worst-case bounds follow from the following estimates. First, for every Lévy process, $\sigma_\varepsilon\to 0$ as $\varepsilon\to 0$. By the dominated convergence theorem, $\varepsilon^2\lambda_\varepsilon\to 0$ as $\varepsilon\to 0$. This implies
\[ \lambda_\varepsilon^{1/2}\int_{|y|\le\varepsilon}|y|^3\,\nu(dy) \le \bigl(\varepsilon^2\lambda_\varepsilon\bigr)^{1/2}\sigma_\varepsilon^2 \to 0, \]
and similarly
\[ \lambda_\varepsilon\int_{|y|\le\varepsilon}|y|^4\,\nu(dy) \le \varepsilon^2\lambda_\varepsilon\,\sigma_\varepsilon^2 \to 0. \]

Example 6 (Stable-like behaviour). Once again, let $\chi_\varepsilon(x)=1_{|x|>\varepsilon}$, and assume that the Lévy measure has an $\alpha$-stable-like behaviour near zero, meaning that $\nu$ has a density $\nu(z)$ satisfying
\[ \nu(z) = \frac{g(z)}{|z|^{1+\alpha}}, \]
where $g$ has finite nonzero right and left limits at zero, as is the case, for example, for the tempered stable (CGMY) process [5]. Then
\[ \sigma_\varepsilon^2 = O(\varepsilon^{2-\alpha}), \quad \lambda_\varepsilon = O(\varepsilon^{-\alpha}), \quad \int_{|y|\le\varepsilon}|y|^3\,\nu(dy) = O(\varepsilon^{3-\alpha}), \quad \int_{|y|\le\varepsilon}|y|^4\,\nu(dy) = O(\varepsilon^{4-\alpha}), \]
so that in general, for $\alpha\in(0,2)$,
\[ \bigl|Ef(\hat X_1) - Ef(X_1)\bigr| = O\bigl(\lambda_\varepsilon^{-(3-\alpha)/\alpha}\bigr), \]
and if the Lévy measure is locally symmetric near zero and $\alpha\in(0,2)$,
\[ \bigl|Ef(\hat X_1) - Ef(X_1)\bigr| = O\bigl(\lambda_\varepsilon^{-(4-\alpha)/\alpha}\bigr). \]

Example 7 (Simulation using series representations). In this example we explain why it can be useful to make choices of truncation other than $\chi_\varepsilon(x)=1_{|x|>\varepsilon}$. The gamma process has Lévy density
\[ \nu(z) = \frac{c\,e^{-\lambda z}}{z}\,1_{z>0}. \]
If one uses the truncation function $\chi_\varepsilon(x)=1_{x>\varepsilon}$, one needs to simulate random variables with law
\[ \frac{c\,e^{-\lambda z}}{z\,\nu\bigl((\varepsilon,\infty)\bigr)}\,1_{z>\varepsilon}, \]
which may be costly. Instead, one can use one of the many series representations for the gamma process [6]. Perhaps the most convenient one is
\[ X_t = \frac{1}{\lambda}\sum_{i=1}^{\infty} e^{-\Gamma_i/c}\,V_i\,1_{U_i\le t}, \]
where $(\Gamma_i)$ is the sequence of jump times of a standard Poisson process, $(U_i)$ is an independent sequence of independent random variables uniformly distributed on $[0,1]$, and $(V_i)$ is an independent sequence of independent standard exponential random variables. For every $\tau>0$, the truncated sum
\[ X^\tau_t = \frac{1}{\lambda}\sum_{i:\,\Gamma_i<\tau} e^{-\Gamma_i/c}\,V_i\,1_{U_i\le t} \]
defines a compound Poisson process with Lévy density
\[ \nu_\tau(x) = \frac{c}{x}\,e^{-\lambda x}\bigl(1 - e^{-\lambda x(e^{\tau/c}-1)}\bigr)\,1_{x>0}, \]
and therefore this series representation corresponds to $\chi_\varepsilon(x) = 1 - e^{-\lambda x(e^{\tau/c}-1)}$, where $\varepsilon$ can be linked to $\tau$, for example through $\lambda(e^{\tau/c}-1) = 1/\varepsilon$.
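A small sketch of this truncated series representation; the grid, variable names and parameter values are our own illustration, not the paper's.

```python
import numpy as np

def gamma_process_series(c, lam, tau, n_paths=1, rng=None):
    """Approximate a gamma process on [0,1] by the truncated series of Example 7.

    Only the terms with Gamma_i < tau are kept; this is the compound Poisson
    approximation whose Levy density corresponds to chi(x) = 1 - exp(-lam*x*(exp(tau/c)-1)).
    Returns the time grid and the paths evaluated on it.
    """
    rng = rng or np.random.default_rng()
    grid = np.linspace(0.0, 1.0, 101)
    paths = np.zeros((n_paths, grid.size))
    for p in range(n_paths):
        G = 0.0
        while True:
            G += rng.exponential(1.0)      # Gamma_i: jump times of a rate-1 Poisson process
            if G >= tau:
                break
            V = rng.exponential(1.0)       # standard exponential mark V_i
            U = rng.random()               # uniform jump location U_i in [0,1]
            paths[p] += (np.exp(-G / c) * V / lam) * (grid >= U)
    return grid, paths

# Example usage: grid, paths = gamma_process_series(c=5.0, lam=10.0, tau=8.0, n_paths=3)
```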

Theorem 2 will be proved after a series of lemmas.

Lemma 8 (Bounds on moments of $X$ and $\hat X$). Assume that $\int|z|^p\,\nu(dz)<\infty$ for some $p\ge 2$, that $h\in C^1(\mathbb{R})$, and that $|h(x)|\le K(1+|x|)$ and $|h'(x)|\le K$ for some $K<\infty$. Then there exists a constant $C>0$ (which may depend on $p$ but not on $\varepsilon$) such that
\[ E\sup_{s\le 1}|X_s|^p \le C(1+|x|^p), \tag{10} \]
\[ E\sup_{s\le 1}|\hat X_s|^p \le C(1+|x|^p). \tag{11} \]

Proof. We shall concentrate on the bound (11); the other one follows by letting $\varepsilon$ go to zero. We have
\[ \hat X_t = x + \int_0^t h(\hat X_{s-})\,\bar\gamma^\varepsilon_s\,ds + \int_0^t h(\hat X_{s-})\,d\hat Z^\varepsilon_s, \]
with
\[ \bar\gamma^\varepsilon_t = \gamma + \int_{|z|>1} z\,\nu(dz) + \tfrac{1}{2}\sigma_\varepsilon^2\bigl\{h'(\hat X_t) - h'(\hat X_{\eta(t)})\bigr\}, \qquad \hat Z^\varepsilon_t = \sigma_\varepsilon W_t + \int_0^t\int_{\mathbb{R}} z\,\hat N^\varepsilon(dz,ds). \]
Observe that $\bar\gamma^\varepsilon_t$ is bounded uniformly in $t$ and in $\varepsilon$. Using a predictable version of the Burkholder-Davis-Gundy inequality [6, Lemma 2.1], we then obtain, for $t\le 1$ and some constant $C<\infty$ which may change from line to line,
\[ E\sup_{s\le t}|\hat X_s|^p \le C\,E\Bigl[|x|^p + \int_0^t |h(\hat X_{s-})|^p|\bar\gamma^\varepsilon_s|^p\,ds + \Bigl(\int_0^t h^2(\hat X_{s-})\Bigl(\sigma_\varepsilon^2 + \int z^2\chi_\varepsilon\,\nu(dz)\Bigr)ds\Bigr)^{p/2} + \int_0^t |h(\hat X_{s-})|^p\,ds\int|z|^p\chi_\varepsilon\,\nu(dz)\Bigr] \]
\[ \le C\,E\Bigl[|x|^p + \int_0^t |h(\hat X_{s-})|^p\,ds\Bigr] \le C\,E\Bigl[1 + |x|^p + \int_0^t |\hat X_s|^p\,ds\Bigr]. \]
The bound (11) now follows from Gronwall's inequality.

Lemma 9 (Derivatives of the flow). Let $p\ge 2$ and, for an integer $n$, assume that $\int|z|^{np}\,\nu(dz)<\infty$, $h\in C^n(\mathbb{R})$, $|h(x)|\le K(1+|x|)$ and $|h^{(k)}(x)|\le K$ for some $K<\infty$ and all $k$ with $1\le k\le n$. Then
\[ E\Bigl|\frac{\partial^k X^{(t,x)}_T}{\partial x^k}\Bigr|^p < \infty \]
for all $k$ with $1\le k\le n$.

Proof. See the proof of Lemma 4.2 in [5].

Lemma 10. Let $u(t,x) := E^{(t,x)}f(X_T)$.
(i) Assume (H$_n$) with $n\ge 2$. Then $u\in C^{1,n}([0,T]\times\mathbb{R})$, the derivatives $\frac{\partial^k u}{\partial x^k}$ are uniformly bounded for $0\le k\le n$, and $u$ is a solution of the equation
\[ \frac{\partial u}{\partial t}(t,x) + \gamma\,\frac{\partial u}{\partial x}(t,x)\,h(x) + \int_{|y|\le 1}\Bigl(u\bigl(t,x+h(x)y\bigr) - u(t,x) - \frac{\partial u}{\partial x}(t,x)\,h(x)\,y\Bigr)\nu(dy) \]
\[ \qquad + \int_{|y|>1}\Bigl(u\bigl(t,x+h(x)y\bigr) - u(t,x)\Bigr)\nu(dy) = 0, \qquad u(T,x)=f(x). \tag{12} \]
(ii) Assume (H$'_n$) with $n\ge 2$. Then $u\in C^{1,n}([0,T]\times\mathbb{R})$, $u$ is a solution of equation (12), and there exist $C<\infty$ and $p>0$ with
\[ \Bigl|\frac{\partial^k u(t,x)}{\partial x^k}\Bigr| \le C(1+|x|^p) \]
for all $t\in[0,T]$, $x\in\mathbb{R}$ and $0\le k\le n$.

Proof. The derivative with respect to $x$ satisfies
\[ \frac{\partial u(t,x)}{\partial x} = E\Bigl[f'\bigl(X^{(t,x)}_T\bigr)\,\frac{\partial X^{(t,x)}_T}{\partial x}\Bigr]. \]
The interchange of the derivative and the expectation is justified using Lemma 9. The boundedness under (H$_n$), or the polynomial growth under (H$'_n$), then follows from Lemmas 8 and 9. The other derivatives with respect to $x$ are obtained by successive differentiation under the expectation. The derivative with respect to $t$ is obtained from Itô's formula applied to $f(X^{(t,x)}_T)$. In fact, note that since $Z$ is a Lévy process and $h$ does not depend on $t$, $Ef(X^{(t,x)}_T) = Ef(X^{(0,x)}_{T-t})$, and hence it is sufficient to study the derivative of $Ef(X^{(0,x)}_t)$ with respect to $t$. The Itô formula yields
\[ Ef(X_t) = f(x) + E\int_0^t f'(X_{s-})h(X_{s-})\,dZ_s + E\int_0^t\int\bigl\{f(X_{s-}+h(X_{s-})z) - f(X_{s-}) - f'(X_{s-})h(X_{s-})z\bigr\}\,N(dz,ds). \]
Denoting by $\bar Z$ the martingale part of $Z$ and by $\bar\gamma := \gamma + \int_{|z|>1}z\,\nu(dz)$ the residual drift, and using Lemma 8, we get
\[ Ef(X_t) = f(x) + \int_0^t ds\,\bar\gamma\,E\bigl[f'(X_s)h(X_s)\bigr] + \int_0^t ds\,E\int\bigl\{f(X_s+h(X_s)z) - f(X_s) - f'(X_s)h(X_s)z\bigr\}\,\nu(dz), \]

and therefore
\[ \frac{\partial}{\partial t}Ef(X_t) = \bar\gamma\,E\bigl[f'(X_t)h(X_t)\bigr] + E\int\bigl\{f(X_t+h(X_t)z) - f(X_t) - f'(X_t)h(X_t)z\bigr\}\,\nu(dz) \]
\[ \qquad = \bar\gamma\,E\bigl[f'(X_t)h(X_t)\bigr] + E\int\nu(dz)\int_0^1 d\theta\,(1-\theta)\,h^2(X_t)\,z^2\,\frac{\partial^2 f}{\partial x^2}\bigl(X_t+\theta z h(X_t)\bigr). \]
Now, once again, Lemma 8 allows us to prove the finiteness of this expression. Finally, equation (12) is a consequence of Itô's formula applied, this time, to $u(t,X_t)$.

Lemma 11. Let $f\colon[0,\infty)\to[0,\infty)$ be an increasing function. Then
\[ E\int_0^1 f\bigl(t-\eta(t)\bigr)\,dt \le Ef(\tau), \]
where $\tau$ is an exponential random variable with intensity $\lambda_\varepsilon$. In particular,
\[ E\int_0^1 \bigl(t-\eta(t)\bigr)^{p+1}\,dt \le \frac{(p+1)!}{\lambda_\varepsilon^{p+1}}. \]

Proof. Let $k_1 = \sup\{k : T^\varepsilon_k < 1\}$. Then
\[ E\int_0^1 f\bigl(t-\eta(t)\bigr)\,dt = E\Bigl[\sum_{i=0}^{k_1-1}\int_{T^\varepsilon_i}^{T^\varepsilon_{i+1}} f(t-T^\varepsilon_i)\,dt + \int_{T^\varepsilon_{k_1}}^1 f(t-T^\varepsilon_{k_1})\,dt\Bigr] \]
\[ = E\Bigl[\sum_{i=0}^{k_1-1}\int_{T^\varepsilon_i}^{T^\varepsilon_{i+1}} f(T^\varepsilon_{i+1}-t)\,dt + \int_{T^\varepsilon_{k_1}}^1 f(1-t)\,dt\Bigr] \]
\[ \le E\Bigl[\sum_{i=0}^{k_1-1}\int_{T^\varepsilon_i}^{T^\varepsilon_{i+1}} f\bigl(\inf\{T^\varepsilon_j : T^\varepsilon_j > t\}-t\bigr)\,dt + \int_{T^\varepsilon_{k_1}}^1 f(T^\varepsilon_{k_1+1}-t)\,dt\Bigr] \]
\[ = E\int_0^1 f\bigl(\inf\{T^\varepsilon_j : T^\varepsilon_j > t\}-t\bigr)\,dt = Ef(\tau), \]
where the last equality holds because, for every fixed $t$, the waiting time until the next jump of a Poisson process is exponential with intensity $\lambda_\varepsilon$. The second statement of the lemma is a direct consequence of the first one.
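The bound of Lemma 11 is easy to check by simulation. The following quick Monte Carlo sanity check is our own illustration (arbitrary intensity, and $f(x)=x^2$, for which $Ef(\tau)=2/\lambda_\varepsilon^2$).

```python
import numpy as np

rng = np.random.default_rng(0)
lam, f = 5.0, lambda x: x ** 2
n_paths, n_grid = 20000, 400
t = (np.arange(n_grid) + 0.5) / n_grid          # midpoint grid on [0, 1]

lhs = 0.0
for _ in range(n_paths):
    # jump times of a Poisson process with intensity lam; eta(t) = 0 before the first jump
    jumps = np.cumsum(rng.exponential(1.0 / lam, size=50))
    eta = np.zeros_like(t)
    for T in jumps[jumps < 1.0]:
        eta[t >= T] = T
    lhs += np.mean(f(t - eta)) / n_paths        # approximates E int_0^1 f(t - eta(t)) dt

rhs = np.mean(f(rng.exponential(1.0 / lam, size=n_paths)))   # E f(tau), tau ~ Exp(lam)
print(lhs, rhs, lhs <= rhs)                     # Lemma 11 predicts lhs <= rhs
```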

Proof of Theorem 2. From Itô's formula and Lemmas 8 and 10,
\[ Ef(\hat X_1) - Ef(X_1) = E\,u(1,\hat X_1) - u(0,X_0) \]
\[ = \int_0^1 dt\,E\Bigl[\frac{1}{2}\frac{\partial^2 u(t,\hat X_t)}{\partial x^2}\,\sigma_\varepsilon^2\,h^2(\hat X_t) - \int\bar\chi_\varepsilon(y)\,\nu(dy)\Bigl(u\bigl(t,\hat X_t + h(\hat X_t)y\bigr) - u(t,\hat X_t) - \frac{\partial u(t,\hat X_t)}{\partial x}\,h(\hat X_t)\,y\Bigr)\Bigr] \tag{13} \]
\[ \quad + \int_0^1 dt\,E\Bigl[\frac{1}{2}\sigma_\varepsilon^2\,\frac{\partial u(t,\hat X_t)}{\partial x}\,h(\hat X_t)\bigl(h'(\hat X_t) - h'(\hat X_{\eta(t)})\bigr)\Bigr]. \tag{14} \]

Denote the expectation in (13) by $A_t$ and the one in (14) by $B_t$. From Lemmas 10 and 8 we then have, under the hypothesis (H$_3$),
\[ |A_t| \le E\int \frac{|y|^3|h(\hat X_t)|^3}{2}\,\bar\chi_\varepsilon(y)\int_0^1 (1-\theta)^2\,\Bigl|\frac{\partial^3 u\bigl(t,\hat X_t+\theta y h(\hat X_t)\bigr)}{\partial x^3}\Bigr|\,d\theta\,\nu(dy) \]
\[ \le C\,E\bigl[1 + |h(\hat X_t)|^3\bigr]\int |y|^3\,\bar\chi_\varepsilon(y)\,\nu(dy) \le C(1+|x|^3)\int |y|^3\,\bar\chi_\varepsilon(y)\,\nu(dy). \]
If, instead, the polynomial growth condition (H$'_3$) is satisfied, then for some $p>0$,
\[ |A_t| \le C\,E\int |y|^3\bigl(1+|h(\hat X_t)|^3\bigr)\bigl(1+|\hat X_t|^p + |y\,h(\hat X_t)|^p\bigr)\,\bar\chi_\varepsilon(y)\,\nu(dy), \]
and once again we use Lemma 8 together with assumption (A), because the Lévy measure is now assumed integrable against any power of $y$.

Under the conditions of part (ii) and the hypothesis (H$_4$),
\[ |A_t| \le E\Bigl|\frac{\partial^3 u(t,\hat X_t)}{\partial x^3}\Bigr|\,|h(\hat X_t)|^3\,\Bigl|\int y^3\,\bar\chi_\varepsilon(y)\,\nu(dy)\Bigr| + E\int \frac{|y|^4|h(\hat X_t)|^4}{24}\int_0^1(1-\theta)^3\Bigl|\frac{\partial^4 u\bigl(t,\hat X_t+\theta y h(\hat X_t)\bigr)}{\partial x^4}\Bigr|\,d\theta\,\bar\chi_\varepsilon(y)\,\nu(dy) \]
\[ \le C(1+|x|^4)\int |y|^4\,\bar\chi_\varepsilon(y)\,(\nu+\nu')(dy). \]
The case of (H$'_4$) is treated as above.

To analyze the term $B_t$, define
\[ H(t,x) = \frac{\partial u(t,x)}{\partial x}\,h(x) \]
and assume, to fix the notation, that (H$_3$) is satisfied.

Then, once again by Lemmas 10 and 8, the Taylor formula and the Cauchy-Schwarz inequality,
\[ B_t \le \tfrac{1}{2}\sigma_\varepsilon^2\,E\bigl[\bigl(H(t,\hat X_t)-H(t,\hat X_{\eta(t)})\bigr)\bigl(h'(\hat X_t)-h'(\hat X_{\eta(t)})\bigr)\bigr] + \tfrac{1}{2}\sigma_\varepsilon^2\,E\bigl[H(t,\hat X_{\eta(t)})\bigl(h'(\hat X_t)-h'(\hat X_{\eta(t)})\bigr)\bigr] \]
\[ \le \tfrac{1}{2}\sigma_\varepsilon^2\,E\Bigl[(\hat X_t-\hat X_{\eta(t)})^2\int_0^1 d\theta\,\Bigl|\frac{\partial H}{\partial x}\bigl(t,\hat X_{\eta(t)}+\theta(\hat X_t-\hat X_{\eta(t)})\bigr)\Bigr|\int_0^1 d\theta'\,\bigl|h''\bigl(\hat X_{\eta(t)}+\theta'(\hat X_t-\hat X_{\eta(t)})\bigr)\bigr|\Bigr] \]
\[ \qquad + \tfrac{1}{2}\sigma_\varepsilon^2\,E\Bigl[H(t,\hat X_{\eta(t)})\,E\bigl[h'(\hat X_t)-h'(\hat X_{\eta(t)})\mid\mathcal F_{\eta(t)}\bigr]\Bigr] \]
\[ \le C\sigma_\varepsilon^2\,E\bigl[(\hat X_t-\hat X_{\eta(t)})^2\bigl(1+|\hat X_t|+|\hat X_{\eta(t)}|\bigr)\bigr] + \tfrac{1}{2}\sigma_\varepsilon^2\,E\Bigl[H(t,\hat X_{\eta(t)})\,h''(\hat X_{\eta(t)})\,E\bigl[\hat X_t-\hat X_{\eta(t)}\mid\mathcal F_{\eta(t)}\bigr]\Bigr] \]
\[ \qquad + \tfrac{1}{2}\sigma_\varepsilon^2\,E\Bigl[\bigl|H(t,\hat X_{\eta(t)})\bigr|\,(\hat X_t-\hat X_{\eta(t)})^2\int_0^1 d\theta\,(1-\theta)\,\bigl|h^{(3)}\bigl(\hat X_{\eta(t)}+\theta(\hat X_t-\hat X_{\eta(t)})\bigr)\bigr|\Bigr] \]
\[ \le C\sigma_\varepsilon^2\,E\bigl[(\hat X_t-\hat X_{\eta(t)})^4\bigr]^{1/2} + \tfrac{1}{2}\sigma_\varepsilon^2\,E\Bigl[H(t,\hat X_{\eta(t)})\,h''(\hat X_{\eta(t)})\,E\bigl[\hat X_t-\hat X_{\eta(t)}\mid\mathcal F_{\eta(t)}\bigr]\Bigr]. \tag{15} \]
Note that
\[ \hat X_t - \hat X_{\eta(t)} = \int_{\eta(t)}^t h(\hat X_s)\Bigl\{\sigma_\varepsilon\,dW_s + \gamma_\varepsilon\,ds + \tfrac{1}{2}\bigl(h'(\hat X_s)-h'(\hat X_{\eta_s})\bigr)\sigma_\varepsilon^2\,ds\Bigr\}, \]
\[ E\bigl[\hat X_t - \hat X_{\eta(t)}\mid\mathcal F_{\eta(t)}\bigr] = E\Bigl[\int_{\eta(t)}^t h(\hat X_s)\Bigl\{\gamma_\varepsilon\,ds + \tfrac{1}{2}\bigl(h'(\hat X_s)-h'(\hat X_{\eta_s})\bigr)\sigma_\varepsilon^2\,ds\Bigr\}\Bigm|\mathcal F_{\eta(t)}\Bigr]. \]
The Burkholder-Davis-Gundy inequality, the Cauchy-Schwarz inequality and Lemma 8 then give, under the hypothesis (H$_3$),
\[ E(\hat X_t-\hat X_{\eta(t)})^4 \le C\,E\Bigl|\int_{\eta(t)}^t h(\hat X_s)\Bigl\{\gamma_\varepsilon + \tfrac{1}{2}\bigl(h'(\hat X_s)-h'(\hat X_{\eta_s})\bigr)\sigma_\varepsilon^2\Bigr\}\,ds\Bigr|^4 + C\sigma_\varepsilon^4\,E\Bigl(\int_{\eta(t)}^t h^2(\hat X_s)\,ds\Bigr)^2 \]
\[ \le C\bigl(|\gamma_\varepsilon|+\sigma_\varepsilon^2\bigr)^4\,E\Bigl[\bigl(t-\eta(t)\bigr)^4\bigl(1+\sup_s|\hat X_s|\bigr)^4\Bigr] + C\sigma_\varepsilon^4\,E\Bigl[\bigl(t-\eta(t)\bigr)^2\bigl(1+\sup_s|\hat X_s|\bigr)^4\Bigr] \]
\[ \le C\bigl(|\gamma_\varepsilon|+\sigma_\varepsilon^2\bigr)^4\,E\bigl[(t-\eta(t))^8\bigr]^{1/2} + C\sigma_\varepsilon^4\,E\bigl[(t-\eta(t))^4\bigr]^{1/2}. \]
Similarly, for the second term in (15), we get
\[ \tfrac{1}{2}\sigma_\varepsilon^2\,E\Bigl[H(t,\hat X_{\eta(t)})\,h''(\hat X_{\eta(t)})\,E\bigl[\hat X_t-\hat X_{\eta(t)}\mid\mathcal F_{\eta(t)}\bigr]\Bigr] \le C\sigma_\varepsilon^2\bigl(|\gamma_\varepsilon|+\sigma_\varepsilon^2\bigr)\,E\bigl[(t-\eta(t))^2\bigr]^{1/2}. \]

Assembling together the estimates for the two terms in (15),
\[ B_t \le C\sigma_\varepsilon^2\Bigl(\bigl(|\gamma_\varepsilon|+\sigma_\varepsilon^2\bigr)^2 E\bigl[(t-\eta(t))^8\bigr]^{1/4} + \sigma_\varepsilon^2\,E\bigl[(t-\eta(t))^4\bigr]^{1/4} + \bigl(|\gamma_\varepsilon|+\sigma_\varepsilon^2\bigr)\,E\bigl[(t-\eta(t))^2\bigr]^{1/2}\Bigr). \]
Using the Jensen inequality and Lemma 11, this is further reduced to
\[ \int_0^1 B_t\,dt \le C\sigma_\varepsilon^2\Bigl(\frac{(|\gamma_\varepsilon|+\sigma_\varepsilon^2)^2}{\lambda_\varepsilon^2} + \frac{\sigma_\varepsilon^2}{\lambda_\varepsilon} + \frac{|\gamma_\varepsilon|+\sigma_\varepsilon^2}{\lambda_\varepsilon}\Bigr). \]
From the Cauchy-Schwarz inequality we get
\[ \Bigl(\int_{|y|\le 1}|y|\,\chi_\varepsilon(y)\,\nu(dy)\Bigr)^2 \le \lambda_\varepsilon\int_{|y|\le 1}|y|^2\,\chi_\varepsilon(y)\,\nu(dy) \le C\lambda_\varepsilon, \]
which implies that $|\gamma_\varepsilon| \le C\sqrt{\lambda_\varepsilon}$ and finally
\[ \int_0^1 B_t\,dt \le C\,\frac{\sigma_\varepsilon^2\bigl(\sigma_\varepsilon^2+|\gamma_\varepsilon|\bigr)}{\lambda_\varepsilon}. \]
Assembling these estimates with the ones for $A_t$, we complete the proof under the assumptions (H$_3$) (or (H$_4$)). Under the assumptions (H$'_3$) or (H$'_4$) the proof is done in a similar fashion.

3 Approximating multidimensional SDEs using expansions

In this section we propose an alternative approximation scheme, which yields rates similar to the ones obtained in Section 2 but has the advantage of being applicable in the multidimensional case. On the other hand, it is a little more difficult to implement. As before, we start by replacing the small jumps of $Z$ with a suitable Brownian motion, yielding the SDE
\[ d\tilde X_t = h(\tilde X_{t-})\bigl\{\gamma_\varepsilon\,dt + dW^\varepsilon_t + dZ^\varepsilon_t\bigr\}, \tag{16} \]
where $W^\varepsilon$ is a $d$-dimensional Brownian motion with covariance matrix $\Sigma_\varepsilon$. This process can also be written as
\[ \tilde X(t) = \tilde X(\eta_t) + \int_{\eta_t}^t h\bigl(\tilde X(s)\bigr)\,dW^\varepsilon(s) + \int_{\eta_t}^t h\bigl(\tilde X(s)\bigr)\,\gamma_\varepsilon\,ds, \qquad \tilde X(T^\varepsilon_i) = \tilde X(T^\varepsilon_i-) + h\bigl(\tilde X(T^\varepsilon_i-)\bigr)\,\Delta Z(T^\varepsilon_i). \]
The idea now is to expand the solution of (16) between the jumps of $Z^\varepsilon$ around the solution of the deterministic dynamical system (4), treating the stochastic term as a small random perturbation (see [8, Chapter 2]). Assume that the coefficient $h$ is Lipschitz and consider a family of processes $(Y^\alpha)_{\alpha\ge 0}$ defined by
\[ Y^\alpha(t) = \tilde X(\eta_t) + \alpha\int_{\eta_t}^t h\bigl(Y^\alpha(s)\bigr)\,dW^\varepsilon(s) + \int_{\eta_t}^t h\bigl(Y^\alpha(s)\bigr)\,\gamma_\varepsilon\,ds. \]

Our idea is to replace the process $\tilde X := Y^1$ with its first-order Taylor approximation in $\alpha$:
\[ \tilde X(t) \approx Y^0(t) + \frac{\partial Y^\alpha(t)}{\partial\alpha}\Big|_{\alpha=0}. \]
Therefore, the new approximation $\bar X$ is defined by
\[ \bar X(t) = Y^0(t) + Y^1(t), \quad t>\eta_t, \qquad \bar X(T^\varepsilon_i) = \bar X(T^\varepsilon_i-) + h\bigl(\bar X(T^\varepsilon_i-)\bigr)\,\Delta Z(T^\varepsilon_i), \]
\[ Y^0(t) = \bar X(\eta_t) + \int_{\eta_t}^t h\bigl(Y^0(s)\bigr)\,\gamma_\varepsilon\,ds, \qquad Y^1(t) = \int_{\eta_t}^t \frac{\partial h}{\partial x_i}\bigl(Y^0(s)\bigr)\,Y^1_i(s)\,\gamma_\varepsilon\,ds + \int_{\eta_t}^t h\bigl(Y^0(s)\bigr)\,dW^\varepsilon(s), \tag{17} \]
where we use the Einstein convention for summation over repeated indices. Conditionally on the jump times $(T^\varepsilon_i)_{i\ge 1}$, the random vector $Y^1(t)$ is Gaussian with mean zero. Applying the integration-by-parts formula to $Y^1_i(t)Y^1_j(t)$, it can be shown that its covariance matrix $\Omega(t)$ satisfies the (matrix) linear equation
\[ \Omega(t) = \int_{\eta_t}^t \bigl(\Omega(s)M(s)^{\top} + M(s)\Omega(s) + N(s)\bigr)\,ds, \tag{18} \]
where $M^{\top}$ denotes the transpose of the matrix $M$ and
\[ M_{ij}(t) = \frac{\partial h_{ik}\bigl(Y^0(t)\bigr)}{\partial x_j}\,\gamma_\varepsilon^{k}, \qquad N(t) = h\bigl(Y^0(t)\bigr)\,\Sigma_\varepsilon\,h\bigl(Y^0(t)\bigr)^{\top}. \]
Successive applications of Gronwall's inequality yield the following bounds for $Y^0$ and $\Omega$:
\[ |Y^0(t)| \le \bigl(|Y^0(\eta_t)| + C|\gamma_\varepsilon|(t-\eta_t)\bigr)\,e^{C|\gamma_\varepsilon|(t-\eta_t)}, \]
\[ |\Omega(t)| \le C\,|\Sigma_\varepsilon|\,(t-\eta_t)\bigl(1+|Y^0(\eta_t)|^2\bigr)\,e^{C|\gamma_\varepsilon|(1+|\gamma_\varepsilon|)(t-\eta_t)}. \tag{19} \]
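The following is a sketch of one step of this scheme between two jump times, assuming user-supplied $h$, its Jacobian and the jump sizes. Everything below, including the explicit Euler substepping used in place of the Runge-Kutta integration employed later in Section 4, is our own illustration.

```python
import numpy as np

def multidim_step(x, dt, dZ, h, dh, gamma_eps, Sigma_eps, n_sub=20, rng=None):
    """One inter-jump interval of the Section 3 scheme, followed by the jump update.

    x         : current state, shape (n,);
    dt        : length of the interval until the next jump of Z^eps;
    dZ        : jump of Z^eps at the end of the interval, shape (d,) (zeros if none);
    h(x)      : coefficient, shape (n, d);  dh(x) : its Jacobian, shape (n, d, n);
    gamma_eps : drift, shape (d,);  Sigma_eps : covariance of the small-jump Brownian motion, (d, d).
    Y0 and the covariance Omega of Y1 are integrated with explicit Euler substeps,
    cf. equations (17)-(18); Y1 is then drawn as a centred Gaussian vector.
    """
    rng = rng or np.random.default_rng()
    n = x.shape[0]
    Y0, Omega = x.copy(), np.zeros((n, n))
    step = dt / n_sub
    for _ in range(n_sub):
        M = np.einsum('idj,d->ij', dh(Y0), gamma_eps)    # M_ij = (dh_ik/dx_j) gamma^k
        N = h(Y0) @ Sigma_eps @ h(Y0).T                   # N = h Sigma_eps h^T
        Omega = Omega + step * (Omega @ M.T + M @ Omega + N)
        Y0 = Y0 + step * (h(Y0) @ gamma_eps)
    # Y1 ~ N(0, Omega); a small jitter keeps the Cholesky factorisation well defined.
    L = np.linalg.cholesky(Omega + 1e-12 * np.eye(n))
    Y1 = L @ rng.standard_normal(n)
    x_minus = Y0 + Y1                                     # value just before the jump
    return x_minus + h(x_minus) @ dZ                      # jump update
```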

Lemma 12. Assume $\nu(\mathbb{R}^d)=\infty$. Then
\[ \lim_{\varepsilon\to 0}\frac{|\gamma_\varepsilon|^2}{\lambda_\varepsilon} = 0. \]
Proof. Left to the reader as an exercise.

To prove Theorem 14, we need to generalize Lemmas 8, 9 and 11 to the multidimensional setting. While the generalization of the last two lemmas is straightforward, the first one requires a little work, because it needs to be adapted to the new discretization scheme.

Lemma 13 (Bounds on moments of $\bar X$). Assume $\nu(\mathbb{R}^d)=\infty$, $h\in C^1_b(\mathbb{R}^n)$ and $\int|z|^p\,\nu(dz)<\infty$ for some $p\ge 2$. Then there exists a constant $C>0$ (which may depend on $p$ but not on $\varepsilon$) such that, for all sufficiently small $\varepsilon$,
\[ E\sup_{s\le 1}|\bar X_s|^p \le C(1+|x|^p). \tag{20} \]

Proof. Denote $h_t := h(\bar X_{t-})$ and $\bar h_t := h(Y^0(t))$. The SDE for $\bar X$ can be rewritten as
\[ d\bar X_t = h_t\,d\tilde Z^\varepsilon_t + \bar h_t\,dW^\varepsilon_t + h_t\,\bar\gamma\,dt + (\bar h_t - h_t)\,\gamma_\varepsilon\,dt + \frac{\partial h}{\partial x_i}\bigl(Y^0(t)\bigr)\,Y^1_i(t)\,\gamma_\varepsilon\,dt, \]
where
\[ \tilde Z^\varepsilon_t = \int_0^t\int y\,\hat N^\varepsilon(dy,ds), \qquad \bar\gamma = \gamma + \int_{|y|>1} y\,\nu(dy). \]
By the predictable Burkholder-Davis-Gundy inequality [6, Lemma 2.1], we then have
\[ E\sup_{s\le t}|\bar X_s|^p \le C\,E\Bigl[|x|^p + \int_0^t |h_s|^p\,ds\int|z|^p\chi_\varepsilon\,\nu(dz) + \Bigl(\int_0^t |h_s|^2\int|z|^2\chi_\varepsilon\,\nu(dz)\,ds\Bigr)^{p/2} + \Bigl(\int_0^t |\bar h_s|^2_{\Sigma_\varepsilon}\,ds\Bigr)^{p/2} \]
\[ \qquad + \int_0^t |h_s|^p|\bar\gamma|^p\,ds + |\gamma_\varepsilon|^p\int_0^t |\bar h_s - h_s|^p\,ds + |\gamma_\varepsilon|^p\int_0^t |Y^1(s)|^p\,ds\Bigr] \]
\[ \le C\,E\Bigl[|x|^p + \int_0^t |h_s|^p\,ds + \bigl(1+|\gamma_\varepsilon|^p\bigr)\int_0^t |\bar h_s - h_s|^p\,ds + |\gamma_\varepsilon|^p\int_0^t |Y^1(s)|^p\,ds\Bigr], \]
where the constant $C$ does not depend on $\varepsilon$ and may change from line to line. Since the derivatives of $h$ are bounded, $|\bar h_s - h_s| \le C|Y^1(s)|$, and we obtain
\[ E\sup_{s\le t}|\bar X_s|^p \le C\,E\Bigl[|x|^p + \int_0^t |h_s|^p\,ds + \bigl(1+|\gamma_\varepsilon|^p\bigr)\int_0^t |Y^1(s)|^p\,ds\Bigr]. \tag{21} \]
Using (19), the last term can be estimated as
\[ E\Bigl[\bigl(1+|\gamma_\varepsilon|^p\bigr)\int_0^1 |Y^1(s)|^p\,ds\Bigr] = E\Bigl[\bigl(1+|\gamma_\varepsilon|^p\bigr)\int_0^1 E\bigl[|Y^1(s)|^p\mid\mathcal F_{\eta(s)}\bigr]\,ds\Bigr] \le C\,E\Bigl[\bigl(1+|\gamma_\varepsilon|^p\bigr)\int_0^1 |\Omega(s)|^{p/2}\,ds\Bigr] \]
\[ \le C\,E\Bigl[\bigl(1+|\gamma_\varepsilon|^p\bigr)\int_0^1 \bigl(1+|\bar X_{\eta(s)}|\bigr)^p\,|\Sigma_\varepsilon|^{p/2}\,(s-\eta_s)^{p/2}\,e^{C|\gamma_\varepsilon|(1+|\gamma_\varepsilon|)(s-\eta_s)}\,ds\Bigr] \]
\[ \le C\,E\Bigl[\bigl(1+|\gamma_\varepsilon|^p\bigr)\int_0^1 \bigl(1+|\bar X_{\eta(s)}|\bigr)^p\,|\Sigma_\varepsilon|^{p/2}\,\tau^{p/2}\,e^{C|\gamma_\varepsilon|(1+|\gamma_\varepsilon|)\tau}\,ds\Bigr], \]

where $\tau$ is an independent exponential random variable with parameter $\lambda_\varepsilon$. Due to Lemma 12, for $\varepsilon$ sufficiently small, the expectation with respect to $\tau$ exists, and computing it explicitly we obtain
\[ E\Bigl[\bigl(1+|\gamma_\varepsilon|^p\bigr)\int_0^1 |Y^1(s)|^p\,ds\Bigr] \le C_\varepsilon\,E\int_0^1\bigl(1+|\bar X_{\eta(s)}|^p\bigr)\,ds, \]
where $C_\varepsilon\to 0$ as $\varepsilon\to 0$. Inequality (21) therefore becomes
\[ E\sup_{s\le t}|\bar X_s|^p \le C\,E\Bigl[|x|^p + \int_0^t |h_s|^p\,ds\Bigr] + C_\varepsilon\,t\,\Bigl(1+E\sup_{s\le t}|\bar X_s|^p\Bigr), \]
which implies that, for $\varepsilon$ sufficiently small,
\[ E\sup_{s\le t}|\bar X_s|^p \le C\,E\Bigl[1 + |x|^p + \int_0^t |h_s|^p\,ds\Bigr], \]
and we get (20) by Gronwall's lemma.

Theorem 14. (i) Assume (H$_3$) or (H$'_3$) + (A). Then
\[ \bigl|Ef(\bar X_1) - Ef(X_1)\bigr| \le C\Bigl( \frac{|\Sigma_\varepsilon|}{\lambda_\varepsilon}\bigl(|\Sigma_\varepsilon| + |\gamma_\varepsilon|\bigr) + \int_{\mathbb{R}^d}|y|^3\,\bar\chi_\varepsilon(y)\,\nu(dy) \Bigr). \]
(ii) Assume (H$_4$) or (H$'_4$) + (A), let $\gamma_\varepsilon$ be bounded and suppose that, for some measure $\nu'$,
\[ \int_{\mathbb{R}^d}|y_i\,y_j\,y_k|\,\bar\chi_\varepsilon(y)\,\nu(dy) \le \int_{\mathbb{R}^d}|y|^4\,\bar\chi_\varepsilon(y)\,\nu'(dy) \]
for all $i,j,k$ and all sufficiently small $\varepsilon$. Then
\[ \bigl|Ef(\bar X_1) - Ef(X_1)\bigr| \le C\Bigl( \frac{|\Sigma_\varepsilon|}{\lambda_\varepsilon} + \int_{\mathbb{R}^d}|y|^4\,\bar\chi_\varepsilon(y)\,(\nu+\nu')(dy) \Bigr). \]

Proof. By the Itô formula we have
\[ Ef(\bar X_1) - Ef(X_1) = E\,u(1,\bar X_1) - u(0,X_0) \]
\[ = \int_0^1 dt\,E\Bigl[\frac{\partial u\bigl(t,\bar X(t)\bigr)}{\partial x_i}\Bigl\{h_{ij}\bigl(Y^0(t)\bigr) + \frac{\partial h_{ij}\bigl(Y^0(t)\bigr)}{\partial x_k}\,Y^1_k(t) - h_{ij}\bigl(\bar X(t)\bigr)\Bigr\}\gamma_\varepsilon^{j}\Bigr] \tag{22} \]
\[ \quad + \int_0^1 dt\,E\Bigl[\frac{1}{2}\frac{\partial^2 u\bigl(t,\bar X(t)\bigr)}{\partial x_i\partial x_j}\,h_{ik}\bigl(Y^0(t)\bigr)\,(\Sigma_\varepsilon)_{kl}\,h_{jl}\bigl(Y^0(t)\bigr) \tag{23} \]
\[ \qquad - \int_{\mathbb{R}^d}\Bigl\{u\bigl(t,\bar X(t)+h(\bar X(t))y\bigr) - u\bigl(t,\bar X(t)\bigr) - \frac{\partial u(t,\bar X_t)}{\partial x_i}\,h_{ij}\bigl(\bar X(t)\bigr)\,y_j\Bigr\}\,\bar\chi_\varepsilon(y)\,\nu(dy)\Bigr]. \tag{24} \]

Denote the expectation term in (22) by $A_t$ and the sum of the terms in (23) and (24) by $B_t$. The term $A_t$ satisfies
\[ |A_t| \le C\,E\Bigl[\Bigl|\frac{\partial u\bigl(t,\bar X(t)\bigr)}{\partial x}\Bigr|\,|Y^1(t)|^2\,|\gamma_\varepsilon|\Bigr]. \]
Under the assumption (H$_4$) or (H$_3$), using (19), we have
\[ |A_t| \le C\,E\bigl[|Y^1(t)|^2\bigr]\,|\gamma_\varepsilon| \le C\,E\bigl[|\Omega(t)|\bigr]\,|\gamma_\varepsilon| \le C\,E\bigl[1+|Y^0(\eta_t)|^4\bigr]^{1/2}\,|\Sigma_\varepsilon|\,|\gamma_\varepsilon|\,E\bigl[(t-\eta_t)\,e^{C|\gamma_\varepsilon|(1+|\gamma_\varepsilon|)(t-\eta_t)}\bigr]. \]
Using Lemmas 11 and 13, we then get
\[ \int_0^1 |A_t|\,dt \le C(1+|x|^2)\,\frac{|\Sigma_\varepsilon|\,|\gamma_\varepsilon|}{\lambda_\varepsilon}. \]
Under (H$'_4$) or (H$'_3$) this result can be obtained along the same lines.

Let us now turn to the term $B_t$. It can be rewritten as
\[ B_t = E\int_{\mathbb{R}^d}\Bigl\{u\bigl(t,\bar X(t)+h(\bar X(t))y\bigr) - u\bigl(t,\bar X(t)\bigr) - \frac{\partial u\bigl(t,\bar X(t)\bigr)}{\partial x_i}\,h_{ij}\bigl(\bar X(t)\bigr)\,y_j - \frac{1}{2}\frac{\partial^2 u(t,\bar X_t)}{\partial x_i\partial x_j}\,h_{ik}\bigl(\bar X(t)\bigr)h_{jl}\bigl(\bar X(t)\bigr)\,y_k y_l\Bigr\}\,\bar\chi_\varepsilon(y)\,\nu(dy) \]
\[ \quad + E\Bigl[\frac{1}{2}\frac{\partial^2 u\bigl(t,Y^0(t)\bigr)}{\partial x_i\partial x_j}\,(\Sigma_\varepsilon)_{kl}\Bigl\{h_{ik}\bigl(Y^0(t)\bigr)h_{jl}\bigl(Y^0(t)\bigr) - h_{ik}\bigl(\bar X(t)\bigr)h_{jl}\bigl(\bar X(t)\bigr)\Bigr\}\Bigr] \]
\[ \quad + E\Bigl[\frac{1}{2}\Bigl(\frac{\partial^2 u\bigl(t,\bar X(t)\bigr)}{\partial x_i\partial x_j} - \frac{\partial^2 u\bigl(t,Y^0(t)\bigr)}{\partial x_i\partial x_j}\Bigr)(\Sigma_\varepsilon)_{kl}\Bigl\{h_{ik}\bigl(Y^0(t)\bigr)h_{jl}\bigl(Y^0(t)\bigr) - h_{ik}\bigl(\bar X(t)\bigr)h_{jl}\bigl(\bar X(t)\bigr)\Bigr\}\Bigr]. \]
Denote the first two lines by $B_1(t)$, the third line by $B_2(t)$ and the fourth by $B_3(t)$. Under the hypotheses (H$_4$) or (H$_3$), we get
\[ |B_1(t)| \le E\int_{\mathbb{R}^d}\Bigl|\frac{\partial^3 u\bigl(t,\bar X_t+\theta h(\bar X(t))y\bigr)}{\partial x_i\partial x_j\partial x_k}\Bigr|\,\bigl|h_{il}(\bar X(t))\,h_{jm}(\bar X_t)\,h_{kn}(\bar X(t))\bigr|\,|y_l\,y_m\,y_n|\,\bar\chi_\varepsilon(y)\,\nu(dy) \]
\[ \le C\,E\bigl[|h(\bar X_t)|^3\bigr]\int_{\mathbb{R}^d}|y|^3\,\bar\chi_\varepsilon(y)\,\nu(dy) \le C(1+|x|^3)\int_{\mathbb{R}^d}|y|^3\,\bar\chi_\varepsilon(y)\,\nu(dy). \]
Let $F_{ikjl}(x) := h_{ik}(x)\,h_{jl}(x)$. Using the fact that, conditionally on $\mathcal F_{\eta_t}$, $Y^1$ is a centered Gaussian vector,
\[ |B_2(t)| = \Bigl|E\Bigl[\frac{1}{2}\frac{\partial^2 u\bigl(t,Y^0(t)\bigr)}{\partial x_i\partial x_j}\,(\Sigma_\varepsilon)_{kl}\,E\bigl\{F_{ikjl}(Y^0(t)) - F_{ikjl}(\bar X(t))\mid\mathcal F_{\eta_t}\bigr\}\Bigr]\Bigr| \]
\[ \le E\Bigl|\frac{1}{2}\frac{\partial^2 u\bigl(t,Y^0(t)\bigr)}{\partial x_i\partial x_j}\,(\Sigma_\varepsilon)_{kl}\int_0^1(1-\theta)\,\frac{\partial^2 F_{ikjl}}{\partial x_p\partial x_q}\bigl(Y^0(t)+\theta Y^1(t)\bigr)\,Y^1_p(t)\,Y^1_q(t)\,d\theta\Bigr| \]
\[ \le C\,E\Bigl[|\Sigma_\varepsilon|\,|Y^1(t)|^2\bigl(1+\sup_{s\le 1}|\bar X_s|\bigr)\Bigr]. \]

Using once again Lemmas 11 and 13, we obtain
\[ \int_0^1 |B_2(t)|\,dt \le C(1+|x|)\,\frac{|\Sigma_\varepsilon|^2}{\lambda_\varepsilon}. \]
With a similar reasoning we obtain that $B_3$ also satisfies
\[ \int_0^1 |B_3(t)|\,dt \le C(1+|x|)\,\frac{|\Sigma_\varepsilon|^2}{\lambda_\varepsilon}. \]
The proof of the first part of the theorem under the assumptions (H$'_4$) or (H$'_3$) is done along the same lines.

4 Applications

Stochastic models driven by Lévy processes are gaining increasing popularity in financial mathematics, where they offer a much more realistic description of the underlying risks than the traditional diffusion-based models. In this context, many quantities of interest are given by solutions of stochastic differential equations which cannot be solved explicitly. In this paper we consider two such applications: portfolio management with constraints and the Libor market model with jumps.

One-dimensional SDE: portfolio management with constraints

The value $V_t$ of a portfolio in various common portfolio management strategies can often be expressed as the solution of an SDE. One example is the so-called Constant Proportion Portfolio Insurance (CPPI) strategy, which aims at maintaining the value of the portfolio above some prespecified guaranteed level and consists in investing $m(V_t - B_t)$ into the risky asset $S_t$ and the remaining amount into the risk-free bond, where $B_t$ is the time-$t$ value of the guarantee:
\[ dV_t = m(V_t - B_t)\,\frac{dS_t}{S_t} + \bigl(V_t - m(V_t - B_t)\bigr)\,r\,dt. \]
For simplicity we shall suppose that the guarantee is the value of a zero-coupon bond expiring at time $T$ with nominal amount $N=1$: $B_t = e^{-r(T-t)}N$. This equation can then easily be solved analytically, but additional constraints imposed on the portfolio often make an analytic solution impossible. For instance, leverage is often not allowed, and the portfolio dynamics becomes
\[ dV_t = \min\bigl(V_t,\,m(V_t - B_t)\bigr)\,\frac{dS_t}{S_t} + \bigl(V_t - \min(V_t,\,m(V_t - B_t))\bigr)\,r\,dt. \]
Suppose that the stock $S$ follows an exponential Lévy model:
\[ dZ_t = \frac{dS_t}{S_t} - r\,dt, \tag{25} \]

where $Z$ is a pure-jump Lévy process, and let $V^*_t = V_t/B_t$ be the value of the portfolio computed using $B_t$ as numéraire. Equation (25) can be simplified to
\[ dV^*_t = \min\bigl(V^*_t,\,m(V^*_t - 1)\bigr)\,dZ_t. \tag{26} \]
This equation cannot be solved explicitly, but the corresponding deterministic ODE
\[ dX_t = \min\bigl(X_t,\,m(X_t - 1)\bigr)\,dt \]
admits a simple explicit solution for $m>1$: if $x\ge\frac{m}{m-1}$, then $X_t = x\,e^{t}$. Otherwise, for $1<x<\frac{m}{m-1}$,
\[ X_t = 1 + (x-1)\,e^{mt}, \quad t\le t_0, \qquad X_t = \frac{m}{m-1}\bigl((x-1)(m-1)\bigr)^{1/m}\,e^{t}, \quad t\ge t_0, \]
with $t_0 = -\frac{1}{m}\log\bigl((x-1)(m-1)\bigr)$.

Figure 1 shows the trajectories of the solution of (26) simulated in the variance gamma model. The variance gamma model is a pure-jump Lévy process with Lévy density
\[ \nu(x) = \frac{C\,e^{-\lambda_-|x|}}{|x|}\,1_{x<0} + \frac{C\,e^{-\lambda_+ x}}{x}\,1_{x>0}. \]
The variance gamma process is thus a difference of two independent gamma processes, and we simulate it using the approximation of Example 7. The near-exact solution was produced using the standard Euler scheme with a very large number of discretization steps. With few jumps, the jump-adapted scheme has a greater error than the Euler scheme, because the Euler scheme uses exact simulation of the increments. However, with a larger average number of jumps, the error of the jump-adapted scheme virtually disappears, whereas the Euler scheme still has a significant error even with many more time steps. This illustrates the exponential convergence of the jump-adapted scheme for the variance gamma process. Note that these graphs show the trajectories of the solution and hence illustrate the strong convergence. The effect of the weak approximation of small jumps by Brownian motion is therefore not visible, but even without this additional improvement, the convergence for the variance gamma process is very fast.
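For concreteness, here is how a constrained CPPI trajectory under (26) could be simulated with the jump-adapted scheme, reusing the jump_adapted_path sketch from Section 2. All parameter values and helper names are our own illustration, not the ones behind Figure 1.

```python
import numpy as np
from scipy.special import exp1

# Coefficient of (26): h(x) = min(x, m*(x - 1)) and its derivative away from the kink.
m = 2.0
h = lambda x: min(x, m * (x - 1.0))
h_prime = lambda x: 1.0 if x >= m / (m - 1.0) else m

def theta_cppi(s, x):
    """Explicit flow of dX = min(X, m(X-1)) dt for m > 1 and x > 1 (formulas above),
    valid for positive and negative time arguments s."""
    thresh = m / (m - 1.0)
    if x >= thresh:
        s_cross = np.log(thresh / x)            # (negative) time at which the flow reaches the kink
        if s >= s_cross:
            return x * np.exp(s)
        return 1.0 + (thresh - 1.0) * np.exp(m * (s - s_cross))
    t0 = -np.log((x - 1.0) * (m - 1.0)) / m     # time to reach the kink from below
    if s <= t0:
        return 1.0 + (x - 1.0) * np.exp(m * s)
    return thresh * np.exp(s - t0)

# Illustrative variance gamma truncation: big jumps have density c*exp(-lam|x|)/|x| on |x| > eps.
c, lam_p, lam_m, eps = 5.0, 10.0, 15.0, 1e-3

def vg_jump(rng):
    """One big jump of the truncated variance gamma process (shifted-exponential proposal)."""
    w_p, w_m = c * exp1(lam_p * eps), c * exp1(lam_m * eps)   # tail masses c*E1(lam*eps)
    positive = rng.random() < w_p / (w_p + w_m)
    lam = lam_p if positive else lam_m
    while True:
        x = eps + rng.exponential(1.0 / lam)
        if rng.random() <= eps / x:              # rejection step for the 1/x factor
            return x if positive else -x

# gamma_eps, sigma_eps and lam_eps would come from the truncation (e.g. by quadrature, as sketched
# in the Introduction), after which a path is obtained as:
# times, path = jump_adapted_path(theta_cppi, h, h_prime, gamma_eps, sigma_eps, lam_eps, vg_jump, x0=1.5)
```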

Figure 1: CPPI portfolio simulation with $V_0=1.5$ and $m=2$ in the variance gamma model (no diffusion component added). Left: jump-adapted scheme (with the near-exact solution for comparison). Right: standard Euler scheme (with the near-exact solution for comparison).

Multidimensional SDE: Libor market model with jumps

One important example of a non-trivial multidimensional SDE arises in the Libor market model, which describes the joint arbitrage-free dynamics of a set of forward interest rates. Libor market models with jumps were considered, among others, by Jamshidian, Glasserman and Kou [9] and Eberlein and Özkan [7]. Let $T_i = T_1 + (i-1)\delta$, $i=1,\dots,n+1$, be a set of dates called tenor dates. The Libor rate $L^i_t$ is the forward interest rate, defined at date $t$, for the period $[T_i, T_{i+1}]$. If $B_t(T)$ is the price at time $t$ of a zero-coupon bond, that is, a bond which pays one unit to its holder at date $T$, the Libor rates can be computed via
\[ L^i_t = \frac{1}{\delta}\Bigl(\frac{B_t(T_i)}{B_t(T_{i+1})} - 1\Bigr). \]
A Libor market model (LMM) is a model describing an arbitrage-free dynamics of a set of Libor rates. In this example, we shall use a simplified version of the general semimartingale LMM given in [ ], supposing that all Libor rates are driven by a $d$-dimensional pure jump Lévy process. In this case, following [ ], an arbitrage-free dynamics of $L^1_t,\dots,L^n_t$ can be constructed via the multi-dimensional SDE
\[ \frac{dL^i_t}{L^i_{t-}} = \sigma^i(t)\cdot dZ_t - \int_{\mathbb{R}^d}\sigma^i(t)\cdot z\,\Bigl\{\prod_{j=i+1}^{n}\Bigl(1 + \frac{\delta L^j_{t-}\,\sigma^j(t)\cdot z}{1+\delta L^j_{t-}}\Bigr) - 1\Bigr\}\,\nu(dz)\,dt. \tag{27} \]
Here $Z$ is a $d$-dimensional martingale pure jump Lévy process with Lévy measure $\nu$, and the $\sigma^i(t)$ are $d$-dimensional deterministic volatility functions. Equation (27) gives the Libor dynamics under the so-called terminal measure, which corresponds to using the last zero-coupon bond $B_t(T_{n+1})$ as numéraire.
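To make (27) concrete, the following sketch evaluates its drift term (the $\nu$-integral above) for given Libor levels by one-dimensional quadrature in the scalar-driver case $d=1$. All names, the integration range and the parameter choices are ours, purely for illustration.

```python
import numpy as np
from scipy.integrate import quad

def lmm_drift(i, L, sigma, delta, nu, z_lo=-1.0, z_hi=1.0):
    """Drift of dL^i/L^i in (27) for d = 1:
    - int sigma_i z [ prod_{j>i}(1 + delta L_j sigma_j z / (1 + delta L_j)) - 1 ] nu(z) dz.

    L     : current Libor rates, shape (n,), zero-based storage of L^1,...,L^n;
    sigma : constant scalar volatilities, shape (n,);
    nu    : Levy density of the driving martingale process;
    [z_lo, z_hi] stands in for the support of nu in this illustration.
    """
    def integrand(z):
        prod = 1.0
        for j in range(i + 1, len(L)):          # indices j > i
            prod *= 1.0 + delta * L[j] * sigma[j] * z / (1.0 + delta * L[j])
        return sigma[i] * z * (prod - 1.0) * nu(z)
    return -quad(integrand, z_lo, z_hi, points=[0.0], limit=200)[0]

# Example with a truncated tempered-stable-like density and a flat initial Libor curve.
n, delta = 5, 1.0
L0 = np.full(n, 0.05)
sig = np.full(n, 1.0)
nu = lambda z: np.exp(-abs(z)) / abs(z) ** 1.5 if z != 0 else 0.0
print([lmm_drift(i, L0, sig, delta, nu) for i in range(n)])
```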

This means that the price at time $t$ of an option with payoff $H = h(L^1_T,\dots,L^n_T)$ at time $T$ is given by
\[ \pi_t(H) = B_t(T_{n+1})\,E\Bigl[\frac{h(L^1_T,\dots,L^n_T)}{B_T(T_{n+1})}\Bigm|\mathcal F_t\Bigr] = B_t(T_{n+1})\,E\Bigl[h(L^1_T,\dots,L^n_T)\prod_{i=1}^{n}\bigl(1+\delta L^i_T\bigr)\Bigm|\mathcal F_t\Bigr] \]
\[ \qquad = \frac{B_t(T_1)}{\prod_{i=1}^{n}\bigl(1+\delta L^i_t\bigr)}\,E\Bigl[h(L^1_T,\dots,L^n_T)\prod_{i=1}^{n}\bigl(1+\delta L^i_T\bigr)\Bigm|\mathcal F_t\Bigr]. \tag{28} \]
The price of any such option can therefore be computed by Monte Carlo using equation (27). Introducing an $(n+1)$-dimensional state process $X_t$ with $X^0_t = t$ and $X^i_t = L^i_t$ for $1\le i\le n$, a $(d+1)$-dimensional driving Lévy process $\bar Z_t = (t, Z_t)$, and an $(n+1)\times(d+1)$-dimensional coefficient $h(x)$ defined by $h_{00}=1$, $h_{0j}=0$ for $j=1,\dots,d$, $h_{i0}=f_i(x)$ and $h_{ij}=\sigma^i_j(x_0)\,x_i$ for $1\le i\le n$, $1\le j\le d$, with
\[ f_i(x) := -x_i\int_{\mathbb{R}^d}\sigma^i(x_0)\cdot z\,\Bigl\{\prod_{j=i+1}^{n}\Bigl(1+\frac{\delta x_j\,\sigma^j(x_0)\cdot z}{1+\delta x_j}\Bigr) - 1\Bigr\}\,\nu(dz), \]
we see that equation (27) takes the homogeneous form $dX_t = h(X_{t-})\,d\bar Z_t$, to which the discretization scheme of Section 3 can be readily applied.

For the purposes of illustration, we simplify the model even further, taking $d=1$ and supposing that the functions $\sigma^i(t)$ are constant: $\sigma^i(t)\equiv 1$. The driving Lévy process $Z$ is supposed to follow the CGMY model with two alternative sets of parameters: $\alpha=0.5$ and $C=0.5$ in Case 1, and $\alpha=1.8$ in Case 2. In both cases, we take $\lambda_+=1$ and $\lambda_-=2$. The set of tenor dates for the Libor market model is $\{5,6,7,8,9,10\}$ years, which corresponds to a stochastic differential equation in dimension 5. The initial values of all forward Libor rates are equal to 5%. This big a value was taken to emphasize the non-linear effects in the simulation.

For the numerical implementation of the scheme of Section 3, we solved the equations (17) and (18) simultaneously using the classical fourth-order Runge-Kutta scheme, with time step equal to $\min(1, T^\varepsilon_{i+1}-T^\varepsilon_i)$. This means that most of the time there is only one step of the Runge-Kutta scheme between two consecutive jump times, and our numerical tests have shown that even in this case the error associated with the Runge-Kutta scheme is negligible compared to the stochastic approximation error.

As a sanity check, we first compute the price of a zero-coupon bond with maturity $T_1$, which corresponds to taking $h\equiv 1$ in equation (28). By construction of the model, if the SDE is solved correctly, we must recover the input price of the zero-coupon bond. Figure 2 shows the ratio of the zero-coupon bond price estimated using the first-order scheme of Section 3 to the input value. For comparison, we also give the value computed using the zero-order approximation.
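A minimal Monte Carlo sketch of this sanity check, assuming a path simulator for the Libor SDE is available (for instance one built from the multidim_step sketch of Section 3). The simulator interface, names and example inputs are ours.

```python
import numpy as np

def zero_coupon_check(simulate_libor_paths, L0, delta, B0_Tnp1, n_paths=10000, seed=0):
    """Estimate B_0(T_1) via formula (28) with h = 1 and compare it to the model input.

    simulate_libor_paths(L0, n_paths, rng) -> array of shape (n_paths, n) with the terminal
    Libor rates L^1_{T_1}, ..., L^n_{T_1} simulated under the terminal measure.
    """
    rng = np.random.default_rng(seed)
    LT = simulate_libor_paths(L0, n_paths, rng)
    weights = np.prod(1.0 + delta * LT, axis=1)      # 1 / B_{T_1}(T_{n+1}), path by path
    price = B0_Tnp1 * weights.mean()                 # pi_0(1) = B_0(T_{n+1}) E[prod(1 + delta L^i_{T_1})]
    target = B0_Tnp1 * np.prod(1.0 + delta * L0)     # input value B_0(T_1)
    return price, target, price / target             # the ratio (cf. Figure 2) should be close to 1

# Example inputs: flat 5% curve, yearly tenor, illustrative terminal bond price:
# price, target, ratio = zero_coupon_check(my_simulator, np.full(5, 0.05), 1.0, np.exp(-0.05 * 10))
```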