Bias in the Estimation of Mean Reversion in Continuous-Time Lévy Processes


Yong Bao (a), Aman Ullah (b), Yun Wang (c), and Jun Yu (d)

(a) Purdue University, IN, USA; (b) University of California, Riverside, CA, USA; (c) University of International Business and Economics, Beijing, China; (d) Singapore Management University, Singapore

March 2015

Abstract: This paper develops the approximate bias of the ordinary least squares estimator of the mean reversion parameter in continuous-time Lévy processes. Several cases are considered, depending on whether the long-run mean is known or unknown and whether the initial condition is fixed or random. The approximate bias is used to construct a bias-corrected estimator. The performance of the approximate bias and the bias-corrected estimator is examined using simulated data.

Keywords: Bias, Mean Reversion Parameter, Lévy Processes
JEL Classification: C10, C22, C58

We sincerely thank the referee for helpful comments. Wang acknowledges the financial support from the National Natural Science Foundation of China (Project No ). Yu acknowledges the financial support from Singapore Ministry of Education Academic Research Fund Tier 2 under grant number MOE2011T.

Corresponding Author Address: School of International Trade and Economics, University of International Business and Economics, Beijing, China.
1 Introduction

There is an extensive literature on using diffusion processes to model the dynamic behavior of financial asset prices, including Black and Scholes (1973), Vasicek (1977), and Cox, Ingersoll, and Ross (1985), among others. Many processes considered in the literature are based on Brownian motion. In recent years, however, strong evidence of jumps in financial variables has been reported. To capture jumps, continuous-time Lévy processes have become increasingly popular, and various Lévy models have been developed in the asset pricing literature; see, for example, Barndorff-Nielsen (1998) and Carr and Wu (2003). In practice, one can only obtain observations at discrete points over a finite time span. Based on discrete-time observations, different methods have been used to estimate continuous-time models. Phillips and Yu (2009) provided an overview of some widely used estimation methods. When the drift function is linear and the process is slowly mean reverting, it is found that there is serious bias in estimating the mean reversion parameter (say κ) by almost all of these methods (Phillips and Yu, 2005). Because the mean reversion parameter has important implications for asset pricing, risk management, and forecasting, its accurate estimation has received considerable attention in the literature. For example, Yu (2012) approximated the bias of the maximum likelihood estimator (MLE) of κ when the long-run mean is known and the initial value is random for the Gaussian Ornstein-Uhlenbeck (OU) process. Tang and Chen (2009) approximated the bias of the MLE of κ when the long-run mean is unknown for the Gaussian OU process and the Cox-Ingersoll-Ross (CIR) model. While the bias in estimating κ has been studied for continuous-time diffusion processes, to the best of our knowledge, nothing has been reported on the analytical bias in continuous-time Lévy processes.
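The bias just described is easy to reproduce in a small Monte Carlo for the Gaussian OU special case. The sketch below uses our own design choices (κ = 1, h = 1/12, T = 50, unknown mean, Gaussian innovations) and shows the LS estimator of the mean reversion parameter biased upward:

```python
import numpy as np

def kappa_ls(x, h):
    """LS estimate of kappa from the fitted AR(1) with intercept:
    kappa_hat = -ln(phi_hat)/h, where phi_hat is the AR(1) slope."""
    y, y1 = x[1:], x[:-1]
    phi_hat = np.linalg.lstsq(np.column_stack([np.ones_like(y1), y1]),
                              y, rcond=None)[0][1]
    return -np.log(phi_hat) / h

kappa, h, n = 1.0, 1 / 12, 600                 # data span T = nh = 50
phi = np.exp(-kappa * h)
sd = np.sqrt((1.0 - phi**2) / (2.0 * kappa))   # exact Gaussian OU transition, sigma = 1
rng = np.random.default_rng(42)

est = []
for _ in range(300):
    x = np.empty(n + 1)
    x[0] = 0.0                                 # start at the long-run mean of 0
    for t in range(1, n + 1):
        x[t] = phi * x[t - 1] + sd * rng.standard_normal()
    est.append(kappa_ls(x, h))

print(np.mean(est) - kappa)  # positive: the mean reversion speed is overestimated
```

The average estimate exceeds the true κ by a magnitude comparable to the sampling standard deviation, which is why bias correction matters in this setting.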
The objective of this paper is to develop the approximate bias of the least squares (LS) estimator of κ under the Lévy measure. The proofs of the results in this paper can be found in Bao et al. (2013).

2 Models and the Bias

A Lévy-driven OU process is

    dx(t) = κ(μ − x(t))dt + σdL(t),  x(0) = x₀,    (2.1)

where L(t), t ≥ 0, is a Lévy process with L(0) = 0 a.s. In the special case when L(t) is a Brownian motion, the process is the Gaussian OU process used by Vasicek (1977) to model interest rate data. When κ > 0, the process is stationary, with μ being the long-run mean and κ capturing the speed of mean reversion. It is well known that the LS estimator of κ is

    κ̂ = −ln(φ̂)/h,    (2.2)

where φ̂ is the LS estimator of the autoregressive coefficient φ from the discretized AR(1) model

    x_{th} = α + φx_{(t−1)h} + ε_{th},    (2.3)
in which α = μ(1 − e^{−κh}), φ = e^{−κh}, ε_{th} = σ ∫_{(t−1)h}^{th} e^{−κ(th−s)} dL(s), h is the sampling interval, and t = 1, …, n, so that the observed data are discretely recorded at (0, h, 2h, …, nh) in the time interval [0, T] with nh = T. By the properties of Lévy processes, the sequence {ε_{th}}_{t=1}^{n} consists of iid random variables. We assume that the moments of ε_{th} exist up to order 4, with variance σ_ε², and skewness and excess kurtosis coefficients γ₁ and γ₂, respectively.¹ We are interested in studying the properties of κ̂ estimated from the discrete sample via φ̂. As expected, the properties of κ̂ depend on how we specify the initial observation x(0) = x₀: it can be fixed at a constant, or it can be random, independent of (ε₁, …, ε_n), such that the time series (x₀, x₁, …, x_n) is stationary. For notational convenience, we drop the subscript h, and throughout, x = (x₁, …, x_n)′, x₋₁ = (x₀, …, x_{n−1})′, ε = (ε₁, …, ε_n)′. For a given φ, f₁ is an n × 1 vector with f_{1,i} = φ^i, f₂ = f₁/φ, C₁ is a lower-triangular matrix with c_{1,ij} = φ^{i−j}, i ≥ j, and C₂ is a strictly lower-triangular matrix with c_{2,ij} = φ^{i−j−1}, i > j. Note that by definition, C₂ = φ^{−1}(C₁ − I). The dimensions of vectors/matrices are to be read from the context, and thus we suppress the dimension subscripts in what follows.

To derive the analytical bias of κ̂, we follow the framework of Bao (2013). Let θ̂ be a √n-consistent estimator of θ identified by the moment condition ψ(θ̂) = 0 from a sample of size n. In finite samples, θ̂ is usually biased, and one may approximate the bias E(θ̂ − θ) to the second order, namely, E(θ̂ − θ) = B(θ̂) + o(n⁻¹), where B(θ̂) is defined as the second-order bias. Bao (2013) showed that B(θ̂) can be written as

    B(θ̂) = Γ₁[ E(H₁ ⊗ ψ′)vec(Γ₁) − (1/2)E(H₂)(Γ₁ ⊗ Γ₁)vec(E(ψψ′)) ],    (2.4)

where ψ = ψ(θ), H_l = ∇^l ψ, l = 1, 2, ∇ denotes the derivative with respect to θ′, and Γ₁ = [E(H₁)]⁻¹.
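For the Gaussian special case (L(t) a Brownian motion), the discretization (2.3) has an exact closed-form transition, so the LS estimator (2.2) is easy to illustrate before turning to the bias expressions. Below is a minimal sketch; the function names and parameter choices are ours, not the paper's:

```python
import numpy as np

def simulate_ou(kappa, mu, sigma, x0, h, n, rng):
    """Simulate x via the exact AR(1) discretization (2.3) of the Gaussian
    OU process: x_t = alpha + phi*x_{t-1} + eps_t with iid normal eps_t."""
    phi = np.exp(-kappa * h)
    alpha = mu * (1.0 - phi)
    sd_eps = sigma * np.sqrt((1.0 - phi**2) / (2.0 * kappa))
    x = np.empty(n + 1)
    x[0] = x0
    for t in range(1, n + 1):
        x[t] = alpha + phi * x[t - 1] + sd_eps * rng.standard_normal()
    return x

def ls_kappa(x, h, known_mu=None):
    """LS estimate of kappa via (2.2): kappa_hat = -ln(phi_hat)/h.
    With known_mu given, regress in deviations from mu without intercept;
    otherwise fit the AR(1) with an intercept and use the slope."""
    y, y1 = x[1:], x[:-1]
    if known_mu is not None:
        y, y1 = y - known_mu, y1 - known_mu
        phi_hat = (y1 @ y) / (y1 @ y1)
    else:
        phi_hat = np.linalg.lstsq(np.column_stack([np.ones_like(y1), y1]),
                                  y, rcond=None)[0][1]
    return -np.log(phi_hat) / h

rng = np.random.default_rng(0)
h, n = 1 / 12, 600  # monthly sampling over T = 50, as in Section 3
x = simulate_ou(kappa=1.0, mu=0.0, sigma=1.0, x0=0.0, h=h, n=n, rng=rng)
print(ls_kappa(x, h, known_mu=0.0))
```

With a true κ of 1, repeated draws of this experiment cluster above 1, previewing the upward bias quantified below.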
For the scalar case, it becomes

    B(θ̂) = [E(H₁)]⁻²E(H₁ψ) − (1/2)[E(H₁)]⁻³E(H₂)E(ψ²).    (2.5)

2.1 μ is Known

When μ is known a priori (μ = 0, without loss of generality), we can write x = x₀f₁ + C₁ε, x₋₁ = x₀f₂ + C₂ε, and ε = x − exp(−κh)x₋₁.² The moment condition, up to some scaling constant, for estimating κ is

    ψ(κ) = n⁻¹x₋₁′ε.    (2.6)

Upon taking derivatives, we have

    H_l = −(−h)^l exp(−κh) n⁻¹x₋₁′x₋₁,  l = 1, 2.    (2.7)

By substituting (2.6) and (2.7) into (2.5), we derive the approximate bias of κ̂. When x₀ is fixed,

¹ This might rule out some Lévy processes. Also, in general, the moments of ε_{th} depend on the parameters κ, σ, and the sampling frequency h.
² When μ is known but may not be 0, one just needs to define y_t = x_t − μ and work with y_t.
    B(κ̂) = −[ 1 + 3e^{2κh} + 4e^{−2nκh}/(1 − e^{−2nκh}) − (1 + 7e^{2κh})e^{2κh} − 4e^{−2nκh}(1 − e^{2κh})x₀²e^{2κh}/(σ_ε²(1 − e^{−2κh})) ] / [ ne^{2κh}(1 − e^{−2κh}) ]
        − e^{2κh}(e^{2κh} − 1)(1 − e^{−2nκh})x₀² / (n²σ_ε²e^{2κh})
        + 2(1 + e^{κh})(1 − e^{−nκh})(e^{κh} − e^{−nκh})x₀γ₁ / (nσ_εe^{2κh}),    (2.8)

and when x₀ is random,

    B(κ̂) = (3 + e^{2κh})/(2T) − 2(1 − e^{−2nκh}) / [ Tn(1 − e^{−2κh}) ].    (2.9)

Remark 1: The skewness parameter γ₁ matters for the bias of κ̂. Its effect, however, disappears in the special case x₀ = 0, where the bias expression simplifies to

    B(κ̂) = −[ 1 + 3e^{2κh} + 4e^{−2nκh}/(1 − e^{−2nκh}) − (1 + 7e^{2κh})e^{2κh} ] / [ ne^{2κh}(1 − e^{−2κh}) ].

Remark 2: Equation (2.9) suggests that the result in Yu (2012) is in fact robust to non-normality.

2.2 μ is Unknown

When μ is unknown and has to be estimated, x = x₀f₁ + αC₁ι + C₁ε, x₋₁ = x₀f₂ + αC₂ι + C₂ε, α = μ(1 − exp(−κh)), and ε = x − αι − exp(−κh)x₋₁, where ι is an n × 1 vector of unit elements. Since the pairs (κ, μ), (κ, α), and (φ, α) map one-to-one into each other, and we focus on deriving the finite-sample bias of κ̂, the reparametrized model x_t = α + exp(−κh)x_{t−1} + ε_t with parameter vector θ = (α, κ)′ gives exactly the same κ̂ as the original model x_t = μ(1 − exp(−κh)) + exp(−κh)x_{t−1} + ε_t with parameter vector (μ, κ)′. Thus, we define the moment condition, up to some scaling constant, as

    ψ(θ) = n⁻¹( ι′ε , hx₋₁′ε )′.    (2.10)

By taking derivatives, we have

    H₁ = n⁻¹( −n , he^{−κh}ι′x₋₁ ; −hι′x₋₁ , h²e^{−κh}x₋₁′x₋₁ ),
    H₂ = n⁻¹( 0 , 0 , 0 , −h²e^{−κh}ι′x₋₁ ; 0 , 0 , 0 , −h³e^{−κh}x₋₁′x₋₁ ).    (2.11)

The approximate bias of κ̂, when x₀ is fixed, is

    B(κ̂) = [ 5 + 2e^{κh} + e^{2κh} + 4e^{−2(n−1)κh} + 2(e^{−2nκh} − e^{−2(n−1)κh})(x₀ − μ)²/σ_ε² ] / (2T)
        + (1 − e^{−nκh})[ 2e^{κh} + 13e^{2κh} + 4e^{3κh} + e^{4κh} + e^{−(n−4)κh} + 2e^{−(n−3)κh} + 9e^{−(n−2)κh} ] / [ 2(1 − e^{−2κh})Tn ]
        + (1 − e^{−nκh})[ e^{κh} + 5e^{−(n−1)κh} ] μ(x₀ − μ) / (Tnσ_ε²)
        + (1 − e^{−nκh})[ 5 + e^{2κh} + 5e^{−(n−2)κh} + 9e^{−nκh} ](x₀ − μ)² / (n²σ_ε²)
        − 2(1 − e^{−nκh})[ e^{κh} − e^{−κh} + e^{3κh} + 5e^{−(n−1)κh} ] x₀(x₀ − μ) / (Tnσ_ε²)
        − (1 − e^{−nκh})[ e^{−(n−1)κh} + e^{−(n−2)κh} ](x₀ − μ)γ₁ / (Tnσ_ε),    (2.12)

and when x₀ is random,

    B(κ̂) = (5 + 2e^{κh} + e^{2κh})/(2T) − 2e^{κh}(1 − e^{−nκh})²μ² / [ (1 − e^{−2κh})Tnσ_ε² ]
        + (1 − e^{−nκh})[ e^{κh} + 4e^{2κh} + e^{3κh} + 2e^{−(n−2)κh} ] / [ (1 − e^{−2κh})Tn ].    (2.13)

Remark 3: The leading term (of order O(T⁻¹)) in (2.13) gives the result derived in Tang and Chen (2009). Moreover, (2.13) suggests that the approximate bias of κ̂ under the case of random x₀ is robust to non-normality.

Remark 4: As before, the skewness matters for the approximate bias. In contrast to the known-μ case, however, its effect does not disappear in the special case when x₀ is fixed at 0:

    B(κ̂) = [ 5 + 2e^{κh} + e^{2κh} + 4e^{−2(n−1)κh} + 2(e^{−2nκh} − e^{−2(n−1)κh})μ²/σ_ε² ] / (2T)
        + (1 − e^{−nκh})[ 2e^{κh} + 13e^{2κh} + 4e^{3κh} + e^{4κh} + e^{−(n−4)κh} + 2e^{−(n−3)κh} + 9e^{−(n−2)κh} ] / [ 2(1 − e^{−2κh})Tn ]
        + (1 − e^{−nκh})[ 5 + 2e^{κh} + e^{2κh} + 10e^{−(n−1)κh} + 5e^{−(n−2)κh} + 9e^{−nκh} ] μ² / (n²σ_ε²)
        + (1 − e^{−nκh})[ e^{−(n−1)κh} + e^{−(n−2)κh} ] μγ₁ / (Tnσ_ε).

Remark 5: When x₀ is fixed at μ, however, the effect of skewness on the approximate bias disappears:

    B(κ̂) = [ 5 + 2e^{κh} + e^{2κh} + 4e^{−2(n−1)κh} ] / (2T) − 2(1 − e^{−nκh})(e^{−κh} − 2e^{κh} + e^{3κh})μ² / (Tnσ_ε²)
        + (1 − e^{−nκh})[ 2e^{κh} + 13e^{2κh} + 4e^{3κh} + e^{4κh} + e^{−(n−4)κh} + 2e^{−(n−3)κh} + 9e^{−(n−2)κh} ] / [ 2(1 − e^{−2κh})Tn ].

Remark 6: For the random-x₀ case, if in addition μ = 0 (i.e., the true model has no drift term but we still estimate the discrete AR model with an intercept), the result reduces to

    B(κ̂) = (5 + 2e^{κh} + e^{2κh})/(2T) + (1 − e^{−nκh})[ e^{κh} + 4e^{2κh} + e^{3κh} + 2e^{−(n−2)κh} ] / [ (1 − e^{−2κh})Tn ].
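The moment condition (2.6) and the derivatives (2.7) from the known-mean case are straightforward to sanity-check numerically. In the sketch below (helper names ours), H₁ is verified against a central finite difference of ψ:

```python
import numpy as np

def psi(kappa, x, x1, h, n):
    """Moment condition (2.6): psi(kappa) = x_{-1}' eps / n with
    eps = x - exp(-kappa*h) * x_{-1}."""
    return x1 @ (x - np.exp(-kappa * h) * x1) / n

def H(kappa, x1, h, n, order):
    """Derivatives (2.7): only the exp(-kappa*h) factor depends on kappa,
    so H_l = -(-h)**l * exp(-kappa*h) * x_{-1}'x_{-1} / n."""
    return -(-h) ** order * np.exp(-kappa * h) * (x1 @ x1) / n

# check H_1 against a central finite difference of psi; the identity
# holds for any sample, so arbitrary data suffice here
rng = np.random.default_rng(1)
n, h, kappa = 200, 1 / 12, 0.8
x_all = rng.standard_normal(n + 1)
x, x1 = x_all[1:], x_all[:-1]
step = 1e-6
fd = (psi(kappa + step, x, x1, h, n) - psi(kappa - step, x, x1, h, n)) / (2 * step)
print(abs(fd - H(kappa, x1, h, n, 1)))  # numerically negligible
```

The same pattern extends to H₂ by differencing H₁, which is a cheap way to guard against sign slips when coding the bias formulas.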
3 Numerical Results

Our bias formulae (2.8), (2.9), (2.12), and (2.13) involve unknown population parameters, but we can make them feasible by replacing the unknown parameters with their consistent estimates. That is, we may replace κ by κ̂, μ by μ̂ = α̂/(1 − φ̂), and σ_ε² and γ₁ by their sample analogues from the LS residuals, and denote the feasible bias by B̂(κ̂). An immediate application of our bias results is to construct a bias-corrected estimator of κ. Here we follow the indirect inference method introduced in Phillips and Yu (2009) to design the bias-corrected estimator of κ as follows:

    κ̂_bc = arg min_κ ‖κ̂ − κ − B̂(κ)‖,    (3.1)

where B̂(κ) is the feasible bias formula evaluated at κ. In (3.1), κ + B̂(κ) is the approximate mean function of κ̂ when the true value is κ. Unlike the indirect inference method, which relies on simulations to obtain the mean function, we construct κ̂_bc without invoking simulations to approximate the mean of κ̂, since we use our analytical bias directly.

We conduct Monte Carlo simulations to demonstrate the performance of our bias formulae and the bias-corrected estimator in finite samples. In practice we observe only the discrete sample {x₀, …, x_n}, so we simulate discrete-time observations from the continuous-time model (2.1) with the driving process being the skew normal process of Azzalini (1985) with shape parameter 5 (and correspondingly γ₁ = 0.8510 and γ₂ = 0.705). We set μ = 0.1 when it is unknown, x₀ = μ when it is fixed, h = 1/12, and σ = 1. Tables 1 and 2 report our feasible bias B̂(κ̂) and the bias-corrected estimator κ̂_bc, in comparison with the actual bias (denoted by Bias in the tables) and the LS estimator κ̂, for the cases of known μ (= 0) and unknown μ, respectively. The data span is set at T = 50. The results are averaged over 10,000 replications, and the standard deviations (across the simulations) of κ̂ and κ̂_bc are also reported.
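A minimal sketch of the bias-corrected estimator (3.1), using a simple grid search in place of a formal optimizer and, purely for illustration, the known-mean/random-x₀ formula (2.9) as the feasible bias; the function names and tuning constants are ours:

```python
import math

def bias_hat(kappa, h, n):
    """Approximate bias (2.9): known long-run mean, random x0."""
    T = n * h
    return (3.0 + math.exp(2.0 * kappa * h)) / (2.0 * T) \
        - 2.0 * (1.0 - math.exp(-2.0 * n * kappa * h)) \
        / (T * n * (1.0 - math.exp(-2.0 * kappa * h)))

def bias_corrected(kappa_hat, h, n, width=2.0, m=4001):
    """Bias-corrected estimator (3.1): pick the kappa whose approximate
    mean, kappa + B_hat(kappa), is closest to the observed kappa_hat.
    A plain grid search is enough for this one-dimensional problem."""
    lo = max(1e-4, kappa_hat - width)
    hi = kappa_hat + width
    grid = [lo + i * (hi - lo) / (m - 1) for i in range(m)]
    return min(grid, key=lambda k: abs(kappa_hat - k - bias_hat(k, h, n)))

k_bc = bias_corrected(kappa_hat=1.05, h=1 / 12, n=600)
print(k_bc)  # below 1.05, since the approximate bias is positive
```

Because the analytical bias replaces the simulated mean function, this correction avoids the simulation loop that indirect inference would otherwise require.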
We observe that our bias approximation formulae work very well in capturing the true bias of κ̂. The bias-corrected estimator κ̂_bc performs much better than the uncorrected κ̂, without the usual trade-off between bias reduction and increased variance. Similar findings have been recorded in Phillips and Yu (2009) regarding this feature of bias reduction based on the indirect inference approach. We also have results for a smaller data span, T = 10, available upon request. The findings are similar, except when κ is small (0.1) for the case of unknown mean (but even then κ̂_bc is much less biased than κ̂). Recall that φ = exp(−κh), so this corresponds to a discrete AR(1) process with φ = exp(−0.1/12) = 0.9917. Upon carefully examining the simulation results, we find that in this near unit-root case with a sample size of 120, while the variance of φ̂ is small, the variance of the estimated long-run mean μ̂ is very large, with μ̂ ranging from about −200 to 200, which in turn substantially distorts the performance of our bias formulae. Once we use the true value of μ in constructing B̂(κ̂) and κ̂_bc, the feasible bias matches the actual bias very well and κ̂_bc is virtually unbiased.
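The skewness and excess kurtosis quoted for the skew normal driver can be reproduced from the standard moment formulas for Azzalini's (1985) distribution; a quick check (helper name ours):

```python
import math

def skew_normal_moments(shape):
    """Skewness and excess kurtosis of Azzalini's (1985) skew normal
    with the given shape parameter, from the standard moment formulas."""
    delta = shape / math.sqrt(1.0 + shape * shape)
    b = delta * math.sqrt(2.0 / math.pi)   # mean of the standardized variable
    v = 1.0 - b * b                        # its variance
    g1 = (4.0 - math.pi) / 2.0 * b**3 / v**1.5
    g2 = 2.0 * (math.pi - 3.0) * b**4 / v**2
    return g1, g2

g1, g2 = skew_normal_moments(5.0)
print(round(g1, 4), round(g2, 3))  # matches the 0.8510 and 0.705 quoted in the text
```

Running the check confirms that shape parameter 5 indeed implies γ₁ = 0.8510 and γ₂ = 0.705, so the simulated innovations have exactly the non-normality the bias formulae are meant to absorb.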
Table 1: Bias and Bias Correction, Known μ. The table reports, for μ = 0 with x₀ fixed and for μ = 0 with x₀ random, the actual bias (Bias), the feasible bias B̂(κ̂), and the estimators κ̂ and κ̂_bc with their standard deviations std(κ̂) and std(κ̂_bc).

Table 2: Bias and Bias Correction, Unknown μ. The table reports, for μ = 0.1 with x₀ fixed and for μ = 0.1 with x₀ random, the actual bias (Bias), the feasible bias B̂(κ̂), and the estimators κ̂ and κ̂_bc with their standard deviations std(κ̂) and std(κ̂_bc).

4 Conclusions

Lévy processes have found increasing applications in economics and finance. It has been documented, however, that the typical quasi-maximum likelihood estimation procedure tends to overestimate the mean reversion parameter in continuous-time Lévy processes. Based on the technique of Bao (2013), we have derived several analytical formulae to approximate the finite-sample bias of the estimated mean
reversion parameter under different cases: known or unknown long-run mean, fixed or random initial condition. Our simulation results indicate, in general, good performance of the approximate bias formulae in capturing the true bias behavior of the mean reversion estimator, and good performance of our bias-corrected estimator.

References

Azzalini, A., 1985, A class of distributions which includes the normal ones. Scandinavian Journal of Statistics 12.

Bao, Y., 2013, Finite sample bias of the QMLE in spatial autoregressive models. Econometric Theory 29.

Bao, Y., A. Ullah, Y. Wang, and J. Yu, 2013, Bias in the mean reversion estimator in continuous-time Gaussian and Lévy processes. Working Paper, Singapore Management University.

Barndorff-Nielsen, O. E., 1998, Processes of normal inverse Gaussian type. Finance and Stochastics 2.

Black, F. and M. Scholes, 1973, The pricing of options and corporate liabilities. Journal of Political Economy 81.

Carr, P. and L. Wu, 2003, The finite moment log stable process and option pricing. Journal of Finance 58.

Cox, J., J. Ingersoll, and S. Ross, 1985, A theory of the term structure of interest rates. Econometrica 53.

Phillips, P.C.B. and J. Yu, 2005, Jackknifing bond option prices. Review of Financial Studies 18.

Phillips, P.C.B. and J. Yu, 2009, Maximum likelihood and Gaussian estimation of continuous time models in finance, in: T.G. Andersen, R.A. Davis, J.-P. Kreiß, and T. Mikosch (Eds.), Handbook of Financial Time Series. Springer-Verlag, New York.

Tang, C.Y. and S.X. Chen, 2009, Parameter estimation and bias correction for diffusion processes. Journal of Econometrics 149.

Ullah, A., 2004, Finite Sample Econometrics. Oxford University Press, New York.

Vasicek, O., 1977, An equilibrium characterization of the term structure. Journal of Financial Economics 5.

Yu, J., 2012, Bias in the estimation of the mean reversion parameter in continuous time models. Journal of Econometrics 169.