Is a Brownian motion skew?



Is a Brownian motion skew? Ernesto Mordecki. Session in honor of Mario Wschebor. Universidad de la República, Montevideo, Uruguay. XI CLAPEM, November 2009, Venezuela. Joint work with Antoine Lejay and Soledad Torres.

Outline: What is Skew Brownian motion; Maximum likelihood and main result; Statistical application; Testing hypotheses; Some numerical considerations; One open question; Acknowledgements.

What is Skew Brownian motion? The Skew Brownian motion X = {X_t : 0 ≤ t ≤ T} is the strong solution of the SDE X_t = x + B_t + θ l^x_t, where B = {B_t : 0 ≤ t ≤ T} is a standard Brownian motion defined on (Ω, F, P), x is the initial condition, θ ∈ [−1, 1] is the skewness parameter, and l^x = {l^x_t : 0 ≤ t ≤ T} is the local time at level zero of the (unknown) X.

Particular cases. In case θ = 0 we have the usual BM. In case θ = 1 we have the reflected BM, equal in distribution (by Skorohod's result) to |x + B_t|. In case θ = −1 we have BM up to the first time the process hits the level 0, and after that, negatively reflected BM.

Alternative construction of SBM (I). We can define the SBm as the weak limit of a scaled Markov chain on Z, with transition probabilities
p(i, j) = 1/2 if i ≠ 0 and j = i ± 1; p if i = 0 and j = 1; 1 − p if i = 0 and j = −1; 0 in other cases,
with p = (θ + 1)/2. Remark: this construction shows that the process is Markovian.
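
As an illustration (not part of the slides), a minimal Python sketch of this construction: a random walk on Z with the transition probabilities above, scaled in time and space; the function name, the steps_per_obs parameter and the use of numpy are my own choices.

    import numpy as np

    def simulate_sbm(theta, n, T=1.0, x=0.0, steps_per_obs=100, rng=None):
        """Approximate a Skew Brownian motion at times i*T/n, i = 0..n,
        as the scaled walk described above (skewed step at 0, symmetric elsewhere)."""
        rng = np.random.default_rng() if rng is None else rng
        p = (theta + 1.0) / 2.0
        dt = T / (n * steps_per_obs)          # time step of the underlying walk
        dx = np.sqrt(dt)                      # space step of the underlying walk
        site = int(round(x / dx))             # lattice site closest to the start x
        obs = [site * dx]
        for _ in range(n):
            for _ in range(steps_per_obs):
                if site == 0:
                    site += 1 if rng.random() < p else -1
                else:
                    site += 1 if rng.random() < 0.5 else -1
            obs.append(site * dx)
        return np.array(obs)                  # approximate values X_0, X_{T/n}, ..., X_T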

Alternative construction of SBM (II). We can also construct the SBm starting from the excursions of the original BM: in this case we choose, with probability p, whether each excursion is positive. [Lejay, A., On the constructions of the Skew Brownian motion, Probab. Surv., 3 (2006), 413-466.]

Density. As a Markov process, the transition density of the SBM is given by q_θ(t, x, y) = p(t, y − x) + sgn(y) θ p(t, |x| + |y|), where p(t, x) = (1/√(2πt)) exp(−x²/(2t)) is the density of the Gaussian random variable with mean 0 and variance t. Here the parameter θ is unknown; we estimate it through the observation of a trajectory on [0, T] at times iT/n (i = 0, ..., n). Our asymptotics is in the time discretization, n → ∞ (T is fixed).
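
A direct transcription of this density into Python might look as follows (a sketch; the function names gauss and q_theta are mine):

    import numpy as np

    def gauss(t, x):
        """p(t, x): centered Gaussian density with variance t."""
        return np.exp(-x * x / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

    def q_theta(theta, t, x, y):
        """q_theta(t, x, y) = p(t, y - x) + sgn(y) * theta * p(t, |x| + |y|)."""
        return gauss(t, y - x) + np.sign(y) * theta * gauss(t, np.abs(x) + np.abs(y))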

Maximum likelihood. Denote by X_i := X_{iT/n} (i = 0, ..., n) our sample. With the previous formula for the density, we construct the likelihood Z_n(θ) = ∏_{i=0}^{n−1} q_θ(Δ, X_i, X_{i+1}) / q_0(Δ, X_i, X_{i+1}), with Δ = T/n. It is also of interest to consider the log-likelihood L_n(θ) = Σ_{i=0}^{n−1} log q_θ(Δ, X_i, X_{i+1}).
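
A minimal sketch of the log-likelihood and of a crude MLE by grid search, reusing q_theta from the sketch above (the grid and its endpoints are arbitrary choices of mine):

    import numpy as np

    def log_lik(theta, X, T):
        """L_n(theta) = sum_i log q_theta(Delta, X_i, X_{i+1}), with Delta = T/n."""
        delta = T / (len(X) - 1)
        return np.sum(np.log(q_theta(theta, delta, X[:-1], X[1:])))

    def mle_grid(X, T, grid=np.linspace(-0.99, 0.99, 1981)):
        """Maximize L_n over a grid of theta values (avoiding the endpoints +/- 1)."""
        return grid[int(np.argmax([log_lik(th, X, T) for th in grid]))]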

Explicit form of the likelihood. Inserting the formula for the density, we obtain
Z_n(θ) = ∏_{X_i>0, X_{i+1}<0} (1 − θ) × ∏_{X_i<0, X_{i+1}>0} (1 + θ) × ∏_{X_i<0, X_{i+1}<0} (1 − θ e^{−2 X_i X_{i+1}/Δ}) × ∏_{X_i>0, X_{i+1}>0} (1 + θ e^{−2 X_i X_{i+1}/Δ})
= ∏_{i=0}^{n−1} ( 1 + θ h(√n X_i, √n (X_{i+1} − X_i)) ),
where h(x, y) = sgn(x + y) exp(−(2/T) (x(x + y))^+).

Main result. Definition (Stable convergence). Consider (Ω, F, P) and a σ-algebra G ⊂ F. The sequence of random variables Y_1, Y_2, ... converges G-stably in distribution to Y, denoted Y_n → Y G-stably in distribution as n → ∞, when E(Z f(Y_n)) → E(Z f(Y)) as n → ∞, for any bounded G-measurable random variable Z and any bounded and continuous function f.

Definition (Conditional stable convergence). Furthermore, consider sets A, A_1, A_2, .... We say that Y_1, Y_2, ..., conditional on A_1, A_2, ..., converge G-stably in distribution to Y conditional on A, denoted Y_n | A_n → Y | A G-stably as n → ∞, when E(Z f(Y_n) | A_n) → E(Z f(Y) | A) as n → ∞, for any bounded G-measurable random variable Z and any bounded and continuous function f.

Main result. Theorem. Under the null hypothesis (θ = 0), the MLE θ̂_n satisfies
n^{1/4} θ̂_n | A_n → W(l^x_T)/l^x_T | A, F-stably as n → ∞,
where W = {W_t : t ≥ 0} is a standard Brownian motion independent of B, and the sets are A_n = {ω : inf_{1≤i≤n} X_i < 0} and A = {ω : inf_{0≤t≤T} X_t(ω) < 0}. In particular, when x = 0, we have n^{1/4} θ̂_n → W(l^x_T)/l^x_T, F-stably as n → ∞.

Some comments on the result. On the set A^c the limit is not defined, as l^x_T = 0: as the process does not hit the level 0, no statistical inference can be carried out. The limit is a mixture of normals, a situation referred to as local asymptotic mixed normality (LAMN) in the terminology of statistical experiments. The speed of convergence is n^{1/4}, slower than the usual n^{1/2}, but typical in results involving the local time.

Computing the MLE by linearization. As the computation of the MLE is time consuming, consider the first order approximation of L^{(1)}_n(θ) at θ = 0, that is,
L^{(1)}_n(θ/n^{1/4}) = L^{(1)}_n(0) + L^{(2)}_n(0) θ/n^{1/4} + O(n^{1/2}) (θ/n^{1/4})²,
which suggests that α_n = −n^{1/4} L^{(1)}_n(0) / L^{(2)}_n(0) is a good approximation of n^{1/4} θ̂_n, the (scaled) MLE.
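
A quick numerical sketch of this linearized estimator (my own, using finite differences of log_lik from the earlier sketch instead of exact derivatives; the step eps is an arbitrary choice):

    def alpha_n(X, T, eps=1e-4):
        """alpha_n = -n^{1/4} L_n^{(1)}(0) / L_n^{(2)}(0), with the derivatives of the
        log-likelihood at theta = 0 approximated by central finite differences."""
        n = len(X) - 1
        L0 = log_lik(0.0, X, T)
        Lp = log_lik(eps, X, T)
        Lm = log_lik(-eps, X, T)
        L1 = (Lp - Lm) / (2.0 * eps)          # approximates L_n^{(1)}(0)
        L2 = (Lp - 2.0 * L0 + Lm) / eps ** 2  # approximates L_n^{(2)}(0)
        return -n ** 0.25 * L1 / L2           # alpha_n / n^{1/4} approximates the MLE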

The proof in two steps.
Step I: Prove that n^{1/4} θ̂_n − α_n → 0 in probability.
Step II: Prove that (L^{(2)}_n(0)/n^{1/2}, L^{(1)}_n(0)/n^{1/4}, 1_{A_n}) → (c l^x_T, c W(l^x_T), 1_A), F-stably as n → ∞. Here A_n = {ω : inf_{1≤i≤n} X_i < 0} and A = {ω : inf_{0≤t≤T} X_t(ω) < 0}, and from Step II we obtain
α_n | A_n = −n^{1/4} L^{(1)}_n(0)/L^{(2)}_n(0) | A_n → W(l^x_T)/l^x_T | A, F-stably as n → ∞.

Convergence of crossings of BM. In order to prove Step II, for k = 1, 2, we compute
L^{(k)}_n(0) = (−1)^{k−1} Σ_{i=0}^{n−1} ( h(√n X_i, √n (X_{i+1} − X_i)) )^k.
The limits of this type of sums were obtained for k = 2 (for a certain class of functions h) by Azaïs [Azaïs, J-M., Approximation des trajectoires et temps local des diffusions, AIHP PS, 25(2) (1989), 175-194], and for k = 1 by Jacod [Jacod, J., Rates of convergence to the local time of a diffusion, AIHP PS, 34(4) (1998), 505-544].
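
For reference, the same quantities can be computed from these explicit sums; in this sketch the factor 2/T inside h follows my reading of the (partly garbled) formula for h, and the sign conventions correspond to L^{(1)}_n(0) = Σ h_i and L^{(2)}_n(0) = −Σ h_i².

    import numpy as np

    def h(x, y, T):
        """h(x, y) = sgn(x + y) exp(-(2/T) (x (x + y))^+)."""
        return np.sign(x + y) * np.exp(-(2.0 / T) * np.maximum(x * (x + y), 0.0))

    def alpha_n_explicit(X, T):
        """alpha_n computed from the explicit sums over the h_i."""
        n = len(X) - 1
        hi = h(np.sqrt(n) * X[:-1], np.sqrt(n) * np.diff(X), T)
        L1 = hi.sum()                      # L_n^{(1)}(0)
        L2 = -(hi ** 2).sum()              # L_n^{(2)}(0)
        return -n ** 0.25 * L1 / L2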

Statistical application: testing hypotheses. The limit distribution. Lemma: for x = 0 and T = 1, the limit distribution of Υ := W(l^0_1)/l^0_1 is symmetric, with density
f_Υ(x) = ∫_0^{+∞} √(y/(2π)) e^{−x² y/2} f_{l^0_1}(y) dy
(a mixture of centered Gaussian densities over the law of the local time l^0_1), and Υ is equal in distribution to
Υ = G(H)/H, with H = (1/2)(U + √(V + U²)),
where U ~ N(0, 1) and V ~ Exp(1/2) are independent random variables and, conditionally on H, G(H) ~ N(0, H), independent of (U, V).
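
A sketch for sampling from the limit law via this representation (I read Exp(1/2) as an exponential with rate 1/2, i.e. mean 2, so that H has the law of the local time l^0_1 as in the lemma); it can be used, for instance, to approximate quantiles of |Υ| for the test discussed next.

    import numpy as np

    def sample_upsilon(size, rng=None):
        """Draw from Upsilon = G(H)/H with H = (U + sqrt(V + U^2))/2,
        U ~ N(0, 1), V ~ Exp(rate 1/2), and G(H) ~ N(0, H) given H."""
        rng = np.random.default_rng() if rng is None else rng
        U = rng.standard_normal(size)
        V = rng.exponential(scale=2.0, size=size)      # rate 1/2 <=> mean 2
        H = 0.5 * (U + np.sqrt(V + U * U))             # H has the law of l^0_1
        return rng.standard_normal(size) * np.sqrt(H) / H

    # e.g. an approximate 95% two-sided critical value K for the test of theta = 0:
    # K = np.quantile(np.abs(sample_upsilon(10**6)), 0.95)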

Figure: Density of Υ (solid) and density of the normal distribution with variance Var(Υ) (dashed).

Testing hypotheses. It is then possible to develop a test of the hypothesis θ = 0 against θ ≠ 0. For this, we compute P[|θ̂_n| ≥ K n^{−1/4}] ≈ P[|Υ| ≥ K], which leads to a rejection region of the form {|θ̂_n| ≥ c n^{−1/4}}, with c a quantile of |Υ|. Of course, the type II error cannot be computed, since we do not know the asymptotic behavior of L_n(θ) when θ ≠ 0. However, it is rather easy to perform simulations and thus to get some numerical information about L_n(θ) and α_n. For example, the figure below shows a histogram of α_n/n^{1/4} for θ = 0.5 compared with an approximation of the density of α_n/n^{1/4} for the Brownian motion (θ = 0), with n = 1000. We can note that the histogram of α_n/n^{1/4} has its peak near 0.5.
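
A sketch of this simulation experiment, reusing simulate_sbm and alpha_n_explicit from the earlier sketches (the number of replications is an arbitrary choice of mine):

    import numpy as np

    n, T, reps = 1000, 1.0, 200
    est = [alpha_n_explicit(simulate_sbm(0.5, n, T), T) / n ** 0.25 for _ in range(reps)]
    print(np.mean(est), np.std(est))   # the histogram of est should peak near theta = 0.5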

Some numerical considerations: θ̂_n against α_n/n^{1/4}.

    n        n^{-1/2}   mean     std dev   90% quantile
    100      0.100      0.026    0.057     0.082
    200      0.070      0.028    0.083     0.057
    500      0.044      0.013    0.055     0.026
    1000     0.031      0.013    0.040     0.033
    2000     0.022      0.006    0.025     0.015
    5000     0.014      0.006    0.041     0.006
    10000    0.010      0.002    0.005     0.003

Table: Statistics of θ̂_n − α_n/n^{1/4} over 100 paths. This table suggests that the error θ̂_n − α_n/n^{1/4} is of order n^{-1/2}, so that α_n/n^{1/4} is a good approximation of θ̂_n, and is much faster to compute.

Figure: Histogram of α_n/n^{1/4} (n = 1000) with θ = 0.5, against an approximation of the density of α_n/n^{1/4} for θ = 0.

One open question. The convergence for θ ≠ 0 is open. This requires a theorem on crossings under the alternative hypothesis, that is, for the SBm instead of the BM. Some natural questions arise: Is the speed the same (n^{1/4})? Is it still possible to obtain an explicit limit distribution? Can we merge these results into the Le Cam / Ibragimov-Hasminskii theory of convergence of statistical experiments?

Acknowledgements. To Mario Wschebor, for his national and international scientific contributions; his contributions to the organization of scientists in Uruguay, Latin America, and the world; his teachings and his solidarity; his permanent example as a citizen of the world. To Jean Marc Azaïs. To the organizers of the XI CLAPEM, in particular J.R. (Chichi) León and Stella Brassesco.