The genealogy of self-similar fragmentations with negative index as a continuum random tree
The genealogy of self-similar fragmentations with negative index as a continuum random tree
Electron. J. Probab. 9 (2004)
Bénédicte Haas & Grégory Miermont
LPMA, Université Paris VI & DMA, École Normale Supérieure, Paris, France
6th BS/IMSC, Barcelona, July 26-31, 2004
Introduction

Fragmentation processes describe a massive object (with initial mass 1) that collapses into pieces as time goes. State space:

$$S = \Big\{ s = (s_1, s_2, \ldots) : s_1 \ge s_2 \ge \cdots \ge 0,\ \sum_{i=1}^{\infty} s_i \le 1 \Big\}$$

This allows some loss of mass.
Self-similar fragmentations

Definition (Bertoin, 2002). A Markovian S-valued process (F(t), t ≥ 0) starting at (1, 0, ...) is a ranked self-similar fragmentation with index α ∈ ℝ if it is continuous in probability and satisfies the following fragmentation property: the fragments present at time t subsequently evolve independently of the others, in a way similar to that of the original mass-1 fragment, up to a space-time renormalization depending on the mass of each fragment. Precisely, for every t, t' ≥ 0, given F(t) = (x_1, x_2, ...), F(t + t') has the same law as the decreasing rearrangement of the sequences

$$x_1 F^{(1)}(x_1^{\alpha} t'),\ x_2 F^{(2)}(x_2^{\alpha} t'),\ \ldots$$

where the F^{(i)} are independent copies of F.
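The space-time renormalization can be made concrete with a toy simulation (our own illustration, not from the talk): a binary fragmentation in which each fragment splits uniformly into two, and a fragment of mass x waits x^(−α) times an Exp(1) amount of real time before splitting — exactly the x^α clock change in the definition. The uniform dislocation, the default seed, and the dust cutoff eps are illustrative assumptions.

```python
import heapq
import random

def simulate_fragmentation(t, alpha, eps=1e-4, rng=None):
    """Toy self-similar fragmentation with index alpha: each fragment splits
    uniformly into two pieces, and a fragment of mass x waits x**(-alpha)
    times an Exp(1) amount of real time before splitting.  Fragments below
    eps are discarded as dust.  Returns F(t), a decreasing list of masses."""
    rng = rng or random.Random(0)
    # event queue of (split time, mass); the mass-1 fragment splits at rate 1
    queue = [(rng.expovariate(1.0), 1.0)]
    alive = []
    while queue:
        when, mass = heapq.heappop(queue)
        if when > t:                 # still intact at the query time
            alive.append(mass)
            continue
        u = rng.random()             # uniform binary dislocation: (u*m, (1-u)*m)
        for child in (u * mass, (1.0 - u) * mass):
            if child >= eps:         # keep only non-dust fragments
                # a mass-x copy of F runs on the clock x**alpha * t', so its
                # Exp(1) split happens x**(-alpha) later in real time
                wait = child ** (-alpha) * rng.expovariate(1.0)
                heapq.heappush(queue, (when + wait, child))
    return sorted(alive, reverse=True)
```

With α < 0 the waiting times of small fragments shrink, so mass steadily leaks into dust, illustrating the loss of mass discussed above.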
A crucial example

Let e be (twice) a standard Brownian excursion. For t ≥ 0 let I_t := {s ∈ [0, 1] : e(s) > t}. Rank the lengths of the interval components of I_t to get the Brownian fragmentation F_B(t) ∈ S, t ≥ 0. Easy scaling properties of Brownian motion then show:

Proposition. (F_B(t), t ≥ 0) is a self-similar fragmentation with index α = −1/2.

Notice α < 0, so fragments keep on collapsing faster and faster, implying some loss of mass (indeed extinction in finite time).
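One way to sample F_B numerically (a discretization we introduce here, not part of the talk) is to approximate the normalized excursion by the Vervaat transform of a discrete Brownian bridge, then rank the lengths of the interval components above level t. The step count n and the helper names are our own.

```python
import numpy as np

def brownian_excursion(n, rng):
    """Approximate (twice) a normalized Brownian excursion on [0, 1] by the
    Vervaat transform of a discrete Brownian bridge with n steps."""
    steps = rng.normal(0.0, 1.0 / np.sqrt(n), size=n)
    w = np.concatenate([[0.0], np.cumsum(steps)])
    bridge = w - np.linspace(0.0, 1.0, n + 1) * w[-1]   # B(s) - s*B(1)
    k = np.argmin(bridge)                               # Vervaat: cut at the minimum
    exc = np.concatenate([bridge[k:], bridge[1:k + 1]]) - bridge[k]
    return 2.0 * exc                                    # the talk uses twice e

def brownian_fragmentation(exc, t):
    """Ranked lengths of the interval components of {s : e(s) > t}."""
    n = len(exc) - 1
    lengths, run = [], 0
    for above in exc > t:
        if above:
            run += 1
        elif run:
            lengths.append(run / n)
            run = 0
    if run:
        lengths.append(run / n)
    return sorted(lengths, reverse=True)
```

For fixed t, the returned sequence is a Monte Carlo sample of F_B(t); summing it over increasing t exhibits the loss of mass.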
[Figure: the excursion e over [0, 1], with the levels t = 1/2 and t = 1 marked; the fragments at time t are the interval components of {e > t}. One fragment (mass m) is shown breaking into two fragments with sum of masses m.]
Continuum Random Trees

Definition. An ℝ-tree is a metric space (T, d) such that for all v, w ∈ T:

1. there exists a unique geodesic [[v, w]] going from v to w, i.e. there exists a unique isometry φ_{v,w} : [0, d(v, w)] → T with φ_{v,w}(0) = v and φ_{v,w}(d(v, w)) = w, and its image is called [[v, w]];
2. any simple path going from v to w is [[v, w]] (tree property).
Example: the Brownian tree

[Figure: successive discrete trees spanned by the root and uniform points U_1, ..., U_5 of [0, 1], built from the excursion.] And so on: the limiting Brownian CRT is T_B = [0, 1]/{D = 0}, with

$$D(s, s') = e(s) + e(s') - 2 \inf_{s \wedge s' \le u \le s \vee s'} e(u).$$
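The pseudo-distance D is straightforward to evaluate on a grid discretization of [0, 1]; a minimal sketch (the grid rounding and the tent-shaped test excursion in the usage below are our own illustrative choices):

```python
import numpy as np

def tree_distance(exc, s, sp):
    """Pseudo-distance D(s, s') = e(s) + e(s') - 2 * min of e between s and s',
    for an excursion sampled on a regular grid of [0, 1]."""
    n = len(exc) - 1
    i, j = sorted((int(round(s * n)), int(round(sp * n))))
    return exc[i] + exc[j] - 2.0 * exc[i:j + 1].min()
```

Quotienting [0, 1] by {D = 0} then identifies s and s' exactly when the excursion never dips below e(s) = e(s') between them, which glues the interval into a tree.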
Therefore, the Brownian fragmentation F_B can alternatively be obtained from the Brownian CRT T_B by recording, in decreasing order, the sizes of the tree components of the forest {v ∈ T_B : d(root, v) > t}, where the sizes are the μ-masses of these components, μ being the image on the tree of the Lebesgue measure on [0, 1].

Is ANY self-similar fragmentation F which vanishes in finite time (i.e. α < 0) of this form, for some kind of measured ℝ-tree T_F? Answer: yes.
Fragmentation trees

Theorem 1 (Haas & M., 2004). Let F be a self-similar fragmentation with no erosion (every process F_i(t), t ≥ 0, is pure-jump: the fragments cannot melt continuously) and no sudden loss of mass (meaning the function M(t) = Σ_{i=1}^∞ F_i(t) is continuous). Then there exists a random measured ℝ-tree (a CRT) (T_F, μ) such that F has the same law as the process obtained, as in the above construction, from the tree T_F.

To see this: discretize space.
Fragmentations and partitions of ℕ

Suppose the massive object is given with a probability mass measure μ. One can sample n (resp. an infinite number of) points independently according to μ and create an exchangeable partition of {1, 2, ..., n} (resp. ℕ); for instance {1, 2, 3, 4, 5, 6, 7} → {1, 3}, {2, 5, 7}, {4}, {6}. This gives a partition-valued fragmentation (Π(t), t ≥ 0).
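The sampling step is Kingman's paintbox construction. A sketch, assuming a mass sequence s with sum at most 1, where the leftover mass (dust) produces singleton blocks:

```python
import random

def paintbox_partition(s, n, rng=None):
    """Sample an exchangeable partition of {1, ..., n} from a mass sequence
    s = (s_1, s_2, ...) with sum(s) <= 1: point j falls into fragment i with
    probability s_i, and is a singleton (dust) with probability 1 - sum(s)."""
    rng = rng or random.Random(0)
    blocks, singletons = {}, []
    for j in range(1, n + 1):
        u, acc, hit = rng.random(), 0.0, None
        for i, si in enumerate(s):
            acc += si
            if u < acc:
                hit = i
                break
        if hit is None:
            singletons.append({j})              # dust point: its own block
        else:
            blocks.setdefault(hit, set()).add(j)
    return sorted(blocks.values(), key=min) + singletons
```

Running this with the fragment masses at each time t (points kept consistent across times) yields the partition-valued fragmentation Π(t) above.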
A tree from a partition

For i ∈ ℕ let D_i = inf{t ≥ 0 : {i} ∈ Π(t)} (death time, height of leaf i); for finite B ⊂ ℕ let D_B = inf{t ≥ 0 : B intersects at least two blocks of Π(t)}, the first splitting time of B. Then one may construct (Aldous 1993, general CRT theory) a random measured ℝ-tree (T_F, μ) so that the subtree of T_F spanned by n uniform μ-picked leaves has the same law as the ℝ-tree determined by the D_B, B ⊆ {1, ..., n}: the branchpoint of the leaves of B has height D_B (the height of leaf i if B = {i}). Moreover, μ is the weak limit of the empirical measure on the first n leaves.
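These finite-dimensional marginals can be sketched concretely (toy numbers, our own illustration): leaf i sits at height D_i, the branchpoint of leaves i and j sits at height D_{{i,j}}, so the leaf-to-leaf distance is d(i, j) = D_i + D_j − 2 D_{{i,j}}.

```python
from itertools import combinations

def leaf_distances(D):
    """Pairwise distances between leaves of the R-tree determined by the
    splitting times: D[(i,)] is the height of leaf i, D[(i, j)] the height
    of the branchpoint of leaves i < j, and
    d(i, j) = D[(i,)] + D[(j,)] - 2 * D[(i, j)]."""
    leaves = sorted(k[0] for k in D if len(k) == 1)
    return {(i, j): D[(i,)] + D[(j,)] - 2.0 * D[(i, j)]
            for i, j in combinations(leaves, 2)}
```

For instance, if {1, 2, 3} first splits into {1, 2}, {3} at time 1 and {1, 2} splits at time 2, with death times 3, 2.5, 1.8, the three pairwise distances follow immediately.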
Question

Therefore: a self-similar fragmentation with α < 0 "is" a random measured ℝ-tree, and one wants to look at the properties of this random metric space. The Hausdorff dimension is a natural quantity to consider. For the Brownian tree the dimension is 2. For the Duquesne-Le Gall β-stable tree (whose associated fragmentation process is studied in M., 2003), the dimension is β/(β − 1). We obtain both as corollaries of:

Theorem 2 (Haas & M., 2004). Under mild further hypotheses on F (which F_B satisfies), the Hausdorff dimension of T_F is |α|^{-1} ∨ 1, a.s.

Notice that α ≤ −1 is thus qualitatively different from −1 < α < 0.
Idea of proof

Upper bound: based on a known exponential control of the tail of the death time of a marked fragment (Haas, 2002).

Lower bound: one natural method is to apply Frostman's energy method with the most natural measure on T_F, the mass measure μ. One would like to show

$$\mathbb{E}\left[ \int_{T_F} \int_{T_F} \frac{\mu(dx)\,\mu(dy)}{d(x, y)^{\gamma}} \right] = \mathbb{E}\left[ \frac{1}{d(L_1, L_2)^{\gamma}} \right] < \infty \quad \text{for all } \gamma < |\alpha|^{-1} \vee 1,$$

where L_1, L_2 are independent μ-distributed leaves.
d(L_1, L_2) is manageable: find the first time D_{\{1,2\}} when 1 and 2 separate, and evaluate the sizes λ_1, λ_2 of the fragments containing 1 and 2 at time D_{\{1,2\}}. Then, by the fragmentation property, d(L_1, L_2) has the same distribution as

$$\lambda_1^{-\alpha} \tilde{D} + \lambda_2^{-\alpha} \tilde{D}',$$

where D̃, D̃' are independent, independent of (λ_1, λ_2), each with the law of D_{\{1\}}. Therefore

$$\mathbb{E}\big[d(L_1, L_2)^{-\gamma}\big] \le 2\,\mathbb{E}\big[\tilde{D}^{-\gamma}\big]\, \mathbb{E}\big[\lambda_1^{\alpha\gamma};\ \lambda_1 < \lambda_2\big].$$

Rewrite (λ_1, λ_2) = λ(D_{\{1,2\}}) (l_1, l_2), where λ(t) is the size of the fragment containing both L_1 and L_2 before separation (t < D_{\{1,2\}}).
Crucial tool: the dislocation measure of F, a σ-finite measure ν on S such that ∫_S (1 − s_1) ν(ds) < ∞. Informally, a fragment with mass x breaks into fragments xs (with s ∈ S) at rate x^α ν(ds). The knowledge of α and ν determines the law of an (erosionless) F (Bertoin, 2002): ν is the jump measure of F. If F has no sudden loss of mass then Σ_i s_i = 1, ν-a.e.
Were it not for the x^α term (i.e. when α = 0), the size F_*(t) of the fragment containing L_1 would be exp(−ξ(t)), t ≥ 0, where ξ is a subordinator with the size-biased Lévy measure

$$m(dx) = \sum_{i=1}^{\infty} s_i\, \nu(-\log s_i \in dx).$$

One gets F_* by a Lamperti time change (Bertoin, 2002):

$$F_*(t) \overset{d}{=} \exp\big(-\xi(\rho(t))\big),\ t \ge 0, \qquad \rho(t) = \inf\Big\{ s \ge 0 : \int_0^s \exp(\alpha\, \xi(u))\, du > t \Big\};$$

notice that ρ explodes in finite time (α < 0).
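The Lamperti time change is easy to simulate when ξ is compound Poisson. Below is a sketch with rate-1 jumps of size −log U, U uniform — an illustrative choice of ours, not a dislocation measure from the talk; it shows the mass exp(−ξ(ρ(t))) and the explosion of ρ in finite time when α < 0.

```python
import math
import random

def tagged_fragment(alpha, t, rng=None):
    """Mass of the tagged fragment at real time t via the Lamperti time change
    F_*(t) = exp(-xi(rho(t))), for a compound Poisson subordinator xi with
    rate-1 jumps of size -log(U).  Returns 0.0 once rho has exploded."""
    rng = rng or random.Random(0)
    xi, real = 0.0, 0.0
    while xi < 60.0:                    # exp(-60) is numerically dead
        du = rng.expovariate(1.0)       # intrinsic time to the next jump
        dt = math.exp(alpha * xi) * du  # real time elapsed (xi constant between jumps)
        if real + dt > t:
            return math.exp(-xi)        # no jump before time t: current mass
        real += dt
        xi += -math.log(rng.random())   # jump of the subordinator
    return 0.0                          # rho(t) exploded: the fragment has died
```

With α < 0 the real-time increments exp(αξ)du shrink geometrically, so the total real lifetime is finite almost surely, matching the extinction discussed earlier.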
Similarly, λ(t) is a time-changed version of an exponential of a subordinator with Lévy measure

$$\sum_{i} s_i^2\, \nu(-\log s_i \in dx),$$

killed at an exponential (rate k) time corresponding to the separation of L_1 and L_2. The relative sizes after separation are then independent of λ(D_{\{1,2\}}) and distributed as

$$\mathbb{E}[f(l_1, l_2)] = \frac{1}{k} \int_S \sum_{i \ne j} f(s_i, s_j)\, s_i s_j\, \nu(ds).$$

Then one can compute everything needed in E[d(L_1, L_2)^{-γ}] for γ < |α|^{-1} ∨ 1, but...
We are not done yet!

Indeed, the resulting quantity is not always finite; it is when ν is finite and there exists N ≥ 2 with ν(Σ_{i=1}^N s_i < 1) = 0 (an at most N-ary tree). The idea is then to truncate the tree to make it look like a fragmentation tree with these two properties, and then to make the truncation resemble the initial tree more and more.
Possible developments

- Finer analysis of fragmentation trees: level sets, local times?
- Case α ≥ 0 (ultrametric spaces rather than Brownian-CRT-looking trees)?
- The extension of x^α to x^α f(x), with regularity assumptions on f near 0, is easy.
- Encoding functions and their Hölder properties (partially done in the paper).
Thank you!
More informationAlgebraic and Transcendental Numbers
Pondicherry University July 2000 Algebraic and Transcendental Numbers Stéphane Fischler This text is meant to be an introduction to algebraic and transcendental numbers. For a detailed (though elementary)
More informationGambling Systems and Multiplication-Invariant Measures
Gambling Systems and Multiplication-Invariant Measures by Jeffrey S. Rosenthal* and Peter O. Schwartz** (May 28, 997.. Introduction. This short paper describes a surprising connection between two previously
More informationMetric Spaces. Chapter 7. 7.1. Metrics
Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some
More informationMarshall-Olkin distributions and portfolio credit risk
Marshall-Olkin distributions and portfolio credit risk Moderne Finanzmathematik und ihre Anwendungen für Banken und Versicherungen, Fraunhofer ITWM, Kaiserslautern, in Kooperation mit der TU München und
More information6.2 Permutations continued
6.2 Permutations continued Theorem A permutation on a finite set A is either a cycle or can be expressed as a product (composition of disjoint cycles. Proof is by (strong induction on the number, r, of
More informationThe Goldberg Rao Algorithm for the Maximum Flow Problem
The Goldberg Rao Algorithm for the Maximum Flow Problem COS 528 class notes October 18, 2006 Scribe: Dávid Papp Main idea: use of the blocking flow paradigm to achieve essentially O(min{m 2/3, n 1/2 }
More informationLECTURE 15: AMERICAN OPTIONS
LECTURE 15: AMERICAN OPTIONS 1. Introduction All of the options that we have considered thus far have been of the European variety: exercise is permitted only at the termination of the contract. These
More informationBrownian Motion and Stochastic Flow Systems. J.M Harrison
Brownian Motion and Stochastic Flow Systems 1 J.M Harrison Report written by Siva K. Gorantla I. INTRODUCTION Brownian motion is the seemingly random movement of particles suspended in a fluid or a mathematical
More informationMath 4310 Handout - Quotient Vector Spaces
Math 4310 Handout - Quotient Vector Spaces Dan Collins The textbook defines a subspace of a vector space in Chapter 4, but it avoids ever discussing the notion of a quotient space. This is understandable
More informationLecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs
CSE599s: Extremal Combinatorics November 21, 2011 Lecture 15 An Arithmetic Circuit Lowerbound and Flows in Graphs Lecturer: Anup Rao 1 An Arithmetic Circuit Lower Bound An arithmetic circuit is just like
More informationAn exact formula for default swaptions pricing in the SSRJD stochastic intensity model
An exact formula for default swaptions pricing in the SSRJD stochastic intensity model Naoufel El-Bachir (joint work with D. Brigo, Banca IMI) Radon Institute, Linz May 31, 2007 ICMA Centre, University
More informationInfluences in low-degree polynomials
Influences in low-degree polynomials Artūrs Bačkurs December 12, 2012 1 Introduction In 3] it is conjectured that every bounded real polynomial has a highly influential variable The conjecture is known
More informationNumerical Methods for Option Pricing
Chapter 9 Numerical Methods for Option Pricing Equation (8.26) provides a way to evaluate option prices. For some simple options, such as the European call and put options, one can integrate (8.26) directly
More informationSolutions to Homework 10
Solutions to Homework 1 Section 7., exercise # 1 (b,d): (b) Compute the value of R f dv, where f(x, y) = y/x and R = [1, 3] [, 4]. Solution: Since f is continuous over R, f is integrable over R. Let x
More informatione.g. arrival of a customer to a service station or breakdown of a component in some system.
Poisson process Events occur at random instants of time at an average rate of λ events per second. e.g. arrival of a customer to a service station or breakdown of a component in some system. Let N(t) be
More information1 The Line vs Point Test
6.875 PCP and Hardness of Approximation MIT, Fall 2010 Lecture 5: Low Degree Testing Lecturer: Dana Moshkovitz Scribe: Gregory Minton and Dana Moshkovitz Having seen a probabilistic verifier for linearity
More informationThe Mean Value Theorem
The Mean Value Theorem THEOREM (The Extreme Value Theorem): If f is continuous on a closed interval [a, b], then f attains an absolute maximum value f(c) and an absolute minimum value f(d) at some numbers
More informationFinite dimensional topological vector spaces
Chapter 3 Finite dimensional topological vector spaces 3.1 Finite dimensional Hausdorff t.v.s. Let X be a vector space over the field K of real or complex numbers. We know from linear algebra that the
More informationLecture Notes on Elasticity of Substitution
Lecture Notes on Elasticity of Substitution Ted Bergstrom, UCSB Economics 210A March 3, 2011 Today s featured guest is the elasticity of substitution. Elasticity of a function of a single variable Before
More informationSome stability results of parameter identification in a jump diffusion model
Some stability results of parameter identification in a jump diffusion model D. Düvelmeyer Technische Universität Chemnitz, Fakultät für Mathematik, 09107 Chemnitz, Germany Abstract In this paper we discuss
More informationIntroduction to Topology
Introduction to Topology Tomoo Matsumura November 30, 2010 Contents 1 Topological spaces 3 1.1 Basis of a Topology......................................... 3 1.2 Comparing Topologies.......................................
More informationMINIMIZATION OF ENTROPY FUNCTIONALS UNDER MOMENT CONSTRAINTS. denote the family of probability density functions g on X satisfying
MINIMIZATION OF ENTROPY FUNCTIONALS UNDER MOMENT CONSTRAINTS I. Csiszár (Budapest) Given a σ-finite measure space (X, X, µ) and a d-tuple ϕ = (ϕ 1,..., ϕ d ) of measurable functions on X, for a = (a 1,...,
More informationExponential Distribution
Exponential Distribution Definition: Exponential distribution with parameter λ: { λe λx x 0 f(x) = 0 x < 0 The cdf: F(x) = x Mean E(X) = 1/λ. f(x)dx = Moment generating function: φ(t) = E[e tx ] = { 1
More informationLecture 2 ESTIMATING THE SURVIVAL FUNCTION. One-sample nonparametric methods
Lecture 2 ESTIMATING THE SURVIVAL FUNCTION One-sample nonparametric methods There are commonly three methods for estimating a survivorship function S(t) = P (T > t) without resorting to parametric models:
More informationVector and Matrix Norms
Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty
More informationPrinciple of Data Reduction
Chapter 6 Principle of Data Reduction 6.1 Introduction An experimenter uses the information in a sample X 1,..., X n to make inferences about an unknown parameter θ. If the sample size n is large, then
More informationHydrodynamic Limits of Randomized Load Balancing Networks
Hydrodynamic Limits of Randomized Load Balancing Networks Kavita Ramanan and Mohammadreza Aghajani Brown University Stochastic Networks and Stochastic Geometry a conference in honour of François Baccelli
More informationBANACH AND HILBERT SPACE REVIEW
BANACH AND HILBET SPACE EVIEW CHISTOPHE HEIL These notes will briefly review some basic concepts related to the theory of Banach and Hilbert spaces. We are not trying to give a complete development, but
More informationx a x 2 (1 + x 2 ) n.
Limits and continuity Suppose that we have a function f : R R. Let a R. We say that f(x) tends to the limit l as x tends to a; lim f(x) = l ; x a if, given any real number ɛ > 0, there exists a real number
More informationWHEN DOES A RANDOMLY WEIGHTED SELF NORMALIZED SUM CONVERGE IN DISTRIBUTION?
WHEN DOES A RANDOMLY WEIGHTED SELF NORMALIZED SUM CONVERGE IN DISTRIBUTION? DAVID M MASON 1 Statistics Program, University of Delaware Newark, DE 19717 email: davidm@udeledu JOEL ZINN 2 Department of Mathematics,
More informationMATH10212 Linear Algebra. Systems of Linear Equations. Definition. An n-dimensional vector is a row or a column of n numbers (or letters): a 1.
MATH10212 Linear Algebra Textbook: D. Poole, Linear Algebra: A Modern Introduction. Thompson, 2006. ISBN 0-534-40596-7. Systems of Linear Equations Definition. An n-dimensional vector is a row or a column
More informationChapter 2: Binomial Methods and the Black-Scholes Formula
Chapter 2: Binomial Methods and the Black-Scholes Formula 2.1 Binomial Trees We consider a financial market consisting of a bond B t = B(t), a stock S t = S(t), and a call-option C t = C(t), where the
More informationErrata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page
Errata for ASM Exam C/4 Study Manual (Sixteenth Edition) Sorted by Page 1 Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page Practice exam 1:9, 1:22, 1:29, 9:5, and 10:8
More information6.3 Conditional Probability and Independence
222 CHAPTER 6. PROBABILITY 6.3 Conditional Probability and Independence Conditional Probability Two cubical dice each have a triangle painted on one side, a circle painted on two sides and a square painted
More informationMath 120 Final Exam Practice Problems, Form: A
Math 120 Final Exam Practice Problems, Form: A Name: While every attempt was made to be complete in the types of problems given below, we make no guarantees about the completeness of the problems. Specifically,
More informationMath 151. Rumbos Spring 2014 1. Solutions to Assignment #22
Math 151. Rumbos Spring 2014 1 Solutions to Assignment #22 1. An experiment consists of rolling a die 81 times and computing the average of the numbers on the top face of the die. Estimate the probability
More information1 The EOQ and Extensions
IEOR4000: Production Management Lecture 2 Professor Guillermo Gallego September 9, 2004 Lecture Plan 1. The EOQ and Extensions 2. Multi-Item EOQ Model 1 The EOQ and Extensions This section is devoted to
More informationFor a partition B 1,..., B n, where B i B j = for i. A = (A B 1 ) (A B 2 ),..., (A B n ) and thus. P (A) = P (A B i ) = P (A B i )P (B i )
Probability Review 15.075 Cynthia Rudin A probability space, defined by Kolmogorov (1903-1987) consists of: A set of outcomes S, e.g., for the roll of a die, S = {1, 2, 3, 4, 5, 6}, 1 1 2 1 6 for the roll
More informationON SOME ANALOGUE OF THE GENERALIZED ALLOCATION SCHEME
ON SOME ANALOGUE OF THE GENERALIZED ALLOCATION SCHEME Alexey Chuprunov Kazan State University, Russia István Fazekas University of Debrecen, Hungary 2012 Kolchin s generalized allocation scheme A law of
More informationRandom variables P(X = 3) = P(X = 3) = 1 8, P(X = 1) = P(X = 1) = 3 8.
Random variables Remark on Notations 1. When X is a number chosen uniformly from a data set, What I call P(X = k) is called Freq[k, X] in the courseware. 2. When X is a random variable, what I call F ()
More informationChapter 4 Lecture Notes
Chapter 4 Lecture Notes Random Variables October 27, 2015 1 Section 4.1 Random Variables A random variable is typically a real-valued function defined on the sample space of some experiment. For instance,
More informationE3: PROBABILITY AND STATISTICS lecture notes
E3: PROBABILITY AND STATISTICS lecture notes 2 Contents 1 PROBABILITY THEORY 7 1.1 Experiments and random events............................ 7 1.2 Certain event. Impossible event............................
More informationExploratory Data Analysis
Exploratory Data Analysis Johannes Schauer johannes.schauer@tugraz.at Institute of Statistics Graz University of Technology Steyrergasse 17/IV, 8010 Graz www.statistics.tugraz.at February 12, 2008 Introduction
More informationAn Introduction to Partial Differential Equations
An Introduction to Partial Differential Equations Andrew J. Bernoff LECTURE 2 Cooling of a Hot Bar: The Diffusion Equation 2.1. Outline of Lecture An Introduction to Heat Flow Derivation of the Diffusion
More informationThe Heat Equation. Lectures INF2320 p. 1/88
The Heat Equation Lectures INF232 p. 1/88 Lectures INF232 p. 2/88 The Heat Equation We study the heat equation: u t = u xx for x (,1), t >, (1) u(,t) = u(1,t) = for t >, (2) u(x,) = f(x) for x (,1), (3)
More informationSecond Order Linear Partial Differential Equations. Part I
Second Order Linear Partial Differential Equations Part I Second linear partial differential equations; Separation of Variables; - point boundary value problems; Eigenvalues and Eigenfunctions Introduction
More informationLecture 22: November 10
CS271 Randomness & Computation Fall 2011 Lecture 22: November 10 Lecturer: Alistair Sinclair Based on scribe notes by Rafael Frongillo Disclaimer: These notes have not been subjected to the usual scrutiny
More informationConductance, the Normalized Laplacian, and Cheeger s Inequality
Spectral Graph Theory Lecture 6 Conductance, the Normalized Laplacian, and Cheeger s Inequality Daniel A. Spielman September 21, 2015 Disclaimer These notes are not necessarily an accurate representation
More information4. Expanding dynamical systems
4.1. Metric definition. 4. Expanding dynamical systems Definition 4.1. Let X be a compact metric space. A map f : X X is said to be expanding if there exist ɛ > 0 and L > 1 such that d(f(x), f(y)) Ld(x,
More informationNonparametric adaptive age replacement with a one-cycle criterion
Nonparametric adaptive age replacement with a one-cycle criterion P. Coolen-Schrijner, F.P.A. Coolen Department of Mathematical Sciences University of Durham, Durham, DH1 3LE, UK e-mail: Pauline.Schrijner@durham.ac.uk
More informationA Non-Linear Schema Theorem for Genetic Algorithms
A Non-Linear Schema Theorem for Genetic Algorithms William A Greene Computer Science Department University of New Orleans New Orleans, LA 70148 bill@csunoedu 504-280-6755 Abstract We generalize Holland
More informationCHAPTER 6: Continuous Uniform Distribution: 6.1. Definition: The density function of the continuous random variable X on the interval [A, B] is.
Some Continuous Probability Distributions CHAPTER 6: Continuous Uniform Distribution: 6. Definition: The density function of the continuous random variable X on the interval [A, B] is B A A x B f(x; A,
More informationBinomial lattice model for stock prices
Copyright c 2007 by Karl Sigman Binomial lattice model for stock prices Here we model the price of a stock in discrete time by a Markov chain of the recursive form S n+ S n Y n+, n 0, where the {Y i }
More informationTHE BANACH CONTRACTION PRINCIPLE. Contents
THE BANACH CONTRACTION PRINCIPLE ALEX PONIECKI Abstract. This paper will study contractions of metric spaces. To do this, we will mainly use tools from topology. We will give some examples of contractions,
More informationTOPIC 4: DERIVATIVES
TOPIC 4: DERIVATIVES 1. The derivative of a function. Differentiation rules 1.1. The slope of a curve. The slope of a curve at a point P is a measure of the steepness of the curve. If Q is a point on the
More informationWalrasian Demand. u(x) where B(p, w) = {x R n + : p x w}.
Walrasian Demand Econ 2100 Fall 2015 Lecture 5, September 16 Outline 1 Walrasian Demand 2 Properties of Walrasian Demand 3 An Optimization Recipe 4 First and Second Order Conditions Definition Walrasian
More informationNo: 10 04. Bilkent University. Monotonic Extension. Farhad Husseinov. Discussion Papers. Department of Economics
No: 10 04 Bilkent University Monotonic Extension Farhad Husseinov Discussion Papers Department of Economics The Discussion Papers of the Department of Economics are intended to make the initial results
More information