Large Sample Theory

In statistics, we are interested in the properties of particular random variables (or "estimators"), which are functions of our data. In asymptotic analysis, we focus on describing the properties of estimators when the sample size becomes arbitrarily large. The idea is that, given a reasonably large dataset, the properties of an estimator even when the sample size is finite are similar to the properties of an estimator when the sample size is arbitrarily large.

In these notes we focus on the large sample properties of sample averages formed from i.i.d. data. That is, assume that $X_i \sim$ i.i.d. $F$, for $i = 1, \dots, n, \dots$. Assume $EX_i = \mu$, for all $i$. The sample average after $n$ draws is $\bar{X}_n \equiv \frac{1}{n}\sum_i X_i$. We focus on two important sets of large sample results:

(1) Law of large numbers: $\bar{X}_n \overset{p}{\to} EX = \mu$.

(2) Central limit theorem: $\sqrt{n}(\bar{X}_n - EX) \overset{d}{\to} N(0, \sigma^2)$. That is, $\sqrt{n}$ times a sample average looks like (in a precise sense to be defined later) a normal random variable as $n$ gets large.

An important endeavor of asymptotic statistics is to show (1) and (2) under various assumptions on the data sampling process.

Consider a sequence of random variables $Z_1, Z_2, \dots$. Convergence in probability: $Z_n \overset{p}{\to} Z$ iff for all $\epsilon > 0$, $\lim_n Prob(|Z_n - Z| < \epsilon) = 1$. More formally: for all $\epsilon > 0$ and $\delta > 0$, there exists $n_0(\epsilon, \delta)$ such that for all $n > n_0(\epsilon, \delta)$: $Prob(|Z_n - Z| < \epsilon) > 1 - \delta$.

Note that the limiting variable $Z$ can also be a random variable. Furthermore, for convergence in probability, the random variables $Z_1, Z_2, \dots$ and $Z$ should be defined on the same probability space: if this common sample space is $\Omega$ with elements $\omega$, then the statement of convergence in probability is that $Prob(\omega \in \Omega : |Z_n(\omega) - Z(\omega)| < \epsilon) > 1 - \delta$.

Corresponding to this convergence concept, we have the Weak Law of Large Numbers (WLLN), which is the result that $\bar{X}_n \overset{p}{\to} \mu$.
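A quick simulation makes the WLLN concrete. The sketch below (plain Python with NumPy; the $U[0,1]$ distribution, the sample sizes, and the tolerance $\epsilon$ are my own illustrative choices, not part of the notes) estimates $P(|\bar{X}_n - \mu| < \epsilon)$ for increasing $n$ and shows it approaching 1.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, eps, reps = 0.5, 0.05, 1000   # true mean of U[0,1], tolerance, Monte Carlo replications

for n in [10, 100, 1000, 10000]:
    # draw `reps` independent samples of size n and form the sample averages
    xbar = rng.uniform(0, 1, size=(reps, n)).mean(axis=1)
    # Monte Carlo estimate of P(|Xbar_n - mu| < eps)
    prob = np.mean(np.abs(xbar - mu) < eps)
    print(f"n = {n:6d}   P(|Xbar_n - mu| < {eps}) ≈ {prob:.3f}")
```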

Earlier, we had used Chebyshev's inequality to prove a version of the WLLN, under the assumption that the $X_i$ are i.i.d. with mean $\mu$ and variance $\sigma^2$. Khinchine's law of large numbers only requires that the mean exists (i.e., is finite), but does not require existence of variances.

Recall Markov's inequality: for positive random variables $Z$, and $p > 0$, $\epsilon > 0$, we have $P(Z > \epsilon) \le E(Z^p)/\epsilon^p$. Take $Z = |X_n - X|$. Then if we have $E[|X_n - X|^p] \to 0$, that is, $X_n$ converges in $p$-th mean to $X$, then by Markov's inequality we also have $P(|X_n - X| > \epsilon) \to 0$. That is, convergence in $p$-th mean implies convergence in probability.

Some definitions and results:

Define: the probability limit of a random sequence $Z_n$, denoted $\mathrm{plim}\, Z_n$, is a non-random quantity $\alpha$ such that $Z_n \overset{p}{\to} \alpha$.

Stochastic orders of magnitude:

big-$O_p$: $Z_n = O_p(n^{\lambda})$ iff there exists an $O(1)$ nonstochastic sequence $a_n$ such that $Z_n/n^{\lambda} - a_n \overset{p}{\to} 0$. $Z_n = O_p(1)$ iff for every $\delta$, there exists a finite $\Delta(\delta)$ and $n^*(\delta)$ such that $Prob(|Z_n| > \Delta) < \delta$ for all $n \ge n^*(\delta)$.

little-$o_p$: $Z_n = o_p(n^{\lambda})$ iff $Z_n/n^{\lambda} \overset{p}{\to} 0$. $Z_n = o_p(1)$ iff $Z_n \overset{p}{\to} 0$.

Plim operator theorem: Let $Z_n$ be a $k$-dimensional random vector, and $g(\cdot)$ be a function which is continuous at a constant $k$-vector point $\alpha$. Then $Z_n \overset{p}{\to} \alpha \Rightarrow g(Z_n) \overset{p}{\to} g(\alpha)$. In other words: $g(\mathrm{plim}\, Z_n) = \mathrm{plim}\, g(Z_n)$. Proof: See Serfling, Approximation Theorems of Mathematical Statistics, p. 24.

Don't confuse this: $\mathrm{plim}\, \frac{1}{n}\sum_i g(Z_i) = Eg(Z_i)$ follows from the LLN, but this is distinct from $\mathrm{plim}\, g(\frac{1}{n}\sum_i Z_i) = g(EZ_i)$, which uses the plim operator result. The two are generally not the same; if $g(\cdot)$ is convex, then $\mathrm{plim}\, \frac{1}{n}\sum_i g(Z_i) \ge \mathrm{plim}\, g(\frac{1}{n}\sum_i Z_i)$.
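The distinction between $\mathrm{plim}\, \frac{1}{n}\sum_i g(Z_i)$ and $\mathrm{plim}\, g(\frac{1}{n}\sum_i Z_i)$ is easy to see numerically. Here is a small sketch of my own (the convex choice $g(z) = z^2$ and exponential draws are illustrative assumptions, not from the notes): the average of $g(Z_i)$ settles near $E[Z^2] = 2$, while $g$ of the sample mean settles near $(EZ)^2 = 1$.

```python
import numpy as np

rng = np.random.default_rng(1)
g = lambda z: z ** 2          # a convex function
# Z_i ~ Exponential(1): EZ = 1, E[Z^2] = 2, so the two probability limits differ (2 vs 1)

for n in [100, 10_000, 1_000_000]:
    z = rng.exponential(1.0, size=n)
    avg_of_g = g(z).mean()    # (1/n) sum_i g(Z_i)   ->  E g(Z) = 2
    g_of_avg = g(z.mean())    # g( (1/n) sum_i Z_i ) ->  g(EZ) = 1
    print(f"n = {n:8d}   mean of g(Z_i) = {avg_of_g:.3f}   g(mean of Z_i) = {g_of_avg:.3f}")
```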

A useful intermediate result ("squeezing"): if $X_n \overset{p}{\to} \alpha$ (a constant) and $Z_n$ lies between $X_n$ and $\alpha$ with probability 1, then $Z_n \overset{p}{\to} \alpha$. (Quick proof: for all $\epsilon > 0$, $|X_n - \alpha| < \epsilon \Rightarrow |Z_n - \alpha| < \epsilon$. Hence $Prob(|Z_n - \alpha| < \epsilon) \ge Prob(|X_n - \alpha| < \epsilon) \to 1$.)

Convergence almost surely: $Z_n \overset{a.s.}{\to} Z$ iff $Prob(\omega : \lim_n Z_n(\omega) = Z(\omega)) = 1$.

As with convergence in probability, almost sure convergence makes sense when the random variables $Z_1, \dots, Z_n, \dots$ and $Z$ are all defined on the same sample space. Hence, for each element in the sample space $\omega \in \Omega$, we can associate the sequence $Z_1(\omega), \dots, Z_n(\omega), \dots$ as well as the limit point $Z(\omega)$.

To understand the probability statement more clearly, assume that all these RVs are defined on the probability space $(\Omega, \mathcal{B}(\Omega), P)$. Define some set-theoretic concepts. Consider a sequence of sets $S_1, \dots, S_n, \dots$, all $\in \mathcal{B}(\Omega)$. Unless this sequence is monotonic, it's difficult to talk about a limit of this sequence. So we define the liminf and limsup of this set sequence.

$$\liminf_n S_n \equiv \lim_n \bigcap_{m=n}^{\infty} S_m = \bigcup_{n=1}^{\infty} \bigcap_{m=n}^{\infty} S_m = \{\omega \in \Omega : \omega \in S_n, \ \forall n \ge n_0(\omega)\}.$$

Note that the sequence of sets $\bigcap_{m=n}^{\infty} S_m$, for $n = 1, 2, 3, \dots$, is a non-decreasing sequence of sets. By taking the union, the liminf is thus the limit of this monotone sequence of sets. That is, for all $\omega \in \liminf_n S_n$, there exists some number $n_0(\omega)$ such that $\omega \in S_n$ for all $n \ge n_0(\omega)$. Hence we can say that $\liminf_n S_n$ is the set of outcomes which occur "eventually".

$$\limsup_n S_n \equiv \lim_n \bigcup_{m=n}^{\infty} S_m = \bigcap_{n=1}^{\infty} \bigcup_{m=n}^{\infty} S_m = \{\omega \in \Omega : \text{for every } m, \ \omega \in S_{n_0(\omega,m)} \text{ for some } n_0(\omega, m) \ge m\}.$$

Note that the sequence of sets $\bigcup_{m=n}^{\infty} S_m$, for $n = 1, 2, 3, \dots$, is a non-increasing sequence of sets. Hence, the limsup is the limit of this monotone sequence of sets. The $\limsup_n S_n$ is the set of outcomes $\omega$ which occur within every tail of the set sequence $S_n$. Hence we say that an outcome $\omega \in \limsup_n S_n$ occurs "infinitely often".^1

Note that $\liminf_n S_n \subseteq \limsup_n S_n$. Hence

$$P(\limsup_n S_n) \ge P(\liminf_n S_n),$$

and

$$P(\limsup_n S_n) = 0 \Rightarrow P(\liminf_n S_n) = 0, \qquad P(\liminf_n S_n) = 1 \Rightarrow P(\limsup_n S_n) = 1.$$

Borel-Cantelli Lemma: if $\sum_{i=1}^{\infty} P(S_i) < \infty$ then $P(\limsup_n S_n) = 0$.

Proof: we have

$$P(\limsup_n S_n) = P\Big(\lim_n \Big\{\bigcup_{m=n}^{\infty} S_m\Big\}\Big) = \lim_n P\Big(\bigcup_{m=n}^{\infty} S_m\Big) \le \lim_n \sum_{m \ge n} P(S_m),$$

which equals zero (by the assumption that $\sum_{i=1}^{\infty} P(S_i) < \infty$). The interchange of the limit and probability operations in the second equality is an application of the Monotone Convergence Theorem, which holds only because the sequence $\bigcup_{m=n}^{\infty} S_m$ is monotone.

To apply this to almost-sure convergence, consider the sequence of sets $S_n \equiv \{\omega : |Z_n(\omega) - Z(\omega)| > \epsilon\}$, for $\epsilon > 0$. Almost sure convergence is the statement that $P(\limsup_n S_n) = 0$:

Then $\bigcup_{m=n}^{\infty} S_m$ denotes the $\omega$'s such that the sequence $|Z_m(\omega) - Z(\omega)|$ exceeds $\epsilon$ somewhere in the tail beyond $n$.

^1 However, note that all outcomes $\omega \in \liminf_n S_n$ also occur an infinite number of times along the infinite sequence $S_n$.

Then $\lim_n \bigcup_{m=n}^{\infty} S_m$ denotes all $\omega$'s such that the sequence $|Z_m(\omega) - Z(\omega)|$ exceeds $\epsilon$ in every tail: those $\omega$ for which $Z_n(\omega)$ escapes the $\epsilon$-ball around $Z(\omega)$ infinitely often. For these $\omega$'s, $Z_n(\omega) \not\to Z(\omega)$. For almost-sure convergence, you require the probability of this set to equal zero, i.e. $Pr(\lim_n \bigcup_{m=n}^{\infty} S_m) = 0$.

Corresponding to this convergence concept, we have the Strong Law of Large Numbers (SLLN), which is the result that $\bar{X}_n \overset{a.s.}{\to} \mu$. Consider that the sample averages are random variables $\bar{X}_1(\omega), \bar{X}_2(\omega), \dots$ defined on the same probability space, say $(\Omega, \mathcal{B}, P)$, where each $\omega$ indexes a sequence $X_1(\omega), X_2(\omega), X_3(\omega), \dots$. Consider the set sequence $S_n = \{\omega : |\bar{X}_n(\omega) - \mu| > \epsilon\}$. The SLLN is the statement that $P(\limsup_n S_n) = 0$.

Proof (sketch; Davidson, p. 296): Take $\mu = 0$ without loss of generality. From the above discussion, we see that the two statements $\bar{X}_n \overset{a.s.}{\to} 0$ and $P(|\bar{X}_n| > \epsilon, \text{ i.o.}) = 0$ for any $\epsilon > 0$ are equivalent. Therefore, one way of proving a SLLN is to verify that $\sum_{n=1}^{\infty} P(|\bar{X}_n| > \epsilon) < \infty$ for all $\epsilon > 0$, and then apply the Borel-Cantelli Lemma.

As a starting point, we see that Chebyshev's inequality is not good enough by itself; it tells us $P(|\bar{X}_n| > \epsilon) \le \frac{\sigma^2}{n\epsilon^2}$, but $\sum_{n=1}^{\infty} \frac{1}{n\epsilon^2} = \infty$.

However, consider the subsequence with indices $n^2$: $\{1, 4, 9, 16, \dots\}$. Again using Chebyshev's inequality, we have^2

$$P(|\bar{X}_{n^2}| > \epsilon) \le \frac{\sigma^2}{n^2\epsilon^2}, \quad \text{and} \quad \sum_{n=1}^{\infty} \frac{1}{n^2\epsilon^2} = \frac{1}{\epsilon^2}\Big(\frac{\pi^2}{6}\Big) \approx \frac{1.64}{\epsilon^2} < \infty.$$

Hence, by the BC Lemma, we have that $\bar{X}_{n^2} \overset{a.s.}{\to} 0$.

Next, examine the omitted terms from the sum $\sum_{n=1}^{\infty} P(|\bar{X}_n| > \epsilon)$. Define $D_{n^2} = \max_{n^2 \le k < (n+1)^2} |\bar{X}_k - \bar{X}_{n^2}|$. Note that:

$$\bar{X}_k - \bar{X}_{n^2} = \Big(\frac{n^2}{k} - 1\Big)\bar{X}_{n^2} + \frac{1}{k}\sum_{t=n^2+1}^{k} X_t$$

^2 Another of Euler's greatest hits: $\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$ (the Riemann zeta function evaluated at 2).

and, by independence of the $X_i$, the two terms on the RHS are uncorrelated. Hence

$$Var(\bar{X}_k - \bar{X}_{n^2}) = \Big(1 - \frac{n^2}{k}\Big)^2 \frac{\sigma^2}{n^2} + (k - n^2)\frac{\sigma^2}{k^2} = \sigma^2\Big(\frac{1}{n^2} - \frac{1}{k}\Big).$$

By Chebyshev's inequality, then, we have

$$P(D_{n^2} > \epsilon) \le \frac{\sigma^2}{\epsilon^2}\Big(\frac{1}{n^2} - \frac{1}{(n+1)^2}\Big),$$

and since $\sum_{n=1}^{\infty}\big(\frac{1}{n^2} - \frac{1}{(n+1)^2}\big)$ telescopes to 1, the sum $\sum_n P(D_{n^2} > \epsilon)$ is finite, implying, using the BC Lemma, that $D_{n^2} \overset{a.s.}{\to} 0$.

Now, consider, for $n^2 \le l < (n+1)^2$,

$$|\bar{X}_l| = |\bar{X}_l - \bar{X}_{n^2} + \bar{X}_{n^2}| \le |\bar{X}_l - \bar{X}_{n^2}| + |\bar{X}_{n^2}| \le D_{n^2} + |\bar{X}_{n^2}|.$$

By the above discussion, the RHS converges a.s. to 0; that is, almost surely, for every $\epsilon > 0$, there exists $N(\epsilon)$ such that the RHS is $< \epsilon$ for $n^2 > N(\epsilon)$. Consequently, for $l > N(\epsilon)$, $|\bar{X}_l| < \epsilon$ almost surely. That is, $\bar{X}_l \overset{a.s.}{\to} 0$.

Example: $X_i \sim$ i.i.d. $U[0, 1]$. Show that $X_{(1:n)} \equiv \min_{i=1,\dots,n} X_i \overset{a.s.}{\to} 0$. Take $S_n \equiv \{X_{(1:n)} > \epsilon\}$. For all $\epsilon > 0$,

$$P(|X_{(1:n)}| > \epsilon) = P(X_{(1:n)} > \epsilon) = P(X_i > \epsilon, \ i = 1, \dots, n) = (1 - \epsilon)^n.$$

Hence, $\sum_{n=1}^{\infty}(1 - \epsilon)^n = \frac{1-\epsilon}{\epsilon} < \infty$. So the conclusion follows by the BC Lemma.
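To see this example numerically, here is a small sketch of my own (the sample sizes, tolerance, and replication count are arbitrary illustrative choices): it compares the Monte Carlo frequency of $\{X_{(1:n)} > \epsilon\}$ with the exact value $(1-\epsilon)^n$, which is summable in $n$.

```python
import numpy as np

rng = np.random.default_rng(2)
eps, reps = 0.01, 5000

for n in [50, 200, 800]:
    # minimum of n i.i.d. U[0,1] draws, replicated `reps` times
    min_x = rng.uniform(0, 1, size=(reps, n)).min(axis=1)
    mc = np.mean(min_x > eps)          # Monte Carlo estimate of P(X_(1:n) > eps)
    exact = (1 - eps) ** n             # exact probability from the notes
    print(f"n = {n:4d}   MC P(min > {eps}) = {mc:.3f}   exact (1-eps)^n = {exact:.3f}")
```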

Theorem: $Z_n \overset{a.s.}{\to} Z \Rightarrow Z_n \overset{p}{\to} Z$.

Proof:

$$0 = P\Big(\lim_n \bigcup_{m \ge n}\{\omega : |Z_m(\omega) - Z(\omega)| > \epsilon\}\Big) = \lim_n P\Big(\bigcup_{m \ge n}\{\omega : |Z_m(\omega) - Z(\omega)| > \epsilon\}\Big) \ge \lim_n P(\{\omega : |Z_n(\omega) - Z(\omega)| > \epsilon\}).$$

Convergence in Distribution: $Z_n \overset{d}{\to} Z$. A sequence of real-valued random variables $Z_1, Z_2, Z_3, \dots$ converges in distribution to a random variable $Z$ if

$$\lim_n F_{Z_n}(z) = F_Z(z) \quad \forall z \text{ s.t. } F_Z(z) \text{ is continuous.} \qquad (1)$$

This is a statement about the CDFs of the random variables $Z$ and $Z_1, Z_2, \dots$. These random variables do not need to be defined on the same probability space. $F_Z$ is also called the limiting distribution of the random sequence.

Alternative definitions of convergence in distribution ("Portmanteau theorem"):

Letting $F([a, b]) \equiv F(b) - F(a)$ for $a < b$, convergence (1) is equivalent to $F_{Z_n}([a, b]) \to F_Z([a, b])$ for all $[a, b]$ s.t. $F_Z$ is continuous at $a$ and $b$.

For all bounded, continuous functions $g(\cdot)$,

$$\int g(z)\, dF_{Z_n}(z) \to \int g(z)\, dF_Z(z), \quad \text{i.e., } Eg(Z_n) \to Eg(Z). \qquad (2)$$

This definition of distributional convergence is more useful in advanced settings, because it is extendable to settings where $Z_n$ and $Z$ are general random elements taking values in a metric space.

Levy's continuity theorem: A sequence $\{X_n\}$ of random variables converges in distribution to a random variable $X$ if and only if the sequence of characteristic functions $\{\phi_{X_n}(t)\}$ converges pointwise (in $t$) to a function $\phi_X(t)$ which is continuous at the origin. Then $\phi_X$ is the characteristic function of $X$.

Hence, this ties together convergence of characteristic functions and convergence in distribution. This theorem will be used to prove the CLT later. For proofs of most results here, see Serfling, Approximation Theorems of Mathematical Statistics, ch. 1.

Distributional convergence, defined above, is called "weak convergence". This is because there are stronger notions of distributional convergence. One such notion is

$$\sup_{A \in \mathcal{B}(\mathbb{R})} |P_{Z_n}(A) - P_Z(A)| \to 0,$$

which is convergence in total variation norm.

Example: Multinomial distribution on $[0, 1]$. $X_n \in \{\frac{1}{n}, \frac{2}{n}, \frac{3}{n}, \dots, \frac{n}{n} = 1\}$, each with probability $\frac{1}{n}$. Then $X_n \overset{d}{\to} U[0,1]$, but consider $A = \mathbb{Q} \cap [0,1]$ (the set of rational numbers in $[0, 1]$): $P_{X_n}(A) = 1$ for every $n$ while $P_{U[0,1]}(A) = 0$, so $X_n$ does not converge in total variation.

Some definitions and results:

Slutsky Theorem: If $Z_n \overset{d}{\to} Z$, and $Y_n \overset{p}{\to} \alpha$ (a constant), then (a) $Y_n Z_n \overset{d}{\to} \alpha Z$; (b) $Z_n + Y_n \overset{d}{\to} Z + \alpha$.

Theorem *: (a) $Z_n \overset{p}{\to} Z \Rightarrow Z_n \overset{d}{\to} Z$; (b) $Z_n \overset{d}{\to} Z \Rightarrow Z_n \overset{p}{\to} Z$ if $Z$ is a constant.

Note: convergence in probability implies that (roughly speaking) the random variables $Z_n$ (for $n$ large enough) and $Z$ frequently have the same numerical value. Convergence in distribution need not imply this, only that the CDFs of $Z_n$ and $Z$ are similar.

$Z_n \overset{d}{\to} Z \Rightarrow Z_n = O_p(1)$. Use $\Delta = \max(|F_Z^{-1}(1 - \epsilon)|, |F_Z^{-1}(\epsilon)|)$ in the definition of $O_p(1)$.

Note that the LLN tells us that $\bar{X}_n \overset{p}{\to} \mu$, which implies (trivially) that $\bar{X}_n \overset{d}{\to} \mu$, a degenerate limiting distribution.
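A quick numerical check of Slutsky's theorem, as a sketch of my own (the exponential data and the particular sequences are illustrative assumptions): take $Z_n$ a standardized sample mean, which converges in distribution to $N(0,1)$, and $Y_n = \bar{X}_n$, which converges in probability to $\mu = 2$; then $Y_n Z_n$ should behave like $2 \cdot N(0,1)$ in large samples.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 1000, 10000
mu, sigma = 2.0, 2.0                      # Exponential(scale=2) has mean 2 and sd 2

x = rng.exponential(scale=2.0, size=(reps, n))
xbar = x.mean(axis=1)
z_n = np.sqrt(n) * (xbar - mu) / sigma    # Z_n -> N(0,1) in distribution (CLT)
y_n = xbar                                # Y_n -> 2 in probability (WLLN)

prod = y_n * z_n                          # Slutsky: Y_n * Z_n -> 2 * N(0,1) = N(0,4)
print("mean of Y_n Z_n ≈", round(prod.mean(), 3), " (should be near 0)")
print("sd   of Y_n Z_n ≈", round(prod.std(), 3), " (should be near 2)")
```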

Such a degenerate limiting distribution is not very useful for our purposes, because we are interested in knowing (say) how far $\bar{X}_n$ is from $\mu$, which is unknown. How do we rescale $\bar{X}_n$ so that it has a non-degenerate limiting distribution?

Central Limit Theorem (Lindeberg-Levy): Let $X_i$ be i.i.d. with mean $\mu$ and variance $\sigma^2$. Then

$$\frac{\sqrt{n}(\bar{X}_n - \mu)}{\sigma} \overset{d}{\to} N(0, 1).$$

$\sqrt{n}$ is also called the rate of convergence of the sequence $\bar{X}_n$. By definition, the rate of convergence is the polynomial $n^{\lambda}$ for which $n^{\lambda}(\bar{X}_n - \mu)$ converges in distribution to a nondegenerate distribution. The rate of convergence $\sqrt{n}$ makes sense: if you blow up $\bar{X}_n - \mu$ by a constant (no matter how big), you still get a degenerate limiting distribution. If you blow up by $n$, then the sequence $n(\bar{X}_n - \mu) = \sum_{i=1}^n (X_i - \mu)$ will diverge.

Proof via characteristic functions.

The Lindeberg-Levy CLT (LLCLT) assumes the observations are independent as well as identically distributed. We extend this to the independent, non-identically distributed setting.

Lindeberg-Feller CLT: For each $n$, let $Y_{n,1}, \dots, Y_{n,n}$ be independent random variables with finite (possibly non-identical) means and variances $\sigma^2_{n,i}$ such that

$$\frac{1}{C_n^2}\sum_{i=1}^n E\big[|Y_{n,i} - EY_{n,i}|^2 \, 1\{|Y_{n,i} - EY_{n,i}|/C_n > \epsilon\}\big] \to 0 \quad \text{for every } \epsilon > 0,$$

where $C_n \equiv \big[\sum_{i=1}^n \sigma^2_{n,i}\big]^{1/2}$. Then

$$\frac{\sum_{i=1}^n (Y_{n,i} - EY_{n,i})}{C_n} \overset{d}{\to} N(0, 1).$$

The sampling framework is known as a triangular array. Note that the $(n-1)$-th observation in the $n$-th sample ($Y_{n,n-1}$) need not coincide with $Y_{n-1,n-1}$, the $(n-1)$-th observation in the $(n-1)$-th sample.

$C_n$ is just the standard deviation of $\sum_{i=1}^n Y_{n,i}$. Hence $\big[\sum_{i=1}^n (Y_{n,i} - EY_{n,i})\big]/C_n$ is a standardized sum.
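The following sketch is my own (using skewed Exponential(1) draws, so normality of the standardized mean is not automatic; sample size and replication count are arbitrary). It compares a few quantiles of $\sqrt{n}(\bar{X}_n - \mu)/\sigma$ with standard normal quantiles.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n, reps = 500, 20000
mu = sigma = 1.0                      # Exponential(1): mean 1, sd 1 (skewed, non-normal)

xbar = rng.exponential(1.0, size=(reps, n)).mean(axis=1)
z = np.sqrt(n) * (xbar - mu) / sigma  # standardized sample means

for q in [0.05, 0.25, 0.5, 0.75, 0.95]:
    print(f"q = {q:.2f}   empirical quantile = {np.quantile(z, q):+.3f}"
          f"   N(0,1) quantile = {norm.ppf(q):+.3f}")
```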

The LFCLT is useful for showing asymptotic normality of Least-Squares regression. Let $y = \beta_0 X + \epsilon$ with $\epsilon_i$ i.i.d. with mean zero and variance $\sigma^2$, and $X$ an $n \times 1$ vector of covariates (here we have just one RHS variable).

The OLS estimator is $\hat{\beta} = (X'X)^{-1}X'y$. To apply the LFCLT, we consider the normalized difference between the estimated and true $\beta$'s:

$$\frac{(X'X)^{1/2}}{\sigma}(\hat{\beta}_n - \beta_0) = \frac{1}{\sigma}(X'X)^{-1/2}X'\epsilon = \frac{1}{\sigma}\sum_i a_{n,i}\epsilon_i,$$

where $a_{n,i}$ corresponds to the $i$-th component of the $1 \times n$ vector $(X'X)^{-1/2}X'$. So we just need to show that the Lindeberg condition applies to $Y_{n,i} = \frac{1}{\sigma}a_{n,i}\epsilon_i$. Note that

$$\sum_{i=1}^n V\Big(\frac{1}{\sigma}a_{n,i}\epsilon_i\Big) = V\Big(\frac{1}{\sigma}\sum_{i=1}^n a_{n,i}\epsilon_i\Big) = \frac{1}{\sigma^2}V\big((X'X)^{-1/2}X'\epsilon\big) = \frac{\sigma^2}{\sigma^2}\cdot 1 = 1 = C_n^2.$$
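A sketch of this result (my own simulation; the uniform covariate design and the skewed error distribution are arbitrary illustrative choices): the standardized quantity $\big((X'X)^{1/2}/\sigma\big)(\hat{\beta} - \beta_0)$ should be approximately $N(0,1)$ even with non-normal errors.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps, beta0, sigma = 500, 5000, 2.0, 1.0

stats = np.empty(reps)
for r in range(reps):
    x = rng.uniform(1, 3, size=n)                     # single covariate (no intercept)
    eps = sigma * (rng.exponential(1.0, size=n) - 1)  # mean-zero, variance sigma^2, skewed
    y = beta0 * x + eps
    beta_hat = (x @ y) / (x @ x)                      # OLS with one regressor
    stats[r] = np.sqrt(x @ x) / sigma * (beta_hat - beta0)

print("mean ≈", round(stats.mean(), 3), "  sd ≈", round(stats.std(), 3),
      " (should be near 0 and 1)")
```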

Asymptotic approximations for $\bar{X}_n$

The CLT tells us that $\sqrt{n}(\bar{X}_n - \mu)/\sigma$ has a limiting standard normal distribution. We can use this result to say something about the distribution of $\bar{X}_n$, even when $n$ is finite. That is, we use the CLT to derive an asymptotic approximation for the finite-sample distribution of $\bar{X}_n$. The approximation is as follows: we "flip over" the result of the CLT:

$$\bar{X}_n \overset{a}{\sim} \frac{\sigma}{\sqrt{n}}N(0, 1) + \mu \iff \bar{X}_n \overset{a}{\sim} N\Big(\mu, \frac{1}{n}\sigma^2\Big).$$

The notation $\overset{a}{\sim}$ makes explicit that what is on the RHS is an approximation. Note that $\bar{X}_n \overset{d}{\to} N(\mu, \frac{1}{n}\sigma^2)$ is definitely not true!

This approximation intuitively makes sense: under the assumptions of the LLCLT, we know that $E\bar{X}_n = \mu$ and $Var(\bar{X}_n) = \sigma^2/n$. What the asymptotic approximation tells us is that the distribution of $\bar{X}_n$ is approximately normal.^3

^3 Note that the approximation is exactly right if we assumed that $X_i \sim N(\mu, \sigma^2)$, for all $i$.

Asymptotic approximations for functions of $\bar{X}_n$

Oftentimes, we are not interested per se in approximating the finite-sample distribution of the sample mean $\bar{X}_n$, but rather functions of a sample mean. (Later, you will see that the asymptotic approximations for many statistics and estimators that you run across are derived by expressing them as sample averages.)

Continuous mapping theorem: Let $g(\cdot)$ be a continuous function. Then $Z_n \overset{d}{\to} Z \Rightarrow g(Z_n) \overset{d}{\to} g(Z)$. Proof: Serfling, Approximation Theorems of Mathematical Statistics.

(Note: you still have to figure out what the limiting distribution of $g(Z)$ is. But if you know $F_X$, then you can get $F_{g(X)}$ by the change of variables formulas.)

Note that for any linear function $g(\bar{X}_n) = a\bar{X}_n + b$, deriving the limiting distribution of $g(\bar{X}_n)$ is no problem (just use Slutsky's Theorem to get $a\bar{X}_n + b \overset{a}{\sim} N(a\mu + b, \frac{1}{n}a^2\sigma^2)$). The problem in deriving the distribution of $g(\bar{X}_n)$ for nonlinear $g$ is that $\bar{X}_n$ is inside the $g$ function:

1. Use the Mean Value Theorem: Let $g$ be a continuous function on $[a, b]$ that is differentiable on $(a, b)$. Then there exists (at least one) $\lambda \in (a, b)$ such that

$$g'(\lambda) = \frac{g(b) - g(a)}{b - a} \iff g(b) - g(a) = g'(\lambda)(b - a).$$

Using the MVT, we can write

$$g(\bar{X}_n) - g(\mu) = g'(X_n^*)(\bar{X}_n - \mu), \qquad (3)$$

where $X_n^*$ is an RV strictly between $\bar{X}_n$ and $\mu$.

2. On the RHS of Eq. (3):

(a) $g'(X_n^*) \overset{p}{\to} g'(\mu)$ by the squeezing result and the plim operator theorem.

(b) If we multiply by $\sqrt{n}$ and divide by $\sigma$, we can apply the CLT to get $\sqrt{n}(\bar{X}_n - \mu)/\sigma \overset{d}{\to} N(0, 1)$.

(c) Hence, using Slutsky's theorem,

$$\sqrt{n}\,[g(\bar{X}_n) - g(\mu)] \overset{d}{\to} N(0, g'(\mu)^2\sigma^2). \qquad (4)$$

3. Now, in order to get the asymptotic approximation for the distribution of $g(\bar{X}_n)$, we "flip over" to get $g(\bar{X}_n) \overset{a}{\sim} N(g(\mu), \frac{1}{n}g'(\mu)^2\sigma^2)$.

4. Check that $g$ satisfies the assumptions for these results to go through: continuity and differentiability for the MVT, and $g'$ continuous at $\mu$ for the plim operator theorem.

Examples: $1/\bar{X}_n$, $\exp(\bar{X}_n)$, $(\bar{X}_n)^2$, etc.

Eq. (4) is a general result, known as the Delta method. For the purposes of this class, I want you to derive the approximate distributions for $g(\bar{X}_n)$ from first principles (as we did above).
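The delta method is easy to check by simulation. Here is a sketch of my own (the particular choice $g(x) = \exp(x)$ and normal data are illustrative assumptions, not from the notes): compare the simulated spread of $g(\bar{X}_n)$ with the delta-method standard deviation $|g'(\mu)|\sigma/\sqrt{n}$.

```python
import numpy as np

rng = np.random.default_rng(6)
n, reps = 400, 20000
mu, sigma = 1.0, 0.5
g = np.exp                                   # g(x) = exp(x), so g'(mu) = exp(mu)

xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
gx = g(xbar)

delta_sd = np.exp(mu) * sigma / np.sqrt(n)   # |g'(mu)| * sigma / sqrt(n)
print("simulated mean of g(Xbar) ≈", round(gx.mean(), 4), "  g(mu) =", round(np.exp(mu), 4))
print("simulated sd   of g(Xbar) ≈", round(gx.std(), 4), "  delta-method sd =", round(delta_sd, 4))
```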

1 Some extra topics (CAN SKIP)

1.1 Convergence in distribution vs. a.s.

Combining two results above, we have $Z_n \overset{a.s.}{\to} Z \Rightarrow Z_n \overset{d}{\to} Z$. The converse is not generally true. (Indeed, $\{Z_1, Z_2, Z_3, \dots\}$ need not be defined on the same probability space, in which case a.s. convergence makes no sense.)

However, consider $Z_n^* = F_{Z_n}^{-1}(U)$, $Z^* = F_Z^{-1}(U)$, with $U \sim U[0, 1]$. Here $F_{Z_n}^{-1}(\cdot)$ denotes the quantile function corresponding to the CDF $F_{Z_n}(z)$:

$$F_{Z_n}^{-1}(\tau) = \inf\{z : F_{Z_n}(z) > \tau\}, \quad \tau \in [0, 1].$$

We have $F(F^{-1}(\tau)) = \tau$. (Note that the quantile function is also right-continuous; discontinuity points of the quantile function arise where the CDF function is flat.)

Then

$$P(Z_n^* \le z) = P(F_{Z_n}^{-1}(U) \le z) = P(U \le F_{Z_n}(z)) = F_{Z_n}(z),$$

so that $Z_n^* \overset{d}{=} Z_n$ (even though their domains are different!). Similarly, $Z^* \overset{d}{=} Z$. The notation $\overset{d}{=}$ means "identically distributed".

Moreover, it turns out (quite intuitively) that the convergence $F_{Z_n}^{-1}(U) \to F_Z^{-1}(U)$ fails only at points where $F_Z^{-1}(\cdot)$ is discontinuous (corresponding to flat portions of $F_Z(z)$). Since these points of discontinuity are a countable set, their probability (under $U[0, 1]$) is equal to zero, so that $F_{Z_n}^{-1}(U) \to F_Z^{-1}(U)$ for almost all $U$.

So what we have here is a result that for real-valued random variables with $Z_n \overset{d}{\to} Z$, we can construct identically distributed copies such that $Z_n^* \overset{d}{=} Z_n$, $Z^* \overset{d}{=} Z$, and $Z_n^* \overset{a.s.}{\to} Z^*$. This is called the Skorokhod construction.

Skorokhod representation: Let $Z_n$ ($n = 1, 2, \dots$) be random elements defined on probability spaces $(\Omega_n, \mathcal{B}(\Omega_n), P_n)$ with $Z_n \overset{d}{\to} Z$, where $Z$ is defined on $(\Omega, \mathcal{B}(\Omega), P)$. Then there exist random variables $Z_n^*$ ($n = 1, 2, \dots$) and $Z^*$ defined on a common probability space $(\tilde{\Omega}, \mathcal{B}(\tilde{\Omega}), \tilde{P})$ such that $Z_n^* \overset{d}{=} Z_n$ ($n = 1, 2, \dots$); $Z^* \overset{d}{=} Z$; and $Z_n^* \overset{a.s.}{\to} Z^*$.
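The building block here, $F^{-1}(U) \overset{d}{=} Z$, is just inverse-transform sampling. A short sketch of my own (using the exponential CDF, whose quantile function has a closed form; the distribution choice is an assumption for illustration): draws of $F^{-1}(U)$ match direct draws from $F$.

```python
import numpy as np

rng = np.random.default_rng(7)
reps = 200000

# Target: Z ~ Exponential(1), with CDF F(z) = 1 - exp(-z) and quantile F^{-1}(u) = -log(1-u)
u = rng.uniform(0, 1, size=reps)
z_star = -np.log(1 - u)                    # F^{-1}(U): the quantile-transform construction
z_direct = rng.exponential(1.0, size=reps) # direct draws from the same distribution

for q in [0.1, 0.5, 0.9]:
    print(f"q = {q}:  quantile of F^{{-1}}(U) = {np.quantile(z_star, q):.3f}"
          f"   quantile of direct draws = {np.quantile(z_direct, q):.3f}")
```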

Applications:

Continuous mapping theorem: $Z_n \overset{d}{\to} Z \Rightarrow Z_n^* \overset{a.s.}{\to} Z^*$; for $h(\cdot)$ continuous, we have $h(Z_n^*) \overset{a.s.}{\to} h(Z^*) \Rightarrow h(Z_n^*) \overset{d}{\to} h(Z^*)$, which implies $h(Z_n) \overset{d}{\to} h(Z)$.

Building on the above, if $h$ is bounded, then we get $Eh(Z_n) = Eh(Z_n^*) \to Eh(Z^*) = Eh(Z)$ by the bounded convergence theorem, which shows one direction of the Portmanteau theorem, Eq. (2).

1.2 Functional central limit theorems

There is a set of distributional convergence results known as functional CLTs (or Donsker theorems), because they deal with convergence of random functions (or, interchangeably, processes). These are indispensable tools in finance.

One of the simplest random functions is the Wiener process $\mathcal{W}(t)$, which is viewed as a random function on the unit interval $t \in [0, 1]$. This is also known as a Brownian motion process. Features of the Wiener process:

1. $\mathcal{W}(0) = 0$.

2. Gaussian marginals: $\mathcal{W}(t) \overset{d}{=} N(0, t)$; that is,

$$P(\mathcal{W}(t) \le a) = \int_{-\infty}^{a} \frac{1}{\sqrt{2\pi t}}\exp\Big(-\frac{u^2}{2t}\Big)\, du.$$

3. Independent increments: Define $x_t \equiv \mathcal{W}(t)$. For any set $0 \le t_0 \le t_1 \le t_2 \le \dots$, the differences $x_{t_1} - x_{t_0}, x_{t_2} - x_{t_1}, \dots$ are independent.

4. Given the two above features, we have that the increments are themselves normally distributed: $x_{t_i} - x_{t_{i-1}} \overset{d}{=} N(0, t_i - t_{i-1})$. Moreover, from

$$t_1 - t_0 = V(x_{t_1} - x_{t_0}) = E[(x_{t_1} - x_{t_0})^2] = Ex_{t_1}^2 - 2Ex_{t_1}x_{t_0} + Ex_{t_0}^2 = t_1 + t_0 - 2Ex_{t_1}x_{t_0},$$

we obtain $Ex_{t_1}x_{t_0} = Cov(x_{t_1}, x_{t_0}) = t_0$. Furthermore, we know that any finite collection $(x_{t_1}, x_{t_2}, \dots)$ is jointly distributed multivariate normal, with mean 0 and covariance matrix $\Sigma$ described above (i.e., $\Sigma_{ij} = \min(t_i, t_j)$).
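As a sketch of my own (the grid size and number of paths are arbitrary), one can simulate approximate Wiener-process paths by cumulating independent $N(0, 1/m)$ increments and check the moments above: $Var\,\mathcal{W}(t) \approx t$ and $Cov(\mathcal{W}(s), \mathcal{W}(t)) \approx \min(s, t)$.

```python
import numpy as np

rng = np.random.default_rng(8)
m, paths = 1000, 10000                     # grid points on [0,1], number of simulated paths

dt = 1.0 / m
increments = rng.normal(0.0, np.sqrt(dt), size=(paths, m))
w = np.cumsum(increments, axis=1)          # W(k/m) approximated by a cumulative sum

s_idx, t_idx = int(0.3 * m) - 1, int(0.7 * m) - 1   # grid indices for s = 0.3, t = 0.7
print("Var W(0.7)         ≈", round(w[:, t_idx].var(), 3), " (theory: 0.7)")
print("Cov(W(0.3),W(0.7)) ≈", round(np.cov(w[:, s_idx], w[:, t_idx])[0, 1], 3), " (theory: 0.3)")
```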

Take the conditions of the LLCLT: $X_1, X_2, \dots$ are i.i.d. with mean zero and finite variance $\sigma^2$. For a given $n$, define the partial sum $S_k = X_1 + X_2 + \dots + X_k$ for $k \le n$. Now, for $t \in [0, 1]$, define the normalized partial sum process

$$S_n(t) = \frac{S_{[tn]}}{\sigma\sqrt{n}} + (tn - [tn])\frac{X_{[tn]+1}}{\sigma\sqrt{n}},$$

where $[a]$ denotes the largest integer $\le a$. $S_n(t)$ is a random function on $[0, 1]$. (Graph)

Then the functional CLT is the following result:

Functional CLT: $S_n \overset{d}{\to} \mathcal{W}$.

Note that $S_n$ is a random element taking values in $C[0, 1]$, the space of continuous functions on $[0, 1]$. Proving this result is beyond our focus in this class.^4

Note that the functional CLT implies more than the LLCLT. For example, take $t = 1$. Then $S_n(1) = \frac{\sum_{i=1}^n X_i}{\sigma\sqrt{n}} = \frac{\sqrt{n}\bar{X}_n}{\sigma}$. By the FCLT, we have

$$P(S_n(1) \le a) \to P(\mathcal{W}(1) \le a) = P(N(0, 1) \le a),$$

which is the same result as the LLCLT. Similarly, for $k_n < n$ with $k_n/n \to \tau \in [0, 1]$, we have

$$S_n\Big(\frac{k_n}{n}\Big) = \frac{\sum_{i=1}^{k_n} X_i}{\sigma\sqrt{n}} = \frac{k_n\bar{X}_{k_n}}{\sigma\sqrt{n}} \overset{d}{\to} N(0, \tau).$$

Also, it turns out, by a functional version of the continuous mapping theorem, we can get convergence results for functionals of the partial-sum process. For instance, $\sup_t S_n(t) \overset{d}{\to} \sup_t \mathcal{W}(t)$, and it turns out that

$$P\big(\sup_t \mathcal{W}(t) \le a\big) = \frac{2}{\sqrt{2\pi}}\int_0^a \exp(-u^2/2)\, du, \quad a \ge 0.$$

(Recall that $\mathcal{W}(0) = 0$, so $a < 0$ makes no sense.)

^4 See, for instance, Billingsley, Convergence of Probability Measures, ch. 2.
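As a final sketch (my own; the choice of $n$, the replication count, and the Rademacher increments are illustrative assumptions), one can simulate the partial-sum process and compare $P(\sup_t S_n(t) \le a)$ with the reflection-principle formula above, which equals $2\Phi(a) - 1$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)
n, reps = 1000, 10000

# Rademacher increments: mean 0, variance 1, so sigma = 1
x = rng.choice([-1.0, 1.0], size=(reps, n))
s = np.cumsum(x, axis=1) / np.sqrt(n)          # partial-sum process evaluated on the grid k/n
sup_s = np.maximum(s.max(axis=1), 0.0)         # sup over t (the process starts at 0)

for a in [0.5, 1.0, 2.0]:
    mc = np.mean(sup_s <= a)
    theory = 2 * norm.cdf(a) - 1               # P(sup W <= a) = 2*Phi(a) - 1
    print(f"a = {a}:   MC P(sup S_n <= a) = {mc:.3f}   2*Phi(a)-1 = {theory:.3f}")
```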
