Probability Generating Functions
Chapter 3: Probability Generating Functions

3.1 Preamble: Generating Functions

Generating functions are widely used in mathematics, and play an important role in probability theory. Consider a sequence {a_i : i = 0, 1, 2, ...} of real numbers: the numbers can be parcelled up in several kinds of generating functions. The ordinary generating function of the sequence is defined as

    G(s) = Σ_{i=0}^∞ a_i s^i

for those values of the parameter s for which the sum converges. For a given sequence, there exists a radius of convergence R (≥ 0) such that the sum converges absolutely if |s| < R and diverges if |s| > R. G(s) may be differentiated or integrated term by term any number of times when |s| < R. For many well-defined sequences, G(s) can be written in closed form, and the individual numbers in the sequence can be recovered either by series expansion or by taking derivatives.

3.2 Definitions and Properties

Consider a count rv X, i.e. a discrete rv taking non-negative values. Write

    p_k = P(X = k),  k = 0, 1, 2, ...                                        (3.1)

(if X takes a finite number of values, we simply attach zero probabilities to those values which cannot occur). The probability generating function (PGF) of X is defined as

    G_X(s) = Σ_{k=0}^∞ p_k s^k = E(s^X).                                     (3.2)

Note that G_X(1) = 1, so the series converges absolutely for |s| ≤ 1. Also G_X(0) = p_0.

For some of the more common distributions, the PGFs are as follows:

(i) Constant rv: if p_c = 1 and p_k = 0 for k ≠ c, then

    G_X(s) = E(s^X) = s^c.                                                   (3.3)
(ii) Bernoulli rv: if p_1 = p, p_0 = 1 - p = q, and p_k = 0 for k ≠ 0, 1, then

    G_X(s) = E(s^X) = q + ps.                                                (3.4)

(iii) Geometric rv: if p_k = p q^(k-1), k = 1, 2, ...; q = 1 - p, then

    G_X(s) = ps / (1 - qs)   if |s| < 1/q   (see HW Sheet 4).                (3.5)

(iv) Binomial rv: if X ~ Bin(n, p), then

    G_X(s) = (q + ps)^n,  (q = 1 - p)   (see HW Sheet 4).                    (3.6)

(v) Poisson rv: if X ~ Poisson(λ), then

    G_X(s) = Σ_{k=0}^∞ (λ^k / k!) e^(-λ) s^k = e^(λ(s-1)).                   (3.7)

(vi) Negative binomial rv: if X ~ NegBin(n, p), then

    G_X(s) = Σ_{k=n}^∞ C(k-1, n-1) p^n q^(k-n) s^k = (ps / (1 - qs))^n   if |s| < 1/q and p + q = 1.   (3.8)

Uniqueness Theorem
If X and Y have PGFs G_X and G_Y respectively, then

    G_X(s) = G_Y(s) for all s                     (a)

if and only if

    P(X = k) = P(Y = k) for k = 0, 1, ...,        (b)

i.e. if and only if X and Y have the same probability distribution.

Proof. We need only prove that (a) implies (b). The radii of convergence of G_X and G_Y are ≥ 1, so they have unique power series expansions about the origin:

    G_X(s) = Σ_{k=0}^∞ s^k P(X = k),   G_Y(s) = Σ_{k=0}^∞ s^k P(Y = k).

If G_X = G_Y, these two power series have identical coefficients.

Example: If X has PGF ps/(1 - qs) with q = 1 - p, then we can conclude that X ~ Geometric(p).

Given a function A(s) which is known to be the PGF of a count rv X, we can obtain p_k = P(X = k) either by expanding A(s) in a power series in s and setting p_k = coefficient of s^k; or by differentiating A(s) k times with respect to s, setting s = 0, and dividing by k!, i.e. p_k = A^(k)(0) / k!.
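The two recovery methods can be illustrated with a short computational sketch (not part of the printed notes; it assumes sympy is available and uses an arbitrary parameter value p = 1/3). It expands the geometric PGF (3.5) as a power series and also applies p_k = A^(k)(0)/k!:

```python
# Sketch: recover p_k from a PGF A(s) by series expansion and by differentiation.
# Here A(s) = p*s/(1 - q*s), the geometric PGF (3.5), with p = 1/3 chosen arbitrarily.
import sympy as sp

s = sp.symbols('s')
p = sp.Rational(1, 3)
q = 1 - p
A = p*s / (1 - q*s)

# Method 1: power-series expansion; p_k is the coefficient of s^k.
print(sp.series(A, s, 0, 5))    # s/3 + 2*s**2/9 + 4*s**3/27 + 8*s**4/81 + O(s**5)

# Method 2: p_k = A^(k)(0) / k!
for k in range(1, 5):
    pk = sp.diff(A, s, k).subs(s, 0) / sp.factorial(k)
    print(k, pk, p * q**(k - 1))    # matches p q^(k-1)
```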
We can extend the definition of the PGF to functions of X. The PGF of Y = H(X) is

    G_Y(s) = G_{H(X)}(s) = E(s^{H(X)}) = Σ_k P(X = k) s^{H(k)}.              (3.9)

If H is fairly simple, it may be possible to express G_Y(s) in terms of G_X(s).

Example: Let Y = a + bX. Then

    G_Y(s) = E(s^Y) = E(s^{a+bX}) = s^a E[(s^b)^X] = s^a G_X(s^b).           (3.10)

3.3 Moments

Theorem
Let X be a count rv and G_X^(r)(1) the rth derivative of its PGF G_X(s) at s = 1. Then

    G_X^(r)(1) = E[X(X-1)...(X-r+1)].                                        (3.11)

Proof (informal).

    G_X^(r)(s) = d^r/ds^r [G_X(s)] = d^r/ds^r [Σ_k p_k s^k]
               = Σ_k p_k k(k-1)...(k-r+1) s^(k-r)

(assuming the interchange of d^r/ds^r and Σ_k is justified). This series is convergent for |s| ≤ 1, so

    G_X^(r)(1) = E[X(X-1)...(X-r+1)],  r ≥ 1.

In particular,

    G_X^(1)(1) (or G'_X(1)) = E(X)                                           (3.12)

and

    G_X^(2)(1) (or G''_X(1)) = E[X(X-1)] = E(X^2) - E(X) = Var(X) + [E(X)]^2 - E(X),

so

    Var(X) = G_X^(2)(1) - [G_X^(1)(1)]^2 + G_X^(1)(1).                       (3.13)

Example: If X ~ Poisson(λ), then G_X(s) = e^(λ(s-1));

    G_X^(1)(s) = λ e^(λ(s-1)),    so E(X) = G_X^(1)(1) = λ e^0 = λ;
    G_X^(2)(s) = λ^2 e^(λ(s-1)),  so Var(X) = λ^2 - λ^2 + λ = λ.
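The Poisson calculation is easy to reproduce symbolically. The following sketch (not in the printed notes, assuming sympy is available) differentiates G_X(s) = e^(λ(s-1)) and applies (3.12) and (3.13):

```python
# Sketch: check (3.12)-(3.13) symbolically for X ~ Poisson(lam), G_X(s) = exp(lam*(s - 1)).
import sympy as sp

s, lam = sp.symbols('s lam', positive=True)
G = sp.exp(lam * (s - 1))

G1 = sp.diff(G, s).subs(s, 1)        # G'(1)  = E(X)
G2 = sp.diff(G, s, 2).subs(s, 1)     # G''(1) = E[X(X-1)]

mean = sp.simplify(G1)
var = sp.simplify(G2 - G1**2 + G1)   # Var(X) = G''(1) - [G'(1)]^2 + G'(1)

print(mean, var)                     # lam lam
```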
3.4 Sums of independent random variables

Theorem
Let X and Y be independent count rvs with PGFs G_X(s) and G_Y(s) respectively, and let Z = X + Y. Then

    G_Z(s) = G_{X+Y}(s) = G_X(s) G_Y(s).                                     (3.14)

Proof.

    G_Z(s) = E(s^Z) = E(s^{X+Y}) = E(s^X) E(s^Y)   (independence)
           = G_X(s) G_Y(s).

Corollary
If X_1, ..., X_n are independent count rvs with PGFs G_{X_1}(s), ..., G_{X_n}(s) respectively (and n is a known integer), then

    G_{X_1+...+X_n}(s) = G_{X_1}(s) ... G_{X_n}(s).                          (3.15)

Example 3.1
Find the distribution of the sum of n independent rvs X_i, i = 1, ..., n, where X_i ~ Poisson(λ_i).

Solution. G_{X_i}(s) = e^(λ_i(s-1)). So

    G_{X_1+X_2+...+X_n}(s) = Π_{i=1}^n e^(λ_i(s-1)) = e^((λ_1+...+λ_n)(s-1)).

This is the PGF of a Poisson rv, i.e.

    Σ_{i=1}^n X_i ~ Poisson(Σ_{i=1}^n λ_i).                                  (3.16)

Example 3.2
In a sequence of n independent Bernoulli trials, let

    I_i = 1 if the ith trial yields a success (probability p),
    I_i = 0 if the ith trial yields a failure (probability q = 1 - p).

Let X = Σ_{i=1}^n I_i = number of successes in n trials. What is the probability distribution of X?

Solution. Since the trials are independent, the rvs I_1, ..., I_n are independent. So

    G_X(s) = G_{I_1}(s) G_{I_2}(s) ... G_{I_n}(s).

But G_{I_i}(s) = q + ps, i = 1, ..., n. So

    G_X(s) = (q + ps)^n = Σ_{x=0}^n C(n, x) (ps)^x q^(n-x).

Then

    P(X = x) = coefficient of s^x in G_X(s) = C(n, x) p^x q^(n-x),  x = 0, 1, ..., n,

i.e. X ~ Bin(n, p).
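Example 3.1 can also be checked by simulation. The sketch below (not in the printed notes; the rates and sample size are arbitrary) draws independent Poisson samples, sums them, and confirms that the sample mean and variance both agree with λ_1 + ... + λ_n, as they must for a Poisson rv:

```python
# Monte Carlo sketch of Example 3.1: a sum of independent Poisson(lam_i) rvs
# should behave like Poisson(lam_1 + ... + lam_n).
import numpy as np

rng = np.random.default_rng(0)
lams = [0.5, 1.2, 2.3]
n_samples = 200_000

total = sum(rng.poisson(lam, n_samples) for lam in lams)

print(total.mean(), sum(lams))   # both close to 4.0
print(total.var(), sum(lams))    # for a Poisson rv the variance equals the mean
```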
3.5 Sum of a random number of independent rvs

Theorem
Let N, X_1, X_2, ... be independent count rvs. If the {X_i} are identically distributed, each with PGF G_X, then

    S_N = X_1 + ... + X_N                                                    (3.17a)

has PGF

    G_{S_N}(s) = G_N(G_X(s)).                                                (3.17b)

(Note: We adopt the convention that X_1 + ... + X_N = 0 for N = 0.) This is an example of the process known as compounding with respect to a parameter.

Proof. We have

    G_{S_N}(s) = E(s^{S_N})
               = Σ_{n=0}^∞ E(s^{S_N} | N = n) P(N = n)   (conditioning on N)
               = Σ_{n=0}^∞ E(s^{S_n}) P(N = n)
               = Σ_{n=0}^∞ G_{X_1+...+X_n}(s) P(N = n)
               = Σ_{n=0}^∞ [G_X(s)]^n P(N = n)           (using the Corollary in the previous section)
               = G_N(G_X(s))                              by definition of G_N.

Corollaries

1.  E(S_N) = E(N) E(X).                                                      (3.18)

Proof.

    d/ds [G_{S_N}(s)] = d/ds [G_N(G_X(s))] = (dG_N(u)/du)(du/ds),  where u = G_X(s).

Setting s = 1 (so that u = G_X(1) = 1), we get

    E(S_N) = G'_N(1) G'_X(1) = E(N) E(X).

2.  Similarly it can be deduced that

    Var(S_N) = E(N) Var(X) + Var(N) [E(X)]^2.                                (3.19)

(prove!)
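Corollaries (3.18) and (3.19) lend themselves to a quick simulation check. In the sketch below (not in the printed notes) N ~ Poisson(λ) and the X_i are Geometric(p) as in (3.5); the distributions and parameter values are purely illustrative choices:

```python
# Simulation sketch of (3.18)-(3.19): S_N = X_1 + ... + X_N with N ~ Poisson(lam)
# and X_i ~ Geometric(p) on {1, 2, ...}.
import numpy as np

rng = np.random.default_rng(1)
lam, p = 3.0, 0.4
n_samples = 50_000

N = rng.poisson(lam, n_samples)
S = np.array([rng.geometric(p, n).sum() for n in N])   # empty sum = 0 when N = 0

EX, VarX = 1 / p, (1 - p) / p**2     # mean and variance of Geometric(p)
EN, VarN = lam, lam                  # mean and variance of Poisson(lam)

print(S.mean(), EN * EX)                    # (3.18): E(S_N) = E(N) E(X)
print(S.var(), EN * VarX + VarN * EX**2)    # (3.19)
```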
Example 3.3 (The Poisson Hen)
A hen lays N eggs, where N ~ Poisson(λ). Each egg hatches with probability p, independently of the other eggs. Find the probability distribution of the number of chicks, Z.

Solution. We have Z = X_1 + ... + X_N, where X_1, X_2, ... are independent Bernoulli rvs with parameter p. Then

    G_N(s) = e^(λ(s-1)),   G_X(s) = q + ps.

So

    G_Z(s) = G_N(G_X(s)) = e^(λ(q+ps-1)) = e^(λp(s-1)),

the PGF of a Poisson rv, i.e. Z ~ Poisson(λp).

3.6 *Using GFs to solve recurrence relations

[NOTE: Not required for examination purposes]

Instead of appealing to the theory of difference equations when attempting to solve the recurrence relations which arise in the course of solving problems by conditioning, one can often convert the recurrence relation into a linear differential equation for a generating function, to be solved subject to appropriate boundary conditions.

Example 3.4 (The Matching Problem, yet again)
At the end of Chapter 1 [Example 1.10], the recurrence relation

    p_n = ((n-1)/n) p_(n-1) + (1/n) p_(n-2),  n ≥ 3,

was derived for the probability p_n that no matches occur in an n-card matching problem. Solve this using an appropriate generating function.

Solution. Multiply through by n s^(n-1) and sum over all suitable values of n to obtain

    Σ_{n=3}^∞ n s^(n-1) p_n = s Σ_{n=3}^∞ (n-1) s^(n-2) p_(n-1) + s Σ_{n=3}^∞ s^(n-2) p_(n-2).

Introducing the GF

    G(s) = Σ_{n=1}^∞ p_n s^n

(NB: this is not a PGF, since {p_n : n ≥ 1} is not a probability distribution), we can rewrite this as

    G'(s) - p_1 - 2 p_2 s = s[G'(s) - p_1] + s G(s),

i.e.

    (1 - s) G'(s) = s G(s) + s   (since p_1 = 0, p_2 = 1/2).

This first-order differential equation now has to be solved subject to the boundary condition G(0) = 0. The result is

    G(s) = (1 - s)^(-1) e^(-s) - 1.

Now expand G(s) as a power series in s, and extract the coefficient of s^n: this yields

    p_n = 1 - 1/1! + 1/2! - ... + (-1)^(n-1)/(n-1)! + (-1)^n/n!,  n ≥ 1,

the result obtained previously.
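The recurrence and the series solution are easy to compare numerically. The short sketch below (added here for illustration, plain Python) iterates the recurrence from p_1 = 0, p_2 = 1/2 and evaluates the alternating series; both tend to e^(-1):

```python
# Sketch: iterate p_n = ((n-1)/n) p_{n-1} + (1/n) p_{n-2} from Example 3.4 and
# compare with the alternating-series solution; both tend to exp(-1) ~ 0.3679.
import math

p = {1: 0.0, 2: 0.5}
for n in range(3, 11):
    p[n] = (n - 1) / n * p[n - 1] + p[n - 2] / n

for n in range(1, 11):
    closed = sum((-1)**k / math.factorial(k) for k in range(n + 1))
    print(n, p[n], closed)   # the two columns agree
```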
3.7 Branching Processes

3.7.1 Definitions

Consider a hypothetical organism which lives for exactly one time unit and then dies in the process of giving birth to a family of similar organisms. We assume that:

(i) the family sizes are independent rvs, each taking the values 0, 1, 2, ...;
(ii) the family sizes are identically distributed rvs, the number of children in a family, C, having the distribution

    P(C = k) = p_k,  k = 0, 1, 2, ...                                        (3.20)

The evolution of the total population as time proceeds is termed a branching process: this provides a simple model for bacterial growth, spread of family names, etc.

Let

    X_n = number of organisms born at time n (i.e., the size of the nth generation).   (3.21)

The evolution of the population is described by the sequence of rvs X_0, X_1, X_2, ... (an example of a stochastic process, of which more later). We assume that X_0 = 1, i.e. we start with precisely one organism.

[Figure: a typical family tree, showing generations 0 to 3 of one realisation with X_0 = 1, X_1 = 3, X_2 = 6, X_3 = 9.]

We can use PGFs to investigate this process.

3.7.2 Evolution of the population

Let G(s) be the PGF of C:

    G(s) = Σ_{k=0}^∞ P(C = k) s^k,                                           (3.22)

and G_n(s) the PGF of X_n:

    G_n(s) = Σ_{x=0}^∞ P(X_n = x) s^x.                                       (3.23)

Now G_0(s) = s, since P(X_0 = 1) = 1 and P(X_0 = x) = 0 for x ≠ 1; and G_1(s) = G(s). Also

    X_n = C_1 + C_2 + ... + C_{X_(n-1)},                                     (3.24)

where C_i is the size of the family produced by the ith member of the (n-1)th generation.
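Relation (3.24) translates directly into a simulation. The sketch below (not in the printed notes; the Poisson family-size distribution and the parameter values are illustrative choices) generates realisations of X_0, X_1, ..., and checks that the sample mean of X_n is close to μ^n, anticipating the mean-growth result derived next:

```python
# Simulation sketch of (3.24): X_0 = 1 and X_n = C_1 + ... + C_{X_{n-1}}, with
# family sizes C ~ Poisson(mu) chosen purely for illustration; then E(X_n) = mu**n.
import numpy as np

rng = np.random.default_rng(2)

def simulate_generations(mu, n_gen):
    x = 1                          # X_0 = 1
    sizes = [x]
    for _ in range(n_gen):
        x = rng.poisson(mu, x).sum() if x > 0 else 0   # sum of X_{n-1} family sizes
        sizes.append(int(x))
    return sizes

runs = [simulate_generations(mu=1.5, n_gen=6) for _ in range(20_000)]
print(np.mean([r[6] for r in runs]), 1.5**6)   # both close to 11.39
```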
So, since X_n is the sum of a random number of independent and identically distributed (iid) rvs, we have

    G_n(s) = G_(n-1)(G(s))   for n = 2, 3, ...,                              (3.25)

and this is also true for n = 1. Iterating this result, we have

    G_n(s) = G_(n-1)(G(s)) = G_(n-2)(G(G(s))) = ... = G_1(G(G(...G(s)...))) = G(G(...G(s)...)),  n = 0, 1, 2, ...,

i.e., G_n is the nth iterate of G.

Now

    E(X_n) = G'_n(1) = G'_(n-1)(G(1)) G'(1) = G'_(n-1)(1) G'(1)   [since G(1) = 1],

i.e.

    E(X_n) = E(X_(n-1)) μ,                                                   (3.26)

where μ = E(C) is the mean family size. Hence

    E(X_n) = μ E(X_(n-1)) = μ^2 E(X_(n-2)) = ... = μ^n E(X_0) = μ^n.

It follows that

    E(X_n) → 0 if μ < 1,   E(X_n) = 1 if μ = 1 (the critical value),   E(X_n) → ∞ if μ > 1.   (3.27)

It can be shown similarly that, with σ^2 = Var(C),

    Var(X_n) = n σ^2                            if μ = 1,
    Var(X_n) = σ^2 μ^(n-1) (μ^n - 1)/(μ - 1)    if μ ≠ 1.                    (3.28)

Example 3.5
Investigate a branching process in which C has the modified geometric distribution

    p_k = p q^k,  k = 0, 1, 2, ...;  0 < p = 1 - q < 1,  with p ≠ q. [NB]

Solution. The PGF of C is

    G(s) = Σ_{k=0}^∞ p q^k s^k = p / (1 - qs)   if |s| < 1/q.

We now need to solve the functional equations G_n(s) = G_(n-1)(G(s)).
First, if |s| ≤ 1,

    G_1(s) = G(s) = p / (1 - qs) = p(q - p) / [(q^2 - p^2) - qs(q - p)]

(remember we have assumed that p ≠ q). Then

    G_2(s) = G(G(s)) = p / (1 - qp/(1 - qs)) = p(1 - qs) / (1 - qs - qp)
           = p [(q^2 - p^2) - qs(q - p)] / [(q^3 - p^3) - qs(q^2 - p^2)].

We therefore conjecture that

    G_n(s) = p [(q^n - p^n) - qs(q^(n-1) - p^(n-1))] / [(q^(n+1) - p^(n+1)) - qs(q^n - p^n)]

for n = 1, 2, ... and |s| ≤ 1, and this result can be proved by induction on n.

We could derive the entire probability distribution of X_n by expanding the rhs as a power series in s, but the result is rather complicated. In particular, however,

    P(X_n = 0) = G_n(0) = p (q^n - p^n) / (q^(n+1) - p^(n+1)) = (μ^n - 1) / (μ^(n+1) - 1),

where μ = q/p is the mean family size. It follows that ultimate extinction is certain if μ < 1 and less than certain if μ > 1. Separate consideration of the case p = q = 1/2 (see homework) shows that ultimate extinction is also certain when μ = 1.

We now proceed to show that these conclusions are valid for all family size distributions.

3.7.3 The Probability of Extinction

The probability that the process is extinct by the nth generation is

    e_n = P(X_n = 0).                                                        (3.29)

Now e_n ≤ 1, and e_n ≤ e_(n+1) (since X_n = 0 implies that X_(n+1) = 0), i.e. {e_n} is a bounded monotonic sequence. So

    e = lim_{n→∞} e_n                                                        (3.30)

exists and is called the probability of ultimate extinction.

Theorem 1
e is the smallest non-negative root of the equation x = G(x).

Proof. Note that e_n = P(X_n = 0) = G_n(0). Now

    G_n(s) = G_(n-1)(G(s)) = ... = G(G(...G(s)...)) = G(G_(n-1)(s)).
Set s = 0. Then

    e_n = G_n(0) = G(e_(n-1)),  n = 1, 2, ...,

with boundary condition e_0 = 0. In the limit n → ∞ we have

    e = G(e).

Now let η be any non-negative root of the equation x = G(x). G is non-decreasing on the interval [0, 1] (since it has non-negative coefficients), and so

    e_1 = G(e_0) = G(0) ≤ G(η) = η;
    e_2 = G(e_1) ≤ G(η) = η   [using the previous line],

and by induction

    e_n ≤ η,  n = 1, 2, ...

So

    e = lim_{n→∞} e_n ≤ η.

It follows that e is the smallest non-negative root.

Theorem 2
e = 1 if and only if μ ≤ 1.
[Note: we rule out the special case p_1 = 1, p_k = 0 for k ≠ 1, when μ = 1 but e = 0.]

Proof. We can suppose that p_0 > 0 (since otherwise e = 0 and μ > 1). Now on the interval [0, 1], G is

(i) continuous (since its radius of convergence is ≥ 1);
(ii) non-decreasing (since G'(x) = Σ_k k p_k x^(k-1) ≥ 0);
(iii) convex (since G''(x) = Σ_k k(k-1) p_k x^(k-2) ≥ 0).

It follows that in [0, 1] the line y = x has either 1 or 2 intersections with the curve y = G(x), as shown below.

[Figure: the curve y = G(x) and the line y = x on [0, 1], in the two cases (a) G'(1) > 1 and (b) G'(1) ≤ 1.]

Since G'(1) = μ, it follows that e = 1 if and only if μ ≤ 1.
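Theorem 1 also suggests a practical way to compute e numerically: iterate e_n = G(e_(n-1)) from e_0 = 0 until the values settle. The sketch below (not in the printed notes) does this for a Poisson(μ) family-size distribution, chosen only as an example:

```python
# Sketch: approximate the extinction probability e by iterating e_n = G(e_{n-1})
# from e_0 = 0 (Theorem 1).  Here G(x) = exp(mu*(x - 1)), the Poisson(mu) PGF.
import math

def extinction_probability(G, tol=1e-12, max_iter=10_000):
    e = 0.0                        # e_0 = 0
    for _ in range(max_iter):
        e_next = G(e)              # e_n = G(e_{n-1})
        if abs(e_next - e) < tol:
            return e_next
        e = e_next
    return e

for mu in (0.8, 1.0, 1.5, 2.0):
    G = lambda x, mu=mu: math.exp(mu * (x - 1))
    # close to 1 for mu <= 1 (convergence is slow at the critical value mu = 1),
    # strictly below 1 for mu > 1
    print(mu, extinction_probability(G))
```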
Example 3.6
Find the probability of ultimate extinction when C has the modified geometric distribution

    p_k = p q^k,  k = 0, 1, 2, ...;  0 < p = 1 - q < 1

(the case p = q is now permitted, cf. Example 3.5).

Solution. The PGF of C is

    G(s) = p / (1 - qs),  |s| < 1/q,

and e is the smallest non-negative root of

    G(x) = p / (1 - qx) = x.

The roots are

    x = [1 ± (2p - 1)] / (2(1 - p)),

i.e. x = 1 and x = p/q. So

    if p < 1/2 (i.e. μ = q/p > 1),  e = p/q (= 1/μ);
    if p ≥ 1/2 (i.e. μ ≤ 1),        e = 1,

in agreement with our earlier discussion in Example 3.5.

[Note: in the above discussion, the continuity of probability measures (see Notes below eqn (1.10)) has been invoked more than once without comment.]
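As a final check (again not in the printed notes), a direct simulation of the branching process with this modified geometric family-size distribution gives an extinction fraction close to p/q when p < 1/2. The population cap and parameter values below are arbitrary practical choices:

```python
# Monte Carlo sketch of Example 3.6: family sizes C with P(C = k) = p*q**k, k = 0, 1, 2, ...
# With p = 0.4 (so mu = q/p = 1.5 > 1), the extinction fraction should be near p/q = 2/3.
import numpy as np

rng = np.random.default_rng(3)

def goes_extinct(p, max_gen=200, cap=1_000):
    x = 1
    for _ in range(max_gen):
        if x == 0:
            return True
        if x > cap:                # population has exploded; treat the line as surviving
            return False
        # numpy's geometric starts at 1, so subtract 1 to get the pmf p*q**k on {0, 1, ...}
        x = (rng.geometric(p, x) - 1).sum()
    return x == 0

p = 0.4
runs = 5_000
frac = sum(goes_extinct(p) for _ in range(runs)) / runs
print(frac, p / (1 - p))           # both close to 0.667
```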