# Probability Theory


Probability Spaces and Events

Consider a random experiment with several possible outcomes. For example, we might roll a pair of dice, flip a coin three times, or choose a random real number between 0 and 1. The sample space for such an experiment is the set of all possible outcomes. For example:

- The sample space for a pair of die rolls is the set {1, 2, 3, 4, 5, 6} × {1, 2, 3, 4, 5, 6}.
- The sample space for three coin flips is the set {0, 1}³, where 0 represents heads and 1 represents tails.
- The sample space for a random number between 0 and 1 is the interval [0, 1].

An event is any statement about the outcome of an experiment to which we can assign a probability. For example, if we roll a pair of dice, possible events include:

- Both dice show even numbers.
- The first die shows a 5 or 6.
- The sum of the values shown by the dice is greater than or equal to 7.

From a formal point of view, events are usually defined to be certain subsets of the sample space. Thus the event "both dice show even numbers" refers to the subset {2, 4, 6} × {2, 4, 6}. Despite this, it is more common to refer to a specific event by a statement than by a subset.

In the special case of an experiment with finitely many outcomes, we can define the probability of any subset of the sample space, and therefore every subset is an event. In the general case, however, probability is a measure on the sample space, and only measurable subsets of the sample space are events.
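The dice events above can be checked by brute-force enumeration. Here is a short Python sketch (not part of the original notes; the helper name `prob` is our own) that lists all 36 outcomes and computes the probability of each event:

```python
from fractions import Fraction
from itertools import product

# Sample space for a pair of die rolls: {1,...,6} x {1,...,6}.
sample_space = list(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event (a predicate on outcomes), assuming
    all 36 outcomes are equally likely."""
    favorable = [w for w in sample_space if event(w)]
    return Fraction(len(favorable), len(sample_space))

p_both_even = prob(lambda w: w[0] % 2 == 0 and w[1] % 2 == 0)  # 1/4
p_first_5_or_6 = prob(lambda w: w[0] in (5, 6))                # 1/3
p_sum_at_least_7 = prob(lambda w: w[0] + w[1] >= 7)            # 7/12
```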

The following definition serves as the foundation of modern probability theory:

Definition: Probability Space
A probability space is an ordered triple (Ω, E, P), where:
- Ω is a set (the sample space). Elements of Ω are called outcomes.
- E is a σ-algebra over Ω. Elements of E are called events.
- P : E → [0, 1] is a measure satisfying P(Ω) = 1. This is the probability measure on Ω.

In general, a measure µ : M → [0, ∞] on a set S is called a probability measure if µ(S) = 1, in which case the triple (S, M, µ) forms a probability space.

EXAMPLE 1 Finite Probability Spaces
Consider an experiment with finitely many outcomes ω_1, ..., ω_n, with corresponding probabilities p_1, ..., p_n. Such an experiment corresponds to a finite probability space (Ω, E, P), where:
- Ω is the set {ω_1, ..., ω_n},
- E is the collection of all subsets of Ω, and
- P : E → [0, 1] is the probability measure on Ω defined by the formula

  P({ω_{i_1}, ..., ω_{i_k}}) = p_{i_1} + ··· + p_{i_k}.

EXAMPLE 2 The Interval [0, 1]
Consider an experiment whose outcome is a random number between 0 and 1. We can model such an experiment by the probability space (Ω, E, P), where:
- Ω is the interval [0, 1],
- E is the collection of all Lebesgue measurable subsets of [0, 1], and
- P : E → [0, 1] is Lebesgue measure on [0, 1].

Using this model, the probability that the outcome lies in a given measurable set E ⊆ [0, 1] is equal to the Lebesgue measure of E. For example, the probability that the outcome is rational is 0, and the probability that the outcome lies between 0.3 and 0.4 is 1/10.

EXAMPLE 3 Product Spaces
Let (Ω_1, E_1, P_1) and (Ω_2, E_2, P_2) be probability spaces corresponding to two possible
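The finite probability space of Example 1 translates directly into code: the measure is just a function on subsets that adds up the point probabilities. A minimal sketch (the loaded three-outcome experiment is an invented example, not from the notes):

```python
from fractions import Fraction

# A finite sample space with assigned point probabilities
# (hypothetical three-outcome experiment).
p = {"a": Fraction(1, 2), "b": Fraction(1, 3), "c": Fraction(1, 6)}

def P(event):
    """P({w_i1, ..., w_ik}) = p_i1 + ... + p_ik for any subset of Omega."""
    return sum((p[w] for w in event), Fraction(0))

total = P(p.keys())    # P(Omega) must equal 1
p_ab = P({"a", "b"})   # 1/2 + 1/3 = 5/6
```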

experiments. Now, imagine that we perform both experiments, recording the outcome of each. The combined outcome for this experiment is an ordered pair (ω_1, ω_2), where ω_1 ∈ Ω_1 and ω_2 ∈ Ω_2. In fact, the combined experiment corresponds to a probability space (Ω, E, P), where:
- Ω is the Cartesian product Ω_1 × Ω_2.
- E is the σ-algebra generated by all events of the form E_1 × E_2, where E_1 ∈ E_1 and E_2 ∈ E_2.
- P : E → [0, 1] is the product of the measures P_1 and P_2. That is, P is the unique measure with domain E satisfying

  P(E_1 × E_2) = P_1(E_1) P_2(E_2)

  for all E_1 ∈ E_1 and E_2 ∈ E_2.

For example, if we pick two random numbers between 0 and 1, the corresponding sample space is the square [0, 1] × [0, 1], with the probability measure being two-dimensional Lebesgue measure.

EXAMPLE 4 The Cantor Set
Suppose we flip an infinite sequence of coins, recording 0 for heads and 1 for tails. The outcome of this experiment will be an infinite sequence

(1, 0, 1, 1, 0, 1, 0, 0, 1, ...)

of 0's and 1's, so the sample space Ω is the infinite product {0, 1}^ℕ.

Figure 1: (a) Each infinite sequence of coin flips corresponds to a path down an infinite binary tree; here the sequence begins with 010. (b) The leaves of an infinite binary tree form a Cantor set.

Although the sample space {0, 1}^ℕ is infinite-dimensional, we can visualize each outcome as a path down an infinite binary tree, as shown in Figure 1a. If we imagine that this tree has leaves at infinity, then each outcome corresponds to one such leaf. Indeed, it is possible to draw this tree so that the leaves are visible, as shown in Figure 1b. As you can see, the leaves of the tree form a Cantor set. Indeed, under
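The defining property P(E_1 × E_2) = P_1(E_1) P_2(E_2) of the product measure can be illustrated for two small finite experiments, say a fair die and a fair coin (our own toy example, not from the notes):

```python
from fractions import Fraction

# Two independent experiments: a fair die (Omega1) and a fair coin (Omega2).
P1 = {k: Fraction(1, 6) for k in range(1, 7)}
P2 = {"H": Fraction(1, 2), "T": Fraction(1, 2)}

def P(E1, E2):
    """Product measure on rectangles: P(E1 x E2) = P1(E1) * P2(E2)."""
    return (sum((P1[a] for a in E1), Fraction(0))
            * sum((P2[b] for b in E2), Fraction(0)))

# P({5, 6} x {H}) = (2/6) * (1/2) = 1/6
p_rect = P({5, 6}, {"H"})
```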

the product topology, the sample space Ω = {0, 1}^ℕ is homeomorphic to the standard middle-thirds Cantor set.

It is not too hard to put a measure on Ω. Given a finite sequence b_1, ..., b_n of 0's and 1's, let B(b_1, ..., b_n) be the set of outcomes whose first n flips are b_1, ..., b_n, and define

P_0(B(b_1, ..., b_n)) = 1/2^n.

Let B be the collection of all such sets, and let

P*(E) = inf { Σ_n P_0(B_n) : B_1, B_2, ... ∈ B and E ⊆ ⋃_n B_n }

for every E ⊆ Ω. Then P* is an outer measure on Ω, and the measure P that it determines is a probability measure.

The mechanism described above for putting a measure on {0, 1}^ℕ can be modified to put a measure on Ω^ℕ for any probability space (Ω, E, P). For example, it is possible to talk about an experiment in which we roll an infinite sequence of dice, or pick an infinite sequence of random numbers between 0 and 1, and for each of these there is a corresponding probability space.

Random Variables

A random variable is a quantity whose value is determined by the results of a random experiment. For example, if we roll two dice, then the sum of the values of the dice is a random variable. In general, a random variable may take values in any set S.

Definition: Random Variable
A random variable on a probability space (Ω, E, P) is a function X : Ω → S, where S is any set.

In the case where S = ℝ (or, more generally, when S is a topological space), we usually require a random variable to be a measurable function, i.e. X⁻¹(U) should be measurable for every open set U ⊆ S.

EXAMPLE 5
Here are some basic examples of random variables:
- If we roll two dice, then the values X and Y that show on the dice are random variables Ω → {1, 2, 3, 4, 5, 6}. Expressions involving X and Y, such as the sum X + Y or the quantity X² + Y³, are also random variables.
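For the cylinder sets B(b_1, ..., b_n), the set function P_0 is easy to compute directly. A tiny sketch (the function name is our own):

```python
from fractions import Fraction

def P0(bits):
    """P0(B(b1, ..., bn)) = 1/2**n: the probability that an infinite
    sequence of fair coin flips begins with the given finite pattern."""
    return Fraction(1, 2 ** len(bits))

p_start_010 = P0((0, 1, 0))  # 1/8
p_whole_space = P0(())       # the empty pattern is all of Omega
```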

- If we pick a random number between 0 and 1, then the number X that we pick is a random variable Ω → [0, 1].
- For an infinite sequence of coin flips, we can define a sequence C_1, C_2, ... of random variables Ω → {0, 1} by

  C_n = 0 if the nth flip is heads, and C_n = 1 if the nth flip is tails.

Although a random variable is defined as a function, we usually think of it as a variable that depends on the outcome ω ∈ Ω. In particular, we will often write X when we really mean X(ω). For example, if X is a real-valued random variable, then P(X ≥ 3) refers to P({ω ∈ Ω : X(ω) ≥ 3}).

Definition: Distribution of a Random Variable
The distribution of a random variable X : Ω → S is the probability measure P_X on S defined by

P_X(T) = P(X ∈ T)

for every measurable set T ⊆ S.

The expression P(X ∈ T) in the definition above refers to the probability that the value of X is an element of T, i.e. the probability of the event X⁻¹(T). Thus the distribution of X is defined by the equation

P_X(T) = P(X⁻¹(T)).

Note that the set X⁻¹(T) is automatically measurable, since X is a measurable function. In measure-theoretic terms, P_X is the pushforward of the probability measure P to the set S via X.

EXAMPLE 6 Die Roll
Let X : Ω → {1, 2, 3, 4, 5, 6} be the value of a die roll. Then for any subset T of {1, 2, 3, 4, 5, 6}, we have

P_X(T) = P(X ∈ T) = |T| / 6.

That is, P_X is the probability measure on {1, 2, 3, 4, 5, 6} in which each point has probability 1/6.
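The pushforward construction P_X(T) = P(X⁻¹(T)) is easy to carry out when Ω is finite: push each outcome's probability forward to the value that X assigns it. A sketch for two fair dice (the helper names are our own):

```python
from collections import defaultdict
from fractions import Fraction
from itertools import product

# Underlying space: 36 equally likely outcomes for two fair dice.
Omega = list(product(range(1, 7), repeat=2))

def distribution(X):
    """Pushforward P_X of the uniform measure on Omega via X:
    P_X({s}) = P(X^-1({s}))."""
    P_X = defaultdict(Fraction)
    for w in Omega:
        P_X[X(w)] += Fraction(1, len(Omega))
    return dict(P_X)

P_sum = distribution(lambda w: w[0] + w[1])  # distribution of X + Y
```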

EXAMPLE 7 Random Real Number
Let X : Ω → [0, 1] be a random real number between 0 and 1. Assuming X is equally likely to lie in any portion of the interval [0, 1], the distribution P_X is just Lebesgue measure on [0, 1]. This is known as the uniform distribution on [0, 1].

The most basic type of random variable is a discrete variable:

Definition: Discrete Random Variable
A random variable X : Ω → S is said to be discrete if S is finite or countable.

If X : Ω → S is discrete, then the probability distribution P_X for X is completely determined by the probability of each element of S. In particular,

P_X(T) = Σ_{x ∈ T} P_X({x})

for any subset T of S.

EXAMPLE 8 Difference of Dice
Let X and Y be random die rolls, and let Z = |X − Y|. Then Z is a random variable Ω → {0, 1, 2, 3, 4, 5}, with the following probability distribution:

P_Z({0}) = 1/6,  P_Z({1}) = 10/36,  P_Z({2}) = 8/36,
P_Z({3}) = 6/36,  P_Z({4}) = 4/36,  P_Z({5}) = 2/36.

We end with a useful formula for integrating with respect to a probability distribution. This is essentially just a restatement of the formula for the Lebesgue integral with respect to a pushforward measure.

Proposition 1 Integration Formula for Distributions
Let X : Ω → S be a random variable, and let g : S → ℝ be a measurable function. Then

∫_S g dP_X = ∫_Ω g(X) dP,

where g(X) denotes the random variable g ∘ X : Ω → ℝ.
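The distribution in Example 8 can be verified by counting outcomes; this sketch (assuming Z = |X − Y|, our own check) tallies the 36 equally likely pairs:

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Tally Z = |X - Y| over the 36 equally likely outcomes for two dice.
counts = Counter(abs(x - y) for x, y in product(range(1, 7), repeat=2))
P_Z = {z: Fraction(n, 36) for z, n in counts.items()}
```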

Continuous Random Variables

One particularly important type of random variable is a continuous variable on ℝ.

Definition: Continuous Random Variable
Let X : Ω → ℝ be a random variable with probability distribution P_X. We say that X is continuous if there exists a measurable function f_X : ℝ → [0, ∞] such that

P_X(T) = ∫_T f_X dm

for every measurable set T ⊆ ℝ, where m denotes Lebesgue measure. In this case, the function f_X is called a probability density function for X.

That is, X is continuous if P_X is absolutely continuous with respect to Lebesgue measure, i.e. if dP_X = f_X dm for some non-negative measurable function f_X. Recall the following formula for integrals with respect to such measures.

Proposition 2 Weighted Integration
Let X : Ω → ℝ be a continuous random variable with probability density f_X, and let g : ℝ → ℝ be a measurable function. Then

∫_ℝ g dP_X = ∫_ℝ g f_X dm.

EXAMPLE 9 Standard Normal Distribution
Consider a random variable X : Ω → ℝ with probability density function defined by

f_X(x) = (1/√(2π)) exp(−x²/2).

In this case, X is said to have the standard normal distribution. A graph of the function f_X is shown in Figure 2a. For such an X, the probability P_X(T) that the value of X lies in any set T is given by the formula

P_X(T) = ∫_T f_X dm.

Figure 2: (a) The probability density f_X for a standard normal distribution. (b) P(a < X < b) is the area under the graph of f_X on the interval (a, b).

For example, if (a, b) is an open interval, then

P(a < X < b) = ∫_(a,b) f_X dm = ∫_a^b (1/√(2π)) exp(−x²/2) dx.

This is illustrated in Figure 2b.

The probability density function f_X can be thought of as describing the probability per unit length of P_X. The following theorem gives us a formula for this function:

Theorem 3 Density Formula
Let X : Ω → ℝ be a random variable, and define a function f_X : ℝ → [0, ∞] by

f_X(x) = lim_{h → 0⁺} P_X([x − h, x + h]) / (2h).

Then f_X(x) is defined for almost all x, and f_X is measurable. Moreover:
1. If ∫_ℝ f_X dm = 1, then X is a continuous random variable.
2. If X is continuous, then f_X is a probability density function for X.

PROOF This is related to the Lebesgue differentiation theorem, though it does not follow from it immediately.
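The probability P(a < X < b) for a standard normal variable can be approximated by a Riemann sum of the density and compared against the closed form via the error function (this numerical check is our own addition, not part of the notes):

```python
import math

def std_normal_cdf(x):
    """Closed-form CDF of the standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def prob_between(a, b, n=100_000):
    """P(a < X < b) as a midpoint Riemann sum of the density
    (1/sqrt(2*pi)) * exp(-x**2/2) over (a, b)."""
    h = (b - a) / n
    total = sum(math.exp(-(a + (k + 0.5) * h) ** 2 / 2.0) for k in range(n))
    return total * h / math.sqrt(2.0 * math.pi)

approx = prob_between(-1.0, 2.0)
exact = std_normal_cdf(2.0) - std_normal_cdf(-1.0)  # about 0.8186
```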

Figure 3: Probability density for X², where X is chosen uniformly from [0, 1].

EXAMPLE 10 Density of X²
Let X : Ω → [0, 1] be uniformly distributed, and let Y = X². Then for any interval [a, b] ⊆ [0, 1], we have

Y ∈ [a, b]  ⟺  X ∈ [√a, √b],

so

P_Y([a, b]) = P_X([√a, √b]) = √b − √a.

Therefore,

f_Y(x) = lim_{h → 0⁺} P_Y([x − h, x + h]) / (2h) = lim_{h → 0⁺} (√(x + h) − √(x − h)) / (2h) = 1 / (2√x).

A plot of this function is shown in Figure 3. Note that the total area under the graph of the function is 1, which proves that Y is continuous, and that f_Y is a probability density function for Y.

Finally, it should be mentioned that most of the discussion of continuous random variables generalizes to random variables X : Ω → ℝⁿ. Such a variable is called continuous if there exists a function f_X : ℝⁿ → [0, ∞] so that

P_X(T) = ∫_T f_X dµ

for every measurable T ⊆ ℝⁿ, where µ denotes Lebesgue measure on ℝⁿ. The probability density for such a variable is determined by the formula

f_X(p) = lim_{r → 0⁺} P_X(B(p, r)) / µ(B(p, r)),

where B(p, r) denotes the ball of radius r in ℝⁿ centered at the point p.
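The limit formula of Theorem 3 can be tested numerically on Example 10: for small h, the ratio P_Y([x − h, x + h]) / (2h) should be close to 1/(2√x). A sketch (our own check, with hypothetical helper names):

```python
import math

def P_Y(a, b):
    """P(a <= Y <= b) for Y = X**2 with X uniform on [0, 1]:
    equals sqrt(b) - sqrt(a) for 0 <= a <= b <= 1."""
    return math.sqrt(b) - math.sqrt(a)

def f_Y_approx(x, h=1e-6):
    """Approximate density via the ratio P_Y([x - h, x + h]) / (2h)."""
    return P_Y(x - h, x + h) / (2 * h)

# The exact density at x = 0.25 is 1/(2*sqrt(0.25)) = 1.
approx = f_Y_approx(0.25)
```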

Expected Value

Definition: Expected Value
Let X : Ω → ℝ be a random variable on a probability space (Ω, E, P). The expected value of X is defined as follows:

EX = ∫_Ω X dP.

The expected value is sometimes called the average value or mean of X, and is also denoted X̄ or ⟨X⟩.

It is possible to calculate the expected value of a random variable directly from the distribution:

Proposition 4 Expected Value for a Distribution
Let X : Ω → ℝ be a random variable with distribution P_X. Then

EX = ∫_ℝ x dP_X(x).

PROOF Let g : ℝ → ℝ be the function g(x) = x. By the integration formula for distributions (Proposition 1), we have

∫_ℝ x dP_X(x) = ∫_ℝ g dP_X = ∫_Ω g(X) dP = ∫_Ω X dP = EX.

EXAMPLE 11 Expected Value of a Discrete Variable
Let X : Ω → S be a discrete random variable, where S is a finite subset of ℝ. Then it follows from the proposition above that

EX = Σ_{x ∈ S} x P_X({x}).

For example, if X is the value of a die roll, then

EX = 1(1/6) + 2(1/6) + 3(1/6) + 4(1/6) + 5(1/6) + 6(1/6) = 3.5.
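For the die roll in Example 11, the formula EX = Σ x P_X({x}) is a one-line computation in exact arithmetic (this snippet is illustrative only, not from the notes):

```python
from fractions import Fraction

# EX = sum of x * P_X({x}) for the value of a fair die roll.
P_X = {n: Fraction(1, 6) for n in range(1, 7)}
EX = sum(x * p for x, p in P_X.items())  # 7/2 = 3.5
```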

EXAMPLE 12 Expected Value of a Continuous Variable
Let X : Ω → ℝ be a continuous random variable with probability density function f_X. Then it follows from the weighted integration formula (Proposition 2) that

EX = ∫_ℝ x f_X(x) dm(x).

For example, if X is a random number from [0, 1] with uniform distribution, then

EX = ∫_[0,1] x dm(x) = 1/2.

For another example, consider the random variable Y = X². The probability density function for Y is f_Y(x) = 1/(2√x) (see Example 10), so

EY = ∫_[0,1] x · (1/(2√x)) dm(x) = 1/3.

EXAMPLE 13 Infinite Expected Value
It is possible for a random variable to have infinite expected value. For example, let X : Ω → [1, ∞) be a continuous random variable with

f_X(x) = 1/x².

Note that ∫_[1,∞) f_X dm = 1, so this is indeed a legitimate probability density function. Then the expected value of X is

EX = ∫_[1,∞) x f_X(x) dm(x) = ∫_1^∞ (1/x) dx = ∞.

The same phenomenon can occur for a discrete random variable. For example, if X : Ω → ℕ satisfies

P_X({n}) = 1/(Cn²),

where C = Σ_{n=1}^∞ 1/n² = π²/6, then

EX = Σ_{n=1}^∞ n P_X({n}) = Σ_{n=1}^∞ 1/(Cn) = ∞.

Note that both of these examples involve positive random variables. For a general variable X : Ω → ℝ, it is also possible for EX to be entirely undefined.

The following proposition lists some basic properties of expected value. These follow directly from the corresponding properties of the Lebesgue integral:

Proposition 5 Properties of Expected Value
Let X : Ω → ℝ and Y : Ω → ℝ be random variables with finite expected value. Then:
1. If C is constant, then EC = C.
2. E[αX] = α EX for any α ∈ ℝ.
3. E[X + Y] = EX + EY.
4. |EX| ≤ E|X|.
5. If X ≥ 0, then EX ≥ 0, with equality if and only if P(X = 0) = 1.

Variance and Standard Deviation

There is much more to the distribution of a variable than its expected value. For example, the average length of an adult blue whale is about 80 feet, but this tells us very little about whale lengths: are most blue whales within a few feet of 80 feet long, or is the range much wider? The tendency of a random variable to take values far away from the mean is called dispersion. Two common measures of dispersion are the variance and the standard deviation:

Definition: Variance and Standard Deviation
Let X : Ω → ℝ be a random variable with finite expected value µ. The variance of X is the quantity

Var(X) = E[(X − µ)²].

The square root of Var(X) is called the standard deviation of X.

Though the variance has nicer theoretical properties, the standard deviation is easier to interpret, since it is measured in the same units as X. For example, if X is a random variable measured in feet, then the standard deviation of X is also measured in feet, while the variance is measured in square feet. The standard deviation in the length of an adult blue whale is about 10 feet, meaning that most whales are roughly 70 to 90 feet long.

To give you a feel for these quantities, Figure 4a shows several normal distributions with different standard deviations: the standard normal distribution, which has standard deviation 1, together with normal distributions of standard deviation 0.5 and 2. As a general rule, a normally distributed random variable has roughly

Figure 4: (a) Normal distributions with standard deviations of 0.5, 1, and 2. (b) A normally distributed variable has a 68.2% chance of being within one standard deviation of the mean.

a 68.2% probability of being within one standard deviation of the mean, as shown in Figure 4b. In particular,

∫_{−1}^{1} (1/√(2π)) exp(−x²/2) dx ≈ 0.682.

For example, if we assume that the length of a random adult blue whale is normally distributed with a mean of 80 feet and a standard deviation of 10 feet, then approximately 68% of blue whales will be between 70 and 90 feet long.

As with expected value, we can compute the variance of a random variable directly from the distribution:

Proposition 6 Variance of a Distribution
Let X : Ω → ℝ be a random variable with finite mean µ and distribution P_X. Then

Var(X) = ∫_ℝ (x − µ)² dP_X(x).

PROOF Let g : ℝ → ℝ be the function g(x) = (x − µ)². By the integration formula for distributions (Proposition 1), we have

∫_ℝ g dP_X = ∫_Ω g(X) dP = ∫_Ω (X − µ)² dP = E[(X − µ)²] = Var(X).
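The 68.2% figure can be reproduced by numerically integrating the standard normal density over [−1, 1] (a midpoint Riemann sum; this check is our own addition):

```python
import math

# Midpoint Riemann sum of the standard normal density over [-1, 1].
n = 200_000
h = 2.0 / n
total = sum(math.exp(-(-1.0 + (k + 0.5) * h) ** 2 / 2.0) for k in range(n))
within_one_sd = total * h / math.sqrt(2.0 * math.pi)  # about 0.6827
```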

EXAMPLE 14 Variance of a Discrete Variable
Let X : Ω → S be a discrete random variable, where S ⊆ ℝ, and let µ = EX. Then

Var(X) = Σ_{x ∈ S} (x − µ)² P_X({x}).

For example, if X is the value of a die roll, then EX = 3.5, so

Var(X) = Σ_{n=1}^{6} (n − 3.5)² (1/6) = 35/12 ≈ 2.917.

Taking the square root yields a standard deviation of approximately 1.708.

EXAMPLE 15 Variance of a Continuous Variable
Let X : Ω → ℝ be a continuous random variable with probability density function f_X. Then it follows from the weighted integration formula (Proposition 2) that

Var(X) = ∫_ℝ (x − µ)² f_X(x) dm(x).

For example, if X is a random number from [0, 1] with uniform distribution, then EX = 1/2, so

Var(X) = ∫_[0,1] (x − 1/2)² dm(x) = 1/12.

Taking the square root yields a standard deviation of approximately 0.289.

EXAMPLE 16 Normal Distributions
A random variable X : Ω → ℝ is said to be normally distributed if it has a probability density function of the form

f_X(x) = (1/(σ√(2π))) exp(−(1/2)((x − µ)/σ)²).

Such a variable has mean µ and standard deviation σ, as can be seen from the following integral formulas:

∫_ℝ (x/(σ√(2π))) exp(−(1/2)((x − µ)/σ)²) dm(x) = µ,

∫_ℝ ((x − µ)²/(σ√(2π))) exp(−(1/2)((x − µ)/σ)²) dm(x) = σ².
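Both variance computations in Examples 14 and 15 are easy to verify: the die roll exactly, and the uniform distribution by a Riemann sum (an illustrative sketch, not part of the notes):

```python
from fractions import Fraction

# Variance of a fair die roll, exactly: sum of (n - mu)**2 * (1/6).
mu = Fraction(7, 2)
var_die = sum((n - mu) ** 2 * Fraction(1, 6) for n in range(1, 7))  # 35/12

# Variance of the uniform distribution on [0, 1], by a midpoint
# Riemann sum of (x - 1/2)**2; the exact value is 1/12.
n = 100_000
var_uniform = sum(((k + 0.5) / n - 0.5) ** 2 for k in range(n)) / n
```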

EXAMPLE 17 Infinite Variance
As with the expected value, it is possible for the variance of a random variable to be infinite or undefined. For example, let X : Ω → ℝ be a random variable with density

f_X(x) = C/(1 + |x|³),

where C = 3√3/(4π). Then X has finite expected value, with

EX = ∫_ℝ Cx/(1 + |x|³) dm(x) = 0,

but

Var(X) = ∫_ℝ Cx²/(1 + |x|³) dm(x) = ∞.

From a measure-theoretic point of view, this random variable is an L¹ function on the probability space Ω, but it is not L².

We end by stating another standard formula for variance.

Proposition 7 Alternative Formula for Variance
Let X : Ω → ℝ be a random variable with finite expected value. Then

Var(X) = E[X²] − (EX)².

PROOF Let µ = EX. Then

Var(X) = E[(X − µ)²] = E[X² − 2µX + µ²] = E[X²] − 2µ(EX) + µ² = E[X²] − 2µ² + µ² = E[X²] − µ².

It follows from this formula that Var(X) is finite if and only if E[X²] < ∞, i.e. if and only if X ∈ L²(Ω).

Exercises

1. Let X : Ω → (0, 1] have the uniform distribution, and let Y = 1/X.
   a) Find the probability density function f_Y : [1, ∞) → [0, ∞] for Y.
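Proposition 7 can be sanity-checked on the die roll: computing E[X²] − (EX)² and E[(X − µ)²] in exact arithmetic gives the same value (a sketch of our own, not from the notes):

```python
from fractions import Fraction

# The value of a fair die roll.
P_X = {n: Fraction(1, 6) for n in range(1, 7)}
EX = sum(x * p for x, p in P_X.items())
EX2 = sum(x * x * p for x, p in P_X.items())

var_alt = EX2 - EX ** 2                                   # E[X^2] - (EX)^2
var_def = sum((x - EX) ** 2 * p for x, p in P_X.items())  # E[(X - mu)^2]
```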

   b) What is the expected value of Y?

2. Let X : Ω → [0, 1] have the uniform distribution, and let Y = sin(8X). Use the integration formula for distributions (Proposition 1) to compute EY.

3. Let X and Y be the values of two die rolls, and let Z = max(X, Y).
   a) Find P_Z({n}) for n ∈ {1, 2, 3, 4, 5, 6}.
   b) Determine EZ, and find the standard deviation of Z.

4. Let X : Ω → [1, ∞) be a continuous random variable with probability density function f_X(x) = 3/x⁴. Compute EX and Var(X).


6.8. TAYLOR AND MACLAURIN S SERIES 357 6.8 Taylor and Maclaurin s Series 6.8.1 Introduction The previous section showed us how to find the series representation of some functions by using the series representation

### Introduction to Probability

Introduction to Probability EE 179, Lecture 15, Handout #24 Probability theory gives a mathematical characterization for experiments with random outcomes. coin toss life of lightbulb binary data sequence

### P (A) = lim P (A) = N(A)/N,

1.1 Probability, Relative Frequency and Classical Definition. Probability is the study of random or non-deterministic experiments. Suppose an experiment can be repeated any number of times, so that we

### Probability for Estimation (review)

Probability for Estimation (review) In general, we want to develop an estimator for systems of the form: x = f x, u + η(t); y = h x + ω(t); ggggg y, ffff x We will primarily focus on discrete time linear

### 4. Continuous Random Variables, the Pareto and Normal Distributions

4. Continuous Random Variables, the Pareto and Normal Distributions A continuous random variable X can take any value in a given range (e.g. height, weight, age). The distribution of a continuous random

### Cork Institute of Technology. CIT Mathematics Examination, Paper 2 Sample Paper A

Cork Institute of Technology CIT Mathematics Examination, 2015 Paper 2 Sample Paper A Answer ALL FIVE questions. Each question is worth 20 marks. Total marks available: 100 marks. The standard Formulae

### Chap2: The Real Number System (See Royden pp40)

Chap2: The Real Number System (See Royden pp40) 1 Open and Closed Sets of Real Numbers The simplest sets of real numbers are the intervals. We define the open interval (a, b) to be the set (a, b) = {x

### Metric Spaces. Chapter 7. 7.1. Metrics

Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some

### 10.2 Series and Convergence

10.2 Series and Convergence Write sums using sigma notation Find the partial sums of series and determine convergence or divergence of infinite series Find the N th partial sums of geometric series and

### Conditional expectation

A Conditional expectation A.1 Review of conditional densities, expectations We start with the continuous case. This is sections 6.6 and 6.8 in the book. Let X, Y be continuous random variables. We defined

### Discrete and Continuous Random Variables. Summer 2003

Discrete and Continuous Random Variables Summer 003 Random Variables A random variable is a rule that assigns a numerical value to each possible outcome of a probabilistic experiment. We denote a random

### RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS

RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS. DISCRETE RANDOM VARIABLES.. Definition of a Discrete Random Variable. A random variable X is said to be discrete if it can assume only a finite or countable

### WHERE DOES THE 10% CONDITION COME FROM?

1 WHERE DOES THE 10% CONDITION COME FROM? The text has mentioned The 10% Condition (at least) twice so far: p. 407 Bernoulli trials must be independent. If that assumption is violated, it is still okay

### Random Variables and Measurable Functions.

Chapter 3 Random Variables and Measurable Functions. 3.1 Measurability Definition 42 (Measurable function) Let f be a function from a measurable space (Ω, F) into the real numbers. We say that the function

### 6. Jointly Distributed Random Variables

6. Jointly Distributed Random Variables We are often interested in the relationship between two or more random variables. Example: A randomly chosen person may be a smoker and/or may get cancer. Definition.

### Continuous random variables

Continuous random variables So far we have been concentrating on discrete random variables, whose distributions are not continuous. Now we deal with the so-called continuous random variables. A random

### Measure Theory and Probability

Chapter 2 Measure Theory and Probability 2.1 Introduction In advanced probability courses, a random variable (r.v.) is defined rigorously through measure theory, which is an in-depth mathematics discipline

### 1 Sufficient statistics

1 Sufficient statistics A statistic is a function T = rx 1, X 2,, X n of the random sample X 1, X 2,, X n. Examples are X n = 1 n s 2 = = X i, 1 n 1 the sample mean X i X n 2, the sample variance T 1 =

### Discrete Mathematics and Probability Theory Fall 2009 Satish Rao, David Tse Note 18. A Brief Introduction to Continuous Probability

CS 7 Discrete Mathematics and Probability Theory Fall 29 Satish Rao, David Tse Note 8 A Brief Introduction to Continuous Probability Up to now we have focused exclusively on discrete probability spaces

### PROBLEM SET 7: PIGEON HOLE PRINCIPLE

PROBLEM SET 7: PIGEON HOLE PRINCIPLE The pigeonhole principle is the following observation: Theorem. Suppose that > kn marbles are distributed over n jars, then one jar will contain at least k + marbles.

### and s n (x) f(x) for all x and s.t. s n is measurable if f is. REAL ANALYSIS Measures. A (positive) measure on a measurable space

RAL ANALYSIS A survey of MA 641-643, UAB 1999-2000 M. Griesemer Throughout these notes m denotes Lebesgue measure. 1. Abstract Integration σ-algebras. A σ-algebra in X is a non-empty collection of subsets

sheng@mail.ncyu.edu.tw 1 Content Introduction Expectation and variance of continuous random variables Normal random variables Exponential random variables Other continuous distributions The distribution

### Exercises with solutions (1)

Exercises with solutions (). Investigate the relationship between independence and correlation. (a) Two random variables X and Y are said to be correlated if and only if their covariance C XY is not equal

### Definition: Suppose that two random variables, either continuous or discrete, X and Y have joint density

HW MATH 461/561 Lecture Notes 15 1 Definition: Suppose that two random variables, either continuous or discrete, X and Y have joint density and marginal densities f(x, y), (x, y) Λ X,Y f X (x), x Λ X,

### Chapter 3: DISCRETE RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS. Part 3: Discrete Uniform Distribution Binomial Distribution

Chapter 3: DISCRETE RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS Part 3: Discrete Uniform Distribution Binomial Distribution Sections 3-5, 3-6 Special discrete random variable distributions we will cover

### MATH 201. Final ANSWERS August 12, 2016

MATH 01 Final ANSWERS August 1, 016 Part A 1. 17 points) A bag contains three different types of dice: four 6-sided dice, five 8-sided dice, and six 0-sided dice. A die is drawn from the bag and then rolled.

### Our goal first will be to define a product measure on A 1 A 2.

1. Tensor product of measures and Fubini theorem. Let (A j, Ω j, µ j ), j = 1, 2, be two measure spaces. Recall that the new σ -algebra A 1 A 2 with the unit element is the σ -algebra generated by the

### Chapter 2, part 2. Petter Mostad

Chapter 2, part 2 Petter Mostad mostad@chalmers.se Parametrical families of probability distributions How can we solve the problem of learning about the population distribution from the sample? Usual procedure:

### Set theory, and set operations

1 Set theory, and set operations Sayan Mukherjee Motivation It goes without saying that a Bayesian statistician should know probability theory in depth to model. Measure theory is not needed unless we

### Chapter 4 Expected Values

Chapter 4 Expected Values 4. The Expected Value of a Random Variables Definition. Let X be a random variable having a pdf f(x). Also, suppose the the following conditions are satisfied: x f(x) converges

### REAL ANALYSIS LECTURE NOTES: 1.4 OUTER MEASURE

REAL ANALYSIS LECTURE NOTES: 1.4 OUTER MEASURE CHRISTOPHER HEIL 1.4.1 Introduction We will expand on Section 1.4 of Folland s text, which covers abstract outer measures also called exterior measures).

### TOPIC 3: CONTINUITY OF FUNCTIONS

TOPIC 3: CONTINUITY OF FUNCTIONS. Absolute value We work in the field of real numbers, R. For the study of the properties of functions we need the concept of absolute value of a number. Definition.. Let

### Structure of Measurable Sets

Structure of Measurable Sets In these notes we discuss the structure of Lebesgue measurable subsets of R from several different points of view. Along the way, we will see several alternative characterizations

### The Ergodic Theorem and randomness

The Ergodic Theorem and randomness Peter Gács Department of Computer Science Boston University March 19, 2008 Peter Gács (Boston University) Ergodic theorem March 19, 2008 1 / 27 Introduction Introduction

### Problem Set. Problem Set #2. Math 5322, Fall December 3, 2001 ANSWERS

Problem Set Problem Set #2 Math 5322, Fall 2001 December 3, 2001 ANSWERS i Problem 1. [Problem 18, page 32] Let A P(X) be an algebra, A σ the collection of countable unions of sets in A, and A σδ the collection

### Expectation Discrete RV - weighted average Continuous RV - use integral to take the weighted average

PHP 2510 Expectation, variance, covariance, correlation Expectation Discrete RV - weighted average Continuous RV - use integral to take the weighted average Variance Variance is the average of (X µ) 2

### Math 431 An Introduction to Probability. Final Exam Solutions

Math 43 An Introduction to Probability Final Eam Solutions. A continuous random variable X has cdf a for 0, F () = for 0 <

### Sets and functions. {x R : x > 0}.

Sets and functions 1 Sets The language of sets and functions pervades mathematics, and most of the important operations in mathematics turn out to be functions or to be expressible in terms of functions.

### x if x 0, x if x < 0.

Chapter 3 Sequences In this chapter, we discuss sequences. We say what it means for a sequence to converge, and define the limit of a convergent sequence. We begin with some preliminary results about the

### Probability Generating Functions

page 39 Chapter 3 Probability Generating Functions 3 Preamble: Generating Functions Generating functions are widely used in mathematics, and play an important role in probability theory Consider a sequence

### 1. Prove that the empty set is a subset of every set.

1. Prove that the empty set is a subset of every set. Basic Topology Written by Men-Gen Tsai email: b89902089@ntu.edu.tw Proof: For any element x of the empty set, x is also an element of every set since

### MULTIVARIATE PROBABILITY DISTRIBUTIONS

MULTIVARIATE PROBABILITY DISTRIBUTIONS. PRELIMINARIES.. Example. Consider an experiment that consists of tossing a die and a coin at the same time. We can consider a number of random variables defined

### Random Vectors and the Variance Covariance Matrix

Random Vectors and the Variance Covariance Matrix Definition 1. A random vector X is a vector (X 1, X 2,..., X p ) of jointly distributed random variables. As is customary in linear algebra, we will write

### Question: What is the probability that a five-card poker hand contains a flush, that is, five cards of the same suit?

ECS20 Discrete Mathematics Quarter: Spring 2007 Instructor: John Steinberger Assistant: Sophie Engle (prepared by Sophie Engle) Homework 8 Hints Due Wednesday June 6 th 2007 Section 6.1 #16 What is the

### 6. Distribution and Quantile Functions

Virtual Laboratories > 2. Distributions > 1 2 3 4 5 6 7 8 6. Distribution and Quantile Functions As usual, our starting point is a random experiment with probability measure P on an underlying sample spac

### Department of Mathematics, Indian Institute of Technology, Kharagpur Assignment 2-3, Probability and Statistics, March 2015. Due:-March 25, 2015.

Department of Mathematics, Indian Institute of Technology, Kharagpur Assignment -3, Probability and Statistics, March 05. Due:-March 5, 05.. Show that the function 0 for x < x+ F (x) = 4 for x < for x

### You flip a fair coin four times, what is the probability that you obtain three heads.

Handout 4: Binomial Distribution Reading Assignment: Chapter 5 In the previous handout, we looked at continuous random variables and calculating probabilities and percentiles for those type of variables.

### RANDOM INTERVAL HOMEOMORPHISMS. MICHA L MISIUREWICZ Indiana University Purdue University Indianapolis

RANDOM INTERVAL HOMEOMORPHISMS MICHA L MISIUREWICZ Indiana University Purdue University Indianapolis This is a joint work with Lluís Alsedà Motivation: A talk by Yulij Ilyashenko. Two interval maps, applied