MAS108 Probability I: Notes 11, Autumn 2005

Two discrete random variables

If X and Y are discrete random variables defined on the same sample space, then events such as "X = x and Y = y" are well defined. The joint distribution of X and Y is a list of the probabilities

    p_{X,Y}(x, y) = P(X = x and Y = y)

for all values of x and y that occur. The list is usually shown as a two-way table. This table is called the joint probability mass function of X and Y. We abbreviate p_{X,Y}(x, y) to p(x, y) if X and Y can be understood from the context.

Example (Muffins: part 1) I have a bag containing 5 chocolate-chip muffins, 3 blueberry muffins and 2 lemon muffins. I choose three muffins from this bag at random. Let X be the number of lemon muffins chosen and Y be the number of blueberry muffins chosen. We calculate the values of the joint probability mass function. First note that |S| = 10C3 = 120. Then

    P(X = 0 and Y = 0) = P(3 chocolate-chip are chosen) = 5C3 / 120 = 10/120,
    P(X = 0 and Y = 1) = P(1 blueberry and 2 chocolate-chip are chosen) = (3C1 × 5C2) / 120 = 30/120.

The other values are found by similar calculations, giving the following table (rows are values of X, columns are values of Y).

                           Values of Y
                       0        1        2        3
                0    10/120   30/120   15/120    1/120
    Values of X 1    20/120   30/120    6/120    0
                2     5/120    3/120    0        0

Of course, we check that the probabilities in the table add up to 1. To obtain the distributions of X and Y individually, find the row sums and the column sums: these give their probability distributions. The marginal probability mass function p_X of X is the list of probabilities

    p_X(x) = P(X = x) = Σ_y p_{X,Y}(x, y)

for all values of x which occur. Similarly, the marginal probability mass function p_Y of Y is given by

    p_Y(y) = P(Y = y) = Σ_x p_{X,Y}(x, y).

The distributions of X and Y are said to be marginal to the joint distribution. They are just ordinary distributions.

Example (Muffins: part 2) Here the marginal p.m.f. of X is

    x         0        1        2
    p_X(x)   56/120   56/120    8/120

and the marginal p.m.f. of Y is

    y         0        1        2        3
    p_Y(y)   35/120   63/120   21/120    1/120

Therefore E(X) = 3/5 and E(Y) = 9/10.

We can define events in terms of X and Y, such as "X < Y" or "X + Y = 3". To find the probability of such an event, find the probability of the set of all pairs (x, y) for which the statement is true. We can also define new random variables as functions of X and Y.
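
These notes contain no code, but for readers who want a quick machine check, the short Python sketch below (my own addition, with all names my own) rebuilds the joint p.m.f. of the muffins example from the counting argument and recovers the marginal means quoted above.

```python
from math import comb
from fractions import Fraction

# Muffins example: 5 chocolate-chip, 3 blueberry and 2 lemon muffins; choose 3.
# X = number of lemon muffins chosen, Y = number of blueberry muffins chosen.
total = comb(10, 3)                      # |S| = 120
p = {}
for x in range(3):                       # 0, 1 or 2 lemon muffins
    for y in range(4):                   # 0, 1, 2 or 3 blueberry muffins
        z = 3 - x - y                    # chocolate-chip muffins making up the three
        if z >= 0:
            p[(x, y)] = Fraction(comb(2, x) * comb(3, y) * comb(5, z), total)

assert sum(p.values()) == 1              # the probabilities in the table add up to 1

# Row and column sums give the marginal p.m.f.s.
pX = {x: sum(q for (a, b), q in p.items() if a == x) for x in range(3)}
pY = {y: sum(q for (a, b), q in p.items() if b == y) for y in range(4)}
EX = sum(x * q for x, q in pX.items())
EY = sum(y * q for y, q in pY.items())
print(EX, EY)                            # 3/5 and 9/10, as stated above
```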

Proposition 10 If g is a real function of two variables then

    E(g(X,Y)) = Σ_x Σ_y g(x, y) p_{X,Y}(x, y).

The proof is just like the proof of Proposition 8.

Example (Muffins: part 3) Put U = X + Y and V = XY. Then

    E(U) = Σ_x Σ_y (x + y) p(x, y) = 3/2 = E(X) + E(Y).

However,

    E(V) = Σ_x Σ_y xy p(x, y) = 2/5 ≠ 27/50 = E(X)E(Y).

Theorem 7
(a) E(X + Y) = E(X) + E(Y) always.
(b) E(XY) is not necessarily equal to E(X)E(Y).

Proof (a)

    E(X + Y) = Σ_x Σ_y (x + y) p_{X,Y}(x, y)                          by Proposition 10
             = Σ_x Σ_y x p_{X,Y}(x, y) + Σ_x Σ_y y p_{X,Y}(x, y)
             = Σ_x x p_X(x) + Σ_y y p_Y(y)
             = E(X) + E(Y).

(b) A single counter-example is all that we need. In the muffins example we saw that E(XY) ≠ E(X)E(Y).

Independence

Random variables X and Y defined on the same sample space are defined to be independent of each other if the events "X = x" and "Y = y" are independent for all values of x and y; that is,

    p_{X,Y}(x, y) = p_X(x) p_Y(y)   for all x and y.
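
Again as an optional aside (not part of the original notes), the same table, hard-coded this time, lets us check the proposition above and Theorem 7 numerically; the helper E and the variable names are my own.

```python
from fractions import Fraction as F

# Joint p.m.f. of the muffins example, probabilities written out of 120.
counts = {(0, 0): 10, (0, 1): 30, (0, 2): 15, (0, 3): 1,
          (1, 0): 20, (1, 1): 30, (1, 2): 6, (2, 0): 5, (2, 1): 3}
p = {xy: F(n, 120) for xy, n in counts.items()}

def E(g):
    """E(g(X,Y)) as the double sum of g(x, y) p_{X,Y}(x, y)."""
    return sum(g(x, y) * q for (x, y), q in p.items())

EX, EY = E(lambda x, y: x), E(lambda x, y: y)
print(E(lambda x, y: x + y), EX + EY)    # both 3/2: E(X + Y) = E(X) + E(Y)
print(E(lambda x, y: x * y), EX * EY)    # 2/5 versus 27/50: E(XY) != E(X)E(Y) here
```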

Example (Muffins: part 4) Here P(X = 0) = 7/15 and P(Y = 0) = 35/120, but

    P(X = 0 and Y = 0) = 10/120 ≠ P(X = 0) P(Y = 0),

so X and Y are not independent of each other. Note that a single pair of values x and y where the probabilities do not multiply is enough to show that X and Y are not independent.

On the other hand, if I roll a die twice, and X and Y are the numbers that come up on the first and second throws, then X and Y will be independent, even if the die is not fair (so that the outcomes are not all equally likely).

If we have more than two random variables (for example X, Y, Z), we say that they are mutually independent if the events that the random variables take specific values (for example, X = a, Y = b, Z = c) are mutually independent.

Theorem 8 If X and Y are independent of each other then E(XY) = E(X)E(Y).

Proof

    E(XY) = Σ_x Σ_y xy p_{X,Y}(x, y)
          = Σ_x Σ_y xy p_X(x) p_Y(y)                if X and Y are independent
          = [Σ_x x p_X(x)] [Σ_y y p_Y(y)]
          = E(X)E(Y).

Note that the converse is not true, as the following example shows.

Example Suppose that X and Y have the joint p.m.f. in this table.

                        Values of Y
                      -1      0       1
    Values of X  0    0      1/2      0
                 1    1/4     0      1/4

The marginal distributions are

    x         0      1
    p_X(x)   1/2    1/2
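
As another optional aside (not from the notes), the sketch below tests the defining condition p_{X,Y}(x, y) = p_X(x) p_Y(y) over every pair of values, for the two-throws-of-a-biased-die example mentioned above; the particular bias is invented purely for illustration.

```python
from fractions import Fraction as F
from itertools import product

# Two throws of the same biased die (the bias is my own invention): the joint
# p.m.f. is the product of the marginals, since the throws do not affect each other.
die = {1: F(1, 4), 2: F(1, 4), 3: F(1, 8), 4: F(1, 8), 5: F(1, 8), 6: F(1, 8)}
p = {(x, y): die[x] * die[y] for x, y in product(die, repeat=2)}

pX, pY = {}, {}
for (x, y), q in p.items():
    pX[x] = pX.get(x, 0) + q
    pY[y] = pY.get(y, 0) + q

independent = all(p[(x, y)] == pX[x] * pY[y] for (x, y) in p)
EX = sum(x * q for x, q in pX.items())
EY = sum(y * q for y, q in pY.items())
EXY = sum(x * y * q for (x, y), q in p.items())
print(independent, EXY == EX * EY)       # True True, as Theorem 8 predicts
```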

and

    y        -1      0      1
    p_Y(y)   1/4    1/2    1/4

Therefore P(X = 0 and Y = 1) = 0 but P(X = 0) P(Y = 1) = 1/2 × 1/4 = 1/8 ≠ 0, so X and Y are not independent of each other. However, E(X) = 1/2 and E(Y) = 0, so E(X)E(Y) = 0, while

    E(XY) = (-1)(1/4) + (1)(1/4) = 0,

so E(XY) = E(X)E(Y).

Covariance and correlation

A measure of the joint spread of X and Y is the covariance of X and Y, which is defined by

    Cov(X,Y) = E((X - µ_X)(Y - µ_Y)),

where µ_X = E(X) and µ_Y = E(Y).

Theorem 9 (Properties of covariance)
(a) Cov(X,X) = Var(X).
(b) Cov(X,Y) = E(XY) - E(X)E(Y).
(c) If a is a constant then Cov(aX,Y) = Cov(X,aY) = a Cov(X,Y).
(d) If b is a constant then Cov(X,Y + b) = Cov(X + b,Y) = Cov(X,Y).
(e) If X and Y are independent, then Cov(X,Y) = 0.

Proof (a) This follows directly from the definition.

(b)
    Cov(X,Y) = E((X - µ_X)(Y - µ_Y))
             = E(XY - µ_X Y - µ_Y X + µ_X µ_Y)
             = E(XY) + E(-µ_X Y) + E(-µ_Y X) + E(µ_X µ_Y)        by Theorem 7(a)
             = E(XY) - µ_X E(Y) - µ_Y E(X) + µ_X µ_Y             by Theorem 4
             = E(XY) - E(X)E(Y) - E(Y)E(X) + E(X)E(Y)
             = E(XY) - E(X)E(Y).
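
Not from the notes: a numerical check of the counter-example and of Theorem 9(b), with names of my own choosing. The covariance comes out as 0 whether computed from the definition or from E(XY) - E(X)E(Y), even though the independence condition fails.

```python
from fractions import Fraction as F

# The counter-example above: X takes values 0 and 1, Y takes values -1, 0 and 1.
p = {(0, 0): F(1, 2), (1, -1): F(1, 4), (1, 1): F(1, 4)}   # all other cells are 0

pX = {x: sum(q for (a, b), q in p.items() if a == x) for x in (0, 1)}
pY = {y: sum(q for (a, b), q in p.items() if b == y) for y in (-1, 0, 1)}
muX = sum(x * q for x, q in pX.items())                    # 1/2
muY = sum(y * q for y, q in pY.items())                    # 0

cov_from_definition = sum((x - muX) * (y - muY) * q for (x, y), q in p.items())
EXY = sum(x * y * q for (x, y), q in p.items())
print(cov_from_definition, EXY - muX * muY)                # both 0
print(p.get((0, 1), F(0)) == pX[0] * pY[1])                # False: not independent
```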

(c) From part (b), Cov(aX,Y) = E(aXY) - E(aX)E(Y). From Theorem 4, E(aXY) = aE(XY) and E(aX) = aE(X). Therefore

    Cov(aX,Y) = aE(XY) - aE(X)E(Y) = a(E(XY) - E(X)E(Y)) = a Cov(X,Y).

(d) Theorem 4 shows that E(Y + b) = E(Y) + b, so we have (Y + b) - E(Y + b) = Y - E(Y). Therefore

    Cov(X,Y + b) = E[(X - E(X))((Y + b) - E(Y + b))] = E[(X - E(X))(Y - E(Y))] = Cov(X,Y).

(e) From part (b), Cov(X,Y) = E(XY) - E(X)E(Y). Theorem 8 shows that E(XY) - E(X)E(Y) = 0 if X and Y are independent of each other.

The covariance of X and Y is the mean value of the product of the deviation of X from its mean and the deviation of Y from its mean. Therefore, if X - µ_X and Y - µ_Y tend to be either both positive or both negative then Cov(X,Y) is positive. On the other hand, if X - µ_X and Y - µ_Y tend to have opposite signs then Cov(X,Y) is negative.

If Var(X) or Var(Y) is large then Cov(X,Y) may also be large; in fact, multiplying X by a constant a multiplies Var(X) by a² and Cov(X,Y) by a. To avoid this dependence on scale, we define the correlation coefficient, corr(X,Y), which is just a normalised version of the covariance. It is defined as follows:

    corr(X,Y) = Cov(X,Y) / √(Var(X) Var(Y)).

The point of this is the first and last parts of the following theorem.

Theorem (Properties of correlation) Let X and Y be random variables. Then

(a) -1 ≤ corr(X,Y) ≤ 1;

(b) if X and Y are independent, then corr(X,Y) = 0;

(c) if Y = mX + c for some constants m ≠ 0 and c, then corr(X,Y) = 1 if m > 0, and corr(X,Y) = -1 if m < 0; this is the only way in which corr(X,Y) can be equal to ±1;

(d) corr(X,Y) is independent of the units of measurement, in the sense that if X or Y is multiplied by a constant, or has a constant added to it, then corr(X,Y) is unchanged.
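
As an optional aside (not in the notes), the sketch below computes corr(X, Y) for the muffins pair and illustrates parts (a), (c) and (d); the corr helper and the particular linear functions are my own choices.

```python
from math import sqrt
from fractions import Fraction as F

# Muffins joint p.m.f. (out of 120) again, to put a number on corr(X, Y).
counts = {(0, 0): 10, (0, 1): 30, (0, 2): 15, (0, 3): 1,
          (1, 0): 20, (1, 1): 30, (1, 2): 6, (2, 0): 5, (2, 1): 3}
p = {xy: F(n, 120) for xy, n in counts.items()}

def E(g):
    return sum(g(x, y) * q for (x, y), q in p.items())

def corr(f, g):
    """Correlation of the random variables f(X,Y) and g(X,Y)."""
    cov = E(lambda x, y: f(x, y) * g(x, y)) - E(f) * E(g)
    var_f = E(lambda x, y: f(x, y) ** 2) - E(f) ** 2
    var_g = E(lambda x, y: g(x, y) ** 2) - E(g) ** 2
    return float(cov) / sqrt(float(var_f * var_g))

X = lambda x, y: x
Y = lambda x, y: y
print(corr(X, Y))                          # about -0.327, inside [-1, 1]
print(corr(lambda x, y: 10 * x + 5, Y))    # the same: rescaling X changes nothing (part (d))
print(corr(X, lambda x, y: -3 * x + 7))    # -1 (up to rounding): Y is a decreasing linear function of X (part (c))
```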

The proof of the first part won't be given here. But note that this is another check on our calculations: if you calculate a correlation coefficient which is bigger than 1 or smaller than -1, then you have made a mistake.

Part (b) follows immediately from part (e) of the preceding theorem.

For part (c), suppose that Y = mX + c. Let Var(X) = α. Now we just calculate everything in sight.

    Var(Y) = Var(mX + c) = Var(mX) = m² Var(X) = m² α
    Cov(X,Y) = Cov(X, mX + c) = Cov(X, mX) = m Cov(X,X) = mα
    corr(X,Y) = Cov(X,Y) / √(Var(X) Var(Y)) = mα / √(m² α²) = mα / (|m| α)
              = +1 if m > 0,  -1 if m < 0.

The proof of the converse will not be given here. Part (d) follows from Theorems 4, 5 and 9.

Thus the correlation coefficient is a measure of the extent to which the two variables are linearly related. It is +1 if Y increases linearly with X; 0 if there is no linear relation between them; and -1 if Y decreases linearly as X increases. More generally, a positive correlation indicates a tendency for larger X values to be associated with larger Y values; a negative value, for smaller X values to be associated with larger Y values.

We call two random variables X and Y uncorrelated if Cov(X,Y) = 0 (in other words, if corr(X,Y) = 0). The preceding theorem and example show that we can say:

    Independent random variables are uncorrelated, but uncorrelated random variables need not be independent.

Example In some years there are two tests in the Probability class. We can take as the sample space the set of all students who take both tests, and choose a student at random. Put X(s) = the mark of student s on Test 1 and Y(s) = the mark of student s on Test 2. We would expect a student who scores better than average on Test 1 to do so again on Test 2, and one who scores worse than average on Test 1 to do so again on Test 2, so there should be a positive correlation between X and Y. However, we would not expect the Test 2 scores to be perfectly predictable as a linear function of the Test 1 scores. The marks for one particular year are shown in Figure 1. The correlation is 0.69 to 2 decimal places.
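
Not part of the notes, and the marks below are invented purely for illustration (they are not the data behind Figure 1): with sample data the same coefficient is computed by numpy.corrcoef, and a change of units or origin leaves it unchanged, as part (d) says.

```python
import numpy as np

# Invented marks for six students (NOT the data of Figure 1), just to show
# that the correlation coefficient is unchanged by a change of units or origin.
test1 = np.array([35, 48, 52, 61, 70, 88])
test2 = np.array([40, 45, 58, 55, 72, 80])

r = np.corrcoef(test1, test2)[0, 1]
r_rescaled = np.corrcoef(2 * test1 + 10, test2 / 4)[0, 1]
print(round(r, 2), np.isclose(r, r_rescaled))    # same correlation, True
```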

Figure 1: Marks on two Probability tests (scatter plot of the Test 1 and Test 2 marks; not reproduced in this transcription).

Theorem 10 If X and Y are random variables and a and b are constants then

    Var(aX + bY) = a² Var(X) + 2ab Cov(X,Y) + b² Var(Y).

Proof First,

    E(aX + bY) = E(aX) + E(bY)        by Theorem 7
               = aE(X) + bE(Y)        by Theorem 4
               = aµ_X + bµ_Y.

Then

    Var(aX + bY) = E[((aX + bY) - E(aX + bY))²]                                     by definition of variance
                 = E[(aX + bY - aµ_X - bµ_Y)²]
                 = E[(a(X - µ_X) + b(Y - µ_Y))²]
                 = E[a²(X - µ_X)² + 2ab(X - µ_X)(Y - µ_Y) + b²(Y - µ_Y)²]
                 = E[a²(X - µ_X)²] + E[2ab(X - µ_X)(Y - µ_Y)] + E[b²(Y - µ_Y)²]     by Theorem 7
                 = a² E[(X - µ_X)²] + 2ab E[(X - µ_X)(Y - µ_Y)] + b² E[(Y - µ_Y)²]  by Theorem 4
                 = a² Var(X) + 2ab Cov(X,Y) + b² Var(Y).

Corollary If X and Y are independent, then

(a) Var(X + Y) = Var(X) + Var(Y);

(b) Var(X - Y) = Var(X) + Var(Y).
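
Optional aside, not in the notes: the identity in Theorem 10 can be checked exactly on the muffins pair; a = 2 and b = 3 are arbitrary choices and the helpers are my own.

```python
from fractions import Fraction as F

# Muffins joint p.m.f. (out of 120) once more.
counts = {(0, 0): 10, (0, 1): 30, (0, 2): 15, (0, 3): 1,
          (1, 0): 20, (1, 1): 30, (1, 2): 6, (2, 0): 5, (2, 1): 3}
p = {xy: F(n, 120) for xy, n in counts.items()}

def E(g):
    return sum(g(x, y) * q for (x, y), q in p.items())

def Var(g):
    return E(lambda x, y: g(x, y) ** 2) - E(g) ** 2

a, b = 2, 3
lhs = Var(lambda x, y: a * x + b * y)
Cov = E(lambda x, y: x * y) - E(lambda x, y: x) * E(lambda x, y: y)
rhs = a ** 2 * Var(lambda x, y: x) + 2 * a * b * Cov + b ** 2 * Var(lambda x, y: y)
print(lhs == rhs, lhs)                    # True 1267/300
```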

Mean and variance of the binomial

Here is the third way of finding the mean and variance of a binomial distribution, which is more sophisticated than the methods in Notes 8 yet easier than both.

If X ~ Bin(n, p) then X can be thought of as the number of successes in n mutually independent Bernoulli trials, each of which has probability p of success. For i = 1, ..., n, let X_i be the number of successes at the i-th trial. Then X_i ~ Bernoulli(p), so E(X_i) = p and Var(X_i) = pq, where q = 1 - p. Moreover, the random variables X_1, ..., X_n are mutually independent.

By Theorem 7,

    E(X) = E(X_1) + E(X_2) + ... + E(X_n) = p + p + ... + p  (n times)  = np.

By the Corollary to Theorem 10,

    Var(X) = Var(X_1) + Var(X_2) + ... + Var(X_n) = pq + pq + ... + pq  (n times)  = npq.

Note that if also Y ~ Bin(m, p), and X and Y are independent of each other, then Y = X_{n+1} + ... + X_{n+m}, where X_{n+1}, ..., X_{n+m} are mutually independent Bernoulli(p) random variables which are all independent of X_1, ..., X_n. Then

    X + Y = X_1 + ... + X_n + X_{n+1} + ... + X_{n+m},

which is the sum of n + m mutually independent Bernoulli(p) random variables, so X + Y ~ Bin(n + m, p).
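
A final optional aside (not in the notes): the conclusions E(X) = np and Var(X) = npq can be confirmed directly from the Bin(n, p) p.m.f.; n = 7 and p = 1/3 below are arbitrary choices.

```python
from math import comb
from fractions import Fraction as F

# Check E(X) = np and Var(X) = npq directly from the Bin(n, p) p.m.f.
n, p = 7, F(1, 3)
q = 1 - p

pmf = {k: comb(n, k) * p ** k * q ** (n - k) for k in range(n + 1)}
assert sum(pmf.values()) == 1

mean = sum(k * w for k, w in pmf.items())
variance = sum(k ** 2 * w for k, w in pmf.items()) - mean ** 2
print(mean == n * p, variance == n * p * q)    # True True
```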
