1 Chapter 4. Probability and Probability Distributions
2 Importance of Knowing Probability Because a sample is generally not identical to the population from which it was selected, we must assess how accurately the sample mean, sample standard deviation, or sample proportion represents the corresponding population value. We must also decide at what point an observed sample result is too unlikely to have occurred by chance alone. This means that we need to know how to find the probability of obtaining a particular sample outcome. Probability is the tool that enables us to make an inference.
3 Definition of Probability 1 Classical definition Each possible distinct result is called an outcome; an event is identified as a collection of outcomes. The probability of an event E is computed by taking the ratio of the number of outcomes favorable to event E, N_e, to the total number N of possible outcomes: P(event E) = N_e / N
4 Definition of Probability 2 Relative frequency If an experiment is conducted n different times and event E occurs on n_e of these trials, then the probability of event E is approximately P(event E) ≈ n_e / n
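The relative-frequency definition suggests a direct simulation: repeat an experiment many times and take the ratio n_e/n. A minimal Python sketch of this idea (the function and parameter names are illustrative, not from the slides):

```python
import random

def estimate_probability(event, experiment, n_trials=100_000, seed=0):
    """Estimate P(event E) by the relative frequency n_e / n."""
    rng = random.Random(seed)
    n_e = sum(event(experiment(rng)) for _ in range(n_trials))
    return n_e / n_trials

# Estimate the probability of rolling a 6 with a fair die (true value 1/6).
p_hat = estimate_probability(
    event=lambda roll: roll == 6,
    experiment=lambda rng: rng.randint(1, 6),
)
```

With 100,000 trials the estimate typically lands within a few thousandths of 1/6 ≈ 0.167, illustrating that the approximation sharpens as n grows.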
5 Basic Event Relations and Probability Laws 1 The probability of an event, say event A, will always satisfy the property: 0 ≤ P(A) ≤ 1 Mutually exclusive Two events A and B are said to be mutually exclusive if they cannot occur simultaneously. In that case, P(A or B) = P(A) + P(B)
6 Basic Event Relations and Probability Laws 2 Complement The complement of an event A is the event that A does not occur. The complement of A is denoted by the symbol Ā, and P(A) + P(Ā) = 1. Union The union of two events A and B is the set of all outcomes that are included in either A or B or both. Intersection The intersection of two events A and B is the set of all outcomes that are included in both A and B. P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
7 Basic Event Relations and Probability Laws 3 Conditional probability When probabilities are calculated with a subset of the total group as the denominator, the result is called a conditional probability. Consider two events A and B with nonzero probabilities P(A) and P(B). The conditional probability of event A given event B is: P(A | B) = P(A ∩ B) / P(B)
8 Basic Event Relations and Probability Laws 4 Independence The occurrence of event A is not dependent on the occurrence of event B, or simply, A and B are independent events, when P(A | B) = P(A). When events A and B are independent, it follows that: P(A ∩ B) = P(B) P(A | B) = P(A) P(B)
9 Bayes' Formula 1 Let A_1, A_2, …, A_k be a collection of k mutually exclusive and exhaustive events with P(A_i) > 0 for i = 1, …, k. Then for any other event B with P(B) > 0: P(A_j | B) = P(B | A_j) P(A_j) / [Σ_{i=1}^{k} P(B | A_i) P(A_i)], for j = 1, …, k. Example 1.
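Bayes' formula can be sketched directly in code: divide each prior-times-likelihood product by the total-probability denominator. The numbers below are hypothetical stand-ins, not values from the slides:

```python
def bayes_posteriors(priors, likelihoods):
    """Given priors P(A_i) and likelihoods P(B | A_i) for mutually
    exclusive, exhaustive events A_1..A_k, return P(A_j | B) for each j."""
    p_b = sum(p * l for p, l in zip(priors, likelihoods))  # law of total probability
    return [p * l / p_b for p, l in zip(priors, likelihoods)]

# Hypothetical inputs: P(A_1..A_3) and P(B | A_1..A_3).
posteriors = bayes_posteriors([0.2, 0.3, 0.5], [0.10, 0.30, 0.60])
```

Because the A_i are exhaustive, the returned posteriors always sum to 1, which is a quick sanity check on any Bayes computation.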
10 Bayes Formula 2 Sensitivity The sensitivity of a test or symptom is the probability of a positive test result or presence of the symptom given the presence of the disease. Specificity The specificity of a test or symptom is the probability of a negative test result or absence of the symptom given the absence of the disease. False positive The false positive of a test or symptom is the probability of a positive test result or presence of the symptom given the absence of the disease. False negative The false negative of a test or symptom is the probability of a negative test result or presence of the symptom given the presence of the disease.
11 Bayes' Formula 3 Predictive value positive The predictive value positive of a test or symptom is the probability that a subject has the disease given that the subject has a positive test result or has the symptom. Predictive value negative The predictive value negative of a test or symptom is the probability that a subject does not have the disease, given that the subject has a negative test result or does not have the symptom. Example 2.
12 Discrete and Continuous Variables Discrete random variable When observations on a quantitative random variable can assume only a countable number of values, the variable is called a discrete random variable. Continuous random variable When observations on a quantitative random variable can assume any one of an uncountable number of values in a line interval, the variable is called a continuous random variable.
13 Probability Distribution for Discrete Random Variables 1 For discrete random variables, we can compute the probability of specific individual values occurring. The probability distribution for a discrete random variable displays the probability P(y) associated with each value of y. Properties of discrete random variables: The probability associated with every value of y lies between 0 and 1. The sum of the probabilities for all values of y is equal to 1. The probabilities for a discrete random variable are additive; hence, P(y = 1, 2, 3, …, or k) = P(1) + P(2) + ⋯ + P(k). Example 3.
14 Probability Distribution for Discrete Random Variables 2 Binomial distribution (or experiment) Properties: A binomial experiment consists of n identical trials. Each trial results in one of two outcomes; we will label one outcome a success and the other a failure. The probability of success on a single trial is equal to π, and π remains the same from trial to trial. The trials are independent; that is, the outcome of one trial does not influence the outcome of any other trial. The random variable y is the number of successes observed during the n trials.
15 General Formula for Binomial Probability The probability of observing y successes in n trials of a binomial experiment is: P(y) = [n! / (y! (n − y)!)] π^y (1 − π)^(n − y) where n = number of trials, π = probability of success on a trial, 1 − π = probability of failure on a trial, y = number of successes in n trials, and n! = n(n − 1)(n − 2)⋯(2)(1). Example 3
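The binomial formula above translates line for line into code; `math.comb` supplies the n!/(y!(n−y)!) coefficient. A small sketch:

```python
from math import comb

def binomial_pmf(y, n, pi):
    """P(y) = [n!/(y!(n-y)!)] * pi^y * (1-pi)^(n-y)."""
    return comb(n, y) * pi**y * (1 - pi)**(n - y)

# P(2 heads in 4 tosses of a fair coin) = 6 * (0.5)^2 * (0.5)^2 = 0.375
p = binomial_pmf(2, 4, 0.5)
```

A useful check on any probability distribution: summing `binomial_pmf(y, n, pi)` over y = 0, …, n gives exactly 1.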
16 Mean and Standard Deviation of the Binomial Probability Distribution Mean: µ = nπ Standard deviation: σ = √(nπ(1 − π)) where π is the probability of success in a given trial and n is the number of trials in the binomial experiment. Example 6
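These two formulas are a one-liner in code. The sketch below evaluates them for n = 20 and π = 0.85, the values that appear later in the turf-grass seed example:

```python
from math import sqrt

def binomial_mean_sd(n, pi):
    """Mean mu = n*pi and standard deviation sigma = sqrt(n*pi*(1-pi))."""
    return n * pi, sqrt(n * pi * (1 - pi))

# n = 20 seeds, pi = 0.85 germination rate (from the seed example).
mu, sigma = binomial_mean_sd(20, 0.85)
```

Here µ = 17 and σ ≈ 1.60, which is what makes an observation of only 12 germinated seeds so surprising in that example.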
17 Probability Distributions for Continuous Random Variables Theoretically, a continuous random variable is one that can assume values associated with infinitely many points in a line interval. It is impossible to assign a small amount of probability to each value of y and retain the property that the probabilities sum to 1. To overcome this difficulty, for continuous random variables the event of interest is an interval of values; we speak of the probability of y falling in a given interval.
18 Normal Distribution 1 Normal probability density function: f(y) = (1 / √(2πσ²)) e^{−(y − µ)² / (2σ²)} [Figure: normal density curve centered at µ]
19 Normal Distribution 2 Area under a normal curve [Figures: shaded areas under normal density curves centered at µ]
20 Normal Distribution 3 Z score To determine the probability that a measurement will be less than some value y, we first calculate the number of standard deviations that y lies away from the mean by using the formula: z = (y − µ) / σ The value of z computed using this formula is sometimes referred to as the z score associated with the y value. Using the computed value of z, we determine the appropriate probability from the z table. Example 8
21 Normal Distribution 4 100pth percentile The 100pth percentile of a distribution is that value, y_p, such that 100p% of the population values fall below y_p and 100(1 − p)% are above y_p. To find the percentile, we first find z_p corresponding to the probability p in the z table. Then, to find the 100pth percentile, y_p, of a normal distribution with mean µ and standard deviation σ, we apply the reverse of the standardization formula: y_p = µ + z_p σ Example 9
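Both directions of the z-table lookup are available in Python's standard library via `statistics.NormalDist`: `cdf` gives the area below a value, and `inv_cdf` reverses it to find a percentile. A sketch assuming an illustrative normal population with µ = 500 and σ = 100:

```python
from statistics import NormalDist

# Illustrative normal population: mu = 500, sigma = 100.
d = NormalDist(mu=500, sigma=100)

# P(Y < 350): equivalent to looking up z = (350 - 500)/100 = -1.5 in a z table.
p_below = d.cdf(350)

# 10th percentile: y_0.1 = mu + z_0.1 * sigma, with z_0.1 ≈ -1.28.
y_10 = d.inv_cdf(0.10)
```

This gives p_below ≈ 0.067 and y_10 ≈ 372, matching what hand computation with the z table produces.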
22 Random Sampling Random number table Random number generator
23 Sampling Distributions 1 A sample statistic is a random variable; it is subject to random variation because it is based on a random sample of measurements selected from the population of interest. Like any other random variable, a sample statistic has a probability distribution. We call the probability distribution of a sample statistic the sampling distribution of that statistic. Example 10
24 Sampling Distributions 2 The sampling distribution of ȳ has mean µ_ȳ and standard deviation σ_ȳ, which are related to the population mean µ and standard deviation σ by the following relationship: µ_ȳ = µ and σ_ȳ = σ/√n. The sample means have an average that is approximately equal to the population mean, and a standard deviation that is approximately equal to σ/√n. If all possible values of ȳ were generated, the standard deviation of ȳ would equal σ/√n exactly.
25 Central Limit Theorems 1 Let ȳ denote the sample mean computed from a random sample of n measurements from a population having a mean, µ, and finite standard deviation, σ. Let µ_ȳ and σ_ȳ denote the mean and standard deviation of the sampling distribution of ȳ, respectively. Based on repeated random samples of size n from the population, we can conclude the following: µ_ȳ = µ and σ_ȳ = σ/√n. When n is large, the sampling distribution of ȳ will be approximately normal, with the approximation becoming more precise as n increases. When the population distribution is normal, the sampling distribution of ȳ is exactly normal for any sample size n.
26 Central Limit Theorems 2 The Central Limit Theorems provide theoretical justification for our approximating the true sampling distribution of the sample mean with the normal distribution. Similar theorems exist for the sample median, sample standard deviation, and the sample proportion. No specific shape of the population distribution is required for the theorems to be valid. However, the sample size needed does depend on that shape: if the population distribution has many extreme values or several modes, the sampling distribution of ȳ requires a considerably larger n to achieve a symmetric bell shape.
27 Central Limit Theorems 3 It is very unlikely that the exact shape of the population distribution will be known; thus, the exact shape of the sampling distribution of ȳ will not be known either. The important point to remember is that the sampling distribution of ȳ will be approximately normally distributed with mean µ_ȳ = µ, the population mean, and standard deviation σ_ȳ = σ/√n. The approximation becomes more precise as n, the sample size for each sample, increases and as the shape of the population distribution becomes more like the shape of a normal distribution. How large should the sample size be for the Central Limit Theorem to hold? In general, the Central Limit Theorem holds for n > 30. However, one should not apply this rule blindly: if the population is heavily skewed, the sampling distribution of ȳ will still be skewed even for n > 30; on the other hand, if the population is symmetric, the Central Limit Theorem holds even for n < 30.
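The behavior described above is easy to watch in a simulation: draw many samples of size n from a skewed population and examine the collection of sample means. A sketch using an exponential population (which has mean 1 and standard deviation 1, and is strongly right-skewed):

```python
import random
from math import sqrt
from statistics import mean, stdev

rng = random.Random(1)
n, repetitions = 30, 5000

# Draw repeated samples of size n from a skewed (exponential) population
# with mu = 1 and sigma = 1, recording each sample mean.
sample_means = [
    mean(rng.expovariate(1.0) for _ in range(n)) for _ in range(repetitions)
]

# The sample means center near mu and have spread near sigma / sqrt(n).
center, spread = mean(sample_means), stdev(sample_means)
```

Here `center` comes out close to µ = 1 and `spread` close to σ/√30 ≈ 0.18, and a histogram of `sample_means` would already look roughly bell-shaped despite the skewed population.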
28 Central Limit Theorems 4 Central Limit Theorem for Σy: Let Σy denote the sum of a random sample of n measurements from a population having a mean µ and finite standard deviation σ. Let µ_Σy and σ_Σy denote the mean and standard deviation of the sampling distribution of Σy, respectively. Based on repeated random samples of size n from the population, we can conclude the following: µ_Σy = nµ and σ_Σy = √n σ. When n is large, the sampling distribution of Σy will be approximately normal, with the approximation becoming more precise as n increases. When the population distribution is normal, the sampling distribution of Σy is exactly normal for any sample size n.
29 Normal Approximation to the Binomial 1 The binomial random variable y is the number of successes in the n trials. Define n random variables I_1, I_2, …, I_n by: I_i = 1 if the ith trial results in a success, and I_i = 0 if the ith trial results in a failure. Consider the sum of these random variables, Σ_{i=1}^{n} I_i. A 1 is added to the sum for each success that occurs and a 0 for each failure, so Σ_{i=1}^{n} I_i is the number of successes that occurred during the n trials. Hence, we conclude that y = Σ_{i=1}^{n} I_i. Because the binomial random variable y is the sum of independent random variables, each having the same distribution, we can apply the Central Limit Theorem for sums to y.
30 Normal Approximation to the Binomial 2 The normal distribution can be used to approximate the binomial distribution when n is of an appropriate size. The approximating normal distribution has mean and standard deviation given by: µ = nπ and σ = √(nπ(1 − π)), where π is the probability of success. Example 11
31 Normal Approximation to the Binomial 3 The normal approximation to the binomial distribution can be unsatisfactory if nπ < 5 or n(1 − π) < 5. If π is small and n is modest, the actual binomial distribution is seriously skewed to the right; in such a case, the symmetric normal curve will give an unsatisfactory approximation. If π is near 1, so that n(1 − π) < 5, the actual binomial will be skewed to the left, and again the normal approximation will not be very accurate. The normal approximation is quite good when nπ and n(1 − π) exceed about 20. In the middle zone, with nπ or n(1 − π) between 5 and 20, a modification called the continuity correction makes a substantial contribution to the quality of the approximation.
32 Normal Approximation to the Binomial 4 The point of the continuity correction is that we are using the continuous normal curve to approximate a discrete binomial distribution. The general idea of the continuity correction is to add or subtract 0.5 from a binomial value before using normal probabilities. For example, to approximate P(y ≤ 5), instead of using P(z ≤ (5 − µ)/σ), use P(z ≤ (5.5 − µ)/σ). The actual binomial probability is P(y ≤ 5) = Σ_{k=0}^{5} C(n, k) π^k (1 − π)^(n−k).
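The effect of the correction can be checked numerically against the exact binomial sum. The values n = 20 and π = 0.3 below are hypothetical, chosen so that nπ = 6 falls in the "middle zone" where the correction matters:

```python
from math import comb, sqrt
from statistics import NormalDist

# Hypothetical middle-zone case: n*pi = 6, which is between 5 and 20.
n, pi = 20, 0.3
mu, sigma = n * pi, sqrt(n * pi * (1 - pi))

# Exact binomial tail probability P(y <= 5).
exact = sum(comb(n, k) * pi**k * (1 - pi)**(n - k) for k in range(6))

z = NormalDist()
approx_plain = z.cdf((5 - mu) / sigma)        # no correction
approx_corrected = z.cdf((5.5 - mu) / sigma)  # continuity correction
```

The corrected approximation lands much closer to the exact tail probability than the uncorrected one, which is the point of the slide.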
33 Homework 4.39, 4.40, 4.96; p. 189
34 Example 1 A book club classifies members as heavy, medium, or light purchasers, and separate mailings are prepared for each of these groups. Overall, 20% of the members are heavy purchasers, 30% medium, and 50% light. A member is not classified into a group until 18 months after joining the club, but a test is made of the feasibility of using the first 3 months' purchases to classify members. The following percentages are obtained from existing records of individuals classified as heavy, medium, or light purchasers: First 3 Months' Purchases | Heavy | Medium | Light. If a member purchases no books in the first 3 months, what is the probability that the member is a light purchaser? Note: the table contains conditional percentages for each column.
35 Answer to Example 1 According to Bayes' formula, with H, M, and L denoting heavy, medium, and light purchasers and 0 denoting no purchases in the first 3 months: P(L | 0) = P(0 | L) P(L) / [P(0 | H) P(H) + P(0 | M) P(M) + P(0 | L) P(L)]
36 Example 2 A screening test for a disease gives results as in the following table. What are the sensitivity, specificity, false positive, false negative, predictive value positive, and predictive value negative?
Test Result | Disease Present (D) | Disease Absent (D̄) | Total
Positive (T) | a | b | a + b
Negative (T̄) | c | d | c + d
Total | a + c | b + d | n
37 Answer to Example 2
sensitivity = P(T | D) = a / (a + c)
specificity = P(T̄ | D̄) = d / (b + d)
false positive = P(T | D̄) = b / (b + d)
false negative = P(T̄ | D) = c / (a + c)
predictive value positive = P(D | T) = a / (a + b)
predictive value negative = P(D̄ | T̄) = d / (c + d)
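All six measures are simple ratios of the 2x2 cell counts, so they can be computed in one small function. The counts below are hypothetical, chosen only to exercise the formulas:

```python
def screening_measures(a, b, c, d):
    """Compute the six screening-test measures from the 2x2 table:
    a = true positives, b = false positives,
    c = false negatives, d = true negatives."""
    return {
        "sensitivity":    a / (a + c),   # P(T | D)
        "specificity":    d / (b + d),   # P(not-T | not-D)
        "false positive": b / (b + d),   # P(T | not-D)
        "false negative": c / (a + c),   # P(not-T | D)
        "PV positive":    a / (a + b),   # P(D | T)
        "PV negative":    d / (c + d),   # P(not-D | not-T)
    }

# Hypothetical counts for illustration.
m = screening_measures(a=90, b=30, c=10, d=870)
```

Note that sensitivity and false negative are complements (they sum to 1), as are specificity and false positive, which gives a quick check on the arithmetic.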
38 Example 3 An article in the March 5, 1998, issue of The New England Journal of Medicine discussed a large outbreak of tuberculosis. One person, called the index patient, was diagnosed with tuberculosis. The 232 co-workers of the index patient were given a tuberculin screening test. The number of co-workers recording a positive reading on the test was the random variable of interest. Did this study satisfy the properties of a binomial experiment?
39 Answer to Example 3 Were there n identical trials? Yes Did each trial result in one of two outcomes? Yes Was the probability of success the same from trial to trial? Yes Were the trials independent? Yes Was the random variable of interest to the experimenter the number of successes y in the 232 screening tests? Yes All five characteristics were satisfied, so the tuberculin screening test represented a binomial experiment.
40 Example 4 An economist interviews 75 students in a class of 100 to estimate the proportion of students who expect to obtain a C or better in the course. Is this a binomial experiment? Answer: Were there n identical trials? Yes. Did each trial result in one of two outcomes? Yes. Was the probability of success the same from trial to trial? No: interviewing 75 of only 100 students is sampling without replacement, so the probability changes from trial to trial.
41 Example 5 What is the probability distribution of the number of heads in tosses of 4 coins? Answer: Let y be the number of heads observed. The empirical sampling results for y: y | Frequency | Observed Relative Frequency | Expected Relative Frequency
42 Answer to Example 5 (continued) Probability distribution for the number of heads when 4 coins are tossed [Figure: P(y) by number of heads y]
43 Example 6 Suppose that a sample of households is randomly selected from all the households in the city in order to estimate the percentage in which the head of the household is unemployed. To illustrate the computation of a binomial probability, suppose that the unknown percentage is actually 10% and that a sample of n = 5 is selected from the population. What is the probability that all five heads of the households are employed? What is the probability of one or fewer being unemployed? Answer: With π = 0.1 (unemployed), P(y = 0) = [5!/(0! 5!)] (0.1)^0 (0.9)^5 = (0.9)^5 ≈ 0.590. P(y ≤ 1) = P(y = 0) + P(y = 1) = 0.59049 + [5!/(1! 4!)] (0.1)^1 (0.9)^4 = 0.59049 + 0.32805 ≈ 0.919.
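The two probabilities in this example follow directly from the binomial formula with n = 5 and π = 0.1, and can be verified in a few lines:

```python
from math import comb

pi, n = 0.10, 5  # probability a household head is unemployed; sample size

# P(y = 0): all five heads of household employed.
p_none = comb(n, 0) * pi**0 * (1 - pi)**n

# P(y <= 1): one or fewer unemployed.
p_one_or_fewer = p_none + comb(n, 1) * pi**1 * (1 - pi)**(n - 1)
```

This reproduces the hand computation: about 0.590 for no unemployed heads of household and about 0.919 for one or fewer.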
44 Example 7 A company producing turf grass takes a sample of 20 seeds on a regular basis to monitor the quality of the seeds. According to the results of previous experiments, the germination rate of the seeds is 85%. If in a particular sample of 20 seeds only 12 had germinated, would a germination rate of 85% seem consistent with the current results? Answer: µ = nπ = 20(0.85) = 17 and σ = √(nπ(1 − π)) = √(20(0.85)(0.15)) ≈ 1.60. Thus, y = 12 seeds is more than 3 standard deviations below the mean number of seeds, µ = 17; it is not likely that in 20 seeds we would obtain only 12 germinated seeds if π really is equal to 0.85.
45 The binomial distribution for n = 20 and π = 0.85 [Figure: count by number of germinated seeds]
46 Example 8 The mean daily milk production of a herd of Guernsey cows has a normal distribution with µ = 70 pounds and σ = 13 pounds. What is the probability that the milk production for a cow chosen at random will be less than 60 pounds? What is the probability that the milk production for a cow chosen at random will be greater than 90 pounds? What is the probability that the milk production for a cow chosen at random will be between 60 pounds and 90 pounds?
47 Answer to Example 8 (1) Compute the z value corresponding to 60 pounds: z = (y − µ)/σ = (60 − 70)/13 ≈ −0.77. From the z table, P(y < 60) = P(z < −0.77) ≈ 0.2206. [Figure: normal density with the area below 60 shaded]
48 Answer to Example 8 (2) Compute the z value corresponding to 90 pounds: z = (y − µ)/σ = (90 − 70)/13 ≈ 1.54. Then check the z table to find the probability of values greater than 90 pounds: P(y > 90) = P(z > 1.54) ≈ 0.0618. [Figure: normal density with the area above 90 shaded]
49 Answer to Example 8 (3) The area between the two values 60 and 90 is determined by finding the difference between the areas to the left of the two values: P(60 < y < 90) ≈ 1 − 0.2206 − 0.0618 = 0.7176. [Figure: normal density with the area between 60 and 90 shaded]
50 Example 9 The Scholastic Assessment Test (SAT) is an examination used to measure a person's readiness for college. The mathematics scores are assumed to have a normal distribution with mean 500 and standard deviation 100. What proportion of the people taking the SAT will score below 350? To identify a group of students needing remedial assistance, say, the lower 10% of all scores, what is the cutoff score on the SAT?
51 Answer to Example 9 To find the proportion of scores below 350: z = (y − µ)/σ = (350 − 500)/100 = −1.5, so P(y < 350) = P(z < −1.5) ≈ 0.0668. To find the 10th percentile, we first find z_0.1 in the z table. Since 0.1003 is the table value nearest 0.10 and its corresponding z is −1.28, we take z_0.1 = −1.28 and then compute: y_0.1 = µ + z_0.1 σ = 500 + (−1.28)(100) = 372.
52 Random Numbers
53 Example 10 (1) The population consists of 500 pennies, from which we compute the age of each penny: age = 2000 − date on penny. What are the distributions of ȳ based on samples of sizes n = 5, 10, and 25, given the population mean µ and the population standard deviation σ? [Figure: histogram of penny ages, frequency by age]
54 Example 10 (2) Sampling distribution of ȳ for n = 5, 10, and 25 [Figures: frequency by mean age for each n; table of the mean and standard deviation of ȳ by sample size n]
55 Example 11 Use the normal approximation to the binomial to compute the probability of observing 460 or fewer in a sample of 1000 favoring consolidation, if we assume that 50% of the entire population favor the change. Answer: µ = nπ = 1000(0.5) = 500 and σ = √(nπ(1 − π)) = √(1000(0.5)(0.5)) ≈ 15.81. With the continuity correction, z = (460.5 − 500)/15.81 ≈ −2.50, so P(y ≤ 460) ≈ P(z ≤ −2.50) ≈ 0.0062.
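This final computation can be checked in code using the same continuity-corrected normal approximation, with `statistics.NormalDist` standing in for the z table:

```python
from math import sqrt
from statistics import NormalDist

n, pi = 1000, 0.5
mu, sigma = n * pi, sqrt(n * pi * (1 - pi))   # 500 and about 15.81

# Continuity correction: P(y <= 460) ≈ P(z <= (460.5 - mu)/sigma).
p = NormalDist().cdf((460.5 - mu) / sigma)
```

The result, about 0.006, says that seeing 460 or fewer in favor out of 1000 would be quite unlikely if 50% of the population truly favored the change.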