CHAPTER 9 Comparison of Paired Samples

9.1 (a) The standard deviation of the four sample differences is given as .68. The standard error is SE(ȳ1 − ȳ2) = SE_d̄ = s_d/√n_d = .68/√4 = .34.

(b) H0: The mean yields of the two varieties are the same (µ1 = µ2)
HA: The mean yields of the two varieties are different (µ1 ≠ µ2)
t_s = −1.65/.34 = −4.85. With df = 3, Table 4 gives t.01 = 4.541 and t.005 = 5.841; thus, .01 < P < .02. At significance level α = .05, we reject H0 if P < .05. Since .01 < P < .02, we reject H0. There is sufficient evidence (.01 < P < .02) to conclude that Variety 2 has a higher mean yield than Variety 1.

(c) H0: The mean yields of the two varieties are the same (µ1 = µ2)
HA: The mean yields of the two varieties are different (µ1 ≠ µ2)
For the independent-samples analysis, SE(ȳ1 − ȳ2) = 1.230, so t_s = −1.65/1.230 = −1.34. With df = 6, Table 4 gives t.20 = .906 and t.10 = 1.440. Thus, .20 < P < .40 and we do not reject H0. There is insufficient evidence (.20 < P < .40) to conclude that the mean yields of the two varieties are different. (By contrast, the correct test, in part (b), resulted in rejection of H0.)

9.2 (a) The standard deviation of the nine sample differences is given as 59.3. The standard error is SE_d̄ = s_d/√n_d = 59.3/√9 = 19.77.

(b) H0: The mean weight gains on the two diets are the same (µ1 = µ2)
HA: The mean weight gains on the two diets are different (µ1 ≠ µ2)
t_s = 22.9/19.77 = 1.16. With df = 8, Table 4 gives t.20 = .889 and t.10 = 1.397. Thus, .20 < P < .40 and we do not reject H0. There is insufficient evidence (.20 < P < .40) to conclude that the mean weight gains on the two diets are different.

(c) 22.9 ± (1.860)(19.77), which gives (−13.9, 59.7), or −13.9 < µ_d < 59.7 lb.

(d) We are 90% confident that the average steer gains somewhere between 59.7 pounds more and 13.9 pounds less when on Diet 1 than when on Diet 2 (in a 140-day period).
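The paired-analysis arithmetic in 9.1 and 9.2 can be checked with a short Python sketch (standard library only; the s_d, n_d, and t-multiplier values are taken from the solutions above, and the helper name is my own):

```python
from math import sqrt

def paired_se(s_d, n_d):
    """Standard error of the mean difference: SE = s_d / sqrt(n_d)."""
    return s_d / sqrt(n_d)

# 9.1(a,b): s_d = 0.68, n_d = 4, mean difference -1.65
se_1 = paired_se(0.68, 4)            # 0.34
t_1 = -1.65 / se_1                   # about -4.85

# 9.2(a,b): s_d = 59.3, n_d = 9, mean difference 22.9
se_2 = paired_se(59.3, 9)            # about 19.77
t_2 = 22.9 / se_2                    # about 1.16

# 9.2(c): the 90% CI uses t_.05 with df = 8, which Table 4 gives as 1.860
ci = (22.9 - 1.860 * se_2, 22.9 + 1.860 * se_2)   # about (-13.9, 59.7)
print(round(se_1, 2), round(t_1, 2), round(se_2, 2), round(t_2, 2),
      tuple(round(x, 1) for x in ci))
```

Only the t multipliers come from Table 4; everything else is arithmetic on the given summary statistics.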
9.3 Let 1 denote control and let 2 denote progesterone.
H0: Progesterone has no effect on cAMP (µ1 = µ2)
HA: Progesterone has some effect on cAMP (µ1 ≠ µ2)
The standard error is SE(ȳ1 − ȳ2) = SE_d̄ = s_d/√n_d = .40/√4 = .20. The test statistic is t_s = (ȳ1 − ȳ2)/SE(ȳ1 − ȳ2) = d̄/SE_d̄ = .68/.20 = 3.4. To bracket the P-value, we consult Table 4 with df = 4 − 1 = 3. Table 4 gives t.025 = 3.182 and t.02 = 3.482. Thus, the P-value is bracketed as .04 < P < .05. At significance level α = .10, we reject H0 if P < .10. Since .04 < P < .05, we reject H0. There is sufficient evidence (.04 < P < .05) to conclude that progesterone decreases cAMP under these conditions.

9.4 (a) Let 1 denote treated side and 2 denote control side. The standard error is SE(ȳ1 − ȳ2) = s_d/√n_d = s_d/√15 = .2887. The critical value t.025 is found from Student's t distribution with df = n_d − 1 = 15 − 1 = 14. From Table 4 we find that t(14).025 = 2.145. The 95% confidence interval is d̄ ± t.025 SE_d̄: .117 ± (2.145)(.2887), which gives (−.50, .74), or −.50 < µ1 − µ2 < .74 °C.

(b) For the independent-samples analysis, SE(ȳ1 − ȳ2) = .460. The interval is .117 ± (2.048)(.460) (using df = 28), which gives (−.83, 1.06), or −.83 < µ1 − µ2 < 1.06 °C. This interval is wider than the one obtained in part (a).

9.5 Let 1 denote treated side and 2 denote control side.
H0: The electrical treatment has no effect on collagen shrinkage temperature (µ1 = µ2)
HA: The electrical treatment tends to reduce collagen shrinkage temperature (µ1 < µ2)
We note that ȳ1 > ȳ2, so the data do not deviate from H0 in the direction specified by HA. Thus, P > .50 and we do not reject H0. There is no evidence (P > .50) that the electrical treatment tends to reduce collagen shrinkage temperature under these conditions.

9.6 The data provide fairly strong evidence (P = .03) that desipramine is more effective than clomipramine in reducing the compulsion to pull one's hair.
9.7 SE_d̄ = s_d/√n_d = 3/√28 = .57. The confidence interval is 10.9 ± (2.052)(.57), or (9.7, 12.1).

9.8 With the outliers deleted, the mean of the remaining 26 differences is 11.0 and the standard deviation is 2.1. SE_d̄ = s_d/√n_d = 2.1/√26 = .41. The confidence interval is 11.0 ± (2.060)(.41), or (10.1, 11.8). This interval is narrower than the previous interval, which was based on all of the data, including the outliers, but the difference is not great.

9.9 There is no single correct answer. Any data set with Y1 and Y2 varying, but d not varying, is correct; for example:

Y1  Y2  d
10   8  2
14  12  2
19  17  2
23  21  2

9.10 See Section III of this Manual.

9.11 (a) [Scatterplot: yield of Variety 2 versus yield of Variety 1.] Yes, the upward trend indicates that the pairing was effective.
(b) [Scatterplot: weight gain on Diet 2 versus weight gain on Diet 1.] The upward trend here is rather weak, which indicates that the pairing was not especially effective.

(c) [Scatterplot: shrinkage temperature of the control side versus the treated side.] Yes, the upward trend indicates that the pairing was effective.

(a) B_s = 6. Looking under n_d = 9 in Table 7, we see that there is no entry less than or equal to 6. Therefore, P > .20.

(b) B_s = 7. Looking under n_d = 9 in Table 7, we see that the only column with a critical value less than or equal to 7 is the column headed .20 (for a nondirectional alternative), and the next column is headed .10. Therefore, .10 < P < .20.

(c) B_s = 8. Looking under n_d = 9 in Table 7, we see that the rightmost column with a critical value less than or equal to 8 is the column headed .05 (for a nondirectional alternative), and the next column is headed .02. Therefore, .02 < P < .05.

(d) B_s = 9. Looking under n_d = 9 in Table 7, we see that the rightmost column with a critical value less than or equal to 9 is the column headed .01 (for a nondirectional alternative), and the next column is headed .002. Therefore, .002 < P < .01.
9.15 (a) P > .20 (b) .10 < P < .20 (c) .02 < P < .05 (d) .002 < P < .01 (e) P < .001 (f) P < .001

9.16 Let p denote the probability that oral conjugated estrogen will decrease PAI-1 level.
H0: Oral conjugated estrogen has no effect on PAI-1 level (p = .5)
HA: Oral conjugated estrogen has an effect on PAI-1 level (p ≠ .5)
N+ = 8, N− = 22, B_s = 22. With n_d = 30, 22 falls under the .02 heading (for a nondirectional alternative) in Table 7. Thus, .01 < P < .02 and we reject H0. There is sufficient evidence (.01 < P < .02) to conclude that oral conjugated estrogen tends to decrease PAI-1 level.

9.17 For the sign test, the hypotheses can be stated as
H0: p = .5
HA: p > .5
where p denotes the probability that the rat in the enriched environment will have the larger cortex. The hypotheses may be stated informally as
H0: Weight of the cerebral cortex is not affected by environment
HA: Environmental enrichment increases cortex weight
There were 12 pairs. Of these, there were 10 pairs in which the relative cortex weight was greater for the "enriched" rat than for his "impoverished" littermate; thus N+ = 10 and N− = 2. To check the directionality of the data, we note that N+ > N−. Thus, the data deviate from H0 in the direction specified by HA. The value of the test statistic is B_s = larger of N+ and N− = 10. Looking in Table 7, under n_d = 12 for a directional alternative, we see that the rightmost column with a critical value less than or equal to 10 is the column headed .025 and the next column is headed .01. Therefore, .01 < P < .025. At significance level α = .05, we reject H0 if P < .05. Since P < .025, we reject H0. There is sufficient evidence (.01 < P < .025) to conclude that environmental enrichment increases cortex weight.
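The Table 7 bracket for the cortex data above (n_d = 12, B_s = 10, directional alternative) can be verified by computing the exact binomial tail probability, and critical values like Table 7's can be regenerated the same way. A minimal Python sketch (standard library only; the function names are my own):

```python
from math import comb

def upper_tail(n, b):
    """P(X >= b) for X ~ Binomial(n, 0.5) -- the directional sign-test P-value."""
    return sum(comb(n, j) for j in range(b, n + 1)) * 0.5 ** n

def table7_entry(n, alpha, directional=False):
    """Smallest critical value b whose sign-test P-value does not exceed alpha
    (this is how the entries of a table like Table 7 can be generated)."""
    for b in range((n + 1) // 2, n + 1):
        p = upper_tail(n, b) if directional else 2 * upper_tail(n, b)
        if p <= alpha:
            return b
    return None  # no attainable P-value is this small

# Cortex data: n_d = 12, B_s = 10, directional alternative
print(round(upper_tail(12, 10), 4))   # 79/4096, inside the (.01, .025) bracket

# Critical values for n_d = 12 under the directional .025 and .01 headings
print(table7_entry(12, 0.025, directional=True),
      table7_entry(12, 0.01, directional=True))
```

The second print shows why B_s = 10 falls under the .025 heading but not the .01 heading: the .01 column would require B_s of at least 11.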
9.18 We have n_d = 12. The null distribution is a binomial distribution with n = 12 and p = .5. Since B_s = 10 and HA is directional, we need to calculate the probability of 10, 11, or 12 plus (+) signs. We apply the binomial formula nCj p^j (1 − p)^(n−j), as follows:
j = 10, n − j = 2: (66)(.5^10)(.5^2) = .01611
j = 11, n − j = 1: (12)(.5^11)(.5^1) = .00293
j = 12, n − j = 0: (1)(.5^12)(.5^0) = .00024
The P-value is the sum of these probabilities: P = .01611 + .00293 + .00024 = .0193.

9.19 Let p denote the probability that a patient will have fewer minor seizures with valproate than with placebo.
H0: Valproate is not effective against minor seizures (p = .5)
HA: Valproate is effective against minor seizures (p > .5)
N+ = 14, N− = 5, B_s = 14; the data deviate from H0 in the direction specified by HA. Eliminating the pair with d = 0, we refer to Table 7 with n_d = 19. The entry of 14 falls under the .05 heading (for a directional alternative). Thus, .025 < P < .05 and we reject H0. There is sufficient evidence (.025 < P < .05) to conclude that valproate is effective against minor seizures.

9.20 We need to find the probability of 14 or more successes for a binomial distribution with n = 19 and p = .5. For the normal approximation to the binomial, the mean is np = (19)(.5) = 9.5 and the SD is √(np(1 − p)) = √((19)(.5)(.5)) = 2.179. Using the continuity correction, z = (13.5 − 9.5)/2.179 = 1.84; Table 3 gives .9671, so P = 1 − .9671 = .0329.

9.21 We need to find the probability of 22 or more successes, or of 8 or fewer successes, for a binomial distribution with n = 30 and p = .5. For the normal approximation to the binomial, the mean is np = (30)(.5) = 15 and the SD is √(np(1 − p)) = √((30)(.5)(.5)) = 2.739. We find the probability of 22 or more successes and double this probability (since the normal curve is symmetric). z = (21.5 − 15)/2.739 = 2.37; Table 3 gives .9911, so P = 2(1 − .9911) = .0178.

9.22 Let p denote the probability that the Northern member of a pair will dominate in more episodes than the Carolina.
H0: Dominance is balanced between the subspecies (p = .5)
HA: One of the subspecies tends to dominate the other (p ≠ .5)
N+ = 8, N− = 0, B_s = 8. Looking under n_d = 8 in Table 7, we see that the rightmost column with a critical value less than or equal to 8 is the column headed .01 (for a nondirectional alternative), and the next column is headed .002. Therefore, .002 < P < .01. There is sufficient evidence (.002 < P < .01) to conclude that the Carolina subspecies tends to dominate the Northern.

9.23 P = 2(.5^8) = .0078.

9.24 (a) The null distribution is a binomial distribution with n = 7 and p = .5. Since B_s = 7 and HA is nondirectional, we need to calculate the probability of 7 successes or of 0 successes. The probability of 7 successes is .5^7 = .0078. Likewise, the probability of 0 successes is .5^7 = .0078. Thus, P = 2(.0078) = .0156.
(b) With n_d = 7, the smallest possible P-value is .0156; thus P cannot be less than .0156.

9.25 (a) (i) P = (2)[(105)(.5^13)(.5^2) + (15)(.5^14)(.5)+ (1)(.5^15)] = .0074
(ii) P = (2)[(15)(.5^14)(.5) + (1)(.5^15)] = .00098
(iii) P = (2)(.5^15) = .00006
(b) If B_s = 14, then P = .00098 < .002; if B_s = 13, then P = .0074 > .002. Thus, the critical value 14 corresponds to a P-value that is as close to .002 as possible without exceeding it.
(c) The entry for .005 would be 14, because .00098 < .005 but .0074 > .005.

9.26 Let p denote the probability that hunger rating is higher when taking mcpp than when taking the placebo.
H0: p = .5
HA: p ≠ .5
N+ = 3, N− = 5, B_s = 5. Looking under n_d = 8 in Table 7, we see that the leftmost column has an entry of 7, so the P-value is greater than .20. There is insufficient evidence (P > .20) to conclude that hunger ratings differ on the two treatments.

9.27 P = (2)[(56)(.5^5)(.5^3) + (28)(.5^6)(.5^2) + (8)(.5^7)(.5^1) + (1)(.5^8)] = .7266.

9.28 (a) P > .20 (b) .10 < P < .20 (c) .02 < P < .05 (d) .01 < P < .02

9.29 (a) P > .20 (b) .05 < P < .10 (c) .002 < P < .01 (d) .002 < P < .01

9.30 H0: Hunger rating is not affected by treatment (mcpp vs. placebo)
HA: Treatment does affect hunger rating
The absolute values of the differences are 5, 7, 28, 47, 80, 7, 8, and 20. The ranks of the absolute differences are 1, 2.5, 6, 7, 8, 2.5, 4, and 5. The signed ranks are −1, 2.5, −6, −7, −8, 2.5, 4, and −5. Thus, W+ = 2.5 + 2.5 + 4 = 9 and W− = 1 + 6 + 7 + 8 + 5 = 27. W_s = 27 and n_d = 8; reading Table 8 we find P-value > .20 and H0 is not rejected. There is insufficient evidence (P > .20) to conclude that treatment has an effect.

9.31 H0: Weight change is not affected by treatment (mcpp vs. placebo)
HA: Treatment does affect weight change
The absolute values of the differences are 1.1, 1.6, 2.1, 0.3, 0.6, 2.2, 0.9, 0.7, and 0.4. The ranks of the absolute differences are 6, 7, 8, 1, 3, 9, 5, 4, and 2. The signed ranks are 6, −7, −8, −1, −3, −9, 5, 4, and −2. Thus, W+ = 6 + 5 + 4 = 15 and W− = 7 + 8 + 1 + 3 + 9 + 2 = 30. W_s = 30 and n_d = 9; reading Table 8 we find P-value > .20 and H0 is not rejected. There is insufficient evidence (P > .20) to conclude that treatment has an effect.

9.32 H0: HL-A compatibility has no effect on graft survival time
HA: Survival time tends to be greater when compatibility score is close
The differences tend to be positive, which is consistent with HA. The absolute values of the differences are 12, 6, 42+, 67, 5, 5, 6, 20, 11, 18+, and 1. The ranks of the absolute differences are 7, 4.5, 10, 11, 2.5, 2.5, 4.5, 9, 6, 8, and 1. The signed ranks are 7, 4.5, 10, 11, 2.5, 2.5, −4.5, 9, 6, 8, and −1. Thus, W+ = 7 + 4.5 + 10 + 11 + 2.5 + 2.5 + 9 + 6 + 8 = 60.5 and W− = 4.5 + 1 = 5.5. W_s = 60.5 and n_d = 11; reading Table 8 we find .005 < P-value < .01 and H0 is rejected. There is strong evidence (.005 < P-value < .01) to conclude that survival time tends to be greater when compatibility score is close.

9.33 H0: Alcoholism has no effect on brain density
HA: Alcoholism reduces brain density
The differences tend to be negative, which is consistent with HA. The absolute values of the differences are 1.2, 1.7, .5, 4.7, 3.3, .4, 2.7, 1.8, .1, .3, and 1.4. The ranks of the absolute differences are 5, 7, 4, 11, 10, 3, 9, 8, 1, 2, and 6.
The signed ranks are −5, −7, −4, −11, −10, 3, −9, −8, −1, 2, and −6. Thus, W+ = 3 + 2 = 5 and W− = 5 + 7 + 4 + 11 + 10 + 9 + 8 + 1 + 6 = 61. W_s = 61 and n_d = 11; reading Table 8 we find .001 < P-value < .005 and H0 is rejected. There is strong evidence (.001 < P-value < .005) to conclude that alcoholism is associated with reduced brain density. This was an observational study, so drawing a cause-effect inference is risky. We should stop short of saying that alcoholism reduces brain density.
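The Wilcoxon rank sums above, including the midranks assigned to tied |d| values, can be reproduced with a small helper. This is a Python sketch, not part of the original manual; the signed differences are read off the signed ranks listed in the solutions:

```python
def signed_rank_sums(diffs):
    """Wilcoxon signed-rank sums (W+, W-), assigning midranks to tied |d| values."""
    absd = sorted(abs(d) for d in diffs)
    # midrank of a value = average of the first and last positions it occupies
    rank = {v: (absd.index(v) + 1 + (len(absd) - absd[::-1].index(v))) / 2
            for v in set(absd)}
    w_plus = sum(rank[abs(d)] for d in diffs if d > 0)
    w_minus = sum(rank[abs(d)] for d in diffs if d < 0)
    return w_plus, w_minus

# Weight-change differences (mcpp study above)
print(signed_rank_sums([1.1, -1.6, -2.1, -0.3, -0.6, -2.2, 0.9, 0.7, -0.4]))
# Graft-survival differences; the censored 42+ and 18+ are entered as 42 and 18,
# which does not change their ranks
print(signed_rank_sums([12, 6, 42, 67, 5, 5, -6, 20, 11, 18, -1]))
```

This reproduces W+ = 15, W− = 30 for the weight-change data and W+ = 60.5, W− = 5.5 for the graft-survival data.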
9.34 Let 1 denote 5 weeks and 2 denote baseline.
(a) Let N denote no coffee.
H0: Mean cholesterol does not change in the "no coffee" condition (µN,1 = µN,2)
HA: Mean cholesterol does change in the "no coffee" condition (µN,1 ≠ µN,2)
SE = 27/√25 = 5.40. t_s = −35/5.40 = −6.48. With df = 24, Table 4 gives t.0005 = 3.745. We reject H0. There is sufficient evidence (P < .001) to conclude that mean cholesterol is reduced in the "no coffee" condition.
(b) Let U denote usual coffee.
H0: Mean cholesterol does not change in the "usual coffee" condition (µU,1 = µU,2)
HA: Mean cholesterol does change in the "usual coffee" condition (µU,1 ≠ µU,2)
SE = 56/√8 = 19.8. t_s = 26/19.8 = 1.31. With df = 7, Table 4 gives t.20 = .896 and t.10 = 1.415. We do not reject H0. There is insufficient evidence (.20 < P < .40) to conclude that mean cholesterol is changed in the "usual coffee" condition.
(c) Let N denote no coffee, U denote usual coffee, and d denote the change from baseline.
H0: Mean cholesterol is not affected by discontinuing coffee (µN,d = µU,d)
HA: Mean cholesterol is affected by discontinuing coffee (µN,d ≠ µU,d)
SE = √(5.40² + 19.8²) = 20.52. t_s = (−35 − 26)/20.52 = −2.97. Formula (7.1) gives df = 8.1; the conservative df value is min{24, 7} = 7, whereas the liberal df value is n1 + n2 − 2 = 31. Using df = 8, Table 4 gives t.01 = 2.896 and t.005 = 3.355, which implies that .01 < P < .02. (Using df = 30, the closest value to 31 in Table 4, we get t.005 = 2.750 and t.0005 = 3.646, which implies that .001 < P < .01.) We reject H0.
(d) There is sufficient evidence (.01 < P < .02) to conclude that mean cholesterol is reduced by discontinuing coffee.

9.35 [Scatterplot: food intake (cal), premenstrual versus postmenstrual.]
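The combined standard error in 9.34(c) follows from SE(d̄N − d̄U) = √(SE_N² + SE_U²); a quick Python check using the numbers from parts (a) and (b):

```python
from math import sqrt

# Part (a): no-coffee group, s = 27, n = 25
se_n = 27 / sqrt(25)                    # 5.40
# Part (b): usual-coffee group, s = 56, n = 8
se_u = 56 / sqrt(8)                     # about 19.8

# Part (c): SE of the difference in mean changes, then the t statistic
se_diff = sqrt(se_n ** 2 + se_u ** 2)   # about 20.52
t_s = (-35 - 26) / se_diff              # about -2.97
print(round(se_diff, 2), round(t_s, 2))
```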
9.36 No. This result suggests that the population mean difference between the right eye and the left eye is less than 1.1 mg/dl, but positive and negative differences cancel each other out when such a mean is calculated. The result says nothing at all about the typical or average magnitude of the difference between the two eyes.

9.37 No. "Accurate" prediction would mean that the individual differences (d's) are small. To judge whether this is the case, one would need the individual values of the d's; using these, one could see whether most of the magnitudes (|d|'s) are small.

9.38 (a) ȳ1 − ȳ2 = d̄ = −1, s_d = 1.2. SE(ȳ1 − ȳ2) = 1.2/√15 = .3098. −1 ± (2.145)(.3098) (df = 14) gives (−1.66, −.34), or −1.66 < µ1 − µ2 < −.34.
(b) For the independent-samples analysis, SE(ȳ1 − ȳ2) = .8595. −1 ± (2.145)(.8595) (using df = 14) gives (−2.84, .84), or −2.84 < µ1 − µ2 < .84. This interval is much wider than the one constructed in part (a).

9.39 H0: The before and after means are the same (µ1 = µ2)
HA: The before and after means are different (µ1 ≠ µ2)
SE(ȳ1 − ȳ2) = 1.2/√15 = .3098. t_s = −1/.3098 = −3.23. With df = 14, Table 4 gives t.005 = 2.977 and t.0005 = 4.140; thus, .001 < P < .01. We reject H0; there is strong evidence (.001 < P < .01) of a before and after difference.

9.40 (a) Let p denote the probability that a before count is higher than the corresponding after count.
H0: p = .5
HA: p ≠ .5
N+ = 2, N− = 10, B_s = 10. Looking under n_d = 12 in Table 7, we see that .02 < P < .05. There is sufficient evidence (.02 < P < .05) to conclude that the after count tends to be higher than the before count.
(b) P = (2)[(66)(.5^10)(.5^2) + (12)(.5^11)(.5^1) + (1)(.5^12)] = .0386.
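The paired and unpaired confidence intervals above differ only in the standard error that multiplies the same t value; a sketch of that comparison in Python (the SE values come from the solution, and the helper name is my own):

```python
from math import sqrt

def t_interval(center, t_crit, se):
    """Confidence interval: center ± t_crit * SE, rounded to 2 places."""
    half = t_crit * se
    return round(center - half, 2), round(center + half, 2)

t_crit = 2.145                      # t_.025 with df = 14, from Table 4
se_paired = 1.2 / sqrt(15)          # about .3098
print(t_interval(-1, t_crit, se_paired))   # paired analysis
print(t_interval(-1, t_crit, 0.8595))      # (incorrect) independent-samples analysis
```

The paired interval comes out near (−1.66, −.34) and the unpaired one near (−2.84, .84), showing how ignoring the pairing inflates the width.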
9.41 [Scatterplot: after count (Y2) versus before count (Y1).] The scatterplot shows a positive relationship between before and after counts. The pairing removes the variability between cats from the analysis and is, therefore, effective.

9.42 Let 1 denote central and 2 denote top. ȳ1 − ȳ2 = d̄ = 2.533, s_d = .41312. SE(ȳ1 − ȳ2) = .41312/√6 = .1687. 2.533 ± (2.015)(.1687) (df = 5) gives (2.19, 2.87), or 2.19 < µ1 − µ2 < 2.87 percent.

9.43 The standard error is SE(ȳ1 − ȳ2) = 1.86/√15 = .48. The interval is 2.2 ± (1.761)(.48) (df = 14), which gives (1.35, 3.05), or 1.35 < µ1 − µ2 < 3.05 species.

9.44 It must be reasonable to regard the 15 differences as a random sample from a normal population. We must trust the researchers that their sampling method was random. The normality condition can be verified with a normal probability plot. The plot below is fairly linear (although the plateaus show that there are several differences that have the same value), which supports the normality condition. [Normal probability plot: difference versus normal scores.]
9.45 The null and alternative hypotheses are
H0: The average number of species is the same in pools as in riffles (µ1 = µ2)
HA: The average numbers of species in pools and in riffles differ (µ1 ≠ µ2)
The standard error is SE(ȳ1 − ȳ2) = SE_d̄ = s_d/√n_d = 1.86/√15 = .48. The test statistic is t_s = (ȳ1 − ȳ2)/SE(ȳ1 − ȳ2) = d̄/SE_d̄ = 2.2/.48 = 4.58. To bracket the P-value, we consult Table 4 with df = 15 − 1 = 14. Table 4 gives t.0005 = 4.140. Thus, the P-value for the nondirectional test is bracketed as P < .001. At significance level α = .10, we reject H0 if P < .10. Since P < .001, we reject H0. There is sufficient evidence (P < .001) to conclude that the average number of species in pools is greater than in riffles.

9.46 (a) Let p denote the probability that there are more species in a pool than in its adjacent riffle.
H0: The two habitats support equal levels of diversity (p = .5)
HA: The two habitats do not support equal levels of diversity (p ≠ .5)
N+ = 12, N− = 1, B_s = 12. Eliminating the two pairs with d = 0, we refer to Table 7 with n_d = 13. The rightmost column with a critical value of 12 is the column headed .01 for a nondirectional alternative (i.e., for a two-tailed test), and the next column is headed .002. Therefore, .002 < P < .01. There is sufficient evidence (.002 < P < .01) to conclude that species diversity is greater in pools than in riffles.
(b) P = (2)[(13)(.5^12)(.5^1) + (1)(.5^13)] = .0034.

9.47 H0: Pools and riffles support equal levels of diversity
HA: Pools and riffles support different levels of diversity
The absolute values of the differences are 3, 3, 4, 3, 4, 5, 1, 1, 1, 4, 1, 4, and 1. The ranks of the absolute differences are 7, 7, 10.5, 7, 10.5, 13, 3, 3, 3, 10.5, 3, 10.5, and 3. The signed ranks are 7, 7, 10.5, 7, 10.5, 13, −3, 3, 3, 10.5, 3, 10.5, and 3. Thus, W+ = 88 and W− = 3. W_s = 88 and n_d = 13; reading Table 8 we find .001 < P-value < .002 and H0 is rejected.
There is strong evidence (.001 < P-value < .002) to conclude that the diversity levels differ between pools and riffles.

9.48 There are several ties in the data, which means that the P-value from the Wilcoxon test is only approximate.
9.49 The null and alternative hypotheses are
H0: Caffeine has no effect on RER (µ1 = µ2)
HA: Caffeine has some effect on RER (µ1 ≠ µ2)
We proceed to calculate the difference (placebo minus caffeine) for each subject; the mean of the nine differences is d̄ = 7.33 and the standard deviation is s_d = 5.59. [Table of subject-by-subject differences not reproduced.]
The standard error is SE(ȳ1 − ȳ2) = SE_d̄ = s_d/√n_d = 5.59/√9 = 1.863. The test statistic is t_s = (ȳ1 − ȳ2)/SE(ȳ1 − ȳ2) = d̄/SE_d̄ = 7.33/1.863 = 3.93. To bracket the P-value, we consult Table 4 with df = 9 − 1 = 8. Table 4 gives t.005 = 3.355 and t.0005 = 5.041. Thus, the P-value for the nondirectional test is bracketed as .001 < P < .01. At significance level α = .05, we reject H0 if P < .05. Since P < .01, we reject H0. To determine the directionality of departure from H0, we note that d̄ > 0; that is, ȳ1 > ȳ2. There is sufficient evidence (.001 < P < .01) to conclude that caffeine tends to decrease RER under these conditions.
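Given only the summary statistics for 9.49 (mean difference 7.33, SD 5.59, nine subjects), the standard error and test statistic can be recomputed directly (Python, standard library only):

```python
from math import sqrt

d_bar, s_d, n_d = 7.33, 5.59, 9      # summary statistics from 9.49
se = s_d / sqrt(n_d)                 # about 1.863
t_s = d_bar / se                     # about 3.93
print(round(se, 3), round(t_s, 2))
```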
9.50 [Plot: RER (%) under placebo and under caffeine for each subject.]

9.51 Let p denote the probability that RER for a subject is higher after taking placebo than after taking caffeine.
H0: RER is not affected by caffeine (p = .5)
HA: RER is affected by caffeine (p ≠ .5)
N+ = 9, N− = 0, B_s = 9. Looking under n_d = 9 in Table 7, we see that the rightmost column with a critical value less than or equal to 9 is the column headed .01 (for a nondirectional alternative), and the next column is headed .002. Therefore, .002 < P < .01. There is sufficient evidence (.002 < P < .01) to conclude that caffeine tends to decrease RER under these conditions.

9.52 H0: Mean CP is the same in regenerating and in normal tissue (µ1 = µ2)
HA: Mean CP is different in regenerating and in normal tissue (µ1 ≠ µ2)
SE(ȳ1 − ȳ2) = 4.89/√8 = 1.729. t_s = 4.64/1.729 = 2.68. With df = 7, Table 4 gives t.02 = 2.517 and t.01 = 2.998. Thus, .02 < P < .04 and we reject H0. There is sufficient evidence (.02 < P < .04) to conclude that mean CP is different in regenerating and in normal tissue.

9.53 (a) Let 1 denote control and 2 denote benzamil.
H0: Benzamil does not impair healing (µ1 = µ2)
HA: Benzamil impairs healing (µ1 > µ2)
ȳ1 − ȳ2 = d̄ = .09706; s_d = .14768. SE(ȳ1 − ȳ2) = .14768/√17 = .03582. t_s = .09706/.03582 = 2.71. P = .0077, so we reject H0. There is sufficient evidence (P = .0077) to conclude that benzamil impairs healing.
(b) Let p denote the probability that the control limb heals more than the benzamil limb.
H0: Benzamil does not impair healing (p = .5)
HA: Benzamil impairs healing (p > .5)
N+ = 11, N− = 4, B_s = 11. The two animals with d = 0 are eliminated. P = .059, so we do not reject H0. There is insufficient evidence (P = .059) to conclude that benzamil impairs healing. [Remark: Unlike the t test in part (a), the sign test does not take account of the fact that the negative d's are smaller in magnitude than the positive d's. This illustrates the inferior power of the sign test.]
(c) (.021, .173), or .021 < µ1 − µ2 < .173 mm².
(d) [Scatterplot: benzamil limb versus control limb.] Yes, the upward trend indicates that the pairing was effective.

9.54 Summary statistics were computed for six columns: rest (1), work (2), and difference (3) for the experimental group, and rest (4), work (5), and difference (6) for the control group. [Table of means and SDs not reproduced.]
(a) Column (1) versus column (4):
H0: Mean ventilation at rest is the same in the two conditions (µ1 = µ4)
HA: Mean ventilation at rest is different in the two conditions (µ1 ≠ µ4)
t_s = 2.757, df = 13.97, P = .015. We reject H0. There is sufficient evidence (P = .015) to conclude that mean ventilation at rest is higher in the "to be hypnotized" condition than in the "control" condition.
(b) (i) Column (1) versus column (2):
H0: Hypnotic suggestion does not change mean ventilation (µ1 = µ2)
HA: Hypnotic suggestion increases mean ventilation (µ1 < µ2)
With df = 7, P = .0087. We reject H0. There is sufficient evidence (P = .0087) to conclude that hypnotic suggestion increases mean ventilation.
(ii) Column (4) versus column (5):
H0: Waking suggestion does not change mean ventilation (µ4 = µ5)
HA: Waking suggestion increases mean ventilation (µ4 < µ5)
Because ȳ4 > ȳ5, the data do not deviate from H0 in the direction specified by HA. Thus, P > .50 and we do not reject H0. There is no evidence that waking suggestion increases mean ventilation.
(iii) Column (3) versus column (6):
H0: Hypnotic and waking suggestion produce the same mean change in ventilation (µ3 = µ6)
HA: Hypnotic suggestion increases mean ventilation more than does waking suggestion (µ3 < µ6)
With df = 7.5, P = .0055. We reject H0. There is sufficient evidence (P = .0055) to conclude that hypnotic suggestion increases mean ventilation more than does waking suggestion.
(c) (i) Sign test for column (1) versus column (2). Let p1 denote the probability that a person's ventilation after hypnotic suggestion will be higher than that at rest.
H0: Hypnotic suggestion does not change mean ventilation (p1 = .5)
HA: Hypnotic suggestion increases mean ventilation (p1 > .5)
B_s = 8, P = .0039. We reject H0. There is sufficient evidence (P = .0039) to conclude that hypnotic suggestion increases mean ventilation.
(ii) Sign test for column (4) versus column (5). Let p2 denote the probability that a person's ventilation after waking suggestion will be higher than that at rest.
H0: Waking suggestion does not change mean ventilation (p2 = .5)
HA: Waking suggestion increases mean ventilation (p2 > .5)
N+ = 2, N− = 6. Thus, the data do not deviate from H0 in the direction specified by HA, so P > .50 and we do not reject H0. There is no evidence that waking suggestion increases mean ventilation.
(iii) Wilcoxon-Mann-Whitney test for column (3) versus column (6):
H0: Hypnotic and waking suggestion produce the same mean change in ventilation
HA: Hypnotic suggestion increases mean ventilation more than does waking suggestion
U_s = 63. We reject H0.
There is sufficient evidence to conclude that hypnotic suggestion increases mean ventilation more than does waking suggestion.
(d) A normal probability plot of column (3) shows that the data are quite skewed. This could account for two discrepancies: First, to compare column (1) to column (2), we used the differences in column (3); the t test gave P = .0087 whereas the sign test gave P = .0039. Second, to compare column (3) to column (6), the Wilcoxon-Mann-Whitney test gave a smaller P-value than the t test (P = .0055). Both of the t tests rest on the questionable condition that the population distribution corresponding to column (3) is normal. The failure of this condition inflates the standard deviation and robs the t test of power, so that the nonparametric tests give stronger conclusions (smaller P-values). [Normal probability plot: ventilation differences, column (3), versus normal scores.]
A normal probability plot of column (6) shows that the normality condition appears to be met for these data. [Normal probability plot: ventilation differences, column (6), versus normal scores.]

9.55 (a) By using matched pairs we eliminate the variability that is associated with the variables used to create the pairs (age, sex, etc.). This provides for greater precision and more power in the test.
(b) It may be that the pairing variables (age, sex, etc.) are unrelated to blood pressure. If this is the case, then the pairing accomplishes nothing, but it reduces the number of degrees of freedom, and therefore the power, of the test.

9.56 N+ = 10, N− = 10, B_s = 10. In this case, the data are as evenly balanced as possible, so P = 1. (Table 7 indicates that P > .20.) Thus, we do not reject H0. There is no evidence that transdermal estradiol has an effect on PAI-1 level.
9.57 A normal probability plot of the data shows that the normality condition is not met. However, a sign test can be conducted. Let p denote the probability that urinary protein excretion will go down after plasmapheresis.
H0: Plasmapheresis does not affect urinary protein excretion (p = .5)
HA: Plasmapheresis affects urinary protein excretion (p ≠ .5)
N+ = 6, N− = 0, B_s = 6. From Table 7, .02 < P < .05 (for a two-sided test). The exact P-value is (2)(.5^6) = .03125. Thus, there is evidence (P = .03125) to conclude that urinary protein excretion tends to go down after plasmapheresis.
Note: Another approach would be to transform the data and then conduct a t test in the transformed scale. For example, taking the reciprocal of each difference yields a fairly symmetric distribution; a t test then gives t_s = 5.4 and P = .003.
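When every one of the n_d nonzero differences has the same sign, as in the plasmapheresis data above, the exact two-sided sign-test P-value reduces to 2(.5^n_d); a one-line Python check:

```python
def all_one_sign_p(n_d):
    """Exact two-sided sign-test P-value when all n_d nonzero differences share a sign."""
    return 2 * 0.5 ** n_d

print(all_one_sign_p(6))   # 0.03125, matching the plasmapheresis analysis
print(all_one_sign_p(8))   # 0.0078125, matching P = 2(.5^8) earlier in the chapter
```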
1 The Wilcoxon Rank-Sum Test The Wilcoxon rank-sum test is a nonparametric alternative to the twosample t-test which is based solely on the order in which the observations from the two samples fall. We
More informationChapter 8 Paired observations
Chapter 8 Paired observations Timothy Hanson Department of Statistics, University of South Carolina Stat 205: Elementary Statistics for the Biological and Life Sciences 1 / 19 Book review of two-sample
More informationChapter 7: Simple linear regression Learning Objectives
Chapter 7: Simple linear regression Learning Objectives Reading: Section 7.1 of OpenIntro Statistics Video: Correlation vs. causation, YouTube (2:19) Video: Intro to Linear Regression, YouTube (5:18) -
More informationDescriptive Statistics
Descriptive Statistics Primer Descriptive statistics Central tendency Variation Relative position Relationships Calculating descriptive statistics Descriptive Statistics Purpose to describe or summarize
More informationLecture Notes Module 1
Lecture Notes Module 1 Study Populations A study population is a clearly defined collection of people, animals, plants, or objects. In psychological research, a study population usually consists of a specific
More information1. What is the critical value for this 95% confidence interval? CV = z.025 = invnorm(0.025) = 1.96
1 Final Review 2 Review 2.1 CI 1-propZint Scenario 1 A TV manufacturer claims in its warranty brochure that in the past not more than 10 percent of its TV sets needed any repair during the first two years
More informationA POPULATION MEAN, CONFIDENCE INTERVALS AND HYPOTHESIS TESTING
CHAPTER 5. A POPULATION MEAN, CONFIDENCE INTERVALS AND HYPOTHESIS TESTING 5.1 Concepts When a number of animals or plots are exposed to a certain treatment, we usually estimate the effect of the treatment
More informationGeneral Method: Difference of Means. 3. Calculate df: either Welch-Satterthwaite formula or simpler df = min(n 1, n 2 ) 1.
General Method: Difference of Means 1. Calculate x 1, x 2, SE 1, SE 2. 2. Combined SE = SE1 2 + SE2 2. ASSUMES INDEPENDENT SAMPLES. 3. Calculate df: either Welch-Satterthwaite formula or simpler df = min(n
More informationII. DISTRIBUTIONS distribution normal distribution. standard scores
Appendix D Basic Measurement And Statistics The following information was developed by Steven Rothke, PhD, Department of Psychology, Rehabilitation Institute of Chicago (RIC) and expanded by Mary F. Schmidt,
More informationDensity Curve. A density curve is the graph of a continuous probability distribution. It must satisfy the following properties:
Density Curve A density curve is the graph of a continuous probability distribution. It must satisfy the following properties: 1. The total area under the curve must equal 1. 2. Every point on the curve
More informationNonparametric Two-Sample Tests. Nonparametric Tests. Sign Test
Nonparametric Two-Sample Tests Sign test Mann-Whitney U-test (a.k.a. Wilcoxon two-sample test) Kolmogorov-Smirnov Test Wilcoxon Signed-Rank Test Tukey-Duckworth Test 1 Nonparametric Tests Recall, nonparametric
More informationHYPOTHESIS TESTING (ONE SAMPLE) - CHAPTER 7 1. used confidence intervals to answer questions such as...
HYPOTHESIS TESTING (ONE SAMPLE) - CHAPTER 7 1 PREVIOUSLY used confidence intervals to answer questions such as... You know that 0.25% of women have red/green color blindness. You conduct a study of men
More informationAssociation Between Variables
Contents 11 Association Between Variables 767 11.1 Introduction............................ 767 11.1.1 Measure of Association................. 768 11.1.2 Chapter Summary.................... 769 11.2 Chi
More informationUnit 26 Estimation with Confidence Intervals
Unit 26 Estimation with Confidence Intervals Objectives: To see how confidence intervals are used to estimate a population proportion, a population mean, a difference in population proportions, or a difference
More informationHow To Compare Birds To Other Birds
STT 430/630/ES 760 Lecture Notes: Chapter 7: Two-Sample Inference 1 February 27, 2009 Chapter 7: Two Sample Inference Chapter 6 introduced hypothesis testing in the one-sample setting: one sample is obtained
More informationUNDERSTANDING THE DEPENDENT-SAMPLES t TEST
UNDERSTANDING THE DEPENDENT-SAMPLES t TEST A dependent-samples t test (a.k.a. matched or paired-samples, matched-pairs, samples, or subjects, simple repeated-measures or within-groups, or correlated groups)
More informationChapter 7 Section 7.1: Inference for the Mean of a Population
Chapter 7 Section 7.1: Inference for the Mean of a Population Now let s look at a similar situation Take an SRS of size n Normal Population : N(, ). Both and are unknown parameters. Unlike what we used
More informationUnit 26: Small Sample Inference for One Mean
Unit 26: Small Sample Inference for One Mean Prerequisites Students need the background on confidence intervals and significance tests covered in Units 24 and 25. Additional Topic Coverage Additional coverage
More information8 6 X 2 Test for a Variance or Standard Deviation
Section 8 6 x 2 Test for a Variance or Standard Deviation 437 This test uses the P-value method. Therefore, it is not necessary to enter a significance level. 1. Select MegaStat>Hypothesis Tests>Proportion
More informationTesting for differences I exercises with SPSS
Testing for differences I exercises with SPSS Introduction The exercises presented here are all about the t-test and its non-parametric equivalents in their various forms. In SPSS, all these tests can
More informationUnit 31 A Hypothesis Test about Correlation and Slope in a Simple Linear Regression
Unit 31 A Hypothesis Test about Correlation and Slope in a Simple Linear Regression Objectives: To perform a hypothesis test concerning the slope of a least squares line To recognize that testing for a
More informationOutline. Definitions Descriptive vs. Inferential Statistics The t-test - One-sample t-test
The t-test Outline Definitions Descriptive vs. Inferential Statistics The t-test - One-sample t-test - Dependent (related) groups t-test - Independent (unrelated) groups t-test Comparing means Correlation
More informationParametric and non-parametric statistical methods for the life sciences - Session I
Why nonparametric methods What test to use? Rank Tests Parametric and non-parametric statistical methods for the life sciences - Session I Liesbeth Bruckers Geert Molenberghs Interuniversity Institute
More information1.5 Oneway Analysis of Variance
Statistics: Rosie Cornish. 200. 1.5 Oneway Analysis of Variance 1 Introduction Oneway analysis of variance (ANOVA) is used to compare several means. This method is often used in scientific or medical experiments
More informationUsing Excel for inferential statistics
FACT SHEET Using Excel for inferential statistics Introduction When you collect data, you expect a certain amount of variation, just caused by chance. A wide variety of statistical tests can be applied
More informationMultivariate Analysis of Ecological Data
Multivariate Analysis of Ecological Data MICHAEL GREENACRE Professor of Statistics at the Pompeu Fabra University in Barcelona, Spain RAUL PRIMICERIO Associate Professor of Ecology, Evolutionary Biology
More informationNon-Inferiority Tests for Two Means using Differences
Chapter 450 on-inferiority Tests for Two Means using Differences Introduction This procedure computes power and sample size for non-inferiority tests in two-sample designs in which the outcome is a continuous
More informationUNDERSTANDING THE INDEPENDENT-SAMPLES t TEST
UNDERSTANDING The independent-samples t test evaluates the difference between the means of two independent or unrelated groups. That is, we evaluate whether the means for two independent groups are significantly
More informationTHE FIRST SET OF EXAMPLES USE SUMMARY DATA... EXAMPLE 7.2, PAGE 227 DESCRIBES A PROBLEM AND A HYPOTHESIS TEST IS PERFORMED IN EXAMPLE 7.
THERE ARE TWO WAYS TO DO HYPOTHESIS TESTING WITH STATCRUNCH: WITH SUMMARY DATA (AS IN EXAMPLE 7.17, PAGE 236, IN ROSNER); WITH THE ORIGINAL DATA (AS IN EXAMPLE 8.5, PAGE 301 IN ROSNER THAT USES DATA FROM
More informationDESCRIPTIVE STATISTICS. The purpose of statistics is to condense raw data to make it easier to answer specific questions; test hypotheses.
DESCRIPTIVE STATISTICS The purpose of statistics is to condense raw data to make it easier to answer specific questions; test hypotheses. DESCRIPTIVE VS. INFERENTIAL STATISTICS Descriptive To organize,
More informationBiostatistics: DESCRIPTIVE STATISTICS: 2, VARIABILITY
Biostatistics: DESCRIPTIVE STATISTICS: 2, VARIABILITY 1. Introduction Besides arriving at an appropriate expression of an average or consensus value for observations of a population, it is important to
More informationMultiple samples: Pairwise comparisons and categorical outcomes
Multiple samples: Pairwise comparisons and categorical outcomes Patrick Breheny May 1 Patrick Breheny Introduction to Biostatistics (171:161) 1/19 Introduction Pairwise comparisons In the previous lecture,
More informationSimple Regression Theory II 2010 Samuel L. Baker
SIMPLE REGRESSION THEORY II 1 Simple Regression Theory II 2010 Samuel L. Baker Assessing how good the regression equation is likely to be Assignment 1A gets into drawing inferences about how close the
More informationQUANTITATIVE METHODS BIOLOGY FINAL HONOUR SCHOOL NON-PARAMETRIC TESTS
QUANTITATIVE METHODS BIOLOGY FINAL HONOUR SCHOOL NON-PARAMETRIC TESTS This booklet contains lecture notes for the nonparametric work in the QM course. This booklet may be online at http://users.ox.ac.uk/~grafen/qmnotes/index.html.
More information13: Additional ANOVA Topics. Post hoc Comparisons
13: Additional ANOVA Topics Post hoc Comparisons ANOVA Assumptions Assessing Group Variances When Distributional Assumptions are Severely Violated Kruskal-Wallis Test Post hoc Comparisons In the prior
More informationBiostatistics: Types of Data Analysis
Biostatistics: Types of Data Analysis Theresa A Scott, MS Vanderbilt University Department of Biostatistics theresa.scott@vanderbilt.edu http://biostat.mc.vanderbilt.edu/theresascott Theresa A Scott, MS
More informationTest Positive True Positive False Positive. Test Negative False Negative True Negative. Figure 5-1: 2 x 2 Contingency Table
ANALYSIS OF DISCRT VARIABLS / 5 CHAPTR FIV ANALYSIS OF DISCRT VARIABLS Discrete variables are those which can only assume certain fixed values. xamples include outcome variables with results such as live
More informationSession 7 Bivariate Data and Analysis
Session 7 Bivariate Data and Analysis Key Terms for This Session Previously Introduced mean standard deviation New in This Session association bivariate analysis contingency table co-variation least squares
More informationRecall this chart that showed how most of our course would be organized:
Chapter 4 One-Way ANOVA Recall this chart that showed how most of our course would be organized: Explanatory Variable(s) Response Variable Methods Categorical Categorical Contingency Tables Categorical
More informationRelationships Between Two Variables: Scatterplots and Correlation
Relationships Between Two Variables: Scatterplots and Correlation Example: Consider the population of cars manufactured in the U.S. What is the relationship (1) between engine size and horsepower? (2)
More informationTutorial 5: Hypothesis Testing
Tutorial 5: Hypothesis Testing Rob Nicholls nicholls@mrc-lmb.cam.ac.uk MRC LMB Statistics Course 2014 Contents 1 Introduction................................ 1 2 Testing distributional assumptions....................
More informationNCSS Statistical Software. One-Sample T-Test
Chapter 205 Introduction This procedure provides several reports for making inference about a population mean based on a single sample. These reports include confidence intervals of the mean or median,
More informationStat 5102 Notes: Nonparametric Tests and. confidence interval
Stat 510 Notes: Nonparametric Tests and Confidence Intervals Charles J. Geyer April 13, 003 This handout gives a brief introduction to nonparametrics, which is what you do when you don t believe the assumptions
More informationSummary of Formulas and Concepts. Descriptive Statistics (Ch. 1-4)
Summary of Formulas and Concepts Descriptive Statistics (Ch. 1-4) Definitions Population: The complete set of numerical information on a particular quantity in which an investigator is interested. We assume
More informationC. The null hypothesis is not rejected when the alternative hypothesis is true. A. population parameters.
Sample Multiple Choice Questions for the material since Midterm 2. Sample questions from Midterms and 2 are also representative of questions that may appear on the final exam.. A randomly selected sample
More informationSOLUTIONS TO BIOSTATISTICS PRACTICE PROBLEMS
SOLUTIONS TO BIOSTATISTICS PRACTICE PROBLEMS BIOSTATISTICS DESCRIBING DATA, THE NORMAL DISTRIBUTION SOLUTIONS 1. a. To calculate the mean, we just add up all 7 values, and divide by 7. In Xi i= 1 fancy
More information5/31/2013. Chapter 8 Hypothesis Testing. Hypothesis Testing. Hypothesis Testing. Outline. Objectives. Objectives
C H 8A P T E R Outline 8 1 Steps in Traditional Method 8 2 z Test for a Mean 8 3 t Test for a Mean 8 4 z Test for a Proportion 8 6 Confidence Intervals and Copyright 2013 The McGraw Hill Companies, Inc.
More informationTImath.com. F Distributions. Statistics
F Distributions ID: 9780 Time required 30 minutes Activity Overview In this activity, students study the characteristics of the F distribution and discuss why the distribution is not symmetric (skewed
More informationIntroduction to Hypothesis Testing. Hypothesis Testing. Step 1: State the Hypotheses
Introduction to Hypothesis Testing 1 Hypothesis Testing A hypothesis test is a statistical procedure that uses sample data to evaluate a hypothesis about a population Hypothesis is stated in terms of the
More informationPsychology 60 Fall 2013 Practice Exam Actual Exam: Next Monday. Good luck!
Psychology 60 Fall 2013 Practice Exam Actual Exam: Next Monday. Good luck! Name: 1. The basic idea behind hypothesis testing: A. is important only if you want to compare two populations. B. depends on
More informationChapter 23. Two Categorical Variables: The Chi-Square Test
Chapter 23. Two Categorical Variables: The Chi-Square Test 1 Chapter 23. Two Categorical Variables: The Chi-Square Test Two-Way Tables Note. We quickly review two-way tables with an example. Example. Exercise
More informationChapter 4. Probability and Probability Distributions
Chapter 4. robability and robability Distributions Importance of Knowing robability To know whether a sample is not identical to the population from which it was selected, it is necessary to assess the
More informationThe Importance of Statistics Education
The Importance of Statistics Education Professor Jessica Utts Department of Statistics University of California, Irvine http://www.ics.uci.edu/~jutts jutts@uci.edu Outline of Talk What is Statistics? Four
More informationBA 275 Review Problems - Week 6 (10/30/06-11/3/06) CD Lessons: 53, 54, 55, 56 Textbook: pp. 394-398, 404-408, 410-420
BA 275 Review Problems - Week 6 (10/30/06-11/3/06) CD Lessons: 53, 54, 55, 56 Textbook: pp. 394-398, 404-408, 410-420 1. Which of the following will increase the value of the power in a statistical test
More informationStatCrunch and Nonparametric Statistics
StatCrunch and Nonparametric Statistics You can use StatCrunch to calculate the values of nonparametric statistics. It may not be obvious how to enter the data in StatCrunch for various data sets that
More informationPart 3. Comparing Groups. Chapter 7 Comparing Paired Groups 189. Chapter 8 Comparing Two Independent Groups 217
Part 3 Comparing Groups Chapter 7 Comparing Paired Groups 189 Chapter 8 Comparing Two Independent Groups 217 Chapter 9 Comparing More Than Two Groups 257 188 Elementary Statistics Using SAS Chapter 7 Comparing
More informationNon Parametric Inference
Maura Department of Economics and Finance Università Tor Vergata Outline 1 2 3 Inverse distribution function Theorem: Let U be a uniform random variable on (0, 1). Let X be a continuous random variable
More informationNCSS Statistical Software
Chapter 06 Introduction This procedure provides several reports for the comparison of two distributions, including confidence intervals for the difference in means, two-sample t-tests, the z-test, the
More informationNon-Inferiority Tests for One Mean
Chapter 45 Non-Inferiority ests for One Mean Introduction his module computes power and sample size for non-inferiority tests in one-sample designs in which the outcome is distributed as a normal random
More informationLesson 1: Comparison of Population Means Part c: Comparison of Two- Means
Lesson : Comparison of Population Means Part c: Comparison of Two- Means Welcome to lesson c. This third lesson of lesson will discuss hypothesis testing for two independent means. Steps in Hypothesis
More informationSIMPLE LINEAR CORRELATION. r can range from -1 to 1, and is independent of units of measurement. Correlation can be done on two dependent variables.
SIMPLE LINEAR CORRELATION Simple linear correlation is a measure of the degree to which two variables vary together, or a measure of the intensity of the association between two variables. Correlation
More informationTwo Related Samples t Test
Two Related Samples t Test In this example 1 students saw five pictures of attractive people and five pictures of unattractive people. For each picture, the students rated the friendliness of the person
More informationCome scegliere un test statistico
Come scegliere un test statistico Estratto dal Capitolo 37 of Intuitive Biostatistics (ISBN 0-19-508607-4) by Harvey Motulsky. Copyright 1995 by Oxfd University Press Inc. (disponibile in Iinternet) Table
More informationList of Examples. Examples 319
Examples 319 List of Examples DiMaggio and Mantle. 6 Weed seeds. 6, 23, 37, 38 Vole reproduction. 7, 24, 37 Wooly bear caterpillar cocoons. 7 Homophone confusion and Alzheimer s disease. 8 Gear tooth strength.
More informationTwo-sample inference: Continuous data
Two-sample inference: Continuous data Patrick Breheny April 5 Patrick Breheny STA 580: Biostatistics I 1/32 Introduction Our next two lectures will deal with two-sample inference for continuous data As
More informationCHAPTER 13. Experimental Design and Analysis of Variance
CHAPTER 13 Experimental Design and Analysis of Variance CONTENTS STATISTICS IN PRACTICE: BURKE MARKETING SERVICES, INC. 13.1 AN INTRODUCTION TO EXPERIMENTAL DESIGN AND ANALYSIS OF VARIANCE Data Collection
More informationDifference tests (2): nonparametric
NST 1B Experimental Psychology Statistics practical 3 Difference tests (): nonparametric Rudolf Cardinal & Mike Aitken 10 / 11 February 005; Department of Experimental Psychology University of Cambridge
More informationChapter 7. One-way ANOVA
Chapter 7 One-way ANOVA One-way ANOVA examines equality of population means for a quantitative outcome and a single categorical explanatory variable with any number of levels. The t-test of Chapter 6 looks
More informationMath 251, Review Questions for Test 3 Rough Answers
Math 251, Review Questions for Test 3 Rough Answers 1. (Review of some terminology from Section 7.1) In a state with 459,341 voters, a poll of 2300 voters finds that 45 percent support the Republican candidate,
More informationPRACTICE PROBLEMS FOR BIOSTATISTICS
PRACTICE PROBLEMS FOR BIOSTATISTICS BIOSTATISTICS DESCRIBING DATA, THE NORMAL DISTRIBUTION 1. The duration of time from first exposure to HIV infection to AIDS diagnosis is called the incubation period.
More informationFixed-Effect Versus Random-Effects Models
CHAPTER 13 Fixed-Effect Versus Random-Effects Models Introduction Definition of a summary effect Estimating the summary effect Extreme effect size in a large study or a small study Confidence interval
More informationStatistics I for QBIC. Contents and Objectives. Chapters 1 7. Revised: August 2013
Statistics I for QBIC Text Book: Biostatistics, 10 th edition, by Daniel & Cross Contents and Objectives Chapters 1 7 Revised: August 2013 Chapter 1: Nature of Statistics (sections 1.1-1.6) Objectives
More information4. Continuous Random Variables, the Pareto and Normal Distributions
4. Continuous Random Variables, the Pareto and Normal Distributions A continuous random variable X can take any value in a given range (e.g. height, weight, age). The distribution of a continuous random
More informationStatistics courses often teach the two-sample t-test, linear regression, and analysis of variance
2 Making Connections: The Two-Sample t-test, Regression, and ANOVA In theory, there s no difference between theory and practice. In practice, there is. Yogi Berra 1 Statistics courses often teach the two-sample
More informationHypothesis testing - Steps
Hypothesis testing - Steps Steps to do a two-tailed test of the hypothesis that β 1 0: 1. Set up the hypotheses: H 0 : β 1 = 0 H a : β 1 0. 2. Compute the test statistic: t = b 1 0 Std. error of b 1 =
More information