Unit 14: Nonparametric Statistical Methods
Statistics 571: Statistical Methods
Ramón V. León
7/26/2004
Introductory Remarks
Most methods studied so far have been based on the assumption of normally distributed data. Frequently this assumption is not valid: the sample size may be too small to verify it, or the data may be measured on an ordinal scale. Nonparametric (or distribution-free) statistical methods make very few assumptions about the form of the population distribution from which the data are sampled, and are based on ranks, so they can be used on ordinal data. We will concentrate on hypothesis tests but will also mention confidence interval procedures.
Inference for a Single Sample
Consider a random sample x_1, x_2, ..., x_n from a population with unknown median µ. (Recall that for nonnormal, especially skewed, distributions the median is a better measure of the center than the mean.)
H_0: µ = µ_0 vs. H_1: µ > µ_0
Example: Test whether the median household income of a population exceeds $50,000 based on a random sample of household incomes from that population.
For simplicity we sometimes present methods for one-sided tests. Modifications for two-sided tests are straightforward and are given in the textbook; some examples in these notes are two-sided tests.
Sign Test for a Single Sample
H_0: µ = µ_0 vs. H_1: µ > µ_0
1. Count the number of x_i's that exceed µ_0. Denote this number by s_+, called the number of plus signs. Let s_- = n - s_+, which is the number of minus signs.
2. Reject H_0 if s_+ is large, or equivalently if s_- is small.
Test idea: Under the null hypothesis S_+ has a binomial distribution, Bin(n, 1/2). So this test is simply the test for a binomial proportion.
Sign Test Example
A thermostat used in an electric device is to be checked for the accuracy of its design setting of 200°F. Ten thermostats were tested to determine their actual settings, resulting in the following data: 202.2, 203.4, 200.5, 202.5, 206.3, 198.0, 203.7, 200.8, 201.3, 199.0.
H_0: µ = 200 vs. H_1: µ ≠ 200
s_+ = 8 = number of data values > 200, so
P-value = 2 Σ_{i=8}^{10} C(10, i) (1/2)^{10} ≈ 0.110.
(The t-test based on the mean has P-value = 0.0453. However, recall that the t-test assumes a normal population.)
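The exact sign-test calculation above can be sketched in a few lines; this assumes the thermostat values from the slide and the hypothesized median 200.

```python
# Sign test for the thermostat data: exact two-sided binomial P-value.
from math import comb

data = [202.2, 203.4, 200.5, 202.5, 206.3, 198.0, 203.7, 200.8, 201.3, 199.0]
mu0 = 200.0

s_plus = sum(1 for x in data if x > mu0)   # number of plus signs (no ties here)
n = len(data)

# Two-sided P-value: 2 * P(Bin(n, 1/2) >= s_plus)
p_value = 2 * sum(comb(n, i) for i in range(s_plus, n + 1)) / 2**n
print(s_plus, p_value)   # 8 0.109375
```

The exact value 0.109375 rounds to the 0.110 quoted on the slide.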
Normal Approximation to Test Statistic
If the sample size is large (n ≥ 20), the common null distribution of S_+ and S_- is approximated by a normal distribution with
E(S_+) = E(S_-) = np = n/2,
Var(S_+) = Var(S_-) = np(1 - p) = n/4.
Therefore one can perform a one-sided z-test with
z = (s_+ - n/2 - 1/2) / sqrt(n/4).
P-values for Sign Test Using JMP
Based on the normal approximation to the binomial (χ² = z²).
Treatment of Ties
The theory of the test assumes that the distribution of the data is continuous, so in theory ties are impossible. In practice they do occur because of rounding. A simple solution is to ignore the ties and work only with the untied observations. This does reduce the effective sample size of the test and hence its power, but the loss is not significant if there are only a few ties.
Confidence Interval for µ
Let x_(1) ≤ x_(2) ≤ ⋯ ≤ x_(n) be the ordered data values. Then a (1-α)-level CI for µ is given by
x_(b+1) ≤ µ ≤ x_(n-b),
where b = b_{n,α/2} is the lower α/2 critical point of the Bin(n, 1/2) distribution.
Note: Not all confidence levels are possible because of the discreteness of the binomial distribution.
Thermostat Setting: Sign Confidence Interval for the Median
From Table A.1 we see that for n = 10 and p = 0.5, the lower 0.011 critical point of the binomial distribution is 1, and by symmetry the upper 0.011 critical point is 9. Setting α/2 = 0.011, which gives 1-α = 1-0.022 = 0.978, we find that
199.0 = x_(2) ≤ µ ≤ x_(9) = 203.7
is a 97.8% CI for µ.
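The order-statistic interval above is easy to verify in code; this sketch assumes the thermostat data and b = 1 (the lower 0.011 critical point of Bin(10, 1/2) from Table A.1).

```python
# Sign-test confidence interval for the median: pick order statistics
# x_(b+1) and x_(n-b) from the sorted data.
data = sorted([202.2, 203.4, 200.5, 202.5, 206.3, 198.0,
               203.7, 200.8, 201.3, 199.0])
b = 1
n = len(data)

lcl = data[b]          # x_(b+1) in 1-based notation
ucl = data[n - b - 1]  # x_(n-b) in 1-based notation
print(lcl, ucl)        # 199.0 203.7, a 97.8% CI for the median
```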
Sign Test for Matched Pairs
Drop 3 tied pairs. Then s_+ = 20; s_- = 3.
Sign Test for Matched Pairs in JMP
Pearson's p-value is not the same as the book's two-sided P-value because the book uses the continuity correction in the normal approximation to the binomial distribution, i.e., the book uses z = 3.336 (page 567) rather than the z = 3.544745 used by JMP. Note that (3.544745)² = 12.5652.
Wilcoxon Signed Rank Test
H_0: µ = µ_0 vs. H_1: µ ≠ µ_0
More powerful than the sign test; however, it requires the assumption that the population distribution is symmetric.
1. Rank the differences from µ_0 in terms of their absolute values. (Examples 14.1 and 14.4: thermostat setting of 200°F.)
2. Calculate w_+ = sum of the ranks of the positive differences. Here w_+ = 6 + 8 + 1 + 7 + 10 + 9 + 2 + 4 = 47.
3. Reject H_0 if w_+ is large or small.
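The rank sum w_+ for the thermostat data can be computed directly; this sketch assumes the same ten values and µ_0 = 200 (no tied absolute differences, so plain ranks suffice).

```python
# Wilcoxon signed rank statistic w+ for the thermostat data.
data = [202.2, 203.4, 200.5, 202.5, 206.3, 198.0, 203.7, 200.8, 201.3, 199.0]
d = [x - 200.0 for x in data]           # differences from the null median

# Rank the differences by absolute value (1 = smallest |d|)
order = sorted(range(len(d)), key=lambda i: abs(d[i]))
rank = {i: r + 1 for r, i in enumerate(order)}

w_plus = sum(rank[i] for i in range(len(d)) if d[i] > 0)
print(w_plus)   # 47
```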
Wilcoxon Signed Rank Test in JMP
This test finds a significant difference at α = 0.05, while the sign test did not even at α = 0.1.
Normal Approximation in the Wilcoxon Signed Rank Test
For large n, the null distribution of W_+ (and W_-) can be well approximated by a normal distribution with mean and variance given by
E(W_+) = n(n+1)/4 and Var(W_+) = n(n+1)(2n+1)/24.
For large samples a one-sided (greater-than-median) z-test uses the statistic
z = (w_+ - n(n+1)/4 - 1/2) / sqrt(n(n+1)(2n+1)/24).
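As a numerical illustration of the formula, here is the z statistic for the thermostat w_+ = 47 with n = 10; n is really too small for the approximation, so this only shows the arithmetic, including the 1/2 continuity correction.

```python
# Large-sample z statistic for the Wilcoxon signed rank test.
from math import sqrt

n, w_plus = 10, 47
mean = n * (n + 1) / 4                    # E(W+) = 27.5
var = n * (n + 1) * (2 * n + 1) / 24      # Var(W+) = 96.25

z = (w_plus - mean - 0.5) / sqrt(var)
print(round(z, 3))   # about 1.937
```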
Importance of the Symmetric Population Assumption
Here, even though H_0 is true, the long right-hand tail makes the positive differences tend to be larger in magnitude than the negative differences, resulting in higher ranks. This inflates w_+ and hence the test's type I error probability.
Null Distribution of the Wilcoxon Signed Rank Statistic
Wilcoxon Signed Rank Statistic: Treatment of Ties
There are two types of ties:
Some of the data are equal to the median: drop these observations.
Some of the differences from the median may be tied: use the midrank, that is, the average rank.
For example, suppose d_1 = -1, d_2 = +3, d_3 = -3, d_4 = +5. Then r_1 = 1, r_2 = r_3 = (2 + 3)/2 = 2.5, r_4 = 4.
With ties Table A.10 is only approximate.
Wilcoxon Signed Rank Test: Matched Pair Design
Example 14.5: Comparing Two Methods of Cardiac Output
Notice that we drop the three zero differences and that we average the tied ranks.
Two-sided P-values:
Sign test: 0.0008
Signed rank test: 0.0002
t-test: 0.0000671 (page 284)
(Notice that these tests require progressively more stringent assumptions about the population of differences.)
JMP Calculation
Signed Rank Confidence Interval for the Median
Thermostat Setting: Wilcoxon Signed Rank Confidence Interval for the Median
From Table A.10 we see that for n = 10 the upper 2.4% critical point is 47, and by symmetry the lower 2.4% critical point is 10(10+1)/2 - 47 = 55 - 47 = 8. Setting α/2 = 0.024, and hence 1-α = 1-0.048 = 0.952, we find that
200.10 = x̄_(8+1) ≤ µ ≤ x̄_(47) = 203.55
is a 95.2% CI for µ, where x̄_(1) ≤ ⋯ ≤ x̄_(55) are the ordered pairwise (Walsh) averages.
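The interval can be checked by enumerating the 55 pairwise averages of the thermostat data; the indices 9 and 47 come from the Table A.10 critical points quoted above.

```python
# Signed rank CI from the ordered pairwise (Walsh) averages.
data = [202.2, 203.4, 200.5, 202.5, 206.3, 198.0, 203.7, 200.8, 201.3, 199.0]
n = len(data)

# All (x_i + x_j)/2 for i <= j, sorted: 55 averages for n = 10
walsh = sorted((data[i] + data[j]) / 2 for i in range(n) for j in range(i, n))

lcl = walsh[8]    # 9th smallest (1-based index 8+1)
ucl = walsh[46]   # 47th smallest
print(round(lcl, 2), round(ucl, 2))   # 200.1 203.55
```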
Inferences for Two Independent Samples
One wants to show that the observations from one population tend to be larger than those from another population, based on independent random samples x_1, x_2, ..., x_{n_1} and y_1, y_2, ..., y_{n_2}.
Examples: Treated patients tend to live longer than untreated patients; an equity fund tends to have a higher yield than a bond fund.
Wilcoxon-Mann-Whitney Test
Example: Time to Failure of Two Capacitor Groups
Reject for extreme values of w_1.
Stochastic Ordering of Populations
X is stochastically larger than Y if, for all real numbers u,
P(X > u) ≥ P(Y > u), or equivalently, P(X ≤ u) = F_1(u) ≤ F_2(u) = P(Y ≤ u),
with strict inequality for at least some u. Denoted by X ≻ Y, or equivalently by F_1 < F_2.
Stochastic Ordering Special Case: Location Difference
θ is called a location parameter. Notice that X_2 ≺ X_1 iff θ_2 < θ_1.
Wilcoxon-Mann-Whitney Test
H_0: F_1 = F_2 (X and Y identically distributed)
Alternatives:
One-sided: H_1: F_1 < F_2 (X ≻ Y)
Two-sided: H_1: F_1 < F_2 or F_2 < F_1 (X ≻ Y or Y ≻ X)
Notice that the alternative is not H_1: F_1 ≠ F_2. (The Kolmogorov-Smirnov test can handle this alternative.)
Wilcoxon Version of the Test
H_0: F_1 = F_2 vs. H_1: F_1 < F_2 (X ≻ Y)
1. Rank all N = n_1 + n_2 observations x_1, ..., x_{n_1} and y_1, ..., y_{n_2} in ascending order.
2. Sum the ranks of the x's and y's separately. Denote these sums by w_1 and w_2.
3. Reject H_0 if w_1 is large, or equivalently if w_2 is small.
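The ranking steps can be sketched as follows; the two samples here are made-up illustrative values (not the capacitor data), and a deliberate tie shows the midrank convention discussed later.

```python
# Wilcoxon rank-sum computation with midranks for ties.
x = [1.2, 3.4, 5.6, 7.8]
y = [2.3, 3.4, 4.5]

pooled = sorted(x + y)

def midrank(v):
    # Average of the 1-based positions where v occurs in the pooled sample
    positions = [i + 1 for i, u in enumerate(pooled) if u == v]
    return sum(positions) / len(positions)

w1 = sum(midrank(v) for v in x)   # rank sum of the x's
w2 = sum(midrank(v) for v in y)   # rank sum of the y's
print(w1, w2)   # 17.5 10.5; w1 + w2 = N(N+1)/2 = 28
```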
Mann-Whitney Test Version
The advantage of using the Mann-Whitney form of the test is that the same null distribution applies whether we use u_1 or u_2:
P-value = P(U ≥ u_1) = P(U ≤ u_2).
Null Distribution of the Wilcoxon-Mann-Whitney Test Statistic
Under the null hypothesis each of these 10 orderings has an equal chance of occurring, namely 1/10, where 10 = C(5, 2).
Null Distribution of the Wilcoxon- Mann-Whitney Test Statistic Pw ( 8) = 0.1+ 0.1 = 0.2 (one-sided p-value for w= 8) 1 1 ( H : X Y) 1 7/26/2004 Unit 14 - Stat 571 - Ramón V. León 33
Normal Approximation of Mann-Whitney Statistic
For large n_1 and n_2, the null distribution of U can be well approximated by a normal distribution with mean and variance given by
E(U) = n_1 n_2 / 2 and Var(U) = n_1 n_2 (N + 1) / 12.
A large-sample one-sided z-test (H_1: X ≻ Y) can be based on the statistic
z = (u_1 - n_1 n_2 / 2 - 1/2) / sqrt(n_1 n_2 (N + 1) / 12).
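The U statistic and the z formula above can be sketched together; the samples are the same illustrative values as before (far too small for the normal approximation, so the z value here only demonstrates the arithmetic, with the tie counted as 1/2 toward each u).

```python
# Mann-Whitney U by direct pair counting, plus the large-sample z.
from math import sqrt

x = [1.2, 3.4, 5.6, 7.8]
y = [2.3, 3.4, 4.5]
n1, n2 = len(x), len(y)
N = n1 + n2

# u1 = number of (x, y) pairs with x > y, counting ties as 1/2
u1 = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
         for xi in x for yj in y)
u2 = n1 * n2 - u1

z = (u1 - n1 * n2 / 2 - 0.5) / sqrt(n1 * n2 * (N + 1) / 12)
print(u1, u2, round(z, 3))   # 7.5 4.5 and a small z
```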
Treatment of Ties
A tie occurs when some x equals a y. A contribution of 1/2 is counted towards both u_1 and u_2 for each tied pair. This is equivalent to using the midrank method in computing the Wilcoxon rank sum statistic.
Wilcoxon-Mann-Whitney Confidence Interval
Example 14.8 shows that [d_(18), d_(63)] = [-1.1, 14.7] is a 95.6% CI for the difference of the two medians of the failure times of capacitors. This example is in the book errata since Table A.11 is not detailed enough.
Wilcoxon-Mann-Whitney Test in JMP
z² = 1.688²
With continuity correction: used in the book, which gets a one-sided P-value of 0.0502.
Without continuity correction: the value reported by JMP.
Inference for Several Independent Samples: Kruskal-Wallis Test
Note that this is a completely randomized design.
Kruskal-Wallis Test
H_0: F_1 = F_2 = ⋯ = F_a vs. H_1: F_i < F_j for some i ≠ j
Reject H_0 if kw > χ²_{a-1,α}.
(The statistic measures the distance of the group average ranks from the overall average rank.)
Chi-Square Approximation
For large samples the distribution of KW under the null hypothesis can be approximated by the chi-square distribution with a - 1 degrees of freedom. So reject H_0 if kw > χ²_{a-1,α}.
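The Kruskal-Wallis statistic kw = 12/(N(N+1)) Σ n_i (r̄_i - (N+1)/2)² can be computed directly; the three groups below are made-up illustrative values with no ties, not the book's example.

```python
# Kruskal-Wallis statistic from pooled ranks (no-ties case).
groups = [[27, 31, 35], [20, 22, 24], [33, 36, 40]]

pooled = sorted(v for g in groups for v in g)
rank = {v: i + 1 for i, v in enumerate(pooled)}   # values are all distinct

N = len(pooled)
kw = (12 / (N * (N + 1))) * sum(
    len(g) * (sum(rank[v] for v in g) / len(g) - (N + 1) / 2) ** 2
    for g in groups
)
print(round(kw, 3))   # compare with the chi-square(a-1) critical point
```

With a = 3 groups, kw would be compared against χ²_{2,α} (e.g., 5.99 at α = 0.05).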
Kruskal-Wallis Test Example
Reject H_0 if kw is large. χ²_{3,.005} = 12.837.
Kruskal-Wallis Test in JMP
The Case method is different from the Unitary method; the Formula method is different from the Unitary method.
Pairwise Comparisons: Is Any Pair of Treatments Different?
One can use the Tukey method on the average ranks to make approximate pairwise comparisons. This is one of many approximate techniques in which ranks are substituted for the observations in normal-theory methods.
Tukey's Test Applied to the Average Ranks
Lack of agreement with the more precise method of Example 14.10. Here the Equation method also seems to be different from the Formula and Case methods.
Example of Friedman's Test
Ranking is done within blocks.
χ²_{7,.025} = 16.012; P-value = .0040 vs. .0003 for the ANOVA table.
Inference for Several Matched Samples
Randomized block design: a ≥ 2 treatments and b ≥ 2 blocks.
y_ij = observation on the i-th treatment in the j-th block
F_ij = c.d.f. of the r.v. Y_ij corresponding to the observed value y_ij
For simplicity assume F_ij(y) = F(y - θ_i - β_j), where θ_i is the "treatment effect" and β_j is the "block effect"; i.e., we assume that there is no treatment-by-block interaction.
Friedman Test
H_0: θ_1 = θ_2 = ⋯ = θ_a vs. H_1: θ_i > θ_j for some i ≠ j
Reject H_0 if fr > χ²_{a-1,α}.
(The statistic measures the distance of the rank sums from their expected value when there is no agreement between the blocks.)
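The within-block ranking and the statistic fr = 12/(ab(a+1)) Σ (r_i - b(a+1)/2)² can be sketched on made-up data: a = 3 hypothetical treatments and b = 4 blocks, with no ties within a block.

```python
# Friedman statistic: rank within each block, compare rank sums to
# their null expectation b(a+1)/2.
blocks = [[10, 12, 15], [9, 13, 14], [11, 12, 16], [8, 10, 13]]
a, b = 3, len(blocks)

# r[i] = sum over blocks of the within-block rank of treatment i
r = [0] * a
for block in blocks:
    order = sorted(range(a), key=lambda i: block[i])
    for pos, i in enumerate(order):
        r[i] += pos + 1

fr = (12 / (a * b * (a + 1))) * sum((ri - b * (a + 1) / 2) ** 2 for ri in r)
print(r, fr)   # [4, 8, 12] 8.0 — the blocks agree perfectly here
```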
Pairwise Comparisons
Rank Correlation Methods
The Pearson correlation coefficient measures only the degree of linear association between two variables, and inferences based on it use the assumption of bivariate normality. We present two correlation coefficients that take into account only the ranks of the observations and measure the degree of monotonic (increasing or decreasing) association between two variables.
Motivating Example
(x, y) = (1, e), (2, e²), (3, e³), (4, e⁴), (5, e⁵)
Note that there is a perfect positive association between x and y, with y = e^x. The Pearson correlation coefficient is only 0.886 because the relationship is not linear. The rank correlation coefficients we present yield a value of 1 for these data.
Spearman's Rank Correlation Coefficient
Ranges between -1 and +1, with r_s = -1 when there is a perfect negative association and r_s = +1 when there is a perfect positive association.
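Spearman's r_s is just the Pearson correlation applied to the ranks; the sketch below checks this on the motivating example (x, e^x), where the raw Pearson correlation is about 0.886 but r_s = 1.

```python
# Spearman's rho = Pearson correlation of the ranks.
from math import exp, sqrt

x = [1, 2, 3, 4, 5]
y = [exp(v) for v in x]

def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return cov / sqrt(sum((a - mu) ** 2 for a in u)
                      * sum((b - mv) ** 2 for b in v))

r_xy = pearson(x, y)             # about 0.886: monotone but not linear
r_s = pearson(ranks(x), ranks(y))  # 1.0: perfectly monotone
print(round(r_xy, 3), r_s)
```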
Example 14.12: Wine Consumption and Heart Disease Deaths (per 100,000)
Calculation of Spearman's Rho
Test for Association Based on Spearman's Rank Correlation Coefficient
Hypothesis Testing Example
H_0: X = wine consumption and Y = heart disease deaths are independent vs. H_1: X and Y are (negatively or positively) associated.
z = r_S √(n-1) = -0.826 √(19-1) = -3.504
Two-sided P-value = 0.0004. Evidence of negative association.
JMP Calculations: Pearson Correlation
[Scatterplot of Heart Disease Deaths versus Alcohol from Wine.] The plot is fairly linear (Pearson correlation).
JMP Calculations: Spearman Rank Correlation
Kendall's Rank Correlation Coefficient: Key Concept
Examples:
Concordant pairs: (1,2), (4,9): (1-4)(2-9) > 0; (4,2), (3,1): (4-3)(2-1) > 0
Discordant pairs: (1,2), (9,1): (1-9)(2-1) < 0; (2,4), (3,1): (2-3)(4-1) < 0
Tied pairs: (1,3), (1,5): (1-1)(3-5) = 0; (1,4), (2,4): (1-2)(4-4) = 0; (1,2), (1,2): (1-1)(2-2) = 0
Kendall's idea is to compare the number of concordant pairs to the number of discordant pairs in bivariate data.
Kendall's Tau Example
(X, Y): (1, 2), (3, 4), (2, 1)
Number of pairwise comparisons: N = C(3, 2) = 3.
Concordant pairs: (1,2) and (3,4); (3,4) and (2,1). N_c = 2.
Discordant pairs: (1,2) and (2,1). N_d = 1.
τ̂ = (N_c - N_d)/N = (2 - 1)/3 = 1/3.
Kendall's Rank Correlation Coefficient: Population Version
Kendall's Rank Correlation Coefficient: Sample Estimate
Let N_c = number of concordant pairs in the data, N_d = number of discordant pairs, and N = C(n, 2) = number of pairwise comparisons among the observations (x_i, y_i), i = 1, 2, ..., n. Then
τ̂ = (N_c - N_d)/N if there are no ties (in which case N_c + N_d = N), and
τ̂ = (N_c - N_d) / sqrt((N - T_x)(N - T_y)) if there are ties,
where T_x and T_y are corrections for the number of tied pairs.
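The pair-counting estimate can be sketched directly on the three-point example above (no ties, so the simple formula applies).

```python
# Kendall's tau-hat by direct concordant/discordant pair counting.
from itertools import combinations

pairs = [(1, 2), (3, 4), (2, 1)]   # the slide's (X, Y) example

nc = nd = 0
for (x1, y1), (x2, y2) in combinations(pairs, 2):
    s = (x1 - x2) * (y1 - y2)
    if s > 0:
        nc += 1      # concordant
    elif s < 0:
        nd += 1      # discordant
    # s == 0 would be a tied pair (none occur here)

n = len(pairs)
N = n * (n - 1) // 2
tau_hat = (nc - nd) / N
print(nc, nd, round(tau_hat, 4))   # 2 1 0.3333
```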
Hypothesis of Independence Versus Positive Association
Wine data: τ̂ = -.696, z = -4.164.
JMP Calculations: Kendall's Rank Correlation Coefficient
Kendall's Coefficient of Concordance
A measure of association between several matched samples, closely related to Friedman's test statistic. Consider a candidates (treatments) and b judges (blocks), with each judge ranking the a candidates.
If there is perfect agreement between the judges, then each candidate gets the same rank from every judge. Assuming the candidates are labeled in the order of their ranking, the rank sum for the i-th candidate would be r_i = ib.
If the judges rank the candidates completely at random ("perfect disagreement"), then the expected rank of each candidate would be (1 + 2 + ⋯ + a)/a = [a(a+1)/2]/a = (a+1)/2, and the expected value of all the rank sums would equal b(a+1)/2.
Kendall's Coefficient of Concordance
Kendall's Coefficient of Concordance and Friedman's Test
w = 24.667 / [4(8 - 1)] = 0.881
Do You Need to Know More?
Nonparametric Statistical Methods, Second Edition, by Myles Hollander and Douglas A. Wolfe (1999), Wiley-Interscience.
Resampling Methods
Conventional methods are based on the sampling distribution of a statistic computed for the observed sample; the sampling distribution is derived by considering all possible samples of size n from the underlying population. Resampling methods generate the sampling distribution of the statistic by drawing repeated samples from the observed sample itself. This eliminates the need to assume a specific functional form for the population distribution (e.g., normal).
Challenger Shuttle O-Ring Data
Do we have statistical evidence that cold temperature leads to more O-ring incidents? Notice that the assumptions of the two-sample t-test do not hold. The original analysis omitted the zeros. Was this justified? What do we do?
"Wrong" t-test Analysis
Difference of the Low mean from the High mean. Notice that the assumptions of the independent-sample t-test do not hold, i.e., the data are not normal within each group.
Permutation Distribution of the t Statistic
Also equal to the two-sided P-value. Equivalent to selecting all simple random samples without replacement of size 20 from the 24 data points, labeling these "High" and the rest "Low".
Comments
A randomization test is a permutation test applied to data from a randomized experiment. Randomization tests are the gold standard for establishing causality. A permutation test considers all possible simple random samples without replacement from the set of observed data values. The bootstrap method considers a large number of simple random samples with replacement from the set of observed data values.
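A minimal permutation test can be sketched as follows. The two groups here are hypothetical stand-ins for the O-ring incident counts (not the actual Challenger data), the statistic is the difference of means rather than t, and the group sizes are small enough to enumerate every relabeling; with larger samples one would sample permutations at random instead.

```python
# Two-sample permutation test on the difference of means,
# enumerating all relabelings of the pooled data.
from itertools import combinations

low = [1, 1, 1, 3]                         # hypothetical "Low temperature" group
high = [0, 0, 0, 0, 0, 1, 0, 0, 0, 2]      # hypothetical "High temperature" group
pooled = low + high
n_low = len(low)

observed = sum(low) / len(low) - sum(high) / len(high)

count = total = 0
for idx in combinations(range(len(pooled)), n_low):
    relabeled_low = [pooled[i] for i in idx]
    relabeled_high = [pooled[i] for i in range(len(pooled)) if i not in idx]
    diff = sum(relabeled_low) / n_low - sum(relabeled_high) / len(relabeled_high)
    total += 1
    if diff >= observed:   # one-sided: relabelings at least as extreme
        count += 1

p_value = count / total
print(round(observed, 3), total, p_value)
```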
Calculation of t Statistics from 10,000 Bootstrap Samples
Think of placing the 24 Challenger data values in a hat, then randomly selecting 24 values with replacement from the hat, labeling the first 20 values "High" and the remaining 4 values "Low". We repeat this process 10,000 times and calculate the t-statistic for each of the 10,000 bootstrap samples. 35 t-statistic values out of 10,000 were greater than or equal to 3.888 (if s_p = 0, t is defined to be 0). This gives a bootstrap P-value of 35/10,000 = 0.0035.
Bootstrap Distribution of the Difference Between the Means
67 of the 10,000 differences of the Low mean and the High mean were greater than or equal to 1.3. This gives a bootstrap P-value of 67/10,000 = .0067.
Conclusion: Cold weather increases the chance of O-ring problems.
Bootstrap Final Remarks
The JMP files that we used to generate the bootstrap samples and to calculate the statistics are available at the course web site. There are bootstrap procedures for most types of statistical problems, all based on resampling from the data. These methods do not assume specific functional forms for the distribution of the data (e.g., normal). The accuracy of bootstrap procedures depends on the sample size and the number of bootstrap samples generated.
How Were the Bootstrap Samples Generated? (see next page)
Calculated Columns in JMP Samples File
Bootstrap Estimate of the Standard Error of the Mean
Summary: We calculate the standard deviation of the N bootstrap estimates of the mean.
BSE for an Arbitrary Statistic
Example: The bootstrap standard error of the median is calculated by drawing a large number N (e.g., 10,000) of bootstrap samples from the data. For each bootstrap sample we calculate the sample median. Then we calculate the standard deviation of the N bootstrap medians.
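The recipe above can be sketched in a few lines of stdlib Python; the data are the thermostat values reused for illustration, N is reduced to 2,000 for speed, and the seed is fixed only for reproducibility.

```python
# Bootstrap standard error of the median.
import random
import statistics

random.seed(0)   # reproducible illustration
data = [202.2, 203.4, 200.5, 202.5, 206.3, 198.0, 203.7, 200.8, 201.3, 199.0]

N = 2000   # number of bootstrap samples (the slides use 10,000)
medians = [
    statistics.median(random.choices(data, k=len(data)))  # with replacement
    for _ in range(N)
]
bse = statistics.stdev(medians)   # standard deviation of the N medians
print(round(bse, 3))
```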
Estimated Bootstrap Standard Error for t-statistics Using JMP
Note N = 10,000.
Bootstrap Standard Error Interpretation
Many bootstrap statistics have an approximately normal distribution. Confidence-interval interpretation: 68% of the time the bootstrap estimate (the average of the bootstrap estimates) will be within one standard error of the true parameter value; 95% of the time it will be within two standard errors of the true parameter value.
Bootstrap Confidence Intervals: Percentile Method (Median Example)
1. Draw N (= 10,000) bootstrap samples from the data and for each calculate the (bootstrap) sample median.
2. The 2.5th percentile of the N bootstrap sample medians is the LCL of a 95% confidence interval.
3. The 97.5th percentile of the N bootstrap sample medians is the UCL of a 95% confidence interval.
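The three steps above can be sketched as follows, again on the illustrative thermostat data with a reduced N; the percentiles are taken by simple index into the sorted medians, one of several reasonable conventions.

```python
# Percentile-method bootstrap CI for the median.
import random
import statistics

random.seed(1)   # reproducible illustration
data = [202.2, 203.4, 200.5, 202.5, 206.3, 198.0, 203.7, 200.8, 201.3, 199.0]

N = 2000   # the slides use 10,000
medians = sorted(
    statistics.median(random.choices(data, k=len(data)))
    for _ in range(N)
)

lcl = medians[int(0.025 * N)]        # 2.5th percentile
ucl = medians[int(0.975 * N) - 1]    # 97.5th percentile
print(lcl, ucl)   # a 95% percentile-method CI for the median
```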
Do You Need to Know More?
An Introduction to the Bootstrap by Bradley Efron and Robert J. Tibshirani (1993), Chapman & Hall/CRC.