NCSS Statistical Software. One-Sample T-Test
Chapter 205

Introduction
This procedure provides several reports for making inference about a population mean based on a single sample. These reports include confidence intervals of the mean or median, the t-test, the z-test, and nonparametric tests including the randomization test, the quantile (sign) test, and the Wilcoxon signed-rank test. Tests of assumptions and distribution plots are also available in this procedure.

Another procedure that produces a large amount of summary information about a single sample is the Descriptive Statistics procedure. While it is not as focused on hypothesis testing, it contains many additional descriptive statistics, including minimum, maximum, range, counts, trimmed means, sums, mode, variance, skewness, kurtosis, coefficient of variation, coefficient of dispersion, percentiles, additional normality tests, and a stem-and-leaf plot.

Research Questions
For the one-sample situation, the typical concern in research is examining a measure of central tendency (location) for the population of interest. The best-known measures of location are the mean and median. For a one-sample situation, we might want to know if the average waiting time in a doctor's office is greater than one hour, if the average refund on a 1040 tax return is different from $500, if the average assessment for similar residential properties is less than $120,000, or if the average growth of roses is 4 inches or more after two weeks of treatment with a certain fertilizer.

One early concern should be whether the data are normally distributed. If normality can safely be assumed, then the one-sample t-test is the best choice for assessing whether the measure of central tendency, the mean, is different from a hypothesized value. On the other hand, if normality is not valid, one of the nonparametric tests, such as the Wilcoxon signed-rank test or the quantile test, can be applied.
Technical Details
The technical details and formulas for the methods of this procedure are presented in line with the Example 1 output. The output and technical details are presented in the following order:

Descriptive Statistics
Confidence Interval of μ with σ Unknown
Confidence Interval of μ with σ Known
Confidence Interval of the Median
Bootstrap Confidence Intervals
Confidence Interval of σ
T-Test and associated power report
Z-Test
Randomization Test
Quantile (Sign) Test
Wilcoxon Signed-Rank Test
Tests of Assumptions
Graphs

Data Structure
For this procedure, the data are entered as a single column and specified as a response variable. Multiple columns can be analyzed individually during the same run if multiple response variables are specified. In the example below, the column is labeled Weight.

Null and Alternative Hypotheses
The basic null hypothesis is that the population mean is equal to a hypothesized value,

    H0: μ = Hypothesized Value

with three common alternative hypotheses,

    Ha: μ ≠ Hypothesized Value,
    Ha: μ < Hypothesized Value, or
    Ha: μ > Hypothesized Value,

one of which is chosen according to the nature of the experiment or study.
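These hypotheses can also be examined outside NCSS. The following is a minimal sketch using a hypothetical sample and SciPy's `ttest_1samp`, which accepts the same three alternative directions; the data values are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical sample of weights; H0: mu = 130
data = np.array([128.0, 135.0, 121.0, 142.0, 126.0,
                 151.0, 130.0, 125.0, 117.0, 138.0])

# The t-statistic from the report's formula: T = (x-bar - H0) / (s / sqrt(n))
se = data.std(ddof=1) / np.sqrt(len(data))
t_manual = (data.mean() - 130.0) / se

# The same test under each alternative hypothesis
for alt in ("two-sided", "less", "greater"):
    t, p = stats.ttest_1samp(data, popmean=130.0, alternative=alt)
    print(f"Ha ({alt}): t = {t:.3f}, p = {p:.4f}")
```

The t-statistic is identical for all three alternatives; only the p-value changes with the direction chosen.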
Assumptions
This section describes the assumptions that are made when you use one of these tests. The key assumption relates to normality or nonnormality of the data. One of the reasons for the popularity of the t-test is its robustness in the face of assumption violation. However, if an assumption is not met even approximately, the significance levels and the power of the t-test are invalidated. Unfortunately, in practice it often happens that more than one assumption is not met. Hence, take steps to check the assumptions before you make important decisions based on these tests. There are reports in this procedure that permit you to examine the assumptions, both visually and through assumption tests.

One-Sample T-Test Assumptions
The assumptions of the one-sample t-test are:
1. The data are continuous (not discrete).
2. The data follow the normal probability distribution.
3. The sample is a simple random sample from its population. Each individual in the population has an equal probability of being selected in the sample.

Wilcoxon Signed-Rank Test Assumptions
The assumptions of the Wilcoxon signed-rank test are as follows:
1. The data are continuous (not discrete).
2. The distribution of the data is symmetric.
3. The data are mutually independent.
4. The data all have the same median.
5. The measurement scale is at least interval.

Quantile Test Assumptions
The assumptions of the quantile (sign) test are:
1. A random sample has been taken, resulting in observations that are independent and identically distributed.
2. The measurement scale is at least ordinal.

Procedure Options
This section describes the options available in this procedure.

Variables Tab
This option specifies the variables that will be used in the analysis.

Response Variable(s)
Specify one or more columns for one-sample analysis. A separate one-sample report will be output for each column listed.
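The normality assumption discussed above can also be screened programmatically before choosing between the t-test and a nonparametric alternative. This is a rough sketch using a hypothetical (simulated) sample and SciPy's Shapiro-Wilk test, not NCSS's own Tests of Assumptions report:

```python
import numpy as np
from scipy import stats

# Hypothetical sample: simulated weights for illustration only
rng = np.random.default_rng(7)
data = rng.normal(loc=130.0, scale=25.0, size=30)

# Shapiro-Wilk test of the normality assumption
w, p = stats.shapiro(data)
print(f"Shapiro-Wilk: W = {w:.4f}, p = {p:.4f}")
if p < 0.05:
    print("Normality rejected: consider the Wilcoxon signed-rank or quantile (sign) test.")
else:
    print("No strong evidence against normality: the one-sample t-test is reasonable.")
```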
Reports Tab
The options on this panel specify which reports will be included in the output.

Descriptive Statistics
This section reports the count, mean, standard deviation, standard error, and median for the specified variable.

Confidence Intervals

Confidence Level
This confidence level is used for the unknown-σ and known-σ confidence intervals of the mean, as well as for the confidence intervals of the median and standard deviation. Typical confidence levels are 90%, 95%, and 99%, with 95% being the most common.

Limits
Specify whether a two-sided or one-sided confidence interval of the mean or standard deviation is to be reported. Confidence intervals of the median and bootstrap confidence intervals are always two-sided in these reports.

Two-Sided
For this selection, the lower and upper limits of μ are reported, giving a confidence interval of the form (Lower Limit, Upper Limit).

One-Sided Upper
For this selection, only an upper limit of μ is reported, giving a confidence interval of the form (-∞, Upper Limit).

One-Sided Lower
For this selection, only a lower limit of μ is reported, giving a confidence interval of the form (Lower Limit, ∞).

Confidence Interval of μ with σ Unknown
This section gives the lower and upper limits of the confidence interval for the population mean, based on the T distribution with n - 1 degrees of freedom. This is the interval more commonly used in practice because the true standard deviation is rarely known.

Confidence Interval of μ with σ Known
This section gives the lower and upper limits of the confidence interval for the population mean, based on the Z distribution. The known standard deviation σ must be entered directly. This confidence interval is rarely used in practice because the true standard deviation is rarely known.

Known Standard Deviation σ
This is the known population standard deviation for the corresponding confidence interval.
Confidence Interval of the Median
This section reports the median and the lower and upper confidence interval limits for the median.

Bootstrap Confidence Intervals
This section provides confidence intervals of desired levels for the mean. A bootstrap distribution histogram may be specified on the Plots tab.
Bootstrap Confidence Levels
These are the confidence levels of the bootstrap confidence intervals. All values must be between 50 and 100. You may enter several values, separated by blanks or commas. A separate confidence interval is generated for each value entered. Examples: 90 99

Samples (N)
This is the number of bootstrap samples used. A general rule of thumb is to use at least 100 when standard errors are the focus, or 1000 when confidence intervals are your focus. With current computer speeds, 10,000 or more samples can usually be run in a short time.

Retries
If the results from a bootstrap sample cannot be calculated, the sample is discarded and a new sample is drawn in its place. It is possible that a sample cannot be bootstrapped at all; this parameter specifies how many alternative bootstrap samples are drawn before the algorithm gives up. The range is 1 to 1000, with 50 as a recommended value.

Bootstrap Confidence Interval Method
This option specifies the method used to calculate the bootstrap confidence intervals.

Percentile
The confidence limits are the corresponding percentiles of the bootstrap values.

Reflection
The confidence limits are formed by reflecting the percentile limits. If X0 is the original value of the parameter estimate and XL and XU are the percentile confidence limits, the Reflection interval is (2·X0 - XU, 2·X0 - XL).

Percentile Type
This option specifies which of five different methods is used to calculate the percentiles. More details of these five types are offered in the Descriptive Statistics chapter.

Confidence Interval of σ
This section provides one- or two-sided confidence limits for σ. These limits rely heavily on the assumption of normality of the population from which the sample is taken.

Tests

Alpha
Alpha is the significance level used in the hypothesis tests.
A value of 0.05 is most commonly used, but 0.1, 0.025, 0.01, and other values are sometimes used.

H0 Value
This is the hypothesized value of the population mean under the null hypothesis. This is the mean to which the sample mean will be compared.
Alternative Hypothesis Direction
Specify the direction of the alternative hypothesis. This direction applies to the t-test, power report, z-test, and Wilcoxon signed-rank test. The Randomization test and Quantile (Sign) test results given in this procedure are always two-sided tests. The selection 'Two-Sided and One-Sided' will produce all three tests for each test selected.

Tests - Parametric

T-Test
This report provides the results of the common one-sample T-Test.

Power Report for T-Test
This report gives the power of the one-sample T-Test when it is assumed that the population mean and standard deviation are equal to the sample mean and standard deviation.

Z-Test
This report gives the Z-Test for comparing a sample mean to a hypothesized mean assuming a known standard deviation. This test is rarely used in practice because the population standard deviation is rarely known.

Known Standard Deviation σ
This is the known population standard deviation used in the corresponding Z-Test.

Tests - Nonparametric

Randomization Test
A randomization test is conducted by first determining the signs of all the values relative to the null hypothesized mean, that is, the signs of the values after subtracting the null hypothesized mean. Then all possible permutations of the signs are enumerated. Each of these permutations is assigned to the absolute values of the original subtracted values in their original order, and the corresponding t-statistic is calculated. The original t-statistic, based on the original data, is compared to each of the permutation t-statistics, and the total count of permutation t-statistics more extreme than the original t-statistic is determined. Dividing this count by the number of permutations tried gives the significance level of the test.

For even moderate sample sizes, the total number of permutations is in the trillions, so a Monte Carlo approach is used, in which the permutations are found by random selection rather than complete enumeration.
Edgington suggests that at least 1,000 permutations be selected. We suggest that this be increased to 10,000.

Monte Carlo Samples
Specify the number of Monte Carlo samples used when conducting randomization tests. You also need to check the Randomization Test box under the Variables tab to run this test. Although the default is 1000, we suggest the use of 10,000 samples when running this test.

Quantile (Sign) Test
The quantile (sign) test tests whether the null hypothesized quantile value is the quantile for the quantile test proportion. Each value in the sample is compared to the null hypothesized quantile value to determine whether it is larger or smaller. Values equal to the null hypothesized value are removed. The binomial distribution based on the Quantile Test Proportion is used to determine the probabilities, in each direction, of obtaining such a sample or one more extreme if the null quantile is the true one. These probabilities are the p-values for the one-sided tests in each direction. The two-sided p-value is twice the smaller of the two one-sided p-values.
When the quantile of interest is the median (the quantile test proportion is 0.5), this quantile test is called the sign test.

Quantile Test Proportion
This is the value of the binomial proportion used in the Quantile test. The quantile (sign) test tests whether the null hypothesized quantile value is the quantile for this proportion. A value of 0.5 results in the Sign Test.

Wilcoxon Signed-Rank Test
This nonparametric test makes use of the sign and the magnitude of the rank of the differences (original data minus the hypothesized value). It is one of the most commonly used nonparametric alternatives to the one-sample t-test.

Report Options Tab
The options on this panel control the label and decimal options of the report.

Report Options

Variable Names
This option lets you select whether to display only variable names, variable labels, or both.

Decimal Places

Decimals
This option allows the user to specify the number of decimal places directly or based on the significant digits. These decimals are used for all output except the nonparametric tests and bootstrap results, where the decimal places are defined internally. If one of the significant digits options is used, the ending zero digits are not shown. The output formatting system is not designed to accommodate 'Significant Digits (Up to 13)', and if chosen, this will likely lead to lines that run on to a second line. This option is included, however, for the rare case when a very large number of decimals is needed.

Plots Tab
The options on this panel control the inclusion and appearance of the plots.

Select Plots

Histogram and Average-Difference Plot
Check the boxes to display these plots. Click the plot format button to change the plot settings.

Bootstrap Confidence Interval Histograms
This plot requires that the bootstrap confidence intervals report has been selected.
It gives the plot of the bootstrap distribution used in creating the bootstrap confidence interval.
Example 1 - Running a One-Sample Analysis
This section presents an example of how to run a one-sample analysis. The data are the Weight data found in the Weight dataset, under the column labeled Weight. A researcher wishes to test whether the mean weight of the population of interest is different from 130. You may follow along here by making the appropriate entries or load the completed template Example 1 by clicking on Open Example Template from the File menu of the window.

1 Open the Weight dataset.
- From the File menu of the NCSS Data window, select Open Example Data.
- Click on the file Weight.NCSS.
- Click Open.

2 Open the One Mean Inference window.
- Using the Analysis menu or the Procedure Navigator, find and select the procedure.
- On the menus, select File, then New Template. This will fill the procedure with the default template.

3 Specify the variables.
- On the One Mean Inference window, select the Variables tab. (This is the default.)
- Double-click in the Response Variable(s) text box. This will bring up the variable selection window.
- Select Weight from the list of variables and then click Ok. Weight will appear in the Response Variables box.

4 Specify the reports.
- On the One Mean Inference window, select the Reports tab.
- Make sure the Descriptive Statistics check box is checked.
- Leave the Confidence Level at 95%.
- Leave the Confidence Interval Limits at Two-Sided.
- Make sure the Confidence Interval of µ with σ Unknown check box is checked.
- Make sure the Confidence Interval of µ with σ Known check box is checked. Enter 40 for σ.
- Check the Confidence Interval of the Median check box.
- Check Bootstrap Confidence Intervals. Leave the sub-options at the default values.
- Check the Confidence Interval of σ check box.
- Leave Alpha at 0.05.
- Enter 130 for the H0 Value.
- For Ha, select Two-Sided and One-Sided. Usually a single alternative hypothesis is chosen, but all three alternatives are shown in this example to exhibit all the reporting options.
- Make sure the T-Test check box is checked.
- Check the Power Report for T-Test check box.
- Check the Z-Test check box. Enter 40 for σ.
- Check the Randomization Test check box. Leave the Monte Carlo Samples at the default of 1000.
- Check the Quantile (Sign) Test check box. Leave the Quantile Test Proportion at 0.5.
- Check the Wilcoxon Signed-Rank Test check box.
- Make sure the Test of Assumptions check box is checked. Leave the Assumptions Alpha at the default value.

5 Specify the plots.
- On the One Mean Inference window, select the Plots tab.
- Make sure all the plot check boxes are checked.

6 Run the procedure.
- From the Run menu, select Run Procedure. Alternatively, just click the green Run button.
The following reports and charts will be displayed in the Output window.

Descriptive Statistics Section

Variable    Count    Mean    Standard Deviation    Standard Error    Median
Weight

Variable
The name of the variable for which the descriptive statistics are listed.

Count
This is the number of non-missing values.

Mean
This is the average of the data values.

    x̄ = (1/n) Σᵢ₌₁ⁿ xᵢ

Standard Deviation
The sample standard deviation is the square root of the sample variance. It is a measure of dispersion based on squared distances from the mean.

    s = √[ Σᵢ₌₁ⁿ (xᵢ - x̄)² / (n - 1) ]

Standard Error
This is the estimated standard deviation of the distribution of sample means for an infinite population.

    SEx̄ = s / √n

Median
The median is the 50th percentile of the data, using the AveXp(n+1) method. The details of this method are described in the Descriptive Statistics chapter under Percentile Type.

Two-Sided Confidence Interval of µ with σ Unknown Section

95.0% C.I. of μ
Variable    Count    Mean    Standard Deviation    Standard Error    T*    d.f.    Lower Limit    Upper Limit
Weight

Variable
This is the spreadsheet column variable.

Count
This is the number of non-missing values.

Mean
This is the average of the data values.
Standard Deviation
The sample standard deviation is the square root of the sample variance. It is a measure of dispersion based on squared distances from the mean.

Standard Error
This is the estimated standard deviation of the distribution of sample means for an infinite population.

T*
This is the t-value used to construct the confidence limits. It is based on the degrees of freedom and the confidence level.

d.f.
The degrees of freedom are used to determine the T distribution from which T* is generated:

    d.f. = n - 1

Lower and Upper Confidence Limits
These are the limits of the confidence interval for μ. The confidence interval formula is

    x̄ ± T* · SEx̄

Two-Sided Confidence Interval of µ with σ Known Section

95.0% C.I. of μ
Variable    Count    Mean    σ    Standard Error    Z*    Lower Limit    Upper Limit
Weight

Variable
This is the spreadsheet column variable.

Count
This is the number of non-missing values.

Mean
This is the average of the data values.

Known Standard Deviation σ
This is the known, pre-specified value for the population standard deviation.

Standard Error
This is the standard deviation of the distribution of sample means.

    SEx̄ = σ / √n

Z*
This is the z-value used to construct the confidence limits. It is based on the standard normal distribution according to the specified confidence level.
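Both interval types, σ unknown (T-based) and σ known (Z-based), can be sketched directly. The data values below are hypothetical, and the known σ of 40 mirrors the entry made in the example setup:

```python
import numpy as np
from scipy import stats

# Hypothetical sample
data = np.array([128.0, 135.0, 121.0, 142.0, 126.0,
                 151.0, 130.0, 125.0, 117.0, 138.0])
n, conf = len(data), 0.95
mean = data.mean()

# sigma unknown: T* from the t distribution with n - 1 degrees of freedom
se_t = data.std(ddof=1) / np.sqrt(n)
t_star = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
ci_unknown = (mean - t_star * se_t, mean + t_star * se_t)

# sigma known (assumed to be 40 here): Z* from the standard normal distribution
sigma = 40.0
se_z = sigma / np.sqrt(n)
z_star = stats.norm.ppf(1 - (1 - conf) / 2)
ci_known = (mean - z_star * se_z, mean + z_star * se_z)

print("CI, sigma unknown:", ci_unknown)
print("CI, sigma known:  ", ci_known)
```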
Lower and Upper Confidence Limits
These are the limits of the confidence interval for μ. The confidence interval formula is

    x̄ ± Z* · SEx̄

Two-Sided Confidence Interval of the Median Section

95.0% C.I. of the Median
Variable    Count    Median    Lower Limit    Upper Limit
Weight

Variable
This is the spreadsheet column variable.

Count
This is the number of non-missing values.

Median
The median is the 50th percentile of the data, using the AveXp(n+1) method. The details of this method are described in the Descriptive Statistics chapter under Percentile Type.

Lower Limit and Upper Limit
These are the lower and upper confidence limits of the median. These limits are exact and make no distributional assumptions other than a continuous distribution. No limits are reported if the algorithm for this interval is not able to find a solution. This may occur if the number of unique values is small.

Bootstrap Section
Bootstrapping was developed to provide standard errors and confidence intervals in situations in which the standard assumptions are not valid. In these nonstandard situations, bootstrapping is a viable alternative to the corrective action suggested earlier. The method is simple in concept, but it requires extensive computation time.

The bootstrap is simple to describe. You assume that your sample is actually the population, and you draw B samples (B is over 1000) of size N from the original dataset, with replacement. "With replacement" means that each observation may be selected more than once. For each bootstrap sample, the mean is computed and stored.

Suppose that you want the standard error and a confidence interval of the mean. The bootstrap sampling process has provided B estimates of the mean. The standard deviation of these B means is the bootstrap estimate of the standard error of the mean. The bootstrap confidence interval is found by arranging the B values in sorted order and selecting the appropriate percentiles from the list.
For example, a 90% bootstrap confidence interval for the mean is given by the fifth and ninety-fifth percentiles of the bootstrap mean values.

The main assumption made when using the bootstrap method is that your sample approximates the population fairly well. Because of this assumption, bootstrapping does not work well for small samples, in which there is little likelihood that the sample is representative of the population. Bootstrapping should only be used in medium to large samples.
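The percentile and reflection intervals described above can be sketched as follows. The function name and data are illustrative, not NCSS's implementation:

```python
import numpy as np

def bootstrap_ci_mean(data, conf=0.95, n_boot=10_000, method="percentile", seed=0):
    """Bootstrap CI for the mean using the percentile or reflection method."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    # Resample with replacement B times; store the mean of each bootstrap sample
    boots = np.array([rng.choice(data, size=len(data), replace=True).mean()
                      for _ in range(n_boot)])
    alpha = 1 - conf
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    if method == "reflection":
        x0 = data.mean()                     # original estimate X0
        lo, hi = 2 * x0 - hi, 2 * x0 - lo    # (2*X0 - XU, 2*X0 - XL)
    return lo, hi
```

The standard deviation of `boots` would likewise serve as the bootstrap estimate of the standard error of the mean.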
Estimation Results

                                    Bootstrap Confidence Limits
Parameter           Estimate    Conf. Level    Lower    Upper
Mean
  Original Value
  Bootstrap Mean
  Bias (BM - OV)
  Bias Corrected
  Standard Error

Confidence Limit Type = Reflection, Number of Samples = 3000

This report provides bootstrap confidence intervals of the mean. Note that since these results are based on 3000 random bootstrap samples, they will differ slightly from the results you obtain when you run this report.

Original Value
This is the parameter estimate obtained from the complete sample without bootstrapping.

Bootstrap Mean
This is the average of the parameter estimates of the bootstrap samples.

Bias (BM - OV)
This is an estimate of the bias in the original estimate. It is computed by subtracting the original value from the bootstrap mean.

Bias Corrected
This is an estimate of the parameter that has been corrected for its bias. The correction is made by subtracting the estimated bias from the original parameter estimate.

Standard Error
This is the bootstrap method's estimate of the standard error of the parameter estimate. It is simply the standard deviation of the parameter estimate computed from the bootstrap estimates.

Conf. Level
This is the confidence coefficient of the bootstrap confidence interval given to the right.

Bootstrap Confidence Limits - Lower and Upper
These are the limits of the bootstrap confidence interval with the confidence coefficient given to the left. These limits are computed using the confidence interval method (percentile or reflection) designated on the Bootstrap panel. Note that to be accurate, these intervals must be based on over a thousand bootstrap samples, and the original sample must be representative of the population.
Bootstrap Histogram
The histogram shows the distribution of the bootstrap parameter estimates.

Two-Sided Confidence Interval of σ Section

95.0% C.I. of σ
Variable    Count    Standard Deviation    Lower Limit    Upper Limit
Weight

Variable
This is the spreadsheet column variable.

Count
This is the number of non-missing values.

Standard Deviation
The sample standard deviation is the square root of the sample variance. It is a measure of dispersion based on squared distances from the mean.

    s = √[ Σᵢ₌₁ⁿ (xᵢ - x̄)² / (n - 1) ]

Lower Limit and Upper Limit
This report gives the limits of the confidence interval for the population standard deviation. The validity of these intervals is very dependent on the assumption that the data are sampled from a normal distribution. The formula for the confidence interval of the standard deviation is

    √[ (n - 1)s² / χ²(1 - α/2, n - 1) ] ≤ σ ≤ √[ (n - 1)s² / χ²(α/2, n - 1) ]

where α is one minus the confidence level.

T-Test Section
This section presents the results of the traditional one-sample T-test. Here, reports for all three alternative hypotheses are shown, but a researcher would typically choose one of the three before generating the output. All three tests are shown here for the purpose of exhibiting all the output options available.

Variable: Weight

Alternative Hypothesis    Mean    Standard Error    T-Statistic    d.f.    Prob Level    Reject H0 at α = 0.050
μ ≠ 130                                                                                  No
μ < 130                                                                                  No
μ > 130                                                                                  No

Alternative Hypothesis
The (unreported) null hypothesis is

    H0: μ = Hypothesized Value
and the alternative hypotheses,

    Ha: μ ≠ Hypothesized Value,
    Ha: μ < Hypothesized Value, or
    Ha: μ > Hypothesized Value.

In practice, the alternative hypothesis should be chosen in advance.

Mean
This is the average of the data values.

Standard Error
This is the estimated standard deviation of the distribution of sample means for an infinite population.

    SEx̄ = s / √n

T-Statistic
The T-Statistic is the value used to produce the p-value (Prob Level) based on the T distribution. The formula for the T-Statistic is

    T-Statistic = (x̄ - Hypothesized Value) / SEx̄

d.f.
The degrees of freedom define the T distribution upon which the probability values are based. The formula for the degrees of freedom in the one-sample T-test is

    d.f. = n - 1

Prob Level
The probability level, also known as the p-value or significance level, is the probability that the test statistic will take a value at least as extreme as the observed value, assuming that the null hypothesis is true. If the p-value is less than the prescribed α, in this case 0.05, the null hypothesis is rejected in favor of the alternative hypothesis. Otherwise, there is not sufficient evidence to reject the null hypothesis.

Reject H0 at α = (0.050)
This column indicates whether or not the null hypothesis is rejected, in favor of the alternative hypothesis, based on the p-value and chosen α. A test in which the null hypothesis is rejected is sometimes called significant.

Power for the T-Test
The power report gives the power of a test where it is assumed that the population mean and standard deviation are equal to the sample mean and standard deviation. Powers are given for alpha values of 0.05 and 0.01. For a much more comprehensive and flexible investigation of power or sample size, we recommend you use the PASS software program.
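Under the assumption stated above (population mean and standard deviation taken equal to the sample values), the power computation can be sketched with the noncentral t distribution. This is a sketch, not NCSS's exact algorithm; the function name is illustrative and SciPy's `nct` supplies the noncentral t:

```python
import numpy as np
from scipy import stats

def one_sample_t_power(mu, sigma, h0, n, alpha=0.05, direction="two-sided"):
    """Power of the one-sample t-test, treating mu and sigma as the true values."""
    df = n - 1
    nc = (mu - h0) / (sigma / np.sqrt(n))   # noncentrality parameter
    if direction == "two-sided":
        tcrit = stats.t.ppf(1 - alpha / 2, df)
        return stats.nct.sf(tcrit, df, nc) + stats.nct.cdf(-tcrit, df, nc)
    if direction == "greater":
        tcrit = stats.t.ppf(1 - alpha, df)
        return stats.nct.sf(tcrit, df, nc)
    tcrit = stats.t.ppf(1 - alpha, df)      # direction == "less"
    return stats.nct.cdf(-tcrit, df, nc)
```

As the report notes, power rises with a larger alpha, a larger sample size, or a smaller standard deviation, which this sketch reproduces.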
Power for the T-Test

Variable: Weight
This section assumes the population mean and standard deviation are equal to the sample values.

Alternative Hypothesis    N    μ    σ    Power (α = 0.05)    Power (α = 0.01)
μ ≠ 130
μ < 130
μ > 130
Alternative Hypothesis
This value identifies the direction of the test reported in this row. In practice, you would select the alternative hypothesis prior to your analysis and have only one row showing here.

N
N is the assumed sample size.

µ
This is the assumed population mean on which the power calculation is based.

σ
This is the assumed population standard deviation on which the power calculation is based.

Power (α = 0.05) and Power (α = 0.01)
Power is the probability of rejecting the hypothesis that the mean is equal to the hypothesized value when they are in fact not equal. Power is one minus the probability of a type II error (β). The power of the test depends on the sample size, the magnitude of the standard deviation, the alpha level, and the true difference between the population mean and the hypothesized value.

The power value calculated here assumes that the population standard deviation is equal to the sample standard deviation and that the difference between the population mean and the hypothesized value is exactly equal to the difference between the sample mean and the hypothesized value.

High power is desirable. High power means that there is a high probability of rejecting the null hypothesis when the null hypothesis is false. Some ways to increase the power of a test include the following:
1. Increase the alpha level. Perhaps you could test at α = 0.05 instead of α = 0.01.
2. Increase the sample size.
3. Decrease the magnitude of the standard deviations. Perhaps the study can be redesigned so that measurements are more precise and extraneous sources of variation are removed.

Z-Test Section
This section presents the results of the traditional one-sample Z-test, which can be used when the population standard deviation is known. Because the population standard deviation is rarely known, this test is not commonly used in practice.
Here, reports for all three alternative hypotheses are shown, but a researcher would typically choose one of the three before generating the output. All three tests are shown here for the purpose of exhibiting all the output options available.

One-Sample Z-Test

Variable: Weight

Alternative Hypothesis    Mean    σ    Standard Error    Z-Statistic    Prob Level    Reject H0 at α = 0.050
μ ≠ 130                                                                              No
μ < 130                                                                              No
μ > 130                                                                              No

Alternative Hypothesis
The (unreported) null hypothesis is

    H0: μ = Hypothesized Value
and the alternative hypotheses,

    Ha: μ ≠ Hypothesized Value,
    Ha: μ < Hypothesized Value, or
    Ha: μ > Hypothesized Value.

In practice, the alternative hypothesis should be chosen in advance.

Mean
This is the average of the data values.

Known Standard Deviation σ
This is the known, pre-specified value for the population standard deviation.

Standard Error
This is the standard deviation of the distribution of sample means.

    SEx̄ = σ / √n

Z-Statistic
The Z-Statistic is the value used to produce the p-value (Prob Level) based on the standard normal Z distribution. The formula for the Z-Statistic is

    Z-Statistic = (x̄ - Hypothesized Value) / SEx̄

Prob Level
The probability level, also known as the p-value or significance level, is the probability that the test statistic will take a value at least as extreme as the observed value, assuming that the null hypothesis is true. If the p-value is less than the prescribed α, in this case 0.05, the null hypothesis is rejected in favor of the alternative hypothesis. Otherwise, there is not sufficient evidence to reject the null hypothesis.

Reject H0 at α = (0.050)
This column indicates whether or not the null hypothesis is rejected, in favor of the alternative hypothesis, based on the p-value and chosen α. A test in which the null hypothesis is rejected is sometimes called significant.

Randomization Test Section
A randomization test is conducted by first determining the signs of all the values relative to the null hypothesized mean, that is, the signs of the values after subtracting the null hypothesized mean. Then all possible permutations of the signs are enumerated. Each of these permutations is assigned to the absolute values of the original subtracted values in their original order, and the corresponding t-statistic is calculated.
The original t-statistic, based on the original data, is compared to each of the permutation t-statistics, and the total count of permutation t-statistics more extreme than the original t-statistic is determined. Dividing this count by the number of permutations tried gives the significance level of the test. For even moderate sample sizes, the total number of permutations is in the trillions, so a Monte Carlo approach is used, in which the permutations are found by random selection rather than complete enumeration. Edgington suggests that at least 1,000 permutations be selected. We suggest that this be increased to 10,000.
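As an illustrative sketch (not NCSS's internal code), the Monte Carlo version of this procedure can be written in a few lines of Python. The function name and defaults are our own; the two-sided alternative and the 10,000-sample suggestion above are assumed.

```python
import random
import statistics

def randomization_test(data, h0_mean, n_monte_carlo=10000, seed=0):
    """Monte Carlo randomization (sign-flip) test for the location of a
    single sample. The null-hypothesized mean is subtracted, the signs of
    the differences are flipped at random, and the t-statistic is
    recomputed for each permutation; the two-sided p-value is the
    fraction of permutation t-statistics at least as extreme as the
    observed one."""
    diffs = [x - h0_mean for x in data]

    def t_stat(values):
        # Ordinary one-sample t-statistic against zero
        return statistics.mean(values) / (statistics.stdev(values) / len(values) ** 0.5)

    observed = abs(t_stat(diffs))
    rng = random.Random(seed)
    count = 0
    for _ in range(n_monte_carlo):
        flipped = [d if rng.random() < 0.5 else -d for d in diffs]
        if abs(t_stat(flipped)) >= observed:
            count += 1
    return count / n_monte_carlo
```

Fixing the seed makes the Monte Carlo result reproducible; with 10,000 samples the estimated p-value typically stabilizes to two or three decimal places.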
Randomization Test (Two-Sided)
Alternative Hypothesis: The distribution center is not 130.
Variable: Weight
Number of Monte Carlo samples:

                              Prob      Reject H0
Test                          Level     at α = 0.050
Randomization Test                      No

Prob Level
The probability level, also known as the p-value or significance level, is the probability that the test statistic will take a value at least as extreme as the observed value, assuming that the null hypothesis is true. If the p-value is less than the prescribed α, in this case 0.05, the null hypothesis is rejected in favor of the alternative hypothesis. Otherwise, there is not sufficient evidence to reject the null hypothesis.

Reject H0 at α = (0.050)
This column indicates whether or not the null hypothesis is rejected, in favor of the alternative hypothesis, based on the p-value and chosen α. A test in which the null hypothesis is rejected is sometimes called significant.

Quantile (Sign) Test Section
The quantile (sign) test is one of the older nonparametric procedures. Each value in the sample is compared to the null hypothesized quantile value to determine whether it is larger or smaller. Values equal to the null hypothesized value are removed. The binomial distribution based on the quantile test proportion is used to determine the probability, in each direction, of obtaining such a sample (or one more extreme) if the null quantile is the true one. These probabilities are the p-values for the one-sided tests in each direction. The two-sided p-value is twice the smaller of the two one-sided p-values. When the quantile of interest is the median (the quantile test proportion is 0.5), the quantile test is called the sign test. While the quantile (sign) test is simple, there are more powerful nonparametric alternatives, such as the Wilcoxon signed-rank test.
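The binomial computation just described is simple enough to sketch directly. This hypothetical quantile_test helper (the name is ours, not NCSS's) returns the counts and all three p-values:

```python
from math import comb

def quantile_test(data, null_quantile, proportion=0.5):
    """Quantile (sign) test. With proportion=0.5 this is the sign test.

    Values equal to the null quantile are dropped; the binomial
    distribution gives a one-sided p-value in each direction, and the
    two-sided p-value is twice the smaller of the two (capped at 1)."""
    lower = sum(1 for x in data if x < null_quantile)
    higher = sum(1 for x in data if x > null_quantile)
    n = lower + higher

    def binom_cdf(k, n, p):
        # P(X <= k) for X ~ Binomial(n, p)
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

    # H1: Q < Q0 -- evidence is few values above the null quantile
    p_less = binom_cdf(higher, n, 1 - proportion)
    # H1: Q > Q0 -- evidence is few values below the null quantile
    p_greater = binom_cdf(lower, n, proportion)
    p_two_sided = min(1.0, 2 * min(p_less, p_greater))
    return lower, higher, p_less, p_greater, p_two_sided
```

For example, with all five values above a null median, the one-sided p-value for Q > Q0 is (1/2)^5 = 0.03125.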
However, if the underlying distribution of the variable is double exponential, the sign test may be the better choice.

Quantile (Sign) Test
This quantile test is equivalent to the sign test when the Quantile Proportion is 0.5.
Variable: Weight

Null Quantile   Quantile     Number   Number   H1: Q ≠ Q0   H1: Q < Q0   H1: Q > Q0
(Q0)            Proportion   Lower    Higher   Prob Level   Prob Level   Prob Level

Null Quantile (Q0)
Under the null hypothesis, the proportion of all values below the null quantile is the quantile proportion. For the sign test, the null quantile is the null median.

Quantile Proportion
Under the null hypothesis, the quantile proportion is the proportion of all values below the null quantile. For the sign test, this proportion is 0.5.

Number Lower
This is the actual number of values that are below the null quantile.
Number Higher
This is the actual number of values that are above the null quantile.

H1: Q ≠ Q0 Prob Level
This is the two-sided p-value for testing whether the true quantile, for the stated quantile proportion, is equal to the stated null quantile (Q0), given the observed values. A small p-value indicates that the true quantile for the stated quantile proportion is different from the null quantile.

H1: Q < Q0 Prob Level
This is the one-sided p-value for testing against the alternative that the true quantile is less than the stated null quantile (Q0). A small p-value indicates that the true quantile for the stated quantile proportion is less than the null quantile.

H1: Q > Q0 Prob Level
This is the one-sided p-value for testing against the alternative that the true quantile is greater than the stated null quantile (Q0). A small p-value indicates that the true quantile for the stated quantile proportion is greater than the null quantile.

Wilcoxon Signed-Rank Test Section
This nonparametric test makes use of both the sign and the magnitude of the rank of the differences (original data minus the hypothesized value). It is typically the best nonparametric alternative to the one-sample t-test.

Wilcoxon Signed-Rank Test
Variable: Weight

W            Mean     Std Dev    Number      Number Sets    Multiplicity
Sum Ranks    of W     of W       of Zeros    of Ties        Factor

                      Approximation Without          Approximation With
                      Continuity Correction          Continuity Correction          Exact Probability*
Alternative                 Prob     Reject H0              Prob     Reject H0      Prob     Reject H0
Hypothesis       Z-Value    Level    (α = 0.050)   Z-Value  Level    (α = 0.050)   Level    (α = 0.050)
Median ≠ 130                         No                              No
Median < 130                         No                              No
Median > 130                         No                              No

*Exact probabilities are given only when there are no ties.

Sum Ranks (W)
The basic statistic for this test is the sum of the positive ranks, ΣR₊. (The sum of the positive ranks is chosen arbitrarily; the sum of the negative ranks could equally well be used.) This statistic is called W.
W = ΣR₊

Mean of W
This is the mean of the sampling distribution of the sum of ranks for a sample of n items:

μ_W = [n(n + 1) − d₀(d₀ + 1)] / 4

where d₀ is the number of zero differences.
Std Dev of W
This is the standard deviation of the sampling distribution of the sum of ranks:

s_W = √[ n(n + 1)(2n + 1)/24 − d₀(d₀ + 1)(2d₀ + 1)/24 − Σ(tᵢ³ − tᵢ)/48 ]

where d₀ is the number of zero differences, tᵢ is the number of absolute differences that are tied for a given non-zero rank, and the sum is over all sets of tied ranks.

Number of Zeros
This is the number of times that the difference between the observed value (or difference) and the hypothesized value is zero. The zeros are used in computing ranks, but are not counted among the positive ranks or negative ranks.

Number Sets of Ties
Ties are handled by assigning the average rank to each member of the tied set. This is the number of sets of ties that occur in the data, including ties at zero.

Multiplicity Factor
This is the correction factor that appears in the standard deviation of the sum of ranks when there are ties.

Alternative Hypothesis
For the Wilcoxon signed-rank test, the null and alternative hypotheses relate to the median. In the two-tailed test for the median difference (assuming a hypothesized value of 0), the null hypothesis would be H0: Median = 0 with the alternative being Ha: Median ≠ 0. The left-tailed alternative is represented by Median < 0 (i.e., Ha: Median < 0), while the right-tailed alternative is depicted by Median > 0.

Exact Probability: Prob Level
This is an exact p-value for this statistical test, assuming no ties. The p-value is the probability that the test statistic will take on a value at least as extreme as the actually observed value, assuming that the null hypothesis is true. If the p-value is less than α, say 0.05, the null hypothesis is rejected. If the p-value is greater than α, there is not sufficient evidence to reject the null hypothesis.

Exact Probability: Reject H0 (α = 0.050)
This is the conclusion reached about the null hypothesis: either fail to reject H0 or reject H0 at the assigned level of significance.
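For small samples with no ties and no zeros, the exact p-value can be obtained by complete enumeration of the 2^n possible sign assignments of the ranks. This sketch illustrates the idea only (the function name is ours, not NCSS's), and is practical only for small n:

```python
from itertools import product

def wilcoxon_exact_p(data, h0_value):
    """Exact two-sided p-value for the Wilcoxon signed-rank statistic W,
    assuming no ties and no zeros: enumerate all 2^n sign assignments of
    the ranks and count those whose W is at least as far from the null
    mean as the observed W."""
    diffs = [x - h0_value for x in data]
    n = len(diffs)
    # Rank the absolute differences (no ties assumed, so ranks are 1..n)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    rank = [0] * n
    for r, i in enumerate(order, start=1):
        rank[i] = r
    w_obs = sum(rank[i] for i in range(n) if diffs[i] > 0)
    mu = n * (n + 1) / 4
    count = 0
    for signs in product([0, 1], repeat=n):
        # W for this sign assignment: sum of ranks marked positive
        w = sum(r for s, r in zip(signs, range(1, n + 1)) if s)
        if abs(w - mu) >= abs(w_obs - mu):
            count += 1
    return count / 2 ** n
```

With n = 3 and all differences positive, only the all-positive and all-negative assignments are as extreme as the observed W, giving p = 2/8 = 0.25.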
Approximations with (and without) Continuity Correction: Z-Value
When the sample size is at least ten, a normal approximation may be used for the distribution of the sum of ranks. This approximation corrects for ties; the version without the continuity correction factor uses the z-value

z = (W − μ_W) / s_W

If the correction factor for continuity is used, the formula becomes

z = (W − μ_W ± ½) / s_W
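Putting the W, mean, standard deviation, and continuity-correction formulas together, a two-sided normal-approximation test might be sketched as follows. This is an illustrative sketch, not NCSS code; zeros are kept in the ranking (and then discarded), matching the d₀ corrections in the formulas above.

```python
import math
from collections import Counter

def wilcoxon_signed_rank(data, h0_value):
    """Wilcoxon signed-rank test via the normal approximation with
    continuity correction. Zeros occupy the lowest ranks but count
    toward neither the positive nor the negative rank sum."""
    diffs = [x - h0_value for x in data]
    n = len(diffs)
    d0 = sum(1 for d in diffs if d == 0)
    # Average ranks of the |differences| (zeros included in the ranking)
    abs_sorted = sorted(abs(d) for d in diffs)
    rank_of = {}
    i = 0
    while i < n:
        j = i
        while j < n and abs_sorted[j] == abs_sorted[i]:
            j += 1
        rank_of[abs_sorted[i]] = (i + 1 + j) / 2   # average of ranks i+1..j
        i = j
    w = sum(rank_of[abs(d)] for d in diffs if d > 0)   # sum of positive ranks
    mu_w = (n * (n + 1) - d0 * (d0 + 1)) / 4
    # Tie correction: sum of (t^3 - t) over non-zero tied groups
    tie_term = sum(t**3 - t for t in Counter(abs(d) for d in diffs if d != 0).values())
    var_w = (n * (n + 1) * (2 * n + 1) - d0 * (d0 + 1) * (2 * d0 + 1)) / 24 - tie_term / 48
    s_w = math.sqrt(var_w)
    shift = 0.5 if w != mu_w else 0.0   # continuity correction toward the mean
    z = (w - mu_w - math.copysign(shift, w - mu_w)) / s_w
    p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w, z, p_two_sided
```

The continuity correction always moves W half a unit toward μ_W, so it can only reduce |z|.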
Approximations with (and without) Continuity Correction: Prob Level
This is the p-value for the normal approximation approach to the Wilcoxon signed-rank test. The p-value is the probability that the test statistic will take a value at least as extreme as the actually observed value, assuming that the null hypothesis is true. If the p-value is less than α, say 0.05, the null hypothesis is rejected. If the p-value is greater than α, there is not sufficient evidence to reject the null hypothesis.

Approximations with (and without) Continuity Correction: Reject H0 (α = 0.050)
This is the conclusion reached about whether to reject the null hypothesis. It will be either Yes or No at the given level of significance.

Tests of Assumptions Section

Assumption              Value    Probability    Decision (α = 0.050)
Skewness Normality                              Cannot reject normality
Kurtosis Normality                              Cannot reject normality
Omnibus Normality                               Cannot reject normality

The main assumption when using the t-test is that the data are normally distributed. The normality assumption can be checked statistically by the skewness, kurtosis, or omnibus normality tests, and visually by the histogram or normal probability plot. In the case of non-normality, one of the nonparametric tests may be considered. Sometimes a transformation, such as the natural logarithm or the square root of the original data, can change the underlying distribution from skewed to normal. To evaluate whether the underlying distribution of the variable is normal after the transformation, rerun the normal probability plot on the transformed variable. If some of the data values are negative or zero, it may be necessary to add a constant to the original data prior to the transformation. If the transformation produces a suitable result, the one-sample t-test can then be performed on the transformed data.

Normality (Skewness, Kurtosis, and Omnibus)
These three tests allow you to test the skewness, kurtosis, and overall normality of the data.
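As a rough cross-check of such output, the moment statistics that the skewness and kurtosis tests are built on can be computed directly. This sketch computes only the descriptive quantities, not the test p-values; values near zero are consistent with normality.

```python
def sample_skewness_kurtosis(data):
    """Moment-based sample skewness and excess kurtosis.

    Both are 0 for a perfectly normal population, so large departures
    from 0 in either statistic are evidence against normality."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n   # second central moment
    m3 = sum((x - mean) ** 3 for x in data) / n   # third central moment
    m4 = sum((x - mean) ** 4 for x in data) / n   # fourth central moment
    skewness = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3
    return skewness, excess_kurtosis
```

For a perfectly symmetric sample the skewness is exactly zero, while a flat (uniform-like) sample gives a negative excess kurtosis.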
If any of them rejects the hypothesis of normality, the data should not be considered normal. These tests are discussed in more detail in the Descriptive Statistics chapter.

Graphical Perspectives
Histogram
The histogram provides a general idea of the shape of the distribution. The histogram's appearance can be affected considerably by the choice of bin width, particularly when the sample size is small.

Normal Probability Plot
If the observations fall along a straight line, this indicates that the data follow a normal distribution.

Average-Difference Plot
The average-difference plot is designed to detect a lack of symmetry in the data. This plot is constructed from the original data. Let D(i) represent the ith ordered value. Pairs of these sorted values are considered, with the pairing done by moving toward the middle from either end. That is, consider the pairs D(1) and D(n), D(2) and D(n-1), D(3) and D(n-2), and so on. Plot the average versus the difference of each of these pairs. The plot will have about n/2 points, depending on whether n is odd or even. If the data are symmetric, the average of each pair will be close to the median. Symmetry is an important assumption for some of the available tests. A perfectly symmetric set of data should show a vertical line of points hitting the horizontal axis at the value of the median. Departures from symmetry would deviate from this pattern.

Checklist
This checklist, prepared by a professional statistician, is a flowchart of the steps you should complete to conduct a valid one-sample t-test (or one of its nonparametric counterparts). You should complete these tasks in order.

Step 1 Data Preparation

Introduction
This step involves scanning your data for anomalies, data entry errors, typos, and so on. Frequently we hear of people who completed an analysis with the right techniques but obtained strange conclusions because they had mistakenly selected the wrong data.

Sample Size
The sample size (number of non-missing rows) has a lot of ramifications. The larger the sample size for the one-sample t-test, the better.
Of course, the t-test may be performed on very small samples, say 4 or 5 observations, but it is impossible to assess the validity of the assumptions with such small samples. It is our statistical experience that at least 20 observations are necessary to evaluate normality properly. On the other hand, since skewness can have unpleasant effects on t-tests with small samples, particularly for one-tailed tests, larger sample sizes (30 to 50) may be necessary.

It is possible to have a sample size that is too large for a statistical significance test. When your sample size is very large, you are almost guaranteed to find statistical significance. However, the question that then arises is whether the magnitude of the difference is of practical importance.

Missing Values
The number and pattern of missing values are always issues to consider. Usually, we assume that missing values occur at random throughout your data. If this is not true, your results will be biased, since a particular segment of the population is underrepresented. If you have a lot of missing values, some researchers recommend comparing other variables with respect to missing versus nonmissing. If you find large differences in other variables, you should begin to worry about whether the missing values might cause a systematic bias in your results.

Type of Data
The mathematical basis of the t-test assumes that the data are continuous. Because of the rounding that occurs when data are recorded, all data are technically discrete. The validity of assuming continuity then comes down to determining when there is too much rounding. For example, most statisticians would not worry about human-age data rounded to the nearest year. However, if these data were rounded to the nearest ten years, or further to only three groups (young, adolescent, and adult), most statisticians would question the validity of the probability statements. Some studies have shown that the t-test is reasonably accurate when the data take only five possible values (most would call this discrete data). If your data contain fewer than five unique values, any probability statements made are tenuous.
Outliers
Generally, outliers cause distortion in statistical tests. You must scan your data for outliers (the box plot is an excellent tool for doing this). If you have outliers, you have to decide whether they are one-time occurrences or whether they would occur in another sample. If they are one-time occurrences, you can remove them and proceed. If you know they represent a certain segment of the population, you have to decide between biasing your results (by removing them) and using a nonparametric test that can deal with them. Most would choose the nonparametric test.

Step 2 Setup and Run the Procedure

Introduction
NCSS is designed to be simple to operate, but it requires some learning. When you run a procedure such as this one for the first time, take a few minutes to read through the chapter again and familiarize yourself with the issues involved.

Enter Variables
The NCSS panels are set with ready-to-run defaults. About all you have to do is select the appropriate variable.

Select All Plots
As a rule, you should select all diagnostic plots (histograms, etc.). They add a great deal to your analysis of the data.
Specify Alpha
Most beginners in statistics forget this important step and let the alpha value default to the standard 0.05. You should make a conscious decision as to what value of alpha is appropriate for your study. The 0.05 default came about when people had to rely on printed probability tables in which there were only two values available: 0.05 or 0.01. Now you can set the value to whatever is appropriate.

Step 3 Check Assumptions

Introduction
Once the program output is displayed, you will be tempted to go directly to the probability of the t-test, determine whether you have a significant result, and proceed to something else. However, it is very important that you proceed through the output in an orderly fashion. The first task is to determine which of the assumptions are met by your data.

Sometimes, when the data are non-normal, a data transformation (like square roots or logs) might normalize the data. Frequently, this kind of transformation, or re-expression, approach works very well. However, always check the transformed variable to see if it is normally distributed.

It is not unusual in practice to find a variety of tests being run on the same basic null hypothesis. That is, the researcher who fails to reject the null hypothesis with the first test will sometimes try several others and stop when the hoped-for significance is obtained. For instance, a statistician might run the one-sample t-test on the original data, the one-sample t-test on the logarithmically transformed data, the Wilcoxon signed-rank test, and the quantile test. An article by Gans (1984) suggests that there is no harm to the true significance level if no more than two tests are run. This is not a bad option in the case of questionable outliers. However, as a rule of thumb, it seems more honest to investigate whether the data are normal. The conclusion from that investigation should direct you to the right test.

Random Sample
The validity of this assumption depends on the method used to select the sample.
If the method used ensures that each individual in the population of interest has an equal probability of being selected for the sample, you have a random sample. Unfortunately, you cannot tell whether a sample is random by looking at either it or statistics calculated from it.

Check Descriptive Statistics
You should check the Descriptive Statistics Section first to determine whether the Count and the Mean are reasonable. If you have selected the wrong variable, these values will alert you.

Normality
To validate this assumption, first look at the plots. Outliers will show up on the box plots and the probability plots. Skewness, kurtosis, more than one mode, and a host of other problems will be obvious from the density trace on the histogram. After considering the plots, look at the Tests of Assumptions Section for numerical confirmation of what you see in the plots. Remember that the power of these normality tests is directly related to the sample size, so when the normality assumption is not rejected, double-check that your sample is large enough to give conclusive results (at least 20 observations).

Symmetry
The nonparametric tests require the assumption of symmetry. The easiest ways to evaluate this assumption are the density trace on the histogram and the average-difference plot.
Step 4 Choose the Appropriate Statistical Test

Introduction
After understanding how your data fit the assumptions of the various one-sample tests, you are ready to determine which statistical procedures will be valid. Select one of the following three situations, based on the status of the normality assumption.

Normal Data
Use the T-Test Section for hypothesis testing and the Descriptive Statistics Section for interval estimation.

Nonnormal and Asymmetrical Data
Try a transformation, such as the natural logarithm or the square root, on the original data, since these transformations frequently change the underlying distribution from skewed to normal. If some of the data values are negative or zero, add a constant to the original data prior to the transformation. If the transformed data are now normal, use the T-Test Section for hypothesis testing and the Descriptive Statistics Section for interval estimation.

Nonnormal and Symmetrical Data
Use the Wilcoxon Signed-Rank Test or the Quantile Test for hypothesis testing.

Step 5 Interpret Findings

Introduction
You are now ready to conduct your test. Depending on the nature of your study, you should look at one of the following sections.

Hypothesis Testing
Here you decide whether to use a two-tailed or one-tailed test. The two-tailed test is the standard. If the probability level is less than your chosen alpha level, reject the null hypothesis of equality to the specified mean (or median) and conclude that the mean is different. Your next task is to look at the mean itself to determine whether the size of the difference is of practical interest.

Confidence Limits
The confidence limits let you put bounds on the size of the mean (for one independent sample) or the mean difference (for dependent samples). If these limits are narrow and close to your hypothesized value, you might determine that, even though your results are statistically significant, there is no practical significance.
Step 6 Record Your Results Finally, as you finish a test, take a moment to jot down your impressions. Explain what you did, why you did it, what conclusions you reached, which outliers you deleted, areas for further investigation, and so on. Since this is a technical process, your short-term memory will not retain these details for long. These notes will be worth their weight in gold when you come back to this study a few days later!
Example of Steps
This example illustrates the use of one-sample tests for a single variable. A registration service for a national motel/hotel chain wants the average wait time for incoming calls on Mondays (during normal business hours, 8:00 a.m. to 5:00 p.m.) to be less than 25 seconds. A random sample of 30 calls yielded the results shown below.

Row   AnsTime      Row   AnsTime

Step 1 Data Preparation
This is not paired data, but just a single random sample of one variable. There are no missing values, and the variable is continuous.

Step 2 Setup and Run the Procedure
Select and run the procedure from the Analysis menu on the single variable, AnsTime. The alpha value has been set at 0.05. Interpretation of the results will come in the steps to follow.

Step 3 Check Assumptions
The major assumption to check is normality, and you should begin with the graphical perspectives: normal probability plots, histograms, density traces, and box plots. Some of these plots are given below.
The normal probability plot above does not look straight; it shows some skewness to the right. The histogram confirms the skewness to the right. This type of right skewness turns up quite often when dealing with elapsed-time data. The skewness, kurtosis, and omnibus normality tests in the output below have p-values greater than 0.05, indicating that answer time seems to be normally distributed. This conflict in conclusions between the normal probability plot and the normality tests is probably due to the fact that the sample size is not large enough to assess the normality of the data accurately.

Tests of Assumptions Section
Variable: AnsTime

Assumption              Value    Probability    Decision (α = 0.050)
Skewness Normality                              Cannot reject normality
Kurtosis Normality                              Cannot reject normality
Omnibus Normality                               Cannot reject normality

Step 4 Choose the Appropriate Statistical Test
In Step 3, the conclusions from checking the assumptions were two-fold: (1) the data are continuous, and (2) the answer times are (based on the probability plot) non-normal. As a result of these findings, the appropriate statistical test is the Wilcoxon Signed-Rank test, which is shown in the figure. For comparison purposes, the t-test results are also shown in the output.

Variable: AnsTime

Alternative               Standard                        Prob     Reject H0
Hypothesis     Mean       Error      T-Statistic   d.f.   Level    at α = 0.050
μ < 25                                                             Yes
Wilcoxon Signed-Rank Test
Variable: AnsTime

W            Mean     Std Dev    Number      Number Sets    Multiplicity
Sum Ranks    of W     of W       of Zeros    of Ties        Factor

                      Approximation Without          Approximation With
                      Continuity Correction          Continuity Correction          Exact Probability*
Alternative                 Prob     Reject H0              Prob     Reject H0      Prob     Reject H0
Hypothesis       Z-Value    Level    (α = 0.050)   Z-Value  Level    (α = 0.050)   Level    (α = 0.050)
Median < 25                          Yes                             Yes

*Exact probabilities are given only when there are no ties.

Step 5 Interpret Findings
Since the nonparametric test is more appropriate here, and the concern was whether the average answer time is less than 25 seconds, Median < 25 is the proper alternative hypothesis. The p-value for the Wilcoxon Signed-Rank test is much less than 0.05, so the conclusion of the test is to reject the null hypothesis. This says that the median answer time is significantly less than 25 seconds. It is interesting to note that the p-value for the left-tailed t-test is about the same. This points out the robustness of the t-test in cases of heavy-tailed but almost symmetric distributions.

Step 6 Record Your Results
The conclusion for this example is that the median answer time is less than 25 seconds. Again, if you were troubled by the shape of the distribution, you could use a transformation such as the natural logarithm to make the data more normal and then try the t-test. However, in this case, that seems to be more work than is needed.
Non-Parametric Tests (I)
Lecture 5: Non-Parametric Tests (I) KimHuat LIM [email protected] http://www.stats.ox.ac.uk/~lim/teaching.html Slide 1 5.1 Outline (i) Overview of Distribution-Free Tests (ii) Median Test for Two Independent
Bowerman, O'Connell, Aitken Schermer, & Adcock, Business Statistics in Practice, Canadian edition
Bowerman, O'Connell, Aitken Schermer, & Adcock, Business Statistics in Practice, Canadian edition Online Learning Centre Technology Step-by-Step - Excel Microsoft Excel is a spreadsheet software application
StatCrunch and Nonparametric Statistics
StatCrunch and Nonparametric Statistics You can use StatCrunch to calculate the values of nonparametric statistics. It may not be obvious how to enter the data in StatCrunch for various data sets that
Non-Inferiority Tests for Two Proportions
Chapter 0 Non-Inferiority Tests for Two Proportions Introduction This module provides power analysis and sample size calculation for non-inferiority and superiority tests in twosample designs in which
THE FIRST SET OF EXAMPLES USE SUMMARY DATA... EXAMPLE 7.2, PAGE 227 DESCRIBES A PROBLEM AND A HYPOTHESIS TEST IS PERFORMED IN EXAMPLE 7.
THERE ARE TWO WAYS TO DO HYPOTHESIS TESTING WITH STATCRUNCH: WITH SUMMARY DATA (AS IN EXAMPLE 7.17, PAGE 236, IN ROSNER); WITH THE ORIGINAL DATA (AS IN EXAMPLE 8.5, PAGE 301 IN ROSNER THAT USES DATA FROM
Descriptive Statistics
Descriptive Statistics Primer Descriptive statistics Central tendency Variation Relative position Relationships Calculating descriptive statistics Descriptive Statistics Purpose to describe or summarize
Normality Testing in Excel
Normality Testing in Excel By Mark Harmon Copyright 2011 Mark Harmon No part of this publication may be reproduced or distributed without the express permission of the author. [email protected]
HYPOTHESIS TESTING WITH SPSS:
HYPOTHESIS TESTING WITH SPSS: A NON-STATISTICIAN S GUIDE & TUTORIAL by Dr. Jim Mirabella SPSS 14.0 screenshots reprinted with permission from SPSS Inc. Published June 2006 Copyright Dr. Jim Mirabella CHAPTER
MBA 611 STATISTICS AND QUANTITATIVE METHODS
MBA 611 STATISTICS AND QUANTITATIVE METHODS Part I. Review of Basic Statistics (Chapters 1-11) A. Introduction (Chapter 1) Uncertainty: Decisions are often based on incomplete information from uncertain
Confidence Intervals for One Standard Deviation Using Standard Deviation
Chapter 640 Confidence Intervals for One Standard Deviation Using Standard Deviation Introduction This routine calculates the sample size necessary to achieve a specified interval width or distance from
Stepwise Regression. Chapter 311. Introduction. Variable Selection Procedures. Forward (Step-Up) Selection
Chapter 311 Introduction Often, theory and experience give only general direction as to which of a pool of candidate variables (including transformed variables) should be included in the regression model.
Testing for differences I exercises with SPSS
Testing for differences I exercises with SPSS Introduction The exercises presented here are all about the t-test and its non-parametric equivalents in their various forms. In SPSS, all these tests can
Simple linear regression
Simple linear regression Introduction Simple linear regression is a statistical method for obtaining a formula to predict values of one variable from another where there is a causal relationship between
Scatter Plots with Error Bars
Chapter 165 Scatter Plots with Error Bars Introduction The procedure extends the capability of the basic scatter plot by allowing you to plot the variability in Y and X corresponding to each point. Each
Statistics courses often teach the two-sample t-test, linear regression, and analysis of variance
2 Making Connections: The Two-Sample t-test, Regression, and ANOVA In theory, there s no difference between theory and practice. In practice, there is. Yogi Berra 1 Statistics courses often teach the two-sample
t Tests in Excel The Excel Statistical Master By Mark Harmon Copyright 2011 Mark Harmon
t-tests in Excel By Mark Harmon Copyright 2011 Mark Harmon No part of this publication may be reproduced or distributed without the express permission of the author. [email protected] www.excelmasterseries.com
Fairfield Public Schools
Mathematics Fairfield Public Schools AP Statistics AP Statistics BOE Approved 04/08/2014 1 AP STATISTICS Critical Areas of Focus AP Statistics is a rigorous course that offers advanced students an opportunity
Part 3. Comparing Groups. Chapter 7 Comparing Paired Groups 189. Chapter 8 Comparing Two Independent Groups 217
Part 3 Comparing Groups Chapter 7 Comparing Paired Groups 189 Chapter 8 Comparing Two Independent Groups 217 Chapter 9 Comparing More Than Two Groups 257 188 Elementary Statistics Using SAS Chapter 7 Comparing
6.4 Normal Distribution
Contents 6.4 Normal Distribution....................... 381 6.4.1 Characteristics of the Normal Distribution....... 381 6.4.2 The Standardized Normal Distribution......... 385 6.4.3 Meaning of Areas under
Tutorial 5: Hypothesis Testing
Tutorial 5: Hypothesis Testing Rob Nicholls [email protected] MRC LMB Statistics Course 2014 Contents 1 Introduction................................ 1 2 Testing distributional assumptions....................
How To Check For Differences In The One Way Anova
MINITAB ASSISTANT WHITE PAPER This paper explains the research conducted by Minitab statisticians to develop the methods and data checks used in the Assistant in Minitab 17 Statistical Software. One-Way
Nonparametric Two-Sample Tests. Nonparametric Tests. Sign Test
Nonparametric Two-Sample Tests Sign test Mann-Whitney U-test (a.k.a. Wilcoxon two-sample test) Kolmogorov-Smirnov Test Wilcoxon Signed-Rank Test Tukey-Duckworth Test 1 Nonparametric Tests Recall, nonparametric
QUANTITATIVE METHODS BIOLOGY FINAL HONOUR SCHOOL NON-PARAMETRIC TESTS
QUANTITATIVE METHODS BIOLOGY FINAL HONOUR SCHOOL NON-PARAMETRIC TESTS This booklet contains lecture notes for the nonparametric work in the QM course. This booklet may be online at http://users.ox.ac.uk/~grafen/qmnotes/index.html.
Dongfeng Li. Autumn 2010
Autumn 2010 Chapter Contents Some statistics background; ; Comparing means and proportions; variance. Students should master the basic concepts, descriptive statistics measures and graphs, basic hypothesis
Summary of Formulas and Concepts. Descriptive Statistics (Ch. 1-4)
Summary of Formulas and Concepts Descriptive Statistics (Ch. 1-4) Definitions Population: The complete set of numerical information on a particular quantity in which an investigator is interested. We assume
Confidence Intervals for Cp
Chapter 296 Confidence Intervals for Cp Introduction This routine calculates the sample size needed to obtain a specified width of a Cp confidence interval at a stated confidence level. Cp is a process
Hypothesis Testing for Beginners
Hypothesis Testing for Beginners Michele Piffer LSE August, 2011 Michele Piffer (LSE) Hypothesis Testing for Beginners August, 2011 1 / 53 One year ago a friend asked me to put down some easy-to-read notes
Rank-Based Non-Parametric Tests
Rank-Based Non-Parametric Tests Reminder: Student Instructional Rating Surveys You have until May 8 th to fill out the student instructional rating surveys at https://sakai.rutgers.edu/portal/site/sirs
Tests for Two Survival Curves Using Cox s Proportional Hazards Model
Chapter 730 Tests for Two Survival Curves Using Cox s Proportional Hazards Model Introduction A clinical trial is often employed to test the equality of survival distributions of two treatment groups.
NONPARAMETRIC STATISTICS 1. depend on assumptions about the underlying distribution of the data (or on the Central Limit Theorem)
NONPARAMETRIC STATISTICS 1 PREVIOUSLY parametric statistics in estimation and hypothesis testing... construction of confidence intervals computing of p-values classical significance testing depend on assumptions
Hypothesis testing - Steps
Hypothesis testing - Steps Steps to do a two-tailed test of the hypothesis that β 1 0: 1. Set up the hypotheses: H 0 : β 1 = 0 H a : β 1 0. 2. Compute the test statistic: t = b 1 0 Std. error of b 1 =
KSTAT MINI-MANUAL. Decision Sciences 434 Kellogg Graduate School of Management
KSTAT MINI-MANUAL Decision Sciences 434 Kellogg Graduate School of Management Kstat is a set of macros added to Excel and it will enable you to do the statistics required for this course very easily. To
Factor Analysis. Chapter 420. Introduction
Chapter 420 Introduction (FA) is an exploratory technique applied to a set of observed variables that seeks to find underlying factors (subsets of variables) from which the observed variables were generated.
Difference tests (2): nonparametric
NST 1B Experimental Psychology Statistics practical 3 Difference tests (): nonparametric Rudolf Cardinal & Mike Aitken 10 / 11 February 005; Department of Experimental Psychology University of Cambridge
CHAPTER 14 NONPARAMETRIC TESTS
CHAPTER 14 NONPARAMETRIC TESTS Everything that we have done up until now in statistics has relied heavily on one major fact: that our data is normally distributed. We have been able to make inferences
Lin s Concordance Correlation Coefficient
NSS Statistical Software NSS.com hapter 30 Lin s oncordance orrelation oefficient Introduction This procedure calculates Lin s concordance correlation coefficient ( ) from a set of bivariate data. The
Descriptive Statistics
Y520 Robert S Michael Goal: Learn to calculate indicators and construct graphs that summarize and describe a large quantity of values. Using the textbook readings and other resources listed on the web
13: Additional ANOVA Topics. Post hoc Comparisons
13: Additional ANOVA Topics Post hoc Comparisons ANOVA Assumptions Assessing Group Variances When Distributional Assumptions are Severely Violated Kruskal-Wallis Test Post hoc Comparisons In the prior
Inference for two Population Means
Inference for two Population Means Bret Hanlon and Bret Larget Department of Statistics University of Wisconsin Madison October 27 November 1, 2011 Two Population Means 1 / 65 Case Study Case Study Example
DATA INTERPRETATION AND STATISTICS
PholC60 September 001 DATA INTERPRETATION AND STATISTICS Books A easy and systematic introductory text is Essentials of Medical Statistics by Betty Kirkwood, published by Blackwell at about 14. DESCRIPTIVE
Principles of Hypothesis Testing for Public Health
Principles of Hypothesis Testing for Public Health Laura Lee Johnson, Ph.D. Statistician National Center for Complementary and Alternative Medicine [email protected] Fall 2011 Answers to Questions
Statistics Review PSY379
Statistics Review PSY379 Basic concepts Measurement scales Populations vs. samples Continuous vs. discrete variable Independent vs. dependent variable Descriptive vs. inferential stats Common analyses
MEASURES OF LOCATION AND SPREAD
Paper TU04 An Overview of Non-parametric Tests in SAS : When, Why, and How Paul A. Pappas and Venita DePuy Durham, North Carolina, USA ABSTRACT Most commonly used statistical procedures are based on the
INTERPRETING THE ONE-WAY ANALYSIS OF VARIANCE (ANOVA)
INTERPRETING THE ONE-WAY ANALYSIS OF VARIANCE (ANOVA) As with other parametric statistics, we begin the one-way ANOVA with a test of the underlying assumptions. Our first assumption is the assumption of
Randomized Block Analysis of Variance
Chapter 565 Randomized Block Analysis of Variance Introduction This module analyzes a randomized block analysis of variance with up to two treatment factors and their interaction. It provides tables of
One-Way ANOVA using SPSS 11.0. SPSS ANOVA procedures found in the Compare Means analyses. Specifically, we demonstrate
1 One-Way ANOVA using SPSS 11.0 This section covers steps for testing the difference between three or more group means using the SPSS ANOVA procedures found in the Compare Means analyses. Specifically,
Chapter 7 Section 7.1: Inference for the Mean of a Population
Chapter 7 Section 7.1: Inference for the Mean of a Population Now let s look at a similar situation Take an SRS of size n Normal Population : N(, ). Both and are unknown parameters. Unlike what we used
3.4 Statistical inference for 2 populations based on two samples
3.4 Statistical inference for 2 populations based on two samples Tests for a difference between two population means The first sample will be denoted as X 1, X 2,..., X m. The second sample will be denoted
Statistics I for QBIC. Contents and Objectives. Chapters 1 7. Revised: August 2013
Statistics I for QBIC Text Book: Biostatistics, 10 th edition, by Daniel & Cross Contents and Objectives Chapters 1 7 Revised: August 2013 Chapter 1: Nature of Statistics (sections 1.1-1.6) Objectives
4. Continuous Random Variables, the Pareto and Normal Distributions
4. Continuous Random Variables, the Pareto and Normal Distributions A continuous random variable X can take any value in a given range (e.g. height, weight, age). The distribution of a continuous random
1.5 Oneway Analysis of Variance
Statistics: Rosie Cornish. 200. 1.5 Oneway Analysis of Variance 1 Introduction Oneway analysis of variance (ANOVA) is used to compare several means. This method is often used in scientific or medical experiments
II. DISTRIBUTIONS distribution normal distribution. standard scores
Appendix D Basic Measurement And Statistics The following information was developed by Steven Rothke, PhD, Department of Psychology, Rehabilitation Institute of Chicago (RIC) and expanded by Mary F. Schmidt,
1 Nonparametric Statistics
1 Nonparametric Statistics When finding confidence intervals or conducting tests so far, we always described the population with a model, which includes a set of parameters. Then we could make decisions
HYPOTHESIS TESTING: POWER OF THE TEST
HYPOTHESIS TESTING: POWER OF THE TEST The first 6 steps of the 9-step test of hypothesis are called "the test". These steps are not dependent on the observed data values. When planning a research project,
Session 7 Bivariate Data and Analysis
Session 7 Bivariate Data and Analysis Key Terms for This Session Previously Introduced mean standard deviation New in This Session association bivariate analysis contingency table co-variation least squares
UNDERSTANDING THE TWO-WAY ANOVA
UNDERSTANDING THE e have seen how the one-way ANOVA can be used to compare two or more sample means in studies involving a single independent variable. This can be extended to two independent variables
t-test Statistics Overview of Statistical Tests Assumptions
t-test Statistics Overview of Statistical Tests Assumption: Testing for Normality The Student s t-distribution Inference about one mean (one sample t-test) Inference about two means (two sample t-test)
DESCRIPTIVE STATISTICS. The purpose of statistics is to condense raw data to make it easier to answer specific questions; test hypotheses.
DESCRIPTIVE STATISTICS The purpose of statistics is to condense raw data to make it easier to answer specific questions; test hypotheses. DESCRIPTIVE VS. INFERENTIAL STATISTICS Descriptive To organize,
Drawing a histogram using Excel
Drawing a histogram using Excel STEP 1: Examine the data to decide how many class intervals you need and what the class boundaries should be. (In an assignment you may be told what class boundaries to
Module 5: Multiple Regression Analysis
Using Statistical Data Using to Make Statistical Decisions: Data Multiple to Make Regression Decisions Analysis Page 1 Module 5: Multiple Regression Analysis Tom Ilvento, University of Delaware, College
Why Taking This Course? Course Introduction, Descriptive Statistics and Data Visualization. Learning Goals. GENOME 560, Spring 2012
Why Taking This Course? Course Introduction, Descriptive Statistics and Data Visualization GENOME 560, Spring 2012 Data are interesting because they help us understand the world Genomics: Massive Amounts
Additional sources Compilation of sources: http://lrs.ed.uiuc.edu/tseportal/datacollectionmethodologies/jin-tselink/tselink.htm
Mgt 540 Research Methods Data Analysis 1 Additional sources Compilation of sources: http://lrs.ed.uiuc.edu/tseportal/datacollectionmethodologies/jin-tselink/tselink.htm http://web.utk.edu/~dap/random/order/start.htm
The Dummy s Guide to Data Analysis Using SPSS
The Dummy s Guide to Data Analysis Using SPSS Mathematics 57 Scripps College Amy Gamble April, 2001 Amy Gamble 4/30/01 All Rights Rerserved TABLE OF CONTENTS PAGE Helpful Hints for All Tests...1 Tests
