# Categorical Data Analysis

Richard L. Scheaffer, University of Florida

The reference material and many examples for this section are based on Chapter 8, "Analyzing Association Between Categorical Variables," of *Statistical Methods for the Social Sciences*, 3rd edition (Alan Agresti and Barbara Finlay, Prentice Hall). Alan Agresti, a statistics professor at the University of Florida, is one of the world's leading research authorities on categorical data analysis. Barbara Finlay is a sociologist at Texas A&M University.

The technique for analyzing categorical data that is most frequently taught in introductory statistics courses is the chi-square test for independence. Consider the following two-way table, which displays the results of a survey that Dr. Scheaffer asked his freshman and sophomore students to complete. The responses in the table are answers to the question, "Do you currently have a job?" Columns show the yes/no answers; rows show whether the students were freshmen or sophomores.

| Do you currently have a job? | No | Yes |
| --- | --- | --- |
| Freshmen | 25 | 12 |
| Sophomores | 11 | 14 |

Table 1

The χ²-value is 3.4, and the P-value is approximately 0.065. What information do these numbers provide? Since the P-value is relatively small, many researchers would conclude that there is some association between job status and class. Does the chi-square value of 3.4 tell us anything more? Does the chi-square statistic tell us what we really want to know? Most likely, what we really want to know about is the nature of the association, and the chi-square value tells us nothing at all about that. In fact, the chi-square value tells us very little and is considered by many statisticians to be overused as a technique for analyzing categorical data. In this situation we can look directly at the data and conclude that sophomores have jobs at a higher rate than freshmen, but we do not get this information from the test results. The test tells us only that there is a relationship somewhere in the data.
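As a quick check of these numbers, the chi-square statistic can be computed directly from the table counts. A minimal standard-library Python sketch (the cell counts are the Table 1 values used in the calculations later in this section):

```python
import math

# 2x2 table from Table 1: rows = class (freshmen, sophomores), cols = job (no, yes)
table = [[25, 12], [11, 14]]

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
n = sum(row_totals)

# Expected counts under independence: (row total * column total) / grand total
chi2 = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / n
        chi2 += (table[i][j] - expected) ** 2 / expected

# For 1 degree of freedom, P(chi-square > x) = erfc(sqrt(x / 2))
p_value = math.erfc(math.sqrt(chi2 / 2))
print(round(chi2, 2), round(p_value, 3))
```

For a 2×2 table there is one degree of freedom, so the tail probability follows from the normal distribution via `math.erfc`.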
There are other techniques for analyzing categorical data that give us more information. In this situation, for example, we would like to look at the proportions to get additional information. What we would like to have for categorical data are techniques that give us useful information of the kind we get from correlation and regression analysis for continuous data. Recall that the regression equation, in particular its slope, and the correlation value give us information we want to know. Once we have these values, we could

throw away the actual data and still have lots of useful information, especially if we know whether or not the slope is significantly different from zero. We would like to have some additional measures for categorical data that tell us:

1. Is there evidence of association?
2. If so, how strong is it?

Consider the following two-way table, which categorizes a sample of people in the work force by income level (high or low) and educational level (ending after high school or ending after college).

| Education Level \ Income Level | Low | High |
| --- | --- | --- |
| High School | a | b |
| College | c | d |

Table 2

If there were a positive association between these categorical variables, how would a, b, c, and d be related? With a positive association, we would expect low income to go with low education and high income to go with high education; thus we would expect the values of a and d to be high and the values of b and c to be low. With a negative association, we would expect low education to go with high income and high education to go with low income; thus we would expect b and c to be high and a and d to be low. Note that these questions of association are meaningful only when the categories are ordered. In this situation our categories can be ordered low and high, but this will not be the case for all categorical variables (gender, for example, cannot be ordered). Finally, we can think of several situations that would support no association between the variables: a and b could be large with c and d small, those values could be switched, or all cell counts could be approximately equal. In the example above, the two-way table provides much the same information about the association between the variables as we get from examining a scatterplot. In fact, we can often do everything with data in a two-way table that we can do with data in a scatterplot.
We can do logistic regression, we can examine residuals, and we can create a measure similar to correlation. The important point is that we can do far more categorical analysis than we generally do with chi-square tests alone. Consider the following quote from 1965 by Sir Austin Bradford Hill: "Like fire, the chi-square statistic is an excellent servant and a bad master." Hill was a British statistician, epidemiologist, and medical researcher. He is credited with practically inventing, in the 1940s, the randomized comparative experiments that are now known as clinical trials. Hill saw the need for systematically designed experiments, and he advocated completely randomized

designs. It is noteworthy that in the United States, the National Institutes of Health did not begin requiring well-designed clinical trials from medical researchers until many years later. In fact, the requirement today is not that there be a randomized design, but rather that a procedure be employed that is appropriate for the investigation. Prior to that time, researchers were free to do pretty much what they wanted to do. As a result, some experiments were designed well and others were flawed.

Table 3 contains data provided in *Statistical Methods for the Social Sciences*, page 280. The data were obtained from the 1991 General Social Survey. Subjects were asked whether they identify more strongly with the Democratic party, the Republican party, or with independents.

| Gender | Democrat | Independent | Republican | Total |
| --- | --- | --- | --- | --- |
| Females | 279 | 73 | 225 | 577 |
| Males | 165 | 47 | 191 | 403 |
| Total | 444 | 120 | 416 | 980 |

Table 3

The table classifies 980 respondents according to gender and party identification. Looking at the numbers in the table, we see that females tend to be Democrats and males tend to be Republicans, so it appears that there is an association between these two variables. The P-value from the chi-square test equals 0.03, confirming that there is an association. Is the association strong or weak? This is a harder question, but we can use residual analysis to help us find the answer. In the context of a two-way table, a residual is defined as the difference between the observed frequency and the expected frequency: residual = f_o − f_e. To make comparisons easier, statisticians generally prefer to use standardized or adjusted residuals, calculated by dividing the residual by its standard error:

adjusted residual = (f_o − f_e) / sqrt( f_e (1 − row proportion)(1 − column proportion) )

The adjusted residuals for the gender and party preference data are included in parentheses next to the observed frequencies in Table 4.
| Gender | Democrat | Independent | Republican |
| --- | --- | --- | --- |
| Females | 279 (2.3) | 73 (0.5) | 225 (−2.6) |
| Males | 165 (−2.3) | 47 (−0.5) | 191 (2.6) |

Table 4
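The adjusted residuals shown in Table 4 can be reproduced from the cell counts. A standard-library Python sketch of the formula above:

```python
import math

# Table 3 counts: rows = gender (female, male), cols = party (Dem, Ind, Rep)
table = [[279, 73, 225], [165, 47, 191]]

row_totals = [sum(r) for r in table]
col_totals = [sum(c) for c in zip(*table)]
n = sum(row_totals)

adjusted = []
for i, row in enumerate(table):
    adj_row = []
    for j, f_o in enumerate(row):
        f_e = row_totals[i] * col_totals[j] / n
        row_prop = row_totals[i] / n
        col_prop = col_totals[j] / n
        se = math.sqrt(f_e * (1 - row_prop) * (1 - col_prop))
        adj_row.append(round((f_o - f_e) / se, 1))
    adjusted.append(adj_row)

print(adjusted)  # → [[2.3, 0.5, -2.6], [-2.3, -0.5, 2.6]]
```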

The sign of the adjusted residual is positive when the observed frequency is higher than the expected frequency. In Table 4, we can see that the observed frequency of Democratic females is 2.3 standard errors higher than would be expected if there were no association between party and gender. Similarly, the observed frequency of Republican females is 2.6 standard errors lower than we would expect under no association. Standardized residuals allow us to see the direction and strength of the association between categorical variables: a large standardized residual provides evidence of association in that cell.

## Measuring the Strength of Association

For continuous variables we use r and r² to measure the strength of the relationship between the variables. Measuring the strength of association between categorical variables is not quite so simple. We will look at three ways of measuring this strength: proportions, odds and odds ratios, and concordant and discordant pairs.

## Comparing Proportions

Comparing proportions is a simple procedure. Looking back at the job versus university class data in Table 1, we can see that 25/36 ≈ 0.69 of the students without jobs were freshmen and that 12/26 ≈ 0.46 of the students with jobs were freshmen.

| Do you currently have a job? | No | Yes |
| --- | --- | --- |
| Freshmen | 25 | 12 |
| Sophomores | 11 | 14 |

Table 1

It is interesting to observe that a difference of proportions must range between −1 and +1. A difference close to one in magnitude indicates a high level of association, while a difference close to zero represents very little association. Note the similarity to how we interpret values of the correlation coefficient in the continuous case. In this example, the difference in proportions is 0.69 − 0.46 = 0.23, suggesting a low or moderate level of association.
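The arithmetic is easy to verify directly from the Table 1 counts:

```python
# Table 1: of 36 students without jobs, 25 are freshmen;
# of 26 students with jobs, 12 are freshmen.
p_no_job = 25 / 36
p_job = 12 / 26
diff = p_no_job - p_job
print(round(p_no_job, 2), round(p_job, 2), round(diff, 2))  # → 0.69 0.46 0.23
```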
## Odds and Odds Ratios

As an example of using odds and odds ratios to measure the strength of association, we look back at Table 2, the two-way table for income level and education level.

| Education Level \ Income Level | Low | High |
| --- | --- | --- |
| High School | a | b |
| College | c | d |

Table 2

For high school graduates, the estimated probability of low income is a/(a + b), and the estimated probability of high income is b/(a + b). Therefore, for high school graduates, the odds favoring low income are

odds = (probability of success)/(probability of failure) = [a/(a + b)] / [b/(a + b)] = a/b = (# successes)/(# failures),

where "success" represents low income. For college graduates, the odds favoring low income are similarly calculated to be c/d. Generally we look at the ratio of the odds from the two rows, which we denote by θ̂. In this example,

θ̂ = odds ratio = (a/b)/(c/d) = ad/(bc).

To interpret the odds ratio, suppose θ̂ = 2. This would indicate that the odds in favor of having a low income for a high school graduate are twice the odds in favor of having a low income for a college graduate.

## The Effect of Sample Size

In the continuous case using regression, when we double the sample size without changing the results, what happens to the slope, the correlation coefficient, and the P-value for a test of significance for the slope? As an example, consider two data sets (x, y) and (x*, y*), where the ordered pairs (x*, y*) are just the pairs (x, y) repeated.

[JMP output: "Bivariate Fit of y By x" and "Bivariate Fit of y* By x*", each showing a linear fit, a Summary of Fit, and Parameter Estimates. The first fit is based on 5 observations and the second on 10.]

Notice that the parameter estimates for the two data sets are identical, as are the values of RSquare. However, the t-ratio is larger and the P-value is smaller (below .0001) for (x*, y*). The chi-square value and P-value behave similarly to the t-ratio and P-value in regression. Do we have a measure that is stable for chi-square in the same way that the slope and correlation are stable for regression relative to sample size?

Consider the following table from *Statistical Methods for the Social Sciences*, page 268. These data show the results by race of responses to a question regarding legalized abortion. In each table, 49% of the whites and 51% of the blacks favor legalized abortion.

| A | Yes | No | Total |
| --- | --- | --- | --- |
| White | 49 | 51 | 100 |
| Black | 51 | 49 | 100 |
| Total | 100 | 100 | 200 |

χ² = .08, P = .78

| B | Yes | No | Total |
| --- | --- | --- | --- |
| White | 98 | 102 | 200 |
| Black | 102 | 98 | 200 |
| Total | 200 | 200 | 400 |

χ² = .16, P = .69

| C | Yes | No | Total |
| --- | --- | --- | --- |
| White | 4900 | 5100 | 10,000 |
| Black | 5100 | 4900 | 10,000 |
| Total | 10,000 | 10,000 | 20,000 |

χ² = 8.0, P = .005

Table 5

Note that the chi-square value increases from a value that is not statistically significant to a value that is highly significant as the sample size increases from 200 to 400 to 20,000. Since the P-value for this test of association is very sensitive to the sample size, we need other measures to describe the strength of the relationship. In every table above, the difference in proportions is 0.02, and the odds ratio is approximately 0.92. The small difference in proportions and the odds ratio close to one indicate that there is not very much going on in any of these situations; the association is very weak. This example illustrates that we should not simply depend on a test of significance. Increasing sample size can always make very small differences in proportions statistically significant. We need other measures to get a more complete picture of the strength of association between two categorical variables.
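A standard-library Python sketch of this effect (the cell counts follow from the stated 49%/51% splits and the table totals): the chi-square statistic scales with n, while the odds ratio and the difference in proportions do not.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    total = 0.0
    for obs, row, col in [(a, a + b, a + c), (b, a + b, b + d),
                          (c, c + d, a + c), (d, c + d, b + d)]:
        exp = row * col / n
        total += (obs - exp) ** 2 / exp
    return total

# Tables A, B, C of Table 5: same proportions, growing sample size
results = []
for a, b, c, d in [(49, 51, 51, 49), (98, 102, 102, 98), (4900, 5100, 5100, 4900)]:
    results.append((chi2_2x2(a, b, c, d),        # grows with n
                    (a * d) / (b * c),           # odds ratio: stays near 0.92
                    a / (a + b) - c / (c + d)))  # diff in proportions: stays -0.02

print([(round(x, 2), round(o, 2), round(p, 2)) for x, o, p in results])
```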
To get information about the strength of association between two categorical variables, we can do inference procedures based on the odds ratio θ̂. First we must know the distribution

of θ̂. We can work out the asymptotic distribution of the odds ratio, but it is much easier to work with the distribution of the natural logarithm of the odds ratio. Note that the values of the odds ratio range from zero to very large positive numbers, that is, 0 ≤ θ̂ < ∞, and the distribution of θ̂ is highly skewed to the right. When the odds of success are equal in the two rows being compared, θ̂ = 1, indicating no association between the variables. Taking the natural logarithm of θ̂ pulls in the long right tail. It turns out that the logarithm of the odds ratio, ln(θ̂), is approximately normally distributed for large values of n. Thus we can perform inference procedures on ln(θ̂), since statisticians know that the standard error of ln(θ̂) is

SE(ln(θ̂)) = sqrt( 1/a + 1/b + 1/c + 1/d ),

and the quantity

z = (ln(θ̂) − 0) / SE(ln(θ̂))

has a distribution that is approximately standard normal for large values of n.

As an example, reconsider the job data provided in Table 1. For freshmen, the odds favoring not having a job are 25/12 ≈ 2.08. For sophomores, the odds favoring not having a job are 11/14 ≈ 0.79. So the odds ratio is θ̂ ≈ 2.08/0.79 ≈ 2.65. This tells us that the odds in favor of having no job for a freshman are 2.65 times the odds in favor of having no job for a sophomore. Note that by taking the reciprocal of this odds ratio, 1/θ̂ ≈ 0.38, we see that the odds in favor of having a job for a freshman are only 0.38 times the odds in favor of having a job for a sophomore.

We want to determine whether the odds ratio θ̂ = 2.65 differs significantly from what we would expect if there were no difference in the odds of having a job for the freshman and sophomore students. If there were no difference in the odds for the two groups, the odds ratio would be exactly equal to one. Thus, under the null hypothesis of no difference, the natural logarithm of the odds ratio would be equal to zero.
Based on our sample of students, ln(θ̂) = ln(2.65) ≈ 0.97. The standard error of ln(θ̂) is given by

SE(ln(θ̂)) = sqrt( 1/25 + 1/12 + 1/11 + 1/14 ) ≈ 0.53.

Assuming that n is sufficiently large, which in this case may be questionable, we can do a z-test using as our test statistic

z = ln(θ̂) / SE(ln(θ̂)) = 0.97/0.53 ≈ 1.84.
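This z-test takes only a few lines of standard-library Python (the counts are the Table 1 cells):

```python
import math

# Table 1 counts: a, b = freshmen (no job, job); c, d = sophomores (no job, job)
a, b, c, d = 25, 12, 11, 14

odds_ratio = (a / b) / (c / d)         # equals ad/(bc)
log_or = math.log(odds_ratio)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of ln(odds ratio)
z = (log_or - 0) / se                  # H0: ln(odds ratio) = 0

print(round(odds_ratio, 2), round(log_or, 2), round(se, 2), round(z, 2))
```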

This z-value indicates that there is not much evidence of a strong association between job status and class category, even though the odds ratio is 2.65. This is a two-sided test, so z-values larger than 2 in magnitude are generally considered significant. If, for example, the z-statistic had been approximately equal to 3, we would conclude that the odds ratio is significantly greater than one. This would indicate that the odds in favor of having no job for a freshman are significantly higher than the odds in favor of having no job for a sophomore.

As another example, we can analyze the data in Table 3. Odds ratios are computed on two-by-two tables, so we will ignore Independents and look just at Democrats versus Republicans. In this case θ̂ = (279 × 191)/(225 × 165) ≈ 1.44. So the odds in favor of being a Democrat for a female are 1.44 times as great as the odds in favor of being a Democrat for a male. Note that ln(θ̂) ≈ 0.36 and SE(ln(θ̂)) ≈ 0.139, so

z = ln(θ̂) / SE(ln(θ̂)) ≈ 0.36/0.139 ≈ 2.60 > 2,

and we conclude that this result is significant. We could now do a similar analysis of Democrats versus Independents by ignoring the Republicans. In fact there are three two-by-two subtables that could be constructed from Table 3. We need only look at two of them (as many subtables as there are degrees of freedom); the third result is a function of the other two.

| | Democrats vs Republicans | Democrats vs Independents | Independents vs Republicans |
| --- | --- | --- | --- |
| θ̂ | 1.44 | 1.09 | 1.32 |
| ln(θ̂) | 0.36 | 0.09 | 0.28 |
| SE(ln(θ̂)) | 0.139 | 0.211 | 0.211 |
| z | 2.60 | 0.40 | 1.31 |

It is important to realize that this test on the odds ratio does not duplicate the chi-square test. The chi-square test is used to determine whether there is any association between the categorical variables; this z-test is used to determine whether there is a particular association. Before leaving two-by-two tables, we will interject a historical note about the development of chi-square procedures.
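The three subtable analyses can be reproduced from the Table 3 counts. A sketch in standard-library Python:

```python
import math

# Table 3 counts: females and males for Democrat, Independent, Republican
f = {"Dem": 279, "Ind": 73, "Rep": 225}
m = {"Dem": 165, "Ind": 47, "Rep": 191}

def or_ztest(a, b, c, d):
    """Odds ratio, its log, the SE of the log, and z for a 2x2 subtable."""
    theta = (a * d) / (b * c)
    log_t = math.log(theta)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    return theta, log_t, se, log_t / se

subtables = {}
for hi, lo in [("Dem", "Rep"), ("Dem", "Ind"), ("Ind", "Rep")]:
    subtables[(hi, lo)] = or_ztest(f[hi], f[lo], m[hi], m[lo])
    theta, log_t, se, z = subtables[(hi, lo)]
    print(hi, "vs", lo, round(theta, 2), round(z, 2))
```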
The chi-square statistic was invented by Karl Pearson about 1900. Pearson knew what the chi-square distribution looks like, but he was unsure about the degrees of freedom. About 15 years later, R. A. Fisher got involved. He and Pearson were unable to agree on the degrees of freedom for the two-by-two table, and they could not settle the issue mathematically. They had no nice way to do simulations, which would be the modern approach, so they gathered lots of data in two-by-two tables where they thought the variables

should be independent. For each table they calculated the chi-square statistic. Recall that the expected value of a chi-square statistic is its degrees of freedom. After collecting many chi-square values, Pearson and Fisher averaged all the values they had obtained and got a result very close to one. Fisher is said to have commented, "The result was embarrassingly close to 1." This confirmed that there is one degree of freedom for a two-by-two table. Some years later the result was proved mathematically.

## Concordant and Discordant Pairs

Analytical techniques based on concordant and discordant pairs were developed by Maurice Kendall. These methods are useful only when the categories can be ordered. A pair of observations is concordant if the subject who is higher on one variable is also higher on the other variable. A pair of observations is discordant if the subject who is higher on one variable is lower on the other. If two observations are in the same category of a variable, then the pair is neither concordant nor discordant and is said to be tied on that variable.

| Education Level \ Income Level | Low | High |
| --- | --- | --- |
| High School | a | b |
| College | c | d |

Table 2

Looking back at Table 2, we see that the income category is ordered low and high. Similarly, the education category is ordered, with education ending at high school being the low category and education ending at college being the high category. All d observations, representing individuals in the high-income and high-education category, beat all a observations, who represent individuals in the low-income and low-education category. Thus there are C = ad concordant pairs. All b observations are higher on the income variable and lower on the education variable, while all c observations are lower on the income variable and higher on the education variable. Thus there are D = bc discordant pairs. The strength of association can be measured by calculating the difference in the proportions of concordant and discordant pairs.
This measure, which is called gamma and is denoted by γ̂, is defined as

γ̂ = C/(C + D) − D/(C + D) = (C − D)/(C + D).

Note that since γ̂ represents the difference of two proportions, its value is between −1 and 1, that is, −1 ≤ γ̂ ≤ 1. A positive value of gamma indicates a positive association, while a negative value indicates a negative association. As with the correlation coefficient, γ̂-values close to 0 indicate a weak association between the variables, while values close to 1 or

−1 indicate a strong association. To obtain the value γ̂ = 1, D must equal 0, and in order to have D = 0, either b = 0 or c = 0. Whenever either b or c is close to 0 in value, the value of γ̂ will be close to 1. Therefore γ̂ is not necessarily the best measure of the strength of association; however, it is a common one. A more sensitive measure of association between two ordinal variables is Kendall's tau-b, denoted τ_b. This measure is more complicated to compute, but it has the advantage of adjusting for ties. The result of adjusting for ties is that the value of τ_b is always a little closer to 0 than the corresponding value of gamma. The τ_b statistic can be used for inference so long as the value of C + D is large.

Consider again the data given in Table 1, where a = 25, b = 12, c = 11, d = 14. The number of concordant pairs is C = ad = 25 × 14 = 350; the number of discordant pairs is D = bc = 12 × 11 = 132. In this example,

γ̂ = (C − D)/(C + D) = (350 − 132)/(350 + 132) ≈ 0.45,

which indicates that the association between having a job and college class is not particularly strong. The τ̂_b-value is 0.23, which, as expected, is closer to 0; the software also reports a P-value for the test of H₀: τ_b = 0. Some computer software will provide information about concordant and discordant pairs whenever analysis is performed on a two-way table. If the categories are not ordered, however, the results may not be meaningful.

Suppose that the data given in Table 3 were ordered. One might assume an ordering from low conservative to highly conservative for party identification and from low to high testosterone for gender. The number of concordant pairs is given by C = 279(47 + 191) + 73(191) = 80,345. The number of discordant pairs is given by D = 225(47 + 165) + 73(165) = 59,745, and

γ̂ = (C − D)/(C + D) = (80,345 − 59,745)/(80,345 + 59,745) ≈ 0.15.

Recall that the chi-square test reveals a significant chi-square value. Based on γ̂, we can now see that the strength of the association is very modest.
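Both gamma calculations can be checked in a few lines of Python (the Table 3 version treats Dem < Ind < Rep and female < male as the assumed orderings):

```python
# Gamma for Table 1 (2x2): C = ad, D = bc
a, b, c, d = 25, 12, 11, 14
C1, D1 = a * d, b * c
gamma1 = (C1 - D1) / (C1 + D1)

# Gamma for Table 3 (2x3): a female in a "lower" party is concordant with
# every male in a "higher" party, and discordant with every male in a lower one
females = [279, 73, 225]
males = [165, 47, 191]
C2 = sum(f * sum(males[j+1:]) for j, f in enumerate(females))
D2 = sum(f * sum(males[:j]) for j, f in enumerate(females))
gamma2 = (C2 - D2) / (C2 + D2)

print(C1, D1, round(gamma1, 2))  # → 350 132 0.45
print(C2, D2, round(gamma2, 2))  # → 80345 59745 0.15
```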
Note, however, that the interpretation of γ̂ may be questioned in this situation, since in a sense we forced the ordering. The measure of association γ̂ is not very sensitive to sample size. Unlike the P-value associated with a chi-square test of association, the value of γ̂ changes very little as sample size increases. So γ̂ gives us a measure that is stable relative to sample size, in the same way that the slope and correlation are stable in regression.

## Physicians Health Study Example

The Physicians Health Study was a clinical trial performed in the United States in the late 1980s and early 1990s. It involved approximately 22,000 physician subjects over the age

of 40 who were divided into three groups according to their smoking history: current smokers, past smokers, and those who had never smoked. Each subject was assigned randomly to one of two groups. One group of physicians took a small dose of aspirin daily; physicians in the second group took a placebo instead. These physicians were observed over a period of several years; for each subject, a record was kept of whether he had a myocardial infarction (heart attack) during the period of the study.

Data for the 10,919 physicians who had never smoked are provided in the following table.

| | Placebo | Aspirin |
| --- | --- | --- |
| Had heart attack | 96 | 55 |
| Did not have heart attack | 5392 | 5376 |
| Total | 5488 | 5431 |

A chi-square test to determine whether there is association between having a heart attack and taking aspirin results in χ² ≈ 10.9 and a P-value less than 0.001. Thus we can conclude that there is strong evidence that aspirin has an effect on heart attacks. (Note that this statement is not synonymous with saying that aspirin has a strong effect on heart attacks.) Based partially on the strength of this evidence, the Physicians Health Study was terminated earlier than originally planned so that the information could be shared. In the United States, researchers are ethically bound to stop a study when significance becomes obvious.

The highly significant chi-square value does not give information about the direction or strength of the association. The odds ratio for the group who had never smoked is θ̂ = 1.74. Thus these data indicate that the odds in favor of having a heart attack while taking the placebo are 1.74 times the odds in favor of having a heart attack while taking aspirin, for the group who had never smoked. Looking at this from the other direction, we can say that the odds in favor of a heart attack for a physician taking aspirin are only 0.57 times the odds for one taking the placebo.
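A sketch checking the chi-square statistic, the odds ratio, and the z-statistic for the never-smoker table (the cell counts are the raw odds quoted in this section, 96/5392 and 55/5376):

```python
import math

# Never-smokers: columns (placebo, aspirin), rows (heart attack, no heart attack)
table = [[96, 55], [5392, 5376]]

row_t = [sum(r) for r in table]
col_t = [sum(c) for c in zip(*table)]
n = sum(row_t)

chi2 = sum((table[i][j] - row_t[i] * col_t[j] / n) ** 2 / (row_t[i] * col_t[j] / n)
           for i in range(2) for j in range(2))
p_value = math.erfc(math.sqrt(chi2 / 2))   # 1 degree of freedom

odds_ratio = (96 / 5392) / (55 / 5376)
z = math.log(odds_ratio) / math.sqrt(1/96 + 1/55 + 1/5392 + 1/5376)

print(round(chi2, 1), p_value < 0.001, round(odds_ratio, 2), round(z, 2))
```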
Note that for this group, ln(θ̂) = 0.55 and SE(ln(θ̂)) = 0.17, so z ≈ 0.55/0.17 ≈ 3.2, and the z-test indicates that ln(θ̂) is significantly greater than zero; equivalently, the odds ratio is significantly greater than one. Similar two-way tables were recorded for the physicians who were past smokers and for those who were current smokers.

The odds ratios for the past smokers and for the current smokers are similar in size to the odds ratio for those who had never smoked, indicating a favorable effect from aspirin across all groups. To get more information from all these data, we can look at a logistic regression model for the entire group of approximately 22,000 participating physicians. In this model there are two explanatory variables, aspirin-or-placebo and smoking status, and the model has the form

ln( π/(1 − π) ) = b₀ + b₁A + b₂P + b₃C.

The response variable is whether or not the subject had a heart attack. All explanatory variables are coded as 0 or 1. Letting A represent the aspirin variable, A = 1 indicates aspirin and A = 0 indicates placebo. There are two variables for smoking status. We let P represent past smoker and code P = 1 to indicate that the subject is a past smoker; if P = 0, the subject is not a past smoker. Similarly, C = 1 indicates that a subject is a current smoker, while C = 0 indicates that he is not. A subject who has never smoked has both C and P equal to zero. The following linear model is based on a maximum likelihood method:

ln(estimated odds) = −4.02 − 0.548A + 0.339P + 0.551C.

This equation simplifies to

estimated odds = e^(−4.02) (e^(−0.548))^A (e^(0.339))^P (e^(0.551))^C = 0.018(0.578^A)(1.403^P)(1.735^C).

Letting A = P = C = 0, we can predict the odds in favor of having a heart attack for the physicians who had never smoked and were not taking aspirin. According to the model, these odds are 0.018. Note that in the raw data the odds are 96/5392 ≈ 0.018, so the model fits very well for this group. Letting A = 1 and P = C = 0, the model predicts that the odds in favor of having a heart attack for physicians who never smoked and were taking aspirin

are lower: 0.018 × 0.578 ≈ 0.010. Note that this prediction is also very close to what we see in the raw data: 55/5376 ≈ 0.010. Note also that the number we multiply by to calculate the odds for physicians in this group who were taking aspirin, 0.578, is very close to the reciprocal of the odds ratio observed earlier for this group of physicians. For physicians who are current or past smokers, the model incorporates a factor of 1.735 or 1.403 and thus predicts higher odds than for those who have never smoked. All possible combinations are shown in the table below.

| Conditions | A | P | C | Estimated odds |
| --- | --- | --- | --- | --- |
| No aspirin, never smoked | 0 | 0 | 0 | 0.018 |
| No aspirin, past smoker | 0 | 1 | 0 | 0.025 |
| No aspirin, current smoker | 0 | 0 | 1 | 0.031 |
| Aspirin, never smoked | 1 | 0 | 0 | 0.010 |
| Aspirin, past smoker | 1 | 1 | 0 | 0.015 |
| Aspirin, current smoker | 1 | 0 | 1 | 0.018 |

Data are also provided showing outcomes by age group for the participants in this study. For the youngest participants, the odds ratio is θ̂ = 0.89, with ln(θ̂) = −0.12. So for the younger participants in this study, the odds of having a heart attack while taking the placebo were 0.89 times the odds while taking aspirin. At first glance this might suggest that aspirin had a negative effect for the younger physicians. Noting the size of the standard error relative to ln(θ̂), however, informs us that this logarithm is not significantly different from zero (z ≈ −0.12/0.28 ≈ −0.42).
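The estimated-odds column for the six aspirin-by-smoking combinations can be reproduced from the multiplicative form of the fitted model (a sketch; the coefficients are recovered by taking logs of the quoted multipliers 0.018, 0.578, 1.403, and 1.735):

```python
import math

# Coefficients recovered from the multiplicative form of the fitted model:
# estimated odds = 0.018 * 0.578^A * 1.403^P * 1.735^C
b0, b1, b2, b3 = (math.log(0.018), math.log(0.578),
                  math.log(1.403), math.log(1.735))

def estimated_odds(A, P, C):
    """Log odds from the linear form, exponentiated back to odds."""
    return math.exp(b0 + b1 * A + b2 * P + b3 * C)

combos = [(0, 0, 0), (0, 1, 0), (0, 0, 1), (1, 0, 0), (1, 1, 0), (1, 0, 1)]
odds = [round(estimated_odds(A, P, C), 3) for A, P, C in combos]
print(odds)  # → [0.018, 0.025, 0.031, 0.01, 0.015, 0.018]
```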

Different results are observed for all groups of older participants. The values of θ̂ for the successively older age groups are 1.72, 2.20, and 2.06. These ratios inform us that aspirin is more effective for all of the older groups, though the effectiveness appears to drop off a little for the oldest participants. The odds ratios would lead us, in general, to recommend aspirin to a man fifty or older, but not to a man below fifty.

## Turtle Example Revisited

Now we will look again at the data Bob Stephenson provided relating temperature and gender of turtles. This situation is different from the others we have looked at, as the response variable (gender) is categorical but the explanatory variable (temperature) is continuous. We can treat the continuous variable as categorical by separating the temperatures into groups, in this case five temperature groups. The computer output from the earlier logistic regression analysis showed that temperature is significant. The data can be displayed in the following two-way table:

| | Group 1 | Group 2 | Group 3 | Group 4 | Group 5 | Total |
| --- | --- | --- | --- | --- | --- | --- |
| Male | 2 | 17 | 26 | 19 | 27 | 91 |
| Female | 25 | 7 | 4 | 8 | 1 | 45 |
| Total | 27 | 24 | 30 | 27 | 28 | 136 |

The computer output from the logistic regression analysis contains information about concordant and discordant pairs, Kendall's Tau-a, and much more. We will look at the concordant and discordant pairs. Temperatures are ordered in the table from low to high, with the lowest temperatures in group 1 and the highest temperatures in group 5. We will order gender so that male is the high group (since this is what we are looking for), so the table rows are arranged from high to low. Thus the high/high category is the one in the upper right corner, with frequency 27. We obtain the number of concordant pairs as follows:

C = 27(25 + 7 + 4 + 8) + 19(25 + 7 + 4) + 26(25 + 7) + 17(25) = 3129.

Similarly, the number of discordant pairs is computed to be

D = 2(20) + 17(13) + 26(9) + 19(1) = 514.

The resulting gamma-value is (3129 − 514)/(3129 + 514) ≈ 0.72, indicating a pretty strong association between temperature and maleness. The value of Kendall's Tau-a, which counts tied pairs in its denominator, drops to approximately 0.28. The gamma-value is a rough measure of the strength of association, while Kendall's Tau-a is a more sensitive measure. In addition to the adjustment for ties, there is another reason that the two values are so different in this situation. Kendall's Tau-a provides a measure of the strength of linear association between the variables. Because there is curvature in the relationship between temperature and gender, the more sensitive measure reflects this, causing the Tau-a value to be lower. Note that the proportion of males increases very fast from temperature category 1 to temperature category 2; the increases for the higher temperature categories are much smaller, so a graph displaying the proportion of males versus temperature category is increasing but concave down. Because there is no adjustment for ties, the gamma measure is a less sensitive detector of this curvature. Note again that measures of association for categorical variables are being used in a situation where one of the variables really is continuous.
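The concordant/discordant arithmetic for the turtle table can be checked directly (the counts are those implied by the computation above; Tau-a is computed here as (C − D) divided by the total number of pairs, ties included):

```python
# Turtle counts by temperature group 1..5 (low to high); male is the "high" gender
males   = [2, 17, 26, 19, 27]
females = [25, 7, 4, 8, 1]

C = sum(m * sum(females[:j]) for j, m in enumerate(males))    # concordant pairs
D = sum(m * sum(females[j+1:]) for j, m in enumerate(males))  # discordant pairs

n = sum(males) + sum(females)
gamma = (C - D) / (C + D)
tau_a = (C - D) / (n * (n - 1) / 2)   # all pairs, ties included in the denominator

print(C, D, round(gamma, 2), round(tau_a, 2))
```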
So caution should be exercised in using and interpreting these measures. They are provided by the software, but they may not be very meaningful in this situation. The two-way table provided in the logistic regression output shows observed and expected frequencies and a corresponding chi-square statistic. The expected frequencies in this table are not calculated according to the usual chi-square method (row total times column total divided by grand total). Rather, they are calculated by substituting into the logistic regression model. The logistic model predicts log odds, which can be converted to an estimate of the odds, which can in turn be converted to a probability. The probability can be used to determine the expected frequency for each cell of the table. Note that the logistic regression model has two terms and consequently uses two degrees of freedom. Based on these observed and expected frequencies, we can compute the chi-square statistic in the usual way. The information provided by this chi-square test indicates how well the model fits the data.
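The chain just described (log odds → odds → probability → expected frequency) can be sketched as follows. The coefficients here are hypothetical placeholders for illustration, not the fitted turtle model:

```python
import math

# Hypothetical logistic coefficients (placeholders, not the fitted values)
a, b = -61.3, 2.21

def expected_count(temp, n_in_group):
    """Log odds -> odds -> probability -> expected frequency for one cell."""
    log_odds = a + b * temp      # model's predicted log odds at this temperature
    odds = math.exp(log_odds)
    prob = odds / (1 + odds)     # probability of "male"
    return prob * n_in_group     # expected number of males in the group

print(round(expected_count(28.0, 30), 1))
```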

There is another measure, denoted by G2 on the logistic regression output. This G2 value is the deviance, which is another measure of the discrepancy between the observed frequencies and those that would be expected according to the logistic model. G2 is calculated according to the formula

G2 = 2 Σ f_o ln(f_o / f_e),

where f_o and f_e are the observed and expected frequencies. The G2 statistic is calculated from a maximum likelihood equation; Pearson's chi-square, on the other hand, is not a maximum likelihood statistic. Almost always there will be methods that are better than Pearson's chi-square statistic for a given situation; however, in many cases the chi-square statistic works reasonably well even though it may not be best. In some sense, G2 is a better measure of discrepancy than the more generally used chi-square. Both are interpreted the same way: when the model fits well, we expect both the chi-square and the G2 values to be small. Large values of these statistics indicate large discrepancies between the observed and expected frequencies and therefore indicate a poor fit. For the turtle example, the chi-square and G2 values were both large. This would lead us to think that we should continue looking for a better model.

One other issue remains with the output from logistic regression. We are told that the odds ratio equals 9.13. What information does the odds ratio provide in this situation? Recall that we fit a model of the form

ln(π̂ / (1 - π̂)) = a + bx,

where x is a continuous variable representing temperature. We can get the odds for any particular cell of the two-way table by exponentiating:

π̂ / (1 - π̂) = e^(a + bx) = k c^x, where k = e^a and c = e^b.

According to the logistic regression model, the odds grow exponentially as the x-value increases.
If we calculate the ratio of the odds at x + 1 to the odds at x,

(k c^(x+1)) / (k c^x) = c = e^b,

we see that the increase in odds per one-unit increase in the x-variable is e^b = 9.13. This number, which is consistent with the odds ratio reported in the logistic regression computer output, tells us that the odds of a turtle being male are multiplied by 9.13 for each one-degree increase in temperature. Note also that we can use the confidence interval (lower, upper) reported for β to create a confidence interval for the odds ratio by computing e^lower and e^upper.
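The odds ratio and its interval can be sketched directly. Only the odds ratio of 9.13 comes from the reported output; the standard error below is a hypothetical placeholder, so the interval is illustrative only:

```python
import math

# Odds ratio and its confidence interval from the slope on the
# log-odds scale. The standard error is a hypothetical placeholder.
b = math.log(9.13)   # slope implied by the reported odds ratio
se = 0.50            # hypothetical standard error of b
z = 1.96             # critical value for a 95% interval

lower, upper = b - z * se, b + z * se
odds_ratio = math.exp(b)                   # multiplicative change in odds per degree
ci = (math.exp(lower), math.exp(upper))    # exponentiate the endpoints for beta
print(round(odds_ratio, 2), tuple(round(v, 2) for v in ci))
```

Because exponentiation is monotone, the exponentiated endpoints of the interval for β are a valid interval for the odds ratio, though it is no longer symmetric around the point estimate.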

Sensitivity and Specificity

The words "sensitivity" and "specificity" have their origins in screening tests for diseases. When a single test is performed, the person may in fact have the disease or may be disease free. The test result may be positive, indicating the presence of disease, or negative, indicating the absence of the disease. The table below displays test results in the columns and the true status of the person being tested in the rows.

                            Test Result (T)
True Status of Nature (S)   Positive (+)    Negative (-)
Disease (+)                 a               b
No Disease (-)              c               d

Though these tests are generally quite accurate, they still make errors that we need to account for.

Sensitivity: We define sensitivity as the probability that the test says a person has the disease when in fact they do have the disease. This is P(T+ | S+) = a / (a + b). Sensitivity is a measure of how likely it is for a test to pick up the presence of a disease in a person who has it.

Specificity: We define specificity as the probability that the test says a person does not have the disease when in fact they are disease free. This is P(T- | S-) = d / (c + d).

Ideally, a test should have high sensitivity and high specificity. Sometimes there are trade-offs: we can make a test have very high sensitivity, for example, but this sometimes results in low specificity. Generally we are able to keep both sensitivity and specificity high in screening tests, but we still get false positives and false negatives.

False Positive: A false positive occurs when the test reports a positive result for a person who is disease free. The false positive rate is given by P(S- | T+) = c / (a + c). Ideally we would like the value of c to be zero; however, this is generally impossible to achieve in a screening test involving a large population.

False Negative: A false negative occurs when the test reports a negative result for a person who actually has the disease.
The false negative rate is given by P(S+ | T-) = b / (b + d).
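These four definitions translate directly into code. The counts below are made-up illustrative values (10,000 people screened, 1% prevalence), and the fixed sensitivity and specificity in the prevalence sweep are likewise assumptions for illustration:

```python
# Rates from a 2x2 screening table. The counts a, b, c, d are
# made-up illustrative values (10,000 people, 1% prevalence).
a, b = 95, 5       # disease present: test positive, test negative
c, d = 90, 9810    # disease absent:  test positive, test negative

sensitivity = a / (a + b)            # P(T+ | S+)
specificity = d / (c + d)            # P(T- | S-)
false_positive_rate = c / (a + c)    # P(S- | T+), among positive results
false_negative_rate = b / (b + d)    # P(S+ | T-), among negative results

# The false positive rate also depends on prevalence: hold sensitivity
# and specificity fixed (0.95 each, an assumed value) and vary prevalence.
def fp_rate(prevalence, sens=0.95, spec=0.95, n=100_000):
    diseased = n * prevalence
    true_pos = sens * diseased                # diseased people testing positive
    false_pos = (1 - spec) * (n - diseased)   # healthy people testing positive
    return false_pos / (true_pos + false_pos)

for p in (0.5, 0.05, 0.005):
    print(p, round(fp_rate(p), 3))
```

In the fixed table, sensitivity is 0.95 and specificity about 0.99, yet nearly half of the positive results are false positives because so few of the people screened have the disease; the sweep shows the false positive rate climbing from 0.05 at 50% prevalence to above 0.9 at 0.5% prevalence, even though the test itself never changes.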

Which false result is the more serious depends on the situation, but we generally worry more about false positives in screening tests: we don't want to tell someone that they have a serious disease when they do not really have it. The following example is taken from Statistical Methods for the Social Sciences, 3rd edition (Alan Agresti and Barbara Finlay, Prentice Hall, page 287). The data are the results of a screening test for HIV that was performed on a group of 100,000 people, of whom 500 actually had HIV. Note that the prevalence rate of HIV in this group was very low.

                Test Result
HIV Status      Positive (+)    Negative (-)    Total
Positive (+)    475             25              500
Negative (-)    ...             ...             99,500

Based on the results above, the sensitivity of this test was 475/500 = 0.95, and the specificity was also high; the test appears to be pretty good. What are the false positive and false negative rates for this test? The false negative rate is very small, but the false positive rate is large. Even when a test has high sensitivity and specificity, you are still going to get a high false positive rate if you are screening a large population where the prevalence of the disease is low. The false positive rate is not just a function of sensitivity and specificity; it is also a function of the prevalence rate of the disease in the population you are testing. Thus there is danger in indiscriminately applying a screening test to a large population where the prevalence rate of the disease is very low.

In a significance test, sensitivity and specificity relate to correct decisions, while false positives and false negatives correspond to Type I and Type II errors. Consider the columns in the table below. A positive test result corresponds to a statistically significant test result and thus leads to rejection of the null hypothesis. A negative test result corresponds to a test result that is not statistically significant; thus we decide not to reject H0. Now consider the rows in this table.
If the true status of nature is positive (that is, if something is going on and what we are testing for is really true), then the alternative hypothesis is true. If what we are testing for is not true (and nothing is going on), then the null hypothesis is correct.

                            Test Result (T)
True Status of Nature (S)   Positive (+)     Negative (-)
Ha true (+)                 a                b (Type II)
H0 true (-)                 c (Type I)       d

Note that the upper left entry in the table corresponds to the correct decision to reject the null hypothesis when the alternative is really true. A high value of a results in a high probability of detecting that a null hypothesis is false and corresponds to a test that is very sensitive. The lower right entry corresponds to the correct decision not to reject the null hypothesis when it should not be rejected. A high value of d results in a high probability of not rejecting a "true" null hypothesis and corresponds to a test with high specificity. The lower left entry corresponds to a test result that causes us to reject the null hypothesis when it is really "true"; this entry reflects Type I error and corresponds to a false positive. The upper right entry corresponds to a test result that causes us not to reject the null hypothesis when it is false; this entry reflects Type II error and corresponds to a false negative. Note that a small value of c is consistent with a low probability of Type I error, and a low value of b is consistent with a low probability of Type II error.
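The correspondence between the table cells and the two error types can be illustrated by simulation. The test (a one-sided one-sample z-test), effect size, sample size, and replication count below are arbitrary choices, not from the text:

```python
import math
import random
import statistics

# Filling in the a, b, c, d cells by simulation for a one-sided
# one-sample z-test at alpha = 0.05. All settings are illustrative.
random.seed(1)
n, reps, crit = 25, 2000, 1.645

def rejects(mu):
    """One simulated test: reject H0 (mu = 0) when z exceeds the critical value."""
    xbar = statistics.mean(random.gauss(mu, 1) for _ in range(n))
    return xbar * math.sqrt(n) > crit

a = sum(rejects(0.8) for _ in range(reps))  # Ha true, rejected: correct (high sensitivity/power)
b = reps - a                                # Ha true, not rejected: Type II error
c = sum(rejects(0.0) for _ in range(reps))  # H0 true, rejected: Type I error (false positive)
d = reps - c                                # H0 true, not rejected: correct (high specificity)
print(a / reps, c / reps)
```

With this setup, a/reps estimates the power (sensitivity) of the test, and c/reps estimates the Type I error rate, which should land near the nominal 0.05.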
