Confounding and Collinearity in Multivariate Logistic Regression
We have already seen confounding and collinearity in the context of linear regression, and all definitions and issues remain essentially unchanged in logistic regression. Recall the definition of confounding:

Confounding: A third variable (neither the independent nor the dependent variable of interest) that distorts the observed relationship between the exposure and the outcome. Confounding complicates analyses owing to the presence of a third factor that is associated with both the putative risk factor and the outcome.

Criteria for a confounding factor:

1. A confounder must be a risk factor (or protective factor) for the outcome of interest.
2. A confounder must be associated with the main independent variable of interest.
3. A confounder must not be an intermediate step in the causal pathway between the exposure and the outcome.

All of the above remains true when investigating confounding in logistic regression models. In linear regression, one way we identified confounders was to compare the results from two regression models, with and without a suspected confounder, and see how much the coefficient for the main variable of interest changes. The same principle can be used to identify confounders in logistic regression. A possible exception occurs when the range of fitted probabilities is very wide (so that the model operates on the S-shaped portion of the logistic curve rather than on a nearly linear portion), in which case more care can be required (beyond the scope of this course).
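As a minimal sketch of this model-comparison check (the data frame dat and the variables y, x, and z here are hypothetical), fit the model with and without the suspected confounder and compare the coefficient of the exposure; an informal rule of thumb flags a change of more than roughly 10% as possible confounding:

> fit.crude <- glm(y ~ x, data = dat, family = binomial)
> fit.adj <- glm(y ~ x + z, data = dat, family = binomial)
# Compare the log odds ratio for x with and without the suspected confounder z
> coef(fit.crude)["x"]
> coef(fit.adj)["x"]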
As in linear regression, collinearity is an extreme form of confounding, in which variables become non-identifiable. Let's look at some examples.

Simple example of collinearity in logistic regression

Suppose we are looking at a dichotomous outcome, say cured = 1 or not cured = 0, from a certain clinical trial of Drug A versus Drug B. Suppose that, by extreme bad luck, all subjects randomized to Drug A were female, and all subjects randomized to Drug B were male. Suppose further that both drugs are equally effective in males and females, and that Drug A has a cure rate of 30%, while Drug B has a cure rate of 50%. We can simulate a data set that follows this scenario in R as follows:

# Suppose the sample size of the trial is 600, with 300 on each medication
> drug <- as.factor(c(rep("a", 300), rep("b", 300)))
# Ensure that we have collinearity of sex and medication
> sex <- as.factor(c(rep("f", 300), rep("m", 300)))
# Generate cure rates of 30% and then 50%
> cure <- c(rbinom(300, 1, 0.3), rbinom(300, 1, 0.5))
# Place the variables into a data frame, check descriptive statistics
> cure.dat <- data.frame(cure=cure, sex=sex, drug=drug)
> summary(cure.dat)
      cure       sex     drug
 Min.   :0.00   f:300   a:300
 1st Qu.:0.00   m:300   b:300
 Median :0.00
 Mean   :0.42
 3rd Qu.:1.00
 Max.   :1.00
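Before fitting any model, a simple cross-tabulation already reveals the problem; this check is worth running whenever randomization may have broken down. By construction, the table here is:

> table(cure.dat$drug, cure.dat$sex)
      f   m
  a 300   0
  b   0 300

Each drug occurs in only one sex, so drug and sex carry exactly the same information and cannot both be estimated.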
# Run a logistic regression model for cure with both variables in the model
> output <- glm(cure ~ drug + sex, family = binomial)
# Use the usual summary, even when there is collinearity
> summary(output)

Call:
glm(formula = cure ~ drug + sex, family = binomial)

Coefficients: (1 not defined because of singularities)
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  -0.8954     0.1273  -7.035  2.0e-12 ***
drugb         1.0961     0.1722   6.365  2.0e-10 ***
sexm              NA         NA      NA       NA
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 816.35 on 599 degrees of freedom
Residual deviance: 774.17 on 598 degrees of freedom
AIC: 778.17

Number of Fisher Scoring iterations: 4

Notice that R has automatically eliminated the sex variable. The estimated OR for Drug B compared to Drug A is exp(1.0961) = 2.99, reasonably close to the true value of OR = (.5/(1-.5))/(.3/(1-.3)) = 2.33 (the difference is just sampling variability), and the 95% CI is (exp(1.0961 - 1.96*0.1722), exp(1.0961 + 1.96*0.1722)) = (2.13, 4.19). In fact, the estimate exactly matches the observed OR from the table of the data we simulated:

> table(cure.dat$cure, cure.dat$drug)
      a   b
  0 213 135
  1  87 165
> 213*165/(87*135)
[1] 2.992337
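The OR and its CI can also be extracted without hand calculation. As a sketch, first refit without the aliased sex variable (so that the confidence-interval code is not tripped up by the NA coefficient), then exponentiate:

> output2 <- glm(cure ~ drug, family = binomial)
> exp(coef(output2))    # the drugb entry is the OR for Drug B vs. Drug A
> exp(confint(output2)) # 95% CI; profile-based, so it may differ slightly from the Wald CI above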
# Why was sex eliminated, rather than drug?
# It depends on the order in which the variables enter the glm statement.
# Check the other order:
> output <- glm(cure ~ sex + drug, family = binomial)
> summary(output)

Coefficients: (1 not defined because of singularities)
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  -0.8954     0.1273  -7.035  2.0e-12 ***
sexm          1.0961     0.1722   6.365  2.0e-10 ***
drugb             NA         NA      NA       NA

# Exactly the same numerical result, but for sex rather than drug.

Second example of collinearity in logistic regression

A more subtle example can occur when two variables combine to be collinear with a third variable. Collinearity can also occur among continuous variables, so let's see an example there:

# Create any first independent variable (round to one decimal place)
> x1 <- round(rnorm(400, mean=0, sd=1), 1)
# Create any second independent variable (round to one decimal place)
> x2 <- round(rnorm(400, mean=4, sd=2), 1)
# Now create a third independent variable that is a direct function
# of the first two variables
> x3 <- 3*x1 + 2*x2
# Create a binary outcome variable that depends on all three variables
# Note that the probability for the binomial is an inverse-logit function
> y <- rbinom(400, 1, exp(x1 + 2*x2 - 3*x3)/(1 + exp(x1 + 2*x2 - 3*x3)))
# Put all the variables into a data frame
> collinear.dat <- data.frame(x1=x1, x2=x2, x3=x3, y=y)
# Looked at pairwise, the perfect collinearity is not obvious
> pairs(collinear.dat)
One can see high correlations, but one cannot tell that there is perfect collinearity. Let's see what happens if we run an analysis:

> output <- glm(y ~ x1 + x2 + x3, data = collinear.dat, family = binomial)
Warning message:
fitted probabilities numerically 0 or 1 occurred in: glm.fit(x = X, y = Y, weights = weights, start = start, etastart = etastart,
# Note the singularity message in the summary below: R has detected
# the collinearity and eliminated one variable
> summary(output)

Call:
glm(formula = y ~ x1 + x2 + x3, family = binomial, data = collinear.dat)
Coefficients: (1 not defined because of singularities)
            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...      ...
x1               ...        ...     ...  ...e-05 ***
x2               ...        ...     ...  ...e-05 ***
x3                NA         NA      NA       NA
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: ... on 399 degrees of freedom
Residual deviance: ... on 397 degrees of freedom
AIC: ...

Number of Fisher Scoring iterations: 11

# x3 has been eliminated; the other variables are reasonably estimated.
# If we want the CIs automatically, rerun the model without x3 and use
# the logistic.regression.or.ci function, or, more simply, just use the
# built-in confint function in R
> output <- glm(y ~ x1 + x2, data = collinear.dat, family = binomial)
Warning message:
fitted probabilities numerically 0 or 1 occurred in: glm.fit(x = X, y = Y, weights = weights, start = start, etastart = etastart,
# This warning message refers to the very strong effects, not to collinearity

$regression.table
Call:
glm(formula = y ~ x1 + x2, family = binomial, data = collinear.dat)

            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...      ...
x1               ...        ...     ...  ...e-05 ***
x2               ...        ...     ...  ...e-05 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for binomial family taken to be 1)

    Null deviance: ... on 399 degrees of freedom
Residual deviance: ... on 397 degrees of freedom
AIC: ...

Number of Fisher Scoring iterations: 11

$intercept.ci
[1] ...

$slopes.ci
     [,1]  [,2]
x1    ...   ...
x2    ...   ...

# Two very strong effects, which is not surprising given how the data
# set was constructed.
# Check a few fitted values
> output$fitted[1:6]
...

# Note how close some fitted values are to zero (on the order of 1e-16),
# while others are much higher.

In real practice, most collinearity problems happen when several categorical variables line up to perfectly predict another variable.
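When perfect collinearity is suspected, it can be confirmed directly rather than guessed at from plots. A minimal sketch on this data set: check the rank of the design matrix, and ask alias() to display the exact linear dependency:

> X <- model.matrix(~ x1 + x2 + x3, data = collinear.dat)
> qr(X)$rank # returns 3 although X has 4 columns, so one column is redundant
> alias(lm(y ~ x1 + x2 + x3, data = collinear.dat)) # shows x3 = 3*x1 + 2*x2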
Example of confounding in logistic regression

Let's consider a similar example again, but with x3 not quite perfectly derived from the first two variables:

# Create any first independent variable (round to one decimal place)
> x1 <- round(rnorm(400, mean=0, sd=1), 1)
# Create any second independent variable (round to one decimal place)
> x2 <- round(rnorm(400, mean=4, sd=2), 1)
# Now create a third independent variable that is related to, but not
# a direct function of, the first two variables, because of the added
# error term
> x3 <- round(3*x1 + 2*x2 + rnorm(400, mean=0, sd=5), 1)
# Create a binary outcome variable that depends on all three variables
# Note that the probability for the binomial is an inverse-logit function
# We will use smaller, more realistic effects this time.
# Note that a coefficient of 0.2 corresponds to an OR of
# exp(0.2) = 1.22 per one-unit change
> y <- rbinom(400, 1, exp(0.2*x1 + 0.3*x2 - 0.3*x3)/(1 + exp(0.2*x1 + 0.3*x2 - 0.3*x3)))
# Put all the variables into a data frame
> confounding.dat <- data.frame(x1=x1, x2=x2, x3=x3, y=y)
# Looked at pairwise, the very strong confounding is not obvious,
# because it arises from three variables working together
> pairs(confounding.dat)
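As an aside, base R's plogis() is exactly this inverse-logit function, so the success probability can be written without typing the linear predictor twice; the duplicated exp(...)/(1 + exp(...)) expression is precisely where a mismatch can creep in. An equivalent call:

> y <- rbinom(400, 1, plogis(0.2*x1 + 0.3*x2 - 0.3*x3))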
Looking at the graphics, note the smaller effects this time. Now we analyze the data, comparing univariate to multivariate model outputs.

# First, univariate logistic regressions for each of the three variables
> output <- glm(y ~ x1, data = confounding.dat, family = binomial)

$regression.table
Call:
glm(formula = y ~ x1, family = binomial, data = confounding.dat)

            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...  < 2e-16 ***
x1               ...        ...     ...      ...  **
> output <- glm(y ~ x2, data = confounding.dat, family = binomial)

$regression.table
            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...      ...  ***
x2               ...        ...     ...      ...

> output <- glm(y ~ x3, data = confounding.dat, family = binomial)

$regression.table
            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...  ...e-07 ***
x3               ...        ...     ...      ...  ***

# Now let's run a logistic regression with all three variables included:
> output <- glm(y ~ x1 + x2 + x3, data = confounding.dat, family = binomial)

$regression.table
            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...      ...  ***
x1               ...        ...     ...      ...
x2               ...        ...     ...      ...
x3               ...        ...     ...      ...  *

$slopes.ci
     [,1]  [,2]
x1    ...   ...
x2    ...   ...
x3    ...   ...

To investigate the above results for confounding, let's form a comparative table:

            Multivariate            Univariate
Variable    OR      CI              OR      CI
x1          0.84    (0.63, 1.11)    0.71    (0.56, 0.89)
x2          1.04    (0.88, 1.23)    0.93    (0.82, 1.04)
x3          0.95    (0.90, 1.00)    0.95    (0.91, 0.98)

Note how drastically different the results are, especially for x1. All CIs cross 1 in the multivariate model, whereas in the univariate models only the CI for x2 crosses 1, and the CI widths are smaller in the univariate models. The ORs also change by large amounts.
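A table like this can be assembled directly from the fitted models. A minimal sketch using Wald intervals (confint.default; the course's logistic.regression.or.ci function or the profile-based confint would also work):

> multi <- glm(y ~ x1 + x2 + x3, data = confounding.dat, family = binomial)
> round(exp(cbind(OR = coef(multi), confint.default(multi)))[-1, ], 2)
# repeat exp(cbind(OR = coef(fit), confint.default(fit))) for each univariate fit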
As x2 may not be contributing much, we can also run a model with just x1 and x3:

> output <- glm(y ~ x1 + x3, data = confounding.dat, family = binomial)

$regression.table
            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...  ...e-07 ***
x1               ...        ...     ...      ...
x3               ...        ...     ...      ...  *

$slopes.ci
     [,1]  [,2]
x1    ...   ...
x3    ...   ...

There is not much change from the model with all three variables. We will soon see how to run all of the interesting models with a single command, using the bic.glm model selection function. This will allow us to investigate confounding and model selection at the same time.

Real example of confounding in logistic regression

Low birth weight is a concern because infant mortality rates and birth defect rates are very high for low birth weight babies. A woman's behavior during pregnancy (including diet, smoking habits, and receiving prenatal care) can greatly alter the chances of carrying the baby to term and, consequently, of delivering a baby of normal birth weight. The following data were collected:
Variable                                                                  Coding
Low birth weight (0 = birth weight >= 2500 g, 1 = birth weight < 2500 g)  low
Age of the mother in years                                                age
Weight in pounds at the last menstrual period                             lwt
Race (1 = white, 2 = black, 3 = other)                                    race
Smoking status during pregnancy (1 = yes, 0 = no)                         smoke
History of premature labor (0 = none, 1 = one, etc.)                      ptl
History of hypertension (1 = yes, 0 = no)                                 ht
Presence of uterine irritability (1 = yes, 0 = no)                        ui
Number of physician visits during the first trimester (0, 1, 2, etc.)     ftv
Birth weight in grams                                                     bwt

We might suspect some confounding: for example, smoking may be related to weight and hypertension, and so on. We will follow all of our usual steps in analyzing these data. Recall that the steps are:

1. Look at various descriptive statistics to get a feel for the data. For logistic regression, this usually includes looking at descriptive statistics within the outcome = yes = 1 versus outcome = no = 0 groups.

2. The above by-outcome-group descriptive statistics are often sufficient for discrete covariates, but you may want to prepare some graphics for continuous variables.

3. For all continuous variables being considered, calculate a correlation matrix of each variable against each other variable. This allows one to begin to investigate possible confounding and collinearity.

4. Similarly, for each categorical/continuous independent variable pair, look at the values of the continuous variable within each category of the other variable.

5. Finally, create tables for all categorical/categorical independent variable pairs.
6. Perform a simple logistic regression for each independent variable. This begins to investigate confounding (we will see this in more detail next class), as well as providing an initial, unadjusted view of the importance of each variable by itself.

7. Think about any interaction terms that you may want to try in the model.

8. Perform some sort of model selection technique, or, often much better, think about avoiding any strict model selection by finding a set of models that all seem to have something to contribute to the overall conclusions.

9. Based on all the work done, draw some inferences and conclusions. Carefully interpret each estimated parameter, perform model criticism, and possibly repeat some of the above steps (for example, run further models) as needed.

10. Other inferences, such as predictions for future observations, and so on.

# Read in the data set, save as a data frame
> lbw.dat <- read.table(file="g:\\lbw.txt", header=T)
# Convert the factor variables
> lbw.dat$smoke <- as.factor(lbw.dat$smoke)
> lbw.dat$race <- as.factor(lbw.dat$race)
> lbw.dat$ptl <- as.factor(lbw.dat$ptl)
> lbw.dat$ht <- as.factor(lbw.dat$ht)
> lbw.dat$ui <- as.factor(lbw.dat$ui)
# Summarize
> summary(lbw.dat)
      low              age             lwt        race    smoke   ptl
 Min.   :0.0000   Min.   :14.00   Min.   : 80.0   1:96    0:115   0:159
 1st Qu.:0.0000   1st Qu.:19.00   1st Qu.:110.0   2:26    1: 74   1: 24
 Median :0.0000   Median :23.00   Median :121.0   3:67            2:  5
 Mean   :0.3122   Mean   :23.24   Mean   :129.8                   3:  1
 3rd Qu.:1.0000   3rd Qu.:26.00   3rd Qu.:140.0
 Max.   :1.0000   Max.   :45.00   Max.   :250.0
 ht      ui          ftv             bwt
 0:177   0:161   Min.   :0.0000   Min.   : 709
 1: 12   1: 28   1st Qu.:0.0000   1st Qu.:2414
                 Median :0.0000   Median :2977
                 Mean   :0.7937   Mean   :2945
                 3rd Qu.:1.0000   3rd Qu.:3475
                 Max.   :6.0000   Max.   :4990

# Examining the categorical variables, ptl has few cases at 2 or 3, so
# combine these categories with category 1.
> for (i in 1:length(lbw.dat$ptl)) {
    if (lbw.dat$ptl[i] == 2 | lbw.dat$ptl[i] == 3) lbw.dat$ptl[i] <- 1 }
> summary(lbw.dat$ptl)
  0   1   2   3
159  30   0   0

# Look at some correlations for the continuous variables
> pairs(list(age=lbw.dat$age, lwt=lbw.dat$lwt, ftv=lbw.dat$ftv, bwt=lbw.dat$bwt))
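As a numeric companion to the pairs() plot, the correlation matrix itself can be computed for the continuous variables (a sketch; these columns were left numeric above):

> round(cor(lbw.dat[, c("age", "lwt", "ftv", "bwt")]), 2)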
There are some correlations to keep in mind (e.g., age with lwt), although nothing too extreme. Although included up to this point, the outcome variable low is in fact just a dichotomized version of the bwt variable, so the latter is omitted for the rest of these analyses. One should also check some tables, and the values of the continuous variables against the categorical variables; I leave this as an exercise. [And, since we will soon see another way to check for confounding, this is not always needed.]

# Run the univariate regressions
> output <- glm(low ~ age, data = lbw.dat, family = binomial)

            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...      ...
age              ...        ...     ...      ...
[1] ...

> output <- glm(low ~ lwt, data = lbw.dat, family = binomial)

            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...      ...
lwt              ...        ...     ...      ...  *
[1] ...

> output <- glm(low ~ race, data = lbw.dat, family = binomial)

            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...  ...e-06 ***
race2            ...        ...     ...      ...
race3            ...        ...     ...      ...

$slopes.ci
        [,1]  [,2]
race2    ...   ...
race3    ...   ...

> output <- glm(low ~ smoke, data = lbw.dat, family = binomial)
            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...  ...e-07 ***
smoke1           ...        ...     ...      ...  *
[1] ...

> output <- glm(low ~ ptl, data = lbw.dat, family = binomial)

            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...  ...e-09 ***
ptl1             ...        ...     ...      ...  ***
[1] ...

> output <- glm(low ~ ht, data = lbw.dat, family = binomial)

            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...  ...e-07 ***
ht1              ...        ...     ...      ...  *
[1] ...

> output <- glm(low ~ ui, data = lbw.dat, family = binomial)
            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...  ...e-08 ***
ui1              ...        ...     ...      ...  *
[1] ...

> output <- glm(low ~ ftv, data = lbw.dat, family = binomial)

            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...      ...  ***
ftv              ...        ...     ...      ...
[1] ...

# Run the multivariate model
> output <- glm(low ~ age + lwt + race + smoke + ptl + ht + ui + ftv,
                data = lbw.dat, family = binomial)

$regression.table
Call:
glm(formula = low ~ age + lwt + race + smoke + ptl + ht + ui +
    ftv, family = binomial, data = lbw.dat)
Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)      ...        ...     ...      ...
age              ...        ...     ...      ...
lwt              ...        ...     ...      ...  *
race2            ...        ...     ...      ...  *
race3            ...        ...     ...      ...
smoke1           ...        ...     ...      ...  *
ptl1             ...        ...     ...      ...  **
ht1              ...        ...     ...      ...  **
ui1              ...        ...     ...      ...
ftv              ...        ...     ...      ...
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: ... on 188 degrees of freedom
Residual deviance: ... on 179 degrees of freedom
AIC: ...

Number of Fisher Scoring iterations: 4

$intercept.ci
[1] ...

$slopes.ci
         [,1]  [,2]
age       ...   ...
lwt       ...   ...
race2     ...   ...
race3     ...   ...
smoke1    ...   ...
ptl1      ...   ...
ht1       ...   ...
ui1       ...   ...
ftv       ...   ...
Compare the univariate to the multivariate results:

            Multivariate             Univariate
Variable    OR      CI               OR      CI
age         0.96    (0.89, 1.04)     0.95    (0.89, 1.01)
lwt         0.99    (0.97, 1.00)     0.99    (0.97, 1.00)
race2       3.38    (1.19, 9.62)     2.33    (0.93, 5.77)
race3       2.27    (0.94, 5.49)     1.89    (0.96, 3.74)
smoke1      2.36    (1.06, 5.27)     2.02    (1.08, 3.78)
ptl1        3.38    (1.36, 8.38)     4.32    (1.92, 9.73)
ht1         6.42    (1.60, 25.75)    3.37    (1.02, 11.09)
ui1         2.06    (0.83, 5.09)     2.58    (1.14, 5.83)
ftv         1.05    (0.75, 1.48)     0.87    (0.64, 1.19)

There may be some confounding involving race, ht1, ftv, and so on. We could investigate this further here, but we will instead revisit this example after covering model selection and the bic.glm program, which makes such investigations much easier.
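As a preview, a sketch of what that call will look like (bic.glm is in the BMA package; the argument names below are from that package, and the output summarizes posterior support across many candidate models):

> library(BMA)
> lbw.bic <- bic.glm(low ~ age + lwt + race + smoke + ptl + ht + ui + ftv,
                     data = lbw.dat, glm.family = "binomial")
> summary(lbw.bic)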