# Random and Mixed Effects Models (Ch. 10)



Random effects models are very useful when the observations are sampled in a highly structured way. The basic idea is that the error associated with any linear or nonlinear model,

    E(y_i) = x_i^T β    or    E(y_i) = η(x_i, β),

has more structure than simply N(0, σ^2). Sometimes, for example, the y's are obtained by two-stage sampling: a batch of chemical is sampled from a production run, and then several smaller samples are taken from each batch. Or a group of schools is chosen at random, and then classes are sampled within each school. Or a sample of patients is followed over time, so that successive measurements on an individual patient might be expected to be correlated. This last case is sometimes called repeated measures modelling; in social science examples such as the schools example it is sometimes called multi-level modelling. I will follow the discussion in the text fairly closely, filling in where I think it is helpful. This handout only considers linear models (§10.1).

The gasoline data in library MASS (data(petrol)) is an example of the first type. There were 10 batches of crude oil (called samples in the book), and several measurements were made on each batch. The variables are:

| variable | role      | description                                           |
|----------|-----------|-------------------------------------------------------|
| Y        | response  | yield of the refined product as a percentage of crude |
| SG       | covariate | specific gravity                                      |
| VP       | covariate | vapour pressure                                       |
| V10      | covariate | ASTM 10% point                                        |
| EP       | covariate | ASTM end point in degrees F                           |

There is another variable in the data frame, No, which records the batch. It is a factor with 10 levels, "A" through "J". The first three covariates were measured on the batch, and then within each batch there were several (between 2 and 4) measurements taken of EP and Y.

My first steps were to try to get a sense of the data from various plots and summaries.

```r
> library(MASS)
> data(petrol)
> dim(petrol)
[1] 32 6
> petrol
```
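The two-stage structure described above can be checked directly from the data frame; the few lines below are my own quick check, not part of the handout:

```r
library(MASS)
data(petrol)

# Observations per batch: each of the 10 batches contributes
# between 2 and 4 rows.
counts <- table(petrol$No)
counts

# SG, VP and V10 were measured once per batch, so they are constant
# within each level of No; EP (and Y) vary within batch.
n.distinct <- sapply(petrol[, c("SG", "VP", "V10", "EP")],
                     function(v) max(tapply(v, petrol$No,
                                            function(x) length(unique(x)))))
n.distinct
```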

```r
> tapply(petrol$Y, petrol$No, mean)
> petrol.mean <- cbind(tapply(petrol$Y, petrol$No, mean),
+                      tapply(petrol$SG, petrol$No, mean),
+                      tapply(petrol$VP, petrol$No, mean),
+                      tapply(petrol$V10, petrol$No, mean),
+                      tapply(petrol$EP, petrol$No, mean))
> petrol.mean <- data.frame(petrol.mean)
> names(petrol.mean) <- c("Y", "SG", "VP", "V10", "EP")
> petrol.mean
```

Then I tried a few elementary plots, and the code given in the book (p.272) for Figure 10.1:

```r
> plot(petrol$No, petrol$Y, main = "Petrol Boxplot", xlab = "No")
> plot(petrol$No, petrol$EP, main = "Petrol Boxplot", xlab = "No")
```
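An equivalent, more compact way to get the batch means is a single aggregate() call; this alternative is my own sketch, not from the handout:

```r
library(MASS)
data(petrol)

# One aggregate() call replaces the five tapply() calls above:
# the mean of each variable is computed within each batch No.
batch.means <- aggregate(cbind(Y, SG, VP, V10, EP) ~ No,
                         data = petrol, FUN = mean)
batch.means
```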

```r
> library(lattice)
> xyplot(Y ~ EP | No, data = petrol,
+        xlab = "ASTM end point (deg. F)",
+        ylab = "yield as a percent of crude",
+        panel = function(x, y) {
+            panel.grid()
+            m <- sort.list(x)
+            panel.xyplot(x[m], y[m], type = "b", cex = 0.5)
+        })
```

The left boxplot is yield and the right boxplot is EP. The lattice plot shows the regression of Y on EP in each group. (See Figure 10.1 for a clearer picture.)

The first regression model fit in the text is separate linear regressions for each of the groups. A new data frame was created that replaced each covariate by (x_i − x̄). This just changes the estimates of the intercepts; the claim is that these are then easier to interpret.

```r
> ##
> ## Fit separate regressions for each of the 10 groups
> ## (replace each covariate by covariate - mean(covariate))
> ##
> Petrol <- petrol
> Petrol[, 2:5] <- scale(petrol[, 2:5], scale = FALSE)
> pet1.lm <- lm(Y ~ No/EP - 1, data = Petrol)
> coef(pet1.lm)
> matrix(round(coef(pet1.lm), 2), 2, 10, byrow = TRUE,
+        dimnames = list(c("b0", "b1"), levels(petrol$No)))
```

The model formula No/EP, or in general a/b, is explained on p.150 near the bottom. It is shorthand for a + a:b, which means a separate model 1 + b for each level of a. (Although it can be used if a is not a factor, this doesn't usually make sense.) The mathematical model is

    y_ij = β_0i + β_1i EP_ij + ε_ij    (1)

where j = 1, ..., n_i; i = 1, ..., 10, and we assume the ε_ij are independent with mean 0 and variance σ^2. (We haven't used any random effects yet.) Since the slopes are all fairly similar, but the intercepts are very different, the second model tried is

    y_ij = β_0i + β EP_ij + ε_ij.    (2)

```r
> pet2.lm <- lm(Y ~ No - 1 + EP, data = Petrol)
> summary(pet2.lm)
```
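To see the a/b shorthand in action, the nesting formula can be expanded by hand and the two fits compared; this check is my addition:

```r
library(MASS)
data(petrol)

# Centre the covariates as in the handout.
Petrol <- petrol
Petrol[, 2:5] <- scale(petrol[, 2:5], scale = FALSE)

# No/EP is shorthand for No + No:EP: one intercept and one EP slope
# per batch, 20 coefficients in all.
fit.slash  <- lm(Y ~ No/EP - 1,      data = Petrol)
fit.expand <- lm(Y ~ No + No:EP - 1, data = Petrol)

length(coef(fit.slash))                           # 20 coefficients
all.equal(fitted(fit.slash), fitted(fit.expand))  # same fitted values
```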

In the summary of pet2.lm, every batch intercept and the common EP slope are highly significant (EP has p < 2e-16), the residual standard error has 21 degrees of freedom, and the overall F statistic has 11 and 21 degrees of freedom with p-value < 2.2e-16.

```r
> anova(pet2.lm, pet1.lm)
```

The last command compares the full model (1) with the reduced model (2). In (1) there are 10 intercepts and 10 slopes estimated, leaving 32 − 20 = 12 residual degrees of freedom. In (2) there are 10 intercepts and 1 slope estimated, leaving 21 residual degrees of freedom. The improvement in residual sum of squares from fitting 10 different slopes is not enough to justify all these extra parameters.

The next step is to see if there is any structure in the intercepts. We haven't yet used the covariates SG, VP and V10. So we now try the model

    y_ij = µ + β_1 SG_i + β_2 VP_i + β_3 V10_i + β_4 EP_ij + ε_ij    (3)
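The F test that anova performs can be reproduced by hand from the two residual sums of squares and their degrees of freedom; the arithmetic below is my own check:

```r
library(MASS)
data(petrol)

Petrol <- petrol
Petrol[, 2:5] <- scale(petrol[, 2:5], scale = FALSE)

pet1.lm <- lm(Y ~ No/EP - 1,   data = Petrol)  # 20 parameters, 12 residual df
pet2.lm <- lm(Y ~ No - 1 + EP, data = Petrol)  # 11 parameters, 21 residual df

rss.full    <- sum(resid(pet1.lm)^2)
rss.reduced <- sum(resid(pet2.lm)^2)

# F = [(RSS_reduced - RSS_full) / (21 - 12)] / [RSS_full / 12]
Fstat <- ((rss.reduced - rss.full) / 9) / (rss.full / 12)
Fstat
anova(pet2.lm, pet1.lm)$F[2]   # agrees with the hand computation
```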

where we have replaced β_0i in (2) with a linear structure. (Still no random effects!) This model has 5 parameters, leaving 27 residual degrees of freedom. The details are at the bottom of p.273 and the top of p.274. This model doesn't seem to be as good as model (2), so we still haven't really explained the variation in the intercepts (β_0i). So now we try a different explanation: we model the intercepts as random variables with a common mean and variance σ_1^2, say:

    y_ij = µ + ζ_i + β_1 SG_i + β_2 VP_i + β_3 V10_i + β_4 EP_ij + ε_ij    (4)

where we assume ζ_i ~ N(0, σ_1^2) independently of the ε_ij. The model means that each group has a random intercept. The variance component σ_1^2 measures the variation due to the choice of batch of crude oil, whereas σ^2 measures the variation in the process to measure the yield.

Linear mixed models are fit using lme, in the library nlme. Model (4) is called a mixed effects model, as it has some random effects (the intercept) and some fixed effects (everything else).

```r
> library(nlme)
> ?lme
> pet3.lme <- lme(fixed = Y ~ SG + VP + V10 + EP,
+                 random = ~ 1 | No, data = Petrol)
> summary(pet3.lme)
```

The summary reports the REML fit: the standard deviations of the random intercept and of the residual, the fixed-effect estimates with their standard errors, degrees of freedom, t-values and p-values, and the correlations among the fixed-effect estimates.
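The two variance components can be extracted from the fitted object with VarCorr(); the lines below are my own illustration of how the estimates of σ_1 and σ appear in the output:

```r
library(MASS)
library(nlme)
data(petrol)

Petrol <- petrol
Petrol[, 2:5] <- scale(petrol[, 2:5], scale = FALSE)

pet3.lme <- lme(fixed = Y ~ SG + VP + V10 + EP,
                random = ~ 1 | No, data = Petrol)

# VarCorr() gives the estimated variances and standard deviations:
# the "(Intercept)" row is the batch component (sigma_1), the
# "Residual" row the within-batch component (sigma).
vc <- VarCorr(pet3.lme)
vc
sigma1.hat <- as.numeric(vc["(Intercept)", "StdDev"])
sigma.hat  <- pet3.lme$sigma
c(sigma1 = sigma1.hat, sigma = sigma.hat)
```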

The summary also reports the standardized within-group residuals, and that there are 32 observations in 10 groups.

Note that the degrees of freedom for some of the covariates (SG, VP, V10) are 6, whereas for EP it is 21. This is because the first set only takes 10 different values, and there are four fixed-effects parameters estimated. But it's a bit tricky to figure out why the intercept and EP have 21. The output gives us estimates of σ_1^2 and σ^2:

    σ̂_1^2 = (1.444)^2 = 2.09,    σ̂^2 = (1.872)^2 = 3.50.

Note also that the estimates of β_1, ..., β_4 are not very different between the fixed and mixed effects models, but that the standard errors of the estimates for SG, VP, V10 are much smaller. This is because we have separated out batch-to-batch variation from within-batch variation.

On p.275 the book compares the fixed and mixed effects models using anova, but before doing this they had to refit the mixed effects model using a different method for estimating the variance components (σ^2 and σ_1^2). This is related to a theoretical point about likelihood ratio tests; the bottom line is that if you are interested in estimating the components of variance σ^2 and σ_1^2 it is better to use method REML, but if you are interested in comparing models it is better to use method ML. Their main conclusion is that the mixed effects model doesn't really fit any better, but we're pressing on anyway.

Note from the summary of pet3.lme that the variables SG and VP do not seem to have much effect on Y, so the next model they tried omits these variables.

```r
> pet4.lme <- update(pet3.lme, fixed = Y ~ V10 + EP)
> anova(pet4.lme, pet3.lme)
Warning message:
Fitted objects with different fixed effects. REML comparisons are not meaningful.
```

I just tried the above anova statement for fun, but note that I got a warning telling me it was not valid, because of the fitting method used. This is a reminder to refit both models using method = "ML" if we want to do likelihood ratio tests to compare nested models.

```r
> pet3.lme <- update(pet3.lme, method = "ML")
> pet4.lme <- update(pet3.lme, fixed = Y ~ V10 + EP)
> anova(pet4.lme, pet3.lme)
> summary(pet4.lme)
```

The summary of pet4.lme, now fit by maximum likelihood, again reports the random-intercept and residual standard deviations, the fixed-effect estimates for the intercept, V10 and EP with their standard errors, and 32 observations in 10 groups.

There are many components to an lme object; see ?lme.object for details. For example:

```r
> fixed.effects(pet4.lme)
> coef(pet4.lme)
```

fixed.effects() returns the three estimated fixed effects (intercept, V10, EP), while coef() returns a row of coefficients for each of the ten batches A through J.
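The per-batch intercepts printed by coef() are just the fixed intercept plus the predicted random effects ζ̂_i, which ranef() returns directly; this identity check is my addition:

```r
library(MASS)
library(nlme)
data(petrol)

Petrol <- petrol
Petrol[, 2:5] <- scale(petrol[, 2:5], scale = FALSE)

pet4.lme <- lme(fixed = Y ~ V10 + EP, random = ~ 1 | No,
                data = Petrol, method = "ML")

# coef() = fixed effects + predicted random effects, batch by batch.
# Only the intercept has a random part, so the V10 and EP columns
# of coef() are constant across the ten batches.
zeta.hat  <- ranef(pet4.lme)   # predicted random intercepts
per.batch <- coef(pet4.lme)
all.equal(per.batch[, "(Intercept)"],
          fixef(pet4.lme)[["(Intercept)"]] + zeta.hat[, "(Intercept)"])
```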

The intercepts returned by coef() are actual estimates of the individual random effects ζ_i, although for random variables we usually think of predicting them rather than estimating them. They are not explicitly used for summarizing the data; for that we use the estimates σ̂_1^2 and σ̂^2.

The final model lets the slope on EP have a random component as well:

    y_ij = µ + ζ_i + β_3 V10_i + (β_4 + η_i) EP_ij + ε_ij    (5)

where, as before, ε_ij ~ N(0, σ^2) and ζ_i ~ N(0, σ_1^2), and now η_i ~ N(0, σ_2^2) with cov(ζ_i, η_i) = σ_12; i.e. we don't assume the random effects for the intercept and slope are necessarily independent.

```r
> pet5.lme <- update(pet4.lme, random = ~ 1 + EP | No)
> pet5.lme
> anova(pet4.lme, pet5.lme)
```

The printed fit shows the fixed effects (Y ~ V10 + EP) and a general positive-definite random-effects covariance matrix (Log-Cholesky parametrization), with standard deviations for the intercept and EP, their correlation, and the residual standard deviation.

This shows that model (5) fits the data no better than model (4). The anova command gives different output for lme than for lm, although it uses the same general theory. In this last comparison, pet4.lme has fitted 5 parameters (µ, β_3, β_4, σ^2, σ_1^2) and pet5.lme has fitted 7 parameters: these 5 plus σ_2^2 and σ_12. So the difference in log-likelihoods has 2 degrees of freedom.

The hardest part in specifying the model for lme is getting the random effects part correctly specified. I didn't find the help file very helpful. The general format for lme is

    lme(fixed = [formula], data = [data.frame], random = [formula], ... [other stuff])

Once you see a formula for a given example it's kind of obvious, but I find it hard to construct one from scratch. The definitive reference is the book by Pinheiro and Bates (exact reference in the help file).

The next two linear model examples are a multi-level sampling example involving students (in classes, in schools), and a growth curve model, where measurements are repeated on the same experimental unit (in this case trees) at several time points. A very general formulation of the mixed effects linear model is given on p.279:

    y_ij = x_ij β + z_ij ζ_i + ε_ij    (6)

where we assume we have the responses in groups indexed by i, and then the responses within each group are indexed by j. In fact the schools example has more structure; I think it should be indexed by ijk, where i = 1, 2 indexes levels of COMB, j indexes schools and k indexes pupils, but I'm not completely sure. In (6), x_ij and z_ij are row vectors of explanatory variables. The most general assumption about the ε_ij is that they are independent between groups (cov(ε_ij, ε_i'j') = 0 for i ≠ i'), but possibly dependent within groups:

    var(ε_ij) = σ^2 g(µ_ij, z_ij, θ),    corr(ε_ij) = Γ(α).

Note this is a very general formulation of the variance; in our example above we had a much simpler structure. It is usually assumed as well that the ζ_i are independent of the ε_ij and have variance-covariance matrix (note that the ζ_i are vectors)

    var(ζ_i) = D(α_ζ).

Usually g ≡ 1, Γ = I, and only D is unrestricted (as we had above for pet5.lme).
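Formulation (6) can be made concrete with a small simulation: generate grouped data with a random intercept, fit it with lme, and check that the two variance components are roughly recovered. All the sizes, coefficients and variances here are my own illustrative choices:

```r
library(nlme)
set.seed(42)

# Simulate y_ij = beta0 + beta1 * x_ij + zeta_i + eps_ij for 40 groups
# of 5 observations, with zeta_i ~ N(0, 2^2) and eps_ij ~ N(0, 1).
ngroups <- 40; nper <- 5
g    <- factor(rep(seq_len(ngroups), each = nper))
x    <- rnorm(ngroups * nper)
zeta <- rnorm(ngroups, sd = 2)
y    <- 1 + 3 * x + zeta[as.integer(g)] + rnorm(ngroups * nper, sd = 1)

sim <- data.frame(y = y, x = x, g = g)
fit <- lme(fixed = y ~ x, random = ~ 1 | g, data = sim)

# The estimates should land near the true values: sd 2 for the
# random intercept, sd 1 for the residual, slope 3.
vc <- VarCorr(fit)
c(sd.intercept = as.numeric(vc["(Intercept)", "StdDev"]),
  sd.residual  = fit$sigma,
  slope        = unname(fixef(fit)["x"]))
```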


### Exercise 1.12 (Pg. 22-23)

Individuals: The objects that are described by a set of data. They may be people, animals, things, etc. (Also referred to as Cases or Records) Variables: The characteristics recorded about each individual.

### Getting Correct Results from PROC REG

Getting Correct Results from PROC REG Nathaniel Derby, Statis Pro Data Analytics, Seattle, WA ABSTRACT PROC REG, SAS s implementation of linear regression, is often used to fit a line without checking

### Quantitative Understanding in Biology Module II: Model Parameter Estimation Lecture I: Linear Correlation and Regression

Quantitative Understanding in Biology Module II: Model Parameter Estimation Lecture I: Linear Correlation and Regression Correlation Linear correlation and linear regression are often confused, mostly

### Using Minitab for Regression Analysis: An extended example

Using Minitab for Regression Analysis: An extended example The following example uses data from another text on fertilizer application and crop yield, and is intended to show how Minitab can be used to

### Please follow the directions once you locate the Stata software in your computer. Room 114 (Business Lab) has computers with Stata software

STATA Tutorial Professor Erdinç Please follow the directions once you locate the Stata software in your computer. Room 114 (Business Lab) has computers with Stata software 1.Wald Test Wald Test is used

### Mixed-effects regression and eye-tracking data

Mixed-effects regression and eye-tracking data Lecture 2 of advanced regression methods for linguists Martijn Wieling and Jacolien van Rij Seminar für Sprachwissenschaft University of Tübingen LOT Summer

### BIOL 933 Lab 6 Fall 2015. Data Transformation

BIOL 933 Lab 6 Fall 2015 Data Transformation Transformations in R General overview Log transformation Power transformation The pitfalls of interpreting interactions in transformed data Transformations

### Linear Mixed-Effects Modeling in SPSS: An Introduction to the MIXED Procedure

Technical report Linear Mixed-Effects Modeling in SPSS: An Introduction to the MIXED Procedure Table of contents Introduction................................................................ 1 Data preparation

### Statistics 104: Section 6!

Page 1 Statistics 104: Section 6! TF: Deirdre (say: Dear-dra) Bloome Email: dbloome@fas.harvard.edu Section Times Thursday 2pm-3pm in SC 109, Thursday 5pm-6pm in SC 705 Office Hours: Thursday 6pm-7pm SC

### Math 141. Lecture 24: Model Comparisons and The F-test. Albyn Jones 1. 1 Library jones/courses/141

Math 141 Lecture 24: Model Comparisons and The F-test Albyn Jones 1 1 Library 304 jones@reed.edu www.people.reed.edu/ jones/courses/141 Nested Models Two linear models are Nested if one (the restricted

### The Latent Variable Growth Model In Practice. Individual Development Over Time

The Latent Variable Growth Model In Practice 37 Individual Development Over Time y i = 1 i = 2 i = 3 t = 1 t = 2 t = 3 t = 4 ε 1 ε 2 ε 3 ε 4 y 1 y 2 y 3 y 4 x η 0 η 1 (1) y ti = η 0i + η 1i x t + ε ti

### Use of deviance statistics for comparing models

A likelihood-ratio test can be used under full ML. The use of such a test is a quite general principle for statistical testing. In hierarchical linear models, the deviance test is mostly used for multiparameter

### Part II. Multiple Linear Regression

Part II Multiple Linear Regression 86 Chapter 7 Multiple Regression A multiple linear regression model is a linear model that describes how a y-variable relates to two or more xvariables (or transformations

### Part 2: Analysis of Relationship Between Two Variables

Part 2: Analysis of Relationship Between Two Variables Linear Regression Linear correlation Significance Tests Multiple regression Linear Regression Y = a X + b Dependent Variable Independent Variable

### Technical report. in SPSS AN INTRODUCTION TO THE MIXED PROCEDURE

Linear mixedeffects modeling in SPSS AN INTRODUCTION TO THE MIXED PROCEDURE Table of contents Introduction................................................................3 Data preparation for MIXED...................................................3

### Yiming Peng, Department of Statistics. February 12, 2013

Regression Analysis Using JMP Yiming Peng, Department of Statistics February 12, 2013 2 Presentation and Data http://www.lisa.stat.vt.edu Short Courses Regression Analysis Using JMP Download Data to Desktop

### Chicago Booth BUSINESS STATISTICS 41000 Final Exam Fall 2011

Chicago Booth BUSINESS STATISTICS 41000 Final Exam Fall 2011 Name: Section: I pledge my honor that I have not violated the Honor Code Signature: This exam has 34 pages. You have 3 hours to complete this

### MISSING DATA TECHNIQUES WITH SAS. IDRE Statistical Consulting Group

MISSING DATA TECHNIQUES WITH SAS IDRE Statistical Consulting Group ROAD MAP FOR TODAY To discuss: 1. Commonly used techniques for handling missing data, focusing on multiple imputation 2. Issues that could

### Applied Statistics. J. Blanchet and J. Wadsworth. Institute of Mathematics, Analysis, and Applications EPF Lausanne

Applied Statistics J. Blanchet and J. Wadsworth Institute of Mathematics, Analysis, and Applications EPF Lausanne An MSc Course for Applied Mathematicians, Fall 2012 Outline 1 Model Comparison 2 Model

### Statistiek II. John Nerbonne. March 24, 2010. Information Science, Groningen Slides improved a lot by Harmut Fitz, Groningen!

Information Science, Groningen j.nerbonne@rug.nl Slides improved a lot by Harmut Fitz, Groningen! March 24, 2010 Correlation and regression We often wish to compare two different variables Examples: compare

### EPS 625 ANALYSIS OF COVARIANCE (ANCOVA) EXAMPLE USING THE GENERAL LINEAR MODEL PROGRAM

EPS 6 ANALYSIS OF COVARIANCE (ANCOVA) EXAMPLE USING THE GENERAL LINEAR MODEL PROGRAM ANCOVA One Continuous Dependent Variable (DVD Rating) Interest Rating in DVD One Categorical/Discrete Independent Variable