How To Model A Relationship Between Two Variables In A Scatterplot


TI 83/84 Plus commands
To enter data: Press [STAT]. Under EDIT, select 1: Edit and press [ENTER]. Columns named L1, L2, etc. will appear. Type each data value under the appropriate column, pressing [ENTER] after each entry.
To clear data: Pressing [CLEAR] (with the list name highlighted) clears that particular list. To clear all data from all columns, press [2nd] and [+] and then choose 4: ClrAllLists.


How to plot time series data with Minitab: Graph > Time Series Plot > Simple.

Looking at Data - Relationships: Scatterplots (IPS Chapter 2.1). © 2009 W.H. Freeman and Company.

Example: If we consider purebred dogs, breeds that are large tend to have shorter life spans than breeds that are small. For example, a study by Patronek, Waters and Glickman (1977) found that miniature poodles lived an average of 9.3 years, while Great Danes had an average life span of only 4.6 years. 1. What sort of relationship can one expect between weight and longevity? 2. Is it possible to quantify this relationship? 3. Given the weight of a dog, can we predict its longevity? 4. Is it the weight of the dog that affects longevity, or do large breeds simply have shorter lives?

Things to consider: What is the direction of the relationship? What is the form of the relationship? How strong is the relationship? What types of variables are involved? Notice that here weight and longevity are quantitative variables, while breed is a categorical variable. Does a strong relationship really imply that one variable is the cause and the other the effect?

Goal: Explore relationships (or associations) between two quantitative variables a) by drawing a picture (known as a scatterplot), and b) by using a numerical summary (known as the correlation coefficient, or simply the correlation). We shall also discuss how to get an idea of the relationship between two categorical variables through contingency tables.

Example: Height and Weight. How is the weight of an individual related to his/her height? Typically, one expects a taller person to be heavier. Is this supported by the data? If so, how do we determine this association?

What is a scatterplot? A scatterplot is a diagram used to display the values of two quantitative variables from a data set. The data are displayed as a collection of points, with the value of one variable determining the position on the horizontal axis and the value of the other determining the position on the vertical axis.

Example 1: Scatterplot of height and weight

Example 2: Scatterplot of hours watching TV and test scores

Looking at Scatterplots. We look at the following features of a scatterplot: direction (positive or negative), form (linear or curved), strength (of the relationship), and unusual features. (Compare: when we describe histograms, we mention shape, center, spread, and outliers.)

Asking Questions about a Scatterplot. Are test scores higher or lower when TV watching time is longer? That is the direction (positive or negative association). Does the cloud of points seem to show a linear pattern, a curved pattern, or no pattern at all? That is the form. If there is a pattern, how strong does the relationship look? That is the strength. Are there any unusual features (two or more groups, or outliers)?

Form and direction of an association. [Figures: examples of linear, nonlinear, and no-relationship patterns.]

This association is: A. positive B. negative.

This association is: A. positive B. negative.

Positive association: High values of one variable tend to occur together with high values of the other variable. Negative association: High values of one variable tend to occur together with low values of the other variable.

No relationship: X and Y vary independently. Knowing X tells you nothing about Y.

Strength of the association The strength of the relationship between the two variables can be seen by how much variation, or scatter, there is around the main form. With a strong relationship, you can get a pretty good estimate of y if you know x. With a weak relationship, for any x you might get a wide range of y values.

This is a weak relationship: for a particular state median household income, you can't predict the state per capita income very well. This is a very strong relationship: the daily amount of gas consumed can be predicted quite accurately for a given temperature value.

Which one has the stronger linear association? A. left one. B. right one. The right one, because its points are closer to a straight line.

Which one has the stronger linear association? A. left one. B. right one. Hard to say; we need a numerical measure of linear association.

Outliers An outlier is a data value that has a very low probability of occurrence (i.e., it is unusual or unexpected). In a scatterplot, outliers are points that fall outside of the overall pattern of the relationship.

Outliers (cont.). Not an outlier: the upper right-hand point here is not an outlier of the relationship; it is what you would expect for this many beers, given the linear relationship between beers/weight and blood alcohol. Outlier: this other point is not in line with the others, so it is an outlier of the relationship.

IQ score and grade point average. a) Describe in words what this plot shows. b) Describe the direction, shape, and strength. Are there outliers? c) What is the deal with these people?

Unusual Feature: Two Subgroups This scatterplot clearly has two subgroups.

Transformation. Sometimes the actual recorded data may not reveal the relationship very well; in many cases, transformed data help. The graph on the left shows how the weight of an animal's brain is related to its body weight. It does not give a very clear picture of the relationship, and outliers seem to be present. The graph on the right plots the logarithm of brain weight against the logarithm of body weight, which shows a clear relationship with no outliers. [Figures: scatterplot of brain weight (g) vs. body weight (kg); scatterplot of log brain weight vs. log body weight.]

Explanatory and Response Variables. The main variable of interest (the one we would like to predict) is called the response variable. The other variable is called the explanatory variable or the predictor variable. Typically we plot the explanatory variable along the horizontal axis (x-axis) and the response variable along the vertical axis (y-axis).

Example: Scatterplot of height and weight In this case, we are trying to predict the weight based on the height of a person. Therefore weight is the response variable, and height is the explanatory variable.

Looking at Data - Relationships: Correlation (IPS Chapter 2.2).

How to measure linear association?

Correlation is unit-free. Because correlation is calculated using standardized scores, it is free of units (it does not have any unit) and does not change if the data are rescaled. In particular, the correlation does not depend on the units of the two quantitative variables. For example, if you are computing the correlation between the heights and weights of a group of individuals, it does not matter whether the heights are measured in inches or centimeters, or whether the weights are measured in pounds or kilograms.
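For reference, the formula behind this statement (standard, but not reproduced in the transcription) expresses r as an average product of standardized scores:

$r = \frac{1}{n-1}\sum_{i=1}^{n}\left(\frac{x_i-\bar{x}}{s_x}\right)\left(\frac{y_i-\bar{y}}{s_y}\right)$

Since each factor is a standardized score, converting inches to centimeters (or pounds to kilograms) rescales the deviations and the standard deviation by the same amount, so r is unchanged.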

Properties of Correlation Correlation is unit-free. Correlation does not change if the data are rescaled. It is a number between -1 and 1. The sign of the correlation indicates the direction of the linear association (if the association is positive then so is the correlation and if the association is negative then so is the correlation). The closer the correlation is to -1 or 1, the stronger is the linear association. Correlations near 0 indicate weak linear association.

Words of Warning about Correlation. Correlation measures the linear association between two quantitative variables, and it measures only the strength of that linear association. If the correlation between two variables is 0, it only means that they are not linearly associated; they may still be nonlinearly associated. To measure the strength of the linear association, only the magnitude of the correlation matters: a correlation of -0.8 indicates a stronger linear association than a correlation of 0.7. The negative and positive signs of the correlation only indicate the direction of the association. The presence of outlier(s) may severely influence the correlation. A high correlation value does not by itself imply causation.

"r" ranges from -1 to +1 "r" quantifies the strength and direction of a linear relationship between 2 quantitative variables. Strength: how closely the points follow a straight line. Direction: is positive when individuals with higher X values tend to have higher values of Y.

When the scatter of the points about the linear trend decreases, the correlation coefficient gets stronger (closer to +1 or -1).

Correlation only describes linear relationships. No matter how strong the association, r does not describe curved relationships. Note: you can sometimes transform a nonlinear association into a linear one, for instance by taking logarithms; you can then calculate a correlation using the transformed data.
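A minimal sketch of this idea in Python (assuming numpy is available; the data below are synthetic, generated with an exponential trend purely for illustration, not taken from the lecture):

import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a curved (exponential) trend between x and y.
x = rng.uniform(0, 10, size=60)
y = np.exp(0.8 * x + rng.normal(0, 0.5, size=60))

r_raw = np.corrcoef(x, y)[0, 1]          # correlation on the original scale
r_log = np.corrcoef(x, np.log(y))[0, 1]  # correlation after log-transforming y

print(f"r with raw y: {r_raw:.2f}")   # noticeably below 1: the pattern is curved
print(f"r with log y: {r_log:.2f}")   # close to 1: the transformed pattern is linear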

Influential points Correlations are calculated using means and standard deviations, and thus are NOT resistant to outliers. Just moving one point away from the general trend here decreases the correlation from -0.91 to -0.75

Try it out for yourself --- companion book website http://www.whfreeman.com/ips6e Adding two outliers decreases r from 0.95 to 0.61.

Thought quiz on correlation. 1. Why is there no distinction between explanatory and response variables in correlation? 2. Why do both variables have to be quantitative? 3. How does changing the units of measurement affect correlation? 4. What is the effect of outliers on correlations? 5. Why doesn't a tight fit to a horizontal line imply a strong correlation?

Check before calculation of correlation Are the variables quantitative? Is the form of the scatter plot straight enough (so that a linear relationship makes sense)? Have we removed the outliers? Or else, the value of the correlation can get distorted dramatically.

Looking at Data - Relationships: Least-Squares Regression (IPS Chapter 2.3).

Explanatory and Response Variables. The scatter plot above indicates a linear relationship between height and weight. Suppose an individual is 68 inches tall; how can we predict his weight? The main variable of interest (the one we would like to predict) is called the response variable (denoted by y). The other variable is called the explanatory variable or the predictor variable (denoted by x). Here height is the predictor (explanatory variable) and weight is the response variable.

Correlation tells us about strength (scatter) and direction of the linear relationship between two quantitative variables. In addition, we would like to have a numerical description of how both variables vary together. For instance, is one variable increasing faster than the other one? And we would like to make predictions based on that numerical description. But which line best describes our data?

What is Linear Regression? When the scatter plot looks roughly linear, we may model the relationship between the variables with a best-fitting line (known as the regression line): $y = b_0 + b_1 x$. $b_1$ (the coefficient of x) is called the slope of the regression line; it shows how much the mean of y changes with a one-unit increase in x. $b_0$ is called the intercept of the regression line. We estimate the slope ($b_1$) and the intercept ($b_0$); then, given a value of x, we plug that value into the regression-line equation to predict y. This procedure is called linear regression.

Conditions for Linear Regression Quantitative Variables Condition: both variables have to be quantitative. Straight Enough Condition: the scatter plot must appear to have moderate linear association. Outlier Condition: there should not be any outliers.

Example of Linear Regression. Suppose x = amount of protein (in g) in a burger (explanatory variable) and y = amount of fat (in g) in the burger (response variable). Goal: express the relationship between x and y using a line (the regression line): $y = \beta_0 + \beta_1 x + \varepsilon$ (error). Questions: 1. How do we find $b_1$ (slope) and $b_0$ (intercept)? 2. How will this help in prediction?

Best Fit Means Least Squares. How do we find the actual values of the slope and intercept? We need to build the model that fits the data best. The line should go through the point (mean of x, mean of y). So we may try to build a model by minimizing the distance between the line and the observed data values. The vertical distances between the line and the observed values are the residuals. Instead of minimizing each residual separately, we can try to minimize their total.

Best Fit Means Least Squares (cont.). Some residuals are positive, others are negative, and, on average, they cancel each other out. So we can't assess how well the line fits by adding up all the residuals. Similar to what we did with standard deviations, we square the residuals and add the squares. The smaller the sum, the better the fit. The line of best fit is the line for which the sum of the squared residuals is smallest: the least-squares line.
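Written out (a standard formulation, added here for reference), the least-squares line chooses $b_0$ and $b_1$ to minimize the sum of squared residuals:

$\min_{b_0,\,b_1}\ \sum_{i=1}^{n} e_i^2 \;=\; \sum_{i=1}^{n} \bigl(y_i - (b_0 + b_1 x_i)\bigr)^2$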

Formulae for $b_0$ and $b_1$. Although there are many lines that could describe the relationship, there is a way to find the line that fits best. For the best-fitting line: Slope: $b_1$ = (correlation) × (std. dev. of y) / (std. dev. of x), i.e. $b_1 = r\,\frac{s_y}{s_x}$. Intercept: $b_0$ = (mean of y) − $b_1$ × (mean of x), i.e. $b_0 = \bar{y} - b_1\,\bar{x}$.

Computation of $b_0$ and $b_1$. If we are given the summary statistics (the means and standard deviations of x and y and their correlation), we plug those values into the formulae to find $b_0$ and $b_1$. If we are given the actual data (not the summary), we need to compute those summary values ourselves. However, given the data, the TI 83/84 Plus can find the equation of the regression line directly. But be careful: the TI 83/84 writes the regression equation as y = ax + b, so a = slope (= $b_1$) and b = intercept (= $b_0$).

Example 1

Example 2. Fat (in g), sodium (in mg), and calorie content of 7 burgers:
Fat (g)   Sodium (mg)   Calories
19        920           410
31        1500          580
34        1310          590
35        860           570
39        1180          640
39        940           680
43        1260          660

Using TI 83/84 Plus for regression. First we should prepare the TI 83/84 Plus calculator for regression by switching the diagnostic on: 1. Press [2nd] and [0] (that opens the CATALOG). 2. Using the arrow keys, select DiagnosticOn. 3. Press [ENTER], and [ENTER] again. 4. This switches the diagnostic on. Then press [STAT] and choose 1: Edit. Type the Fat data under L1, Sodium under L2, and Calories under L3.

Using TI 83/84 Plus for regression. Suppose Fat (L1) is the predictor and Sodium (L2) is the response. Press [STAT] again and select CALC using the right arrow. Select 4: LinReg(ax+b) (LinReg(ax+b) appears on screen). Type [2nd] [1] (L1 appears on screen), followed by , (comma), and then [2nd] [2] (L2 appears on screen). Press [ENTER]. This will produce a (slope), b (intercept), r² and r (correlation coefficient). Caution: after LinReg(ax+b) you must enter the predictor (explanatory) variable first, and then the response variable.
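For readers working without the calculator, here is a rough Python equivalent of LinReg(L1, L2) on the Example 2 data (assuming numpy is available; the script simply reports whatever slope, intercept, and r it computes, using the summary-statistics formulas from the earlier slide):

import numpy as np

# Fat (g) and Sodium (mg) for the 7 burgers in Example 2
fat    = np.array([19, 31, 34, 35, 39, 39, 43])
sodium = np.array([920, 1500, 1310, 860, 1180, 940, 1260])

# Slope and intercept from the formulas b1 = r * (sy / sx), b0 = ybar - b1 * xbar
r  = np.corrcoef(fat, sodium)[0, 1]
b1 = r * sodium.std(ddof=1) / fat.std(ddof=1)
b0 = sodium.mean() - b1 * fat.mean()

print(f"slope a = b1 = {b1:.2f}")
print(f"intercept b = b0 = {b0:.2f}")
print(f"r = {r:.3f}, r^2 = {r**2:.3f}")

# Cross-check with numpy's least-squares fit (same answer, up to rounding)
slope, intercept = np.polyfit(fat, sodium, 1)
print(f"np.polyfit: slope = {slope:.2f}, intercept = {intercept:.2f}")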

Scatterplot with TI 83/84 Press [2nd] [Y=] to access the STAT PLOT editor. Press [ENTER] to edit Plot1. Press [ENTER] to turn ON Plot1. Scroll down and highlight the scatter plot graph type (first option in the first row). Press [ENTER]. Scroll down and make sure Xlist: is set to L1 (press [2nd] [1]) and Ylist: is set to L2 (press [2nd] [2]). Press [GRAPH] to display the scatter plot. To get a better view of the graph, press [ZOOM] [9] to perform a ZoomStat.

Fat vs. Sodium. [Figure: scatterplot of Sodium (mg) against Fat (g).]

Fat vs. Calories. [Figure: scatterplot of Calories against Fat (g).]

Example 3.
Country        Percent with Cell Phone   Life Expectancy (years)
Turkey         85.7%                     71.96
France         92.5%                     80.98
Uzbekistan     46.1%                     71.96
China          47.4%                     73.47
Malawi         11.9%                     50.03
Brazil         75.8%                     71.99
Israel         123.1%                    80.73
Switzerland    115.5%                    80.85
Bolivia        49.4%                     66.89
Georgia        59.7%                     76.72
Cyprus         93.8%                     77.49
Spain          122.6%                    80.05
Indonesia      58.5%                     70.76
Botswana       74.6%                     61.85
U.S.           87.9%                     78.11

Example 3: Scatter plot with regression line. [Figure: % Cell Phone vs. Life Expectancy, with possible outliers marked.] Fitted line: y = 0.21x + 56.91, R = 0.7848, R² = 0.6159.

Example 3: Scatter plot with regression line (without outliers). [Figure: % Cell Phones vs. Life Expectancy.] Fitted line: y = 0.13x + 64.7, R = 0.802, R² = 0.6437.

Predicted values and residuals
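The details of this slide are not in the transcription; for reference, the standard definitions it refers to are the fitted (predicted) value $\hat{y}_i = b_0 + b_1 x_i$ and the residual $e_i = y_i - \hat{y}_i$, the vertical distance between an observed point and the regression line.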

Example 1 revisited

Evaluating regression

R² (the coefficient of determination)

R² (the coefficient of determination). For instance, if R² = 0.54, then 54% of the total sample variation in y is explained by the regression model. It indicates a moderate fit of the regression line: on the scatter plot the points will not be very close to the regression line. If R² = 0.96, then 96% of the total sample variation in y is explained by the regression model. It indicates a very good fit of the regression line: on the scatter plot the points will be very close to the regression line. On the other hand, if R² = 0.19, then only 19% of the total sample variation in y is explained by the regression model, which indicates a very poor fit of the regression line. The scatter plot will show either a curved pattern, or the points will be clustered showing no pattern.
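For reference, the defining formula (not reproduced in the transcription) is

$R^2 = 1 - \dfrac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$

the fraction of the total variation in y explained by the regression; in simple linear regression this equals $r^2$, the square of the correlation coefficient.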

s_e (standard deviation of the residuals). s_e is the standard deviation of the residuals; when there is no ambiguity, we often write s instead of s_e. The smaller s_e is, the better the model fits; the larger s_e is, the worse the fit. Remember that residuals are the errors made when predicting with the regression line, so a larger value of s_e means more spread in the residuals and therefore more error in the predictions: the observations are not close to the regression line. Conversely, a smaller value of s_e indicates that the observations are closer to the regression line, implying a better fit.
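A common definition, assuming simple linear regression with n observations (this formula is not shown on the slide):

$s_e = \sqrt{\dfrac{\sum_i (y_i - \hat{y}_i)^2}{n-2}}$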

Residuals Revisited (cont.). Residuals help us see whether the model makes sense. When a regression model is appropriate, nothing interesting should be left behind. After we fit a regression model, we usually plot the residuals in the hope of finding nothing. A scatter plot of the residuals versus the x-values should be the most boring scatter plot you've ever seen: it shouldn't have any interesting features like a direction or a shape. It should stretch horizontally, with about the same amount of scatter throughout, and it should have no bends and no outliers.

Choose the best description of the scatter plot. A. Moderate, negative, linear association. B. Strong, curved association. C. Moderate, positive, linear association. D. Strong, negative, non-linear association. E. Weak, positive, linear association.

Match the following values of the correlation coefficient to the data shown in these scatter plots (Fig. 1, Fig. 2, Fig. 3). A. r = -0.67. B. r = -0.10. C. r = 0.71. D. r = 0.96. E. r = 1.00.

Software output (Stat > Regression > Regression). [Figure: regression output with the intercept, slope, R², and r labeled.]

Looking at Data - Relationships: Data analysis for two-way tables (IPS Chapter 2.5). © 2009 W.H. Freeman and Company.

Objectives (IPS Chapter 2.5): Data analysis for two-way tables. Two-way tables; joint distributions; marginal distributions; relationships between categorical variables; conditional distributions; Simpson's paradox.

Two-way tables. An experiment has a two-way, or block, design if two categorical factors are studied with several levels of each factor. Two-way tables organize data about two categorical variables obtained from a two-way, or block, design. (There are now two ways to group the data: group by age and record education.) First factor: age. Second factor: education.

Two-way tables We call education the row variable and age group the column variable. Each combination of values for these two variables is called a cell. For each cell, we can compute a proportion by dividing the cell entry by the total sample size. The collection of these proportions would be the joint distribution of the two variables.

Marginal distributions. We can look at each categorical variable separately in a two-way table by studying the row totals and the column totals. They represent the marginal distributions, expressed in counts or percentages. (They are called marginal because they are written as if in a margin.) [Table: 2000 U.S. census data.]

The marginal distributions can then be displayed on separate bar graphs, typically expressed as percents instead of raw counts. Each graph represents only one of the two variables, completely ignoring the second one.

Parental smoking. Does parental smoking influence the smoking habits of their high-school children? Summary two-way table: high school students were asked whether they smoke and whether their parents smoke. Marginal distribution for the categorical variable parental smoking: Both parents smoke: (1780/5375) × 100 ≈ 33%. One parent smokes: (2239/5375) × 100 ≈ 42%. Neither parent smokes: (1356/5375) × 100 ≈ 25%.

Relationships between categorical variables The marginal distributions summarize each categorical variable independently. But the two-way table actually describes the relationship between both categorical variables. The cells of a two-way table represent the intersection of a given level of one categorical factor and a given level of the other categorical factor.

Conditional Distribution. In the table below, the 25 to 34 age group occupies the first column. To find the complete distribution of education in this age group, look only at that column and compute each count as a percent of the column total. These percents should add up to 100% because all persons in this age group fall into one of the education categories. These four percents together are the conditional distribution of education, given the 25 to 34 age group. [Table: 2000 U.S. census data.]

Conditional distributions. The percents within the table represent the conditional distributions. Comparing the conditional distributions allows you to describe the relationship between the two categorical variables. Here the percents are calculated by age range (columns): for example, 29.30% = 11071/37785 = cell total / column total.

The conditional distributions can be graphically compared using side by side bar graphs of one variable for each value of the other variable. Here, the percents are calculated by age range (columns).

Music and wine purchase decision. What is the relationship between the type of music played in supermarkets and the type of wine purchased? We want to compare the conditional distributions of the response variable (wine purchased) for each value of the explanatory variable (music played); therefore, we calculate column percents. Calculations: when no music was played, 84 bottles of wine were sold, of which 30 were French wine. 30/84 = 0.357, so 35.7% of the wine sold was French when no music was played (cell total / column total). We calculate the column conditional percents similarly for each of the nine cells in the table.

For every two-way table, there are two sets of possible conditional distributions. Does background music in supermarkets influence customer purchasing decisions? Wine purchased for each kind of music played (column percents) Music played for each kind of wine purchased (row percents)

Simpson's paradox. An association or comparison that holds for each of several groups can reverse direction when the data are combined (aggregated) into a single group. This reversal is called Simpson's paradox. Example: hospital death rates.

All patients   Hospital A   Hospital B
Died           63           16
Survived       2037         784
Total          2100         800
% survived     97.0%        98.0%

On the surface, Hospital B would seem to have the better record.

Patients in good condition   Hospital A   Hospital B
Died                         6            8
Survived                     594          592
Total                        600          600
% survived                   99.0%        98.7%

Patients in poor condition   Hospital A   Hospital B
Died                         57           8
Survived                     1443         192
Total                        1500         200
% survived                   96.2%        96.0%

But once patient condition is taken into account, we see that Hospital A in fact has the better record for both patient conditions (good and poor). Here, patient condition was the lurking variable.
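As a minimal sketch (plain Python, using only the counts from the table above), the same reversal can be checked numerically:

# Hospital death-rate example: survival rates by hospital,
# within each patient condition and then aggregated over condition.
counts = {
    # condition: {hospital: (died, survived)}
    "good": {"A": (6, 594),   "B": (8, 592)},
    "poor": {"A": (57, 1443), "B": (8, 192)},
}

def pct_survived(died, survived):
    return 100 * survived / (died + survived)

# Within each condition, Hospital A has the higher survival rate ...
for condition, by_hospital in counts.items():
    for hospital, (died, survived) in by_hospital.items():
        print(f"{condition} condition, Hospital {hospital}: "
              f"{pct_survived(died, survived):.1f}% survived")

# ... but after aggregating over condition, Hospital B looks better.
for hospital in ("A", "B"):
    died = sum(counts[c][hospital][0] for c in counts)
    survived = sum(counts[c][hospital][1] for c in counts)
    print(f"overall, Hospital {hospital}: {pct_survived(died, survived):.1f}% survived")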

Lurking Variable It is a variable that is not among the explanatory or response variables in a study and yet may influence the interpretation of relationships among those variables.

(A second example of the same reversal, rates only: aggregated over both subgroups, 74/792 = 9.3% versus 532/6066 = 8.8%; but within the first subgroup 62/559 = 11.1% versus 117/811 = 14.4%, and within the second subgroup 12/233 = 5.2% versus 415/5255 = 7.9%, so the comparison flips once the groups are combined.)