DATA INTERPRETATION AND STATISTICS




PholC60, September 2001

Books

An easy and systematic introductory text is Essentials of Medical Statistics by Betty Kirkwood, published by Blackwell at about £14.

DESCRIPTIVE STATISTICS

Data

Data are obtained by making observations of the world about us, for example from experiments or from studying patients. Data contain information about the system or individuals under study, but in order to make judgements it is usually necessary to process the data to extract the relevant information.

Types of data

Non-parametric data:
- nominal or categorical data: names, colours etc., with no inherent ordering
- ordinal data: rankings (1st, 2nd, 3rd etc.)

Parametric data:
- numerical/quantitative measurements, which may be on a continuous interval scale (e.g. height, weight) or may be discrete values on a discontinuous scale (e.g. number of offspring)

Data, especially biological data, tend to be scattered. This variability may be an inherent property of the quantity measured, or it may be due to the limited accuracy of measurement. It is more difficult to draw conclusions from data that are very scattered.

Samples and populations

To assess the properties of populations it is frequently necessary to make measurements on subsets called samples. This is because it is often impossible or unreasonable to measure the entire population: it may be too large (e.g. the height of all Africans) or effectively infinite (e.g. a subject's height measured repeatedly; you will not get exactly the same result each time, so you settle for a finite number of measurements). Samples should be representative and not biased, and are usually selected at random. From the properties of sample data we infer the properties of the population.

Describing data numerically

Data may be described by calculating quantities that measure:

1. Central tendency: mean, median or mode.

   Mean: X̄ = ΣX / N
   Median: the (N + 1)/2 th value of the ranked data
   Mode: the most frequent value

2. Spread (scatter, dispersion): range, variance, standard deviation, coefficient of variation.

The range, Xmax − Xmin, is a poor measure of dispersion because it depends entirely on the extreme values and provides no information about the intermediate ones. The mean of the deviations from the mean, Σ(X − X̄)/N, is not a useful measure either, because the positive and negative deviations cancel and the sum is always zero. If we square the deviations before dividing by N we get the variance:

   variance = Σ(X − X̄)² / N

If X is measured in cm then the variance is in cm², so we usually quote the standard deviation (SD), which is the square root of the variance:

   SD = √[ Σ(X − X̄)² / N ]

This formula gives the SD of a population of N observations X. A sample mean is an estimator of the population mean, but a sample SD calculated from the formula above would give a biased estimate of the population SD. Briefly, this is because a single sample is unlikely to contain the extreme values, so the SD calculated over n tends to underestimate the population SD. To remove this bias we divide by (n − 1), rather than n, when calculating a sample SD. Thus, for a sample:

   SD = √[ Σ(X − X̄)² / (n − 1) ]

The quantity n − 1 is called the number of degrees of freedom (d.f.). Each time a statistic is calculated from sample data, the number of degrees of freedom is reduced by 1: we use n in calculating the mean X̄, but since the mean is then used in calculating the SD, the d.f. becomes n − 1 and we use this instead of n. This is the way you will nearly always calculate an SD. CHECK YOUR CALCULATOR! Use this simple example: treated as a sample, the SD of 1, 2, 3 is 1; treated as a population, the SD of 1, 2, 3 is 0.816.
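As a concrete check, here is a minimal Python sketch using only the standard library (the first data set is invented for illustration; the second is the calculator check above):

    import statistics

    data = [62, 70, 74, 58, 66, 70, 71, 63]      # hypothetical body masses (kg)

    print(statistics.mean(data))                 # central tendency: sum(X) / N
    print(statistics.median(data))               # middle value of the ranked data
    print(statistics.mode(data))                 # most frequent value (here 70)
    sd = statistics.pstdev(data)                 # population SD: divides by N
    print(sd, 100 * sd / statistics.mean(data))  # SD and coefficient of variation (%)

    # Calculator check: the sample SD divides by n - 1, the population SD by n
    print(statistics.stdev([1, 2, 3]))           # 1.0
    print(statistics.pstdev([1, 2, 3]))          # 0.816...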

Presenting data graphically

Graphical methods allow visual assessment of the data. For nominal data this can take the form of a bar chart or a pie diagram.

[Figure: bar chart and pie diagram showing the numbers of people with blue and brown eyes.]

Histograms

Parametric (numerical) data may be plotted as a histogram. The quantity measured is divided into intervals or classes of appropriate size, and the number of observations falling within each class is plotted; in doing so we are classifying the data. The total area under the histogram is proportional to the total number of observations. Rather than plotting the number of observations in each class on the vertical axis, it is common to plot the frequency, i.e. the number of observations in each class divided by the total number of observations; the frequencies then sum to 1. Instead of plotting each class as a block, a frequency polygon outlining the profile can be drawn. Such a graph is called the frequency distribution.

[Figure: histograms of body mass (kg) for 29 data points, drawn with class intervals of 10 kg and of 5 kg; the corresponding frequency histogram; a scatter plot; and the frequency polygon.]

To summarise:
1. Measurements are performed upon a sample taken from a population.
2. We may construct a histogram or frequency distribution of the sample data.
3. We may calculate from the sample data quantities called statistics that are estimators of population properties. These include the measures of central tendency (mean, median and mode); the scatter or spread in the data is best described by statistics such as the SD or the coefficient of variation (SD/mean, expressed as a percentage).
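The classification step can be sketched in plain Python (the body-mass values are invented; the 5 kg class interval follows the figure above):

    from collections import Counter

    masses = [47, 52, 54, 58, 61, 62, 63, 65, 66, 68, 69, 71, 74, 78]  # kg, invented

    width = 5  # class interval (kg)
    counts = Counter(width * (m // width) for m in masses)  # lower edge of each class

    n = len(masses)
    for lower in sorted(counts):
        freq = counts[lower] / n        # frequency = count / total observations
        print(f"{lower}-{lower + width} kg: {counts[lower]} ({freq:.2f})")
    # the printed frequencies sum to 1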

Standard error of the mean

If many samples are selected from a population, each has its own mean value X̄. The distribution of these means is called the sampling distribution, and it is centred around the population mean µ. The width of the sampling distribution depends on the number of items in each sample: larger samples give narrower sampling distributions. This means that if you take a sample of 20 items, its mean is likely to be closer to µ than the mean of a sample of, say, 5 items. The SD of the sampling distribution is called the standard error of the mean (SE or SEM). The smaller it is, the closer a sample mean is likely to be to µ.

[Figure: distributions of sample means X̄ for samples of n = 5, 10 and 20, becoming narrower as n increases.]

Estimating population statistics

1. The sample mean X̄ provides an estimate of the population mean µ.
2. The sample SD s provides an estimate of the population SD σ.

Just how close these estimates are to the actual values depends on the number of measurements or items in the sample. The standard error of the mean is a measure of the closeness of the sample mean to the population mean. It is given by

   SE = s / √n    (remember: it is always n here, never n − 1)

Illustrating the spread of data graphically

It is usual to show the SD, or more commonly the SE, on data plots as error bars.

Box and whisker plot

This can be used instead of a dot or scatter plot to indicate the central tendency and the spread of the data. It may be drawn horizontally or vertically. The ends of the whiskers indicate the limits of the data (the range), while the box encloses the values within 2 SDs either side of the mean, and the central line marks the mean value. Alternatively, another common convention is that the central line is the median and the box encloses the upper and lower quartiles.
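The claim that larger samples give narrower sampling distributions, with SD equal to σ/√n, can be checked by simulation (a sketch assuming NumPy is available; the population parameters are invented):

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n, n_samples = 70.0, 10.0, 20, 10_000  # hypothetical population

    # Draw many samples of size n and record each sample mean
    means = rng.normal(mu, sigma, size=(n_samples, n)).mean(axis=1)

    print(means.std())         # SD of the sampling distribution ...
    print(sigma / np.sqrt(n))  # ... agrees with the predicted SE = sigma / sqrt(n)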

The Normal Distribution

The Normal distribution is one of the most commonly occurring frequency distributions. It is bell-shaped and symmetrical about the central value, and its shape is completely defined by the mathematical equation that describes it:

   y = [1 / (σ√(2π))] e^( −(x − µ)² / 2σ² )

It is not necessary for you to manipulate this rather forbidding equation, but if you are mathematical you may notice that when x = µ, y is at its maximum, and that when x − µ = ±σ, y is 1/√e, or 0.61, of its maximum value. Thus the Normal curve for a large population is a frequency polygon centred at the population mean µ, with a half-width of σ (the SD) at 61% of the maximum height. Any Normal curve is completely defined in shape by the parameters µ and σ, which determine its centre and width; its area is equal to the number of items/observations.

A simplified form is provided by the Standard Normal Distribution (SND), in which µ is set to zero and the units of measure on the horizontal axis are SDs, i.e. x is replaced by z = (x − µ)/σ. The area under the whole curve is then 1, and the area may be divided into parts by drawing vertical lines.

We can use this property of the Normal curve to provide an additional way of describing the spread of data in a sample: the 95% confidence interval. For large samples (n > 60) taken from a Normally distributed population,

   95% c.i. = X̄ ± 1.96 SE

This does not apply to small samples (n < 60): although the population may be Normally distributed, small-sample means tend to be distributed according to the so-called t distribution (a little broader than the Normal curve). An additional complication is that, unlike the Normal distribution, the shape of the t distribution depends on the number of degrees of freedom. Thus, for small samples,

   95% c.i. = X̄ ± t × SE

where the value of t is given by the t tables at d.f. = n − 1 and p = .05.
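A small-sample 95% confidence interval can be computed as in this sketch (assuming SciPy is available; the data values are invented):

    import statistics
    from scipy import stats

    data = [4.9, 5.6, 5.1, 6.2, 5.8, 5.4, 5.9, 5.2]   # hypothetical measurements
    n = len(data)

    mean = statistics.mean(data)
    se = statistics.stdev(data) / n ** 0.5            # SE = s / sqrt(n)
    t = stats.t.ppf(0.975, df=n - 1)                  # two-tailed 5% point at d.f. = n - 1

    print(mean - t * se, mean + t * se)               # the 95% confidence interval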

STATISTICAL INFERENCE

Tests of significance

Significance tests are used to determine the likelihood that two (or more) samples come from the same population. For example, does a particular form of treatment make patients better, or could the observed difference have arisen by chance? The general procedure is as follows.

1. Formulate a null hypothesis (H0 for short). This takes the pessimistic view that any difference between the sample means is due entirely to chance, i.e. that both samples are derived from the same population.
2. Calculate the significance level of H0: the probability of obtaining a difference at least as large as that observed if the null hypothesis were true.
3. If the significance level is (by convention) below 5% (p < .05), we reject H0.

Decisions about significance depend on the area under the appropriate distribution. Tests can be two-tailed, or they can be single-tailed (for differences in one direction only); more on this below.

Paired data, small samples

For two samples of paired data, i.e. data that are matched or that correspond in a one-to-one relation (e.g. measurements on the same individual "before" and "after" treatment), and where n < 60 and the data are from a Normal distribution, we use a paired t test (t is the number of SEs between the means). The test is best performed by calculating the difference between the measurements on each individual and then determining whether the mean difference is significantly different from zero; H0 states that it is not.

   t = d̄ / (s / √n),    d.f. = n − 1

where d̄ is the mean difference and s is the SD of the differences.

Example: hours of sleep in patients before and after taking a sleeping drug.

   Patient   Without drug   After drug   Difference
   1         5.2            6.1           0.9
   2         7.9            7.0          −0.9
   3         3.9            8.2           4.3
   4         4.7            7.6           2.9
   5         5.3            6.5           1.2
   6         5.4            8.4           3.0
   7         4.2            6.9           2.7
   8         6.1            6.7           0.6
   9         3.8            7.4           3.6
   10        6.3            5.8          −0.5

   mean      5.28           7.06          1.78
   SD of differences   1.768
   SE                  0.559
   t                   3.18
   d.f.                9

H0: the means 5.28 h and 7.06 h are not significantly different; equivalently, the mean difference 1.78 h is not significantly different from zero. Looking in the t table we find, at d.f. = 9: t = 2.26 for p = .05, t = 2.82 for p = .02 and t = 3.25 for p = .01. Since our t = 3.18 exceeds 2.82, we reject the null hypothesis at p < .02 and conclude that the drug is effective at changing the number of hours of sleep. Another way of putting it: the probability that the difference in the amounts of sleep arose purely by chance is less than 2%.

NOTE: This was a two-sided or two-tailed comparison. It told us that the number of sleep hours would be different, but not specifically greater. If there were no possibility that a particular treatment could reduce sleep hours, then we could use the data in a single-tailed (1-sided) test and conclude, for t = 2.82 and d.f. = 9, that the probability of H0 is < 1% (i.e. half of 2%). The t tables give values for either case and you have to make the choice; you will nearly always use two-tailed comparisons.
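The paired test above can be reproduced with a short sketch (assuming SciPy is installed; the lists are the two columns of the table):

    from scipy import stats

    before = [5.2, 7.9, 3.9, 4.7, 5.3, 5.4, 4.2, 6.1, 3.8, 6.3]
    after  = [6.1, 7.0, 8.2, 7.6, 6.5, 8.4, 6.9, 6.7, 7.4, 5.8]

    # Paired t test on the within-patient differences
    t, p = stats.ttest_rel(after, before)
    print(t, p)   # t = 3.18, p ≈ 0.011, so H0 is rejected at p < .02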

Paired data, large samples

When n > 60 the t distribution and the Normal distribution are very similar, so we calculate not t but z, the Standard Normal Deviate (see above). Remember z = (difference in means)/SE; it does not depend on d.f.

Unpaired data

An unpaired, or two-sample, t test is used to compare samples that have no correspondence, for example a set of patients and a set of healthy controls. The number in each sample does not have to be the same. If the SDs of the two samples are similar, a pooled SD, s_p, is calculated and used to compute t. (If the SDs are rather different, other methods may be used.)

   s_p² = [ (n₁ − 1)s₁² + (n₂ − 1)s₂² ] / (n₁ + n₂ − 2)

   t = (X̄₁ − X̄₂) / [ s_p √(1/n₁ + 1/n₂) ],    d.f. = n₁ + n₂ − 2

For example, the birth weights (kg) of children born to 15 non-smokers and 14 heavy smokers:

   Non-smokers:    3.99  3.79  3.60  3.73  3.21  3.60  4.08  3.61  3.83  3.31  4.13  3.26  3.54  3.51  2.71
   Heavy smokers:  3.18  2.84  2.90  3.27  3.85  3.52  3.23  2.76  3.60  3.75  3.59  3.63  2.38  2.34

          Non-smokers   Heavy smokers
   X̄      3.593         3.203
   SD     0.371         0.493
   n      15            14

   d.f. = 15 + 14 − 2 = 27,    t = 2.42

In the t table at d.f. = 27, t = 2.05 for p = .05 (and 2.47 for p = .02), so we reject the null hypothesis at p < .05 (the exact p is about .02) and conclude that the mean birth weights differ.

Note: for large samples use the Normal table (SND) instead, and compute z from

   z = (X̄₁ − X̄₂) / √( s₁²/n₁ + s₂²/n₂ )
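A sketch of the unpaired test (assuming SciPy; the lists are the birth-weight data above):

    from scipy import stats

    non_smokers = [3.99, 3.79, 3.60, 3.73, 3.21, 3.60, 4.08, 3.61,
                   3.83, 3.31, 4.13, 3.26, 3.54, 3.51, 2.71]
    smokers     = [3.18, 2.84, 2.90, 3.27, 3.85, 3.52, 3.23, 2.76,
                   3.60, 3.75, 3.59, 3.63, 2.38, 2.34]

    # Two-sample t test using a pooled SD (equal_var=True)
    t, p = stats.ttest_ind(non_smokers, smokers, equal_var=True)
    print(t, p)   # t ≈ 2.42 with d.f. = 27, p ≈ 0.02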

Non-parametric tests of significance

When data are not Normally distributed we can often still use the parametric tests described above, provided the data can be transformed in a way that makes them Normal; sometimes simply taking logarithms suffices. If this cannot be done, or if the data are ordinal rather than parametric, then we must resort to a non-parametric test. For these tests the data are converted from an interval scale into ranked data, and the subsequent calculations consider only the relative magnitudes of the data, not the actual values, so some information is lost. There are many different non-parametric tests, all with specific applications; however, there is a correspondence between the parametric and non-parametric methods. These tests are not difficult to use, and an appropriate textbook can be consulted for the methods when necessary. As with many of the less common statistical tests, it is advisable to seek the assistance of a statistician before embarking on extensive usage.

To illustrate a non-parametric method, the Wilcoxon signed rank test will be applied to the paired data used above (hours of sleep in patients after taking a sleeping drug):

   Patient   Before   After drug   Difference   Rank
   1         5.2      6.1           0.9          3.5 (tied)
   2         7.9      7.0          −0.9          3.5 (tied)
   3         3.9      8.2           4.3         10
   4         4.7      7.6           2.9          7
   5         5.3      6.5           1.2          5
   6         5.4      8.4           3.0          8
   7         4.2      6.9           2.7          6
   8         6.1      6.7           0.6          2
   9         3.8      7.4           3.6          9
   10        6.3      5.8          −0.5          1

Procedure:
1. Rank the differences by size, ignoring the signs and excluding any that equal 0. (Here the two differences of 0.9 would occupy ranks 3 and 4, so each receives the tied rank 3.5.)
2. Sum the ranks of the positive and of the negative differences:

   T+ = 3.5 + 10 + 7 + 5 + 8 + 6 + 2 + 9 = 50.5
   T− = 3.5 + 1 = 4.5

H0: drug and placebo give the same results, so we expect T+ and T− to be similar. If they are not, compare the smaller of the two with the value expected by chance alone. Let T be the smaller of T+ and T−; thus T = 4.5. Look up T in the Wilcoxon signed rank table at a sample size of N, where N is the number of ranked differences excluding zeros. With N = 10 we find p < .02, and we reject H0.

Comparing more than two samples

Suppose you were asked to compare blood pressure readings from English, Welsh and Scottish people and to decide whether they differ from one another. The t test is not appropriate for such a study. The equivalent of the t test for more than two samples is called analysis of variance (anova for short). This procedure, which can only be applied to Normally distributed data, enables you to determine whether the variation between the sample means can be accounted for by the variation that occurs within the data as a whole (this is the null hypothesis), or whether the variation between the means reflects significant differences between them. For a single factor of analysis (such as nationality) one-way anova is performed; for two factors of analysis (for instance nationality and sex) two-way anova is used, and so on.

Variances are calculated from "sums of squares" (i.e. Σ(X − X̄)²; call it SS for short). These may be partitioned in the following way:

   SS total = SS between groups + SS within groups

The procedure is as follows.
1. Calculate the total SS, i.e. over all the data.
2. Calculate the SS between the means of each group or sample.
3. Calculate the residual SS, which is the SS within the groups.

Now calculate the ratio F of the between-group variance to the within-group variance and deduce the p value from the F table. (Note that for only two groups the result is identical to the t test.)
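A sketch of how such a comparison might be run (assuming SciPy; the blood pressure readings below are invented for illustration):

    from scipy import stats

    # Hypothetical systolic blood pressures (mmHg) for three nationalities
    english  = [128, 135, 121, 140, 132, 126]
    welsh    = [130, 124, 138, 129, 133, 127]
    scottish = [141, 136, 144, 138, 131, 139]

    # One-way anova: F = between-group variance / within-group variance
    f, p = stats.f_oneway(english, welsh, scottish)
    print(f, p)   # reject H0 (equal means) if p < .05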

Comparing observed and expected data: the χ² test

A way of comparing data that can be grouped into categories is to place the results in a contingency table that contains both the observed and the expected data. One way of testing whether the differences between observed and expected values are significant is the χ² test. (Note: χ, or chi, pronounced as in "sky", is a Greek letter. It is not always available to typists and printers, which is why the test is sometimes written as the chi-squared test.) The restrictions on the use of this test are:

1. n > 20;
2. there must be at least 5 items in any "expected" box;
3. the boxes must contain actual counts, not proportions.

On the other hand, χ² tests are not restricted to Normally distributed data. The χ² test can be used to detect an association between two (or more) variables measured for each individual. These variables need not be continuous; they can be discrete or nominal (see above). For two variables we use a 2 × 2 contingency table. For example: does influenza vaccination reduce the chance of contracting the disease?

OBSERVED DATA:

   'flu    Vaccinated   Placebo   Total
   Yes     20           80        100
   No      220          140       360
   Total   240          220       460

Expected values are calculated assuming the null hypothesis; e.g. for the first box, multiply 240 by the overall proportion catching 'flu: 240 × 100/460 = 52.2, and so on.

EXPECTED DATA:

   'flu    Vaccinated   Placebo   Total
   Yes     52.2         47.8      100
   No      187.8        172.2     360
   Total   240          220       460

   χ² = Σ (Obs − Exp)² / Exp

   χ² = (20 − 52.2)²/52.2 + (80 − 47.8)²/47.8 + (220 − 187.8)²/187.8 + (140 − 172.2)²/172.2 = 53.09

The number of degrees of freedom is (no. of rows − 1)(no. of columns − 1) = 1. From the χ² table, χ² = 10.83 for p = .001; 53.09 greatly exceeds this, so we may reject H0 and conclude that the vaccine is effective.
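The hand calculation can be verified as follows (assuming SciPy; correction=False reproduces the calculation above, which does not apply Yates' continuity correction):

    from scipy.stats import chi2_contingency

    observed = [[20, 80],      # 'flu:    vaccinated, placebo
                [220, 140]]    # no 'flu: vaccinated, placebo

    chi2, p, dof, expected = chi2_contingency(observed, correction=False)
    print(chi2, dof, p)   # chi2 = 53.09, d.f. = 1, p far below .001
    print(expected)       # [[52.2, 47.8], [187.8, 172.2]]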

Errors in significance testing

Rejection of H0 is sometimes termed a "positive" finding, while acceptance is "negative". For example, when a patient is tested for a particular disease and the result is significantly different from controls, the individual is termed positive for that test. If the test were faulty it might give false positive or false negative results. These are classified as:

   Type I errors, or false positives: incorrect rejection of H0
   Type II errors, or false negatives: incorrect acceptance of H0

Statistical power

By definition, the probability of a Type I error is equal to the chosen significance level (usually 5%). We can reduce the probability of a Type I error by setting a lower significance level, say 1%. The probability of a Type II error is a little more complicated. If H0 is false then the distribution of sample means will be centred around a population mean that is different from µ; call it µ′. We reject H0 when our sample mean lies in the tails of the sampling distribution centred on µ. However, there is a chance that our sample could have a mean in the overlap region of the two distributions, i.e. there is a β% chance that we would incorrectly accept the null hypothesis. The power of a statistical test is the probability of not doing this, i.e. (100 − β)%. Decreasing the significance level will reduce the power; increasing the sample size will increase the power.

CORRELATION AND LINEAR REGRESSION

If we want to measure the degree of association between two variables that we suspect may be dependent on one another, we can calculate the correlation coefficient or perform linear regression. These methods test only for a linear association, i.e. that the data are related by an expression of the type y = a + bx. (Recall that this is the equation of a straight line with slope b and an intercept on the y axis at y = a.)

An important preliminary test is to draw a scatter plot of the data. For example, compare IQ and height for a sample of individuals; in another example, compare the probability of heart disease with daily fat intake. There doesn't seem to be much correlation between height and intelligence, but there appears to be an increased likelihood of heart disease when more fat is consumed.

[Figure: scatter plots of intelligence score against height (no obvious correlation) and risk of heart attack against dietary fat intake (positive association), together with examples of r = 0, 0 < r < 1, r = 1 and r = −1.]

The horizontal axis (sometimes called the abscissa) is usually the independent variable, the one whose values you select or that are determined already. The vertical axis (or ordinate) is usually reserved for the dependent variable, the one that is determined by nature.

Correlation coefficient

This is given by

   r = Σ(X − X̄)(Y − Ȳ) / √[ Σ(X − X̄)² Σ(Y − Ȳ)² ]

It would be inappropriate to calculate the correlation coefficient for data that are non-linear, i.e. that do not follow a straight-line relationship.

[Figure: a curved (non-linear) relationship between Y and X, for which r would be inappropriate.]

Notice:
1. r has no units.
2. The closer r is to ±1, the better the correlation.
3. Correlation doesn't necessarily indicate direct causality.

Remember: the data must also be Normally distributed (otherwise use a non-parametric test such as Spearman's rank correlation test).
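In code, the correlation coefficient is a direct translation of the formula (a plain-Python sketch; the x and y values are invented paired measurements):

    import math

    def pearson_r(x, y):
        # r = sum((X - Xbar)(Y - Ybar)) / sqrt(sum((X - Xbar)^2) * sum((Y - Ybar)^2))
        xbar = sum(x) / len(x)
        ybar = sum(y) / len(y)
        sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
        sxx = sum((xi - xbar) ** 2 for xi in x)
        syy = sum((yi - ybar) ** 2 for yi in y)
        return sxy / math.sqrt(sxx * syy)

    x = [1.0, 2.0, 3.0, 4.0, 5.0]    # hypothetical
    y = [2.1, 3.9, 6.2, 8.1, 9.8]
    print(pearson_r(x, y))           # close to +1: a strong positive linear correlation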

Example: is there a correlation between the body weights of 8 healthy men and their corresponding blood plasma volumes?

   Subject   Weight (kg)   Plasma volume (l)
   1         58            2.75
   2         70            2.86
   3         74            3.37
   4         63.5          2.76
   5         62            2.62
   6         70.5          3.49
   7         71            3.05
   8         66            3.12

We find r = 0.76, which is a rather weak correlation; clearly other factors must affect plasma volume. How much of the observed variation is determined by body weight? This is given by r², which is called the coefficient of determination. In our example r² = 0.58, so 58% of the variation in plasma volume is accounted for by its correlation with body weight.

Linear regression

This is an alternative way of assessing dependence, but it also provides the equation of the straight line that best fits the data, by specifying its slope and intercept. This line is called the regression line, and it is obtained by minimising the distances between the data points and the fitted line. Usually x is the independent variable (i.e. determined by the investigator) and the vertical (y) distances are minimised. (For example, we wish to know how plasma volume is determined by body weight, not the converse.) The line we obtain is then termed the regression of y upon x. Its equation is

   Y = a + bX,   where   b = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)²   and   a = Ȳ − bX̄

In our example b = 0.0436 and a = 0.0857, so that

   Y = 0.0857 + 0.0436 X

and we can construct the line by calculating y values for chosen x values.

[Figure: plasma volume (l) plotted against body mass (kg), with the fitted regression line.]

The derived equation can be used to calculate values of y for a given x; alternatively, y values may be read directly from the straight-line graph. Both of these operations should be restricted to the region encompassed by the original data: this is called interpolation. The estimation of y values beyond the data region is called extrapolation. Often there is no reason to assume that the regression line will apply beyond the data limits, so extrapolation can be misleading.
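The plasma volume example can be reproduced in one call (a sketch assuming SciPy is available):

    from scipy import stats

    weight = [58, 70, 74, 63.5, 62, 70.5, 71, 66]               # kg
    plasma = [2.75, 2.86, 3.37, 2.76, 2.62, 3.49, 3.05, 3.12]   # litres

    res = stats.linregress(weight, plasma)
    print(res.slope, res.intercept)    # b ≈ 0.0436, a ≈ 0.0857
    print(res.rvalue, res.rvalue**2)   # r ≈ 0.76, r² ≈ 0.58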

Areas in tail of the standard normal distribution

[Table: the proportion of the area above z, tabulated by the second decimal place of z for z = 0.00 to 4.09. For example, the area above z = 1.96 is 0.025, the basis of the 95% confidence interval.]

Critical values of t

[Table: critical values of t, tabulated by one- and two-tailed p value and by degrees of freedom. Selected two-tailed values used in the text: at d.f. = 9, t = 2.26 (p = .05), 2.82 (p = .02), 3.25 (p = .01); at d.f. = 27, t = 2.05 (p = .05), 2.47 (p = .02); as d.f. → ∞, t → 1.96 (p = .05).]

χ² distribution

[Table: critical values of χ², tabulated by p value and by degrees of freedom. Selected values: at d.f. = 1, χ² = 3.84 (p = .05), 6.63 (p = .01), 10.83 (p = .001).]