Basic Statistics for SGPE Students Part I: Descriptive Statistics




Basic Statistics for SGPE Students
Part I: Descriptive Statistics

Achim Ahrens (ahrensachim@gmail.com), Anna Babloyan (annababloyan@gmail.com), Erkal Ersoy (erkalersoy@gmail.com)
Heriot-Watt University, Edinburgh
September 2015

Outline

1. Descriptive statistics
   - Sample statistics (mean, variance, percentiles)
   - Graphs (box plot, histogram)
   - Data transformations (log transformation, unit of measure)
   - Correlation vs. Causation
2. Probability theory
   - Conditional probabilities and independence
   - Bayes' theorem
3. Probability distributions
   - Discrete and continuous probability functions
   - Probability density function & cumulative distribution function
   - Binomial, Poisson and Normal distribution
   - E[X] and V[X]
4. Statistical inference
   - Population vs. sample
   - Law of large numbers
   - Central limit theorem
   - Confidence intervals
5. Hypothesis testing and p-values

Descriptive statistics

In recent years, more and better-quality data have been recorded than at any other time in history. The increasing size of the data sets readily available to us has enabled us to adopt new and more robust statistical tools. Unfortunately, rising data availability has also led empirical researchers to sometimes overlook preliminary steps, such as summarizing and visually examining their data sets. Ignoring these preliminary steps can lead to important issues and invalidate seemingly significant results. As we will see in this and the following lectures, there are several ways in which we can numerically summarize a data set. Before we discuss those approaches, let's take a quick look at what is available to us for visualizing a data set graphically.

Descriptive statistics: Histograms

Histograms are extremely useful for getting a good graphical representation of the distribution of data. These figures consist of adjacent rectangles over discrete intervals, whose areas are proportional to the frequency of observations in the interval. Histograms are often normalized to show the proportion (or density) of observations that fall into non-overlapping categories. In such cases, the total area under the bins equals 1.

Remark: The height of each bin in a normalized histogram represents the density, or proportion, of observations that fall into that category. These can easily be interpreted as percentages.
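To make this concrete, here is a minimal sketch (ours, not the lecture's, using Python with numpy and matplotlib and simulated stand-in data) showing how a normalized histogram is built and why its bin areas sum to 1:

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated stand-in for the life-expectancy data used in the slides.
rng = np.random.default_rng(42)
life_exp = rng.normal(loc=65, scale=10, size=200)

# density=True rescales bar heights so each bar's area is the proportion of
# observations in that bin; the total area under the bins is then 1.
heights, edges, _ = plt.hist(life_exp, bins=12, density=True, edgecolor="black")
print(np.sum(heights * np.diff(edges)))  # 1.0

plt.xlabel("Life expectancy (in years)")
plt.ylabel("Density")
plt.show()
```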

Descriptive statistics: Histograms

[Figure: normalized histogram of life expectancy across countries in 1960; x-axis: life expectancy (in years), 25 to 85; y-axis: density, 0 to 0.08.]

Approximately, what is the average life expectancy in 1960? Roughly what percentage of countries had life expectancy above 65? What proportion of countries had a life expectancy less than 55 years?

Descriptive statistics: Histograms

[Figures: the same normalized histogram of life expectancy (in years) across countries, shown for 1960, 1990 and 2011; x-axis: 25 to 85 years; y-axis: density, 0 to 0.08.]

Descriptive statistics: The Mean and Standard Deviation

A histogram can help summarize large amounts of data, but we often like to see an even shorter (and sometimes easier to interpret) summary. This is usually provided by the mean and the standard deviation. The mean (and median) are frequently used to find the center, whereas the standard deviation measures the spread.

Definition: The (arithmetic) mean of a list of numbers is their sum divided by how many there are. For example, the mean of 9, 1, 2, 2, 0 is (9 + 1 + 2 + 2 + 0)/5 = 2.8. More generally,

$\text{mean} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \quad i = 1, \dots, n$

Descriptive statistics: The Mean and Standard Deviation

The standard deviation (SD) tells us how far numbers on a list deviate from their average. Usually, most numbers are within one SD around the mean. More specifically, for normally distributed variables, about 68% of entries are within one SD of the mean and about 95% of entries are within two SDs.

[Figure: two normal curves; the first shades the 68% of the area between (mean - one SD) and (mean + one SD), the second shades the 95% of the area between (mean - two SDs) and (mean + two SDs).]

Descriptive statistics: Computing the Standard Deviation

Definition: $\text{Standard Deviation} = \sqrt{\text{mean of (deviations from the mean)}^2}$, where deviation from mean = entry - mean.

In formal notation, $\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2}$, where $\mu = \frac{1}{N}(x_1 + \dots + x_N)$.

Example: Find the SD of 20, 10, 15, 15.
Answer: mean = $\bar{x}$ = (20 + 10 + 15 + 15)/4 = 15. Then, the deviations are 5, -5, 0, 0, respectively. So,

$SD = \sqrt{\frac{5^2 + (-5)^2 + 0^2 + 0^2}{4}} = \sqrt{\frac{50}{4}} = \sqrt{12.5} \approx 3.5$

Remark: The SD comes out in the same units as the data. For example, if the data are a set of individuals' heights in inches, the SD is in inches too.
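A quick numerical check of this worked example, as a sketch in Python (our code, not part of the original slides):

```python
import numpy as np

data = np.array([20, 10, 15, 15])

# Population SD: the square root of the mean squared deviation from the mean.
deviations = data - data.mean()
sd_manual = np.sqrt((deviations ** 2).mean())

print(data.mean())       # 15.0
print(sd_manual)         # 3.5355... (i.e. sqrt(12.5))
print(data.std(ddof=0))  # same value via numpy's built-in (ddof=0)
```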

Descriptive statistics: The Root-Mean-Square

Consider the following list of numbers: 0, 5, -8, 7, -3. Question: How big are these numbers? What is their mean? The mean is 0.2, but this does not tell us much about the size of the numbers; it only implies that the positive numbers slightly outweigh the negative ones. To get a better sense of their size, we could use the mean of their absolute values. Statisticians tend to use another measure, though: the root-mean-square.

Definition: $\text{Root mean square (rms)} = \sqrt{\text{average of (entries)}^2}$

Descriptive statistics: The Root-Mean-Square and Standard Deviation

There is an alternative way of calculating the SD using the root-mean-square:

Remark: $SD = \sqrt{\text{mean of (entries)}^2 - (\text{mean of entries})^2}$

Recall the four numbers we used earlier to calculate the SD: 20, 10, 15, 15.

mean of (entries)² = (20² + 10² + 15² + 15²)/4 = 950/4 = 237.5
(mean of entries)² = ((20 + 10 + 15 + 15)/4)² = (60/4)² = 225

Therefore, $SD = \sqrt{237.5 - 225} = \sqrt{12.5} \approx 3.5$, which agrees with what we found earlier.
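The same numbers confirm the root-mean-square identity numerically; a minimal sketch (ours):

```python
import numpy as np

data = np.array([20, 10, 15, 15])

# SD via the identity: sqrt(mean of squares minus square of the mean).
sd_rms = np.sqrt((data ** 2).mean() - data.mean() ** 2)

print(sd_rms)                                 # 3.5355...
print(np.isclose(sd_rms, data.std(ddof=0)))  # True
```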

Descriptive statistics: Variance

In probability theory and statistics, variance gets mentioned nearly as often as the mean and standard deviation. It is very closely related to the SD and is a measure of how far a set of numbers lies from its mean. Variance is the second moment of a distribution (the mean being the first moment), and therefore tells us about the properties of the distribution (more on these later).

Definition: $\text{Variance} = (\text{Std. Dev.})^2 = \sigma^2$

Descriptive statistics: Normal Approximation for Data and Percentiles

[Figure: histogram of daily trading volume for the S&P 500, January 2001 to December 2001, with the mean and -2 to +4 SD marked; x-axis: volume (thousands), 5,000 to 25,000; y-axis: frequency, 0 to 60. Source: Yahoo! Finance and Commodity Systems, Inc.]

Is the normal approximation satisfactory here?

Descriptive statistics: Normal Approximation for Data and Percentiles

[Figure: normalized histogram of life expectancy in 2011 with a normal curve overlaid and the mean and -2 to +2 SD marked; x-axis: life expectancy (in years), 45 to 95; y-axis: density, 0 to 0.08.]

How about here?

Descriptive statistics: Normal Approximation for Data and Percentiles

Remark: The mean and SD can be used to effectively summarize data that follow the normal curve, but these summary statistics can be much less satisfactory for data that do not follow the normal curve. In such cases, statisticians often opt for using percentiles to summarize distributions.

Table. Selected percentiles for life expectancy in 2011

Percentile   Value
1            48
10           52.6
25           63
50           73.4
75           76.9
95           81.8
99           82.7

Descriptive statistics: Calculating percentiles

1. Order all the values in your data set in ascending order (i.e. smallest to largest).
2. Select a percentile, P, that you would like to calculate and multiply it (expressed as a fraction) by the total number of entries in your data set, n. The value you obtain here is called the index.
3. If the index is not a whole number, round it up to the next integer.
4. Count the entries in your list of numbers, starting from the smallest one, until you get to the number indicated by your index.
5. This entry is the Pth percentile of your data set. (A minimal code sketch of this rule follows below.)
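Here is a minimal sketch of the nearest-rank rule just described (the function name percentile_nearest_rank is ours; note that library routines such as numpy's percentile interpolate by default and can give different answers):

```python
import math

def percentile_nearest_rank(values, p):
    """Return the p-th percentile (0 < p <= 100) using the rule above."""
    ordered = sorted(values)                   # step 1: sort ascending
    index = math.ceil(p / 100 * len(ordered))  # steps 2-3: index, rounded up
    return ordered[index - 1]                  # steps 4-5 (0-based list)

print(percentile_nearest_rank([10, 15, 20, 25, 30], 25))  # 15 (2nd entry)
print(percentile_nearest_rank([10, 15, 20, 25, 30], 50))  # 20 (the median)
```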

Descriptive statistics: Calculating percentiles

Example: Consider the following list of 5 numbers: 10, 15, 20, 25, 30. What is the entry that corresponds to the 25th percentile? What is the median?

To obtain the 25th percentile, all we need to do is compute 0.25 × 5 = 1.25. Rounding up, the index becomes 2, so the 25th percentile in this case is the second entry, 15. We were also asked to obtain the median. To do this, calculate 0.5 × 5 = 2.5. Rounding up gives 3, so the median in this case is the third entry, 20.

Descriptive statistics: Percentiles

The 1st percentile of the distribution is approximately 48, meaning that life expectancy in 1% of countries in 2011 was 48 or less, and 99% of countries had a life expectancy higher than that. Similarly, the fact that the 25th percentile is 63 implies that 25% of countries had a life expectancy of 63 or less, whereas 75% had a longer expected lifespan.

Definition: The interquartile range is defined as (75th percentile - 25th percentile) and is sometimes used as a measure of spread, particularly when the SD would pay too much (or too little) attention to a small percentage of cases in the tails of the distribution. From the table above, the interquartile range equals 76.9 - 63 = 13.9 (and the SD was 10.14).

Descriptive statistics: Box plots

The structure of a box plot:

- Box: spans the 25th percentile (1st quartile, lower hinge) to the 75th percentile (3rd quartile, upper hinge), with a line at the 50th percentile (median).
- Upper whisker / adjacent line (upper adjacent value): the largest value within 1.5 × IQR above the upper hinge.
- Lower whisker / adjacent line (lower adjacent value): the smallest value within 1.5 × IQR below the lower hinge.
- Entries beyond the adjacent values are plotted individually.

Descriptive statistics: Box plots

[Figure: box plots of life expectancy by region in 2011; y-axis: life expectancy (in years), 50 to 90.]

Are there any clear patterns emerging from summarizing the data this way?

Legend:
EAS: East Asia & Pacific
ECS: Europe & Central Asia
LCN: Latin America & Caribbean
MEA: Middle East & North Africa
NAC: North America
SAS: South Asia
SSF: Sub-Saharan Africa

Descriptive statistics: Box plots

We might be able to spot some patterns that developed over time if we look at different years:

[Figures: box plots of life expectancy by region (EAS, ECS, LCN, MEA, NAC, SAS, SSF) for 1960, 1990 and 2011; y-axis: life expectancy (in years), 25 to 85.]

Data Transformations: The effects of changing the unit of measure

Now that we know how to summarize a dataset, let us turn to investigating the effects of changing the unit of measure of a variable on the mean and standard deviation. Such changes in the unit of measure could be made for practical reasons or based on theory, but regardless of the reason, a statistician should know what to expect. To study this, let's consider a dataset on 200 individuals' weights and heights. Each entry is originally reported in kg and cm, respectively, and below are some summary statistics:

Table. Summary statistics

Variable      Mean     Standard Deviation
Weight (kg)   65.8     15.1
Height (cm)   170.02   12.01

Data Transformations: The effects of changing the unit of measure

And here are some diagrams that summarize the distribution of the two variables.

[Figures: normalized histograms of weight measured in kg (x-axis 40 to 170, density 0 to 0.04) and height measured in cm (x-axis 140 to 200, density 0 to 0.04), each with the mean and -2 to +2 SD marked.]

Does the normal approximation look satisfactory?

Data Transformations: The effects of changing the unit of measure

[Figures: box plots of weight (kg) and height (cm) by sex (F, M).]

Data Transformations: The effects of changing the unit of measure

[Figures: normalized histograms of weight measured in kg (x-axis 40 to 170) and the same weights measured in pounds (x-axis 80 to 360), each with the mean and -2 to +2 SD marked.]

Do you think the mean matches the original one (in correct units)? How about the standard deviation?

Data Transformations: The effects of changing the unit of measure

[Figures: normalized histograms of height measured in cm (x-axis 140 to 200) and the same heights measured in inches (x-axis 55 to 80), each with the mean and -2 to +2 SD marked.]

Do you think the mean matches the original one (in correct units)? How about the standard deviation?

Data Transformations: The effects of changing the unit of measure

Here are the box plots with the transformed data:

[Figures: box plots of weight (lb) and height (in) by sex (F, M).]

Data Transformations: The effects of changing the unit of measure

Observations made using the figures are, of course, based on what statisticians and econometricians often call "eye-balling" the data. These observations are certainly not formal, but they are a crucial part of effectively analyzing any dataset. In fact, you should make plotting, investigating and eye-balling your data a habit before you dive into complicated models and overlook important features of your dataset. Now that we have made our informal observations, let's look at the actual numbers.

Table. Summary statistics

Variable      Mean     SD      Mean (converted)          SD (converted)
Weight (kg)   65.8     15.1    65.8 × 2.2 ≈ 145.06       15.1 × 2.2 ≈ 33.28
Height (cm)   170.02   12.01   170.02 × 0.4 ≈ 66.94      12.01 × 0.4 ≈ 4.73
Weight (lb)   145.06   33.28   145.06 / 2.2 ≈ 65.8       33.28 / 2.2 ≈ 15.1
Height (in)   66.94    4.73    66.94 × 2.5 ≈ 170.02      4.73 × 2.5 ≈ 12.01

Data Transformations: The effects of changing the unit of measure

We have seen that the mean and the standard deviation simply scale with the unit of measure, but how does variance behave? Suppose we naively multiply each variance by the same conversion factor we used for the mean:

Table. Summary statistics

Variable      Mean     Variance   Mean (converted)         Variance (naively converted)
Weight (kg)   65.8     228.01     65.8 × 2.2 ≈ 145.06      228.01 × 2.2 ≈ 502.68
Height (cm)   170.02   144.24     170.02 × 0.4 ≈ 66.94     144.24 × 0.4 ≈ 56.79
Weight (lb)   145.06   1107.56    145.06 / 2.2 ≈ 65.8      1107.56 / 2.2 ≈ 502.38
Height (in)   66.94    22.37      66.94 × 2.54 ≈ 170.02    22.37 × 2.54 ≈ 56.81

The converted means match the directly measured ones, but the naively converted variances do not (502.68 is nowhere near the 1107.56 we measure directly in pounds, and 56.81 is nowhere near 144.24). Note that 1 inch = 2.54 cm and, similarly, 1 cm = 1/2.54 = 0.3937 in. Then, 22.37 / (0.3937)² ≈ 144.24. The opposite is true as well: 144.24 / (2.54)² ≈ 22.37. And we can apply the same to the weights in kg and lb. In general, variance is scaled by the square of the constant by which all the values are scaled, as the sketch below illustrates.
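A small simulation (ours, with made-up data) makes the scaling rules explicit: a change of units by a constant c scales the mean and SD by c, but the variance by c²:

```python
import numpy as np

rng = np.random.default_rng(0)
weight_kg = rng.normal(loc=65.8, scale=15.1, size=200)  # simulated weights
weight_lb = weight_kg * 2.2                             # linear change of units

print(weight_lb.mean() / weight_kg.mean())  # 2.2  (mean scales by c)
print(weight_lb.std() / weight_kg.std())    # 2.2  (SD scales by c)
print(weight_lb.var() / weight_kg.var())    # 4.84 (variance scales by c**2)
```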

Data Transformations: Properties of Variance

While we are at it, here are some basic properties of variance:

- Variance is non-negative: Var(X) ≥ 0.
- The variance of a constant random variable is zero: P(X = a) = 1 implies Var(X) = 0.
- Var(aX) = a² Var(X). However, Var(X + a) = Var(X).
- For two random variables X and Y, Var(aX + bY) = a² Var(X) + b² Var(Y) + 2ab Cov(X, Y)...
- ...but Var(X - Y) = Var(X) + Var(Y) - 2 Cov(X, Y). (A simulation check of these last two identities follows below.)
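These identities are easy to verify numerically; a sketch (ours) using sample moments with matching degrees of freedom:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
y = 0.5 * x + rng.normal(size=100_000)  # built to be correlated with x
a, b = 2.0, -3.0

cov_xy = np.cov(x, y, ddof=0)[0, 1]

# Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y)
lhs = np.var(a * x + b * y)
rhs = a**2 * np.var(x) + b**2 * np.var(y) + 2 * a * b * cov_xy
print(lhs, rhs)  # equal up to floating-point error

# Var(X - Y) = Var(X) + Var(Y) - 2 Cov(X, Y)
print(np.var(x - y), np.var(x) + np.var(y) - 2 * cov_xy)
```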

Data Transformations: Log Transformation

So far, we have only worked with transformations in which we multiply each value by a constant. However, more complicated transformations are quite common in statistics and econometrics. One of the most common and useful transformations uses the natural logarithm.

Definition: Data transformation refers to applying a specific operation to each point in a dataset, in which each data point is replaced with the transformed one. That is, the $x_i$ are replaced by $y_i = f(x_i)$.

In our previous example with heights, our function f(x) was simply f(x) = 2.54x. Now, let us study a different function: the natural logarithm.

Data Transformations: Log Transformation

Log transformation in action:

[Figures: UK real GDP, 1960 to 2010. Left panel: output-side real GDP at current PPPs (in mil. 2005 US$), 0 to 2,000,000; right panel: the natural log of the same series, 13 to 14.5.]

Data Transformations: Log Transformation

[Figures: life expectancy (in years) vs. output-side real GDP at current PPPs (in mil. 2005 US$), in levels (left) and with both variables log-transformed (right).]

Important note: The log transformation can only be used for variables that have positive values (why?). If the variable has zeros, the transformation can be applied only after these figures are replaced (usually by one-half of the smallest positive value in the data set).
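As a minimal sketch (ours, with made-up numbers) of the zero-replacement rule mentioned in the note:

```python
import numpy as np

x = np.array([0.0, 0.0, 3.0, 12.0, 45.0, 160.0])  # made-up data with zeros

# Replace zeros by one-half of the smallest positive value, then take logs.
half_min_pos = x[x > 0].min() / 2
x_adjusted = np.where(x == 0, half_min_pos, x)
print(np.log(x_adjusted))
```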

Data Transformations: Log Transformation

[Figures: bubble plots of life expectancy (in years, linear scale) against real GDP per capita (at constant 2005 national prices) across countries, with bubble size proportional to population (10 to 1000 million) and colors by region (EAS, ECS, LCN, MEA, NAC, SAS, SSF); selected countries labelled (e.g. CHN, IND, IDN, JPN, GBR, USA, ZAF, RUS). The first panel shows 2011 with GDP per capita on a linear scale (0 to 60,000); the remaining panels show 1960, 1990 and 2011 with GDP per capita on a log scale (156.25 to 80,000).]

Data Transformations: Log Transformation and growth

A useful feature of the log transformation is the interpretation of its first difference as a percentage change (for small changes). This is because $\ln(1 + x) \approx x$ for small x.

Strictly speaking, the percentage change in Y from period t-1 to period t is defined as $\frac{Y_t - Y_{t-1}}{Y_{t-1}}$, which is approximately equal to $\ln(Y_t) - \ln(Y_{t-1})$. The approximation is almost exact if the percentage change is small. To see this, consider the percentage change in US GDP from 2010 to 2011:

Table. US Real GDP (in mil. 2005 US$)

Year   GDP          Percentage change   ln(Y_t)     ln(Y_2011) - ln(Y_2010)
2010   12,993,576   0.01803507          16.379966   0.01787436
2011   13,227,916   .                   16.39784    .

And the difference between the two measures of change is 0.01803507 - 0.01787436 = 0.00016071, a discrepancy that we might be willing to live with.
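Reproducing these numbers takes only a few lines (a sketch, ours):

```python
import numpy as np

gdp_2010, gdp_2011 = 12_993_576, 13_227_916  # US real GDP, mil. 2005 US$

pct_change = (gdp_2011 - gdp_2010) / gdp_2010
log_diff = np.log(gdp_2011) - np.log(gdp_2010)

print(pct_change)             # 0.01803507...
print(log_diff)               # 0.01787436...
print(pct_change - log_diff)  # ~0.000161, the discrepancy noted above
```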

Examining Relationships: Covariance and Correlation

Our daily lives (and not just within economics) are filled with statements about the relationship between two variables. For example, we might read about a study that found that men spend more money online than women. The relationship between gender and online spending may not be this simple, of course; income might play a role in this observed pattern. Ideally, we would like to set up an experiment in which we control the behavior of one variable (keeping everything else the same) and observe its effect on another. This is often not feasible in economics (a lot more on this later!). For the time being, let's focus on simple correlation.

Examining Relationships: Covariance and Correlation

Scatter plots are very useful in identifying the sign and strength of the relationship between two variables. Therefore, it is always extremely useful to plot your data and investigate what the relationship between your two variables is:

[Figure: scatter plot of life expectancy (in years, 65 to 85) against Internet users per 100 people (0 to 80).]

Examining Relationships: Covariance and Correlation

But these plots can also be misleading to the eye simply by changing the scale of the axes:

[Figures: the same scatter plot of life expectancy vs. Internet users per 100 people drawn twice with different axis ranges (y from 65 to 85 vs. y from 45 to 95; x from 0 to 80 vs. 0 to 120).]

Examining Relationships: Covariance and Correlation

Therefore, it is best to obtain a numerical measure of the relationship, and correlation is the measure statisticians and econometricians tend to use.

Definition: Correlation measures the strength and direction of a linear relationship between two variables and is usually denoted as r:

$r_{x,y} = r_{y,x} = \frac{s_{x,y}}{s_x s_y}$

where $s_{x,y}$ is the sample covariance, and $s_x$ and $s_y$ are the sample standard deviations of x and y, respectively. The former (i.e. the sample covariance) is calculated as:

$s_{x,y} = s_{y,x} = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})$
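These two formulas translate directly into code; a sketch (ours, with made-up numbers):

```python
import numpy as np

def sample_cov(x, y):
    # s_xy = (1 / (N - 1)) * sum over i of (x_i - xbar)(y_i - ybar)
    return ((x - x.mean()) * (y - y.mean())).sum() / (len(x) - 1)

def sample_corr(x, y):
    # r_xy = s_xy / (s_x * s_y), with sample SDs (ddof=1)
    return sample_cov(x, y) / (x.std(ddof=1) * y.std(ddof=1))

x = np.array([1.0, 2.0, 4.0, 5.0, 8.0])
y = np.array([2.0, 3.0, 5.0, 4.0, 9.0])
print(sample_corr(x, y))        # hand-rolled version...
print(np.corrcoef(x, y)[0, 1])  # ...matches numpy's built-in
```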

Examining Relationships: Understanding covariance

To see how a scatter diagram can be read in terms of covariance between the two variables, consider the USA:

[Figure: scatter plot of the log of real GDP per capita (at constant 2005 national prices, 6 to 12) against average years of total schooling (0 to 15) in 2010, with the sample means $\bar{x}$ and $\bar{y}$ drawn as reference lines, the USA, COD and KWT labelled, and the deviations $x_{USA} - \bar{x}$ and $y_{USA} - \bar{y}$ marked.]

Because $x_{USA} > \bar{x}$ and $y_{USA} > \bar{y}$, the term $(x_{USA} - \bar{x})(y_{USA} - \bar{y})$ is positive. Also, $(x_{COD} - \bar{x})(y_{COD} - \bar{y}) > 0$, but $(x_{KWT} - \bar{x})(y_{KWT} - \bar{y}) < 0$. Thus, countries located in the top-right and bottom-left quadrants have a positive effect on $s_{x,y}$, whereas countries in the top-left and bottom-right quadrants have a negative effect on $s_{x,y}$.

Question: Should we use covariance or correlation as a more "robust" measure of the relationship? Why?

Examining Relationships: Understanding covariance

To answer this question, let's look more closely at how covariance behaves:

- A positive (negative) covariance indicates that x tends to be above its mean value whenever y is above (below) its mean value.
- A sample covariance of zero suggests that x and y are unrelated.

In our example, $s_{x,y} = 2.69$. This suggests that there is a positive relationship between x and y. But what does the value of 2.69 tell us about the strength of the relationship? Nothing. Why not? Suppose we wanted to measure schooling in decades instead of years. That is, we generate a new variable which equals schooling measured in years divided by 10. The new covariance is $s_{x,y} = 0.269$, which is much closer to zero even though the relationship itself has not changed. Technically speaking, covariance is not invariant to linear transformations of the variables.
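The schooling example is easy to replicate with simulated data (a sketch, ours; the variable names and numbers are made up), showing that rescaling x rescales the covariance but leaves the correlation untouched:

```python
import numpy as np

rng = np.random.default_rng(2)
school_years = rng.uniform(2, 13, size=100)              # made-up schooling data
log_gdp = 6 + 0.4 * school_years + rng.normal(size=100)  # made-up outcome

school_decades = school_years / 10  # same information, different unit

print(np.cov(school_years, log_gdp)[0, 1])    # some covariance s_xy
print(np.cov(school_decades, log_gdp)[0, 1])  # exactly s_xy / 10

print(np.corrcoef(school_years, log_gdp)[0, 1])    # correlation is identical
print(np.corrcoef(school_decades, log_gdp)[0, 1])  # in both units
```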

Examining Relationships: Covariance versus Correlation

The sample correlation coefficient addresses this problem. While $s_{x,y}$ may take any value between $-\infty$ and $+\infty$, the correlation coefficient is standardised such that $r \in [-1, 1]$.

Recall that $r_{x,y} = r_{y,x} = \frac{s_{x,y}}{s_x s_y}$, where $s_{x,y}$ is the covariance of x and y, and $s_x$ and $s_y$ are the sample standard deviations of x and y, respectively. Note that because $s_x > 0$ and $s_y > 0$, the sign of the sample covariance is the same as the sign of the correlation coefficient.

Correlation coefficient:
- $r_{x,y} > 0$ indicates positive correlation.
- $r_{x,y} < 0$ indicates negative correlation.
- $r_{x,y} = 0$ indicates that x and y are unrelated.
- $r_{x,y} = \pm 1$ indicates perfect positive (negative) correlation. That is, there exists an exact linear relationship between x and y of the form y = a + bx.

Examining Relationships: Correlation

In our example, $r_{x,y} = 0.7763$, which indicates positive correlation (because $r_{x,y} > 0$) and that the relationship is reasonably strong (because $r_{x,y}$ is not too far away from 1). To get a better feeling for what is "strong" and "weak", we generate 100 observations of x and y with varying degrees of correlation and plot them on scatter diagrams:

[Figures: three scatter plots of simulated data, both axes from -4 to 4, with r(x,y) = 0.9, r(x,y) = -0.9 and r(x,y) = 0.7.]
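One way to generate such artificial data (a sketch, ours) is to draw from a bivariate normal distribution with the desired correlation:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
fig, axes = plt.subplots(1, 3, figsize=(12, 4))

for ax, r in zip(axes, [0.9, -0.9, 0.7]):
    # 100 draws from a bivariate standard normal with correlation r.
    cov = [[1.0, r], [r, 1.0]]
    x, y = rng.multivariate_normal(mean=[0, 0], cov=cov, size=100).T
    ax.scatter(x, y, s=10)
    ax.set_title(f"r(x,y) = {r}")

plt.show()
```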

Examining Relationships: Correlation

[Figures: three more scatter plots of simulated data, both axes from -4 to 4, with r(x,y) = 0.3, r(x,y) = 0 and r(x,y) = 0.]

What's unusual about the right-most diagram here? In the right-most diagram, the correlation coefficient indicates that x and y are unrelated, but the graph implies otherwise. In fact, there is a strong quadratic relationship between x and y in this case.

Examining Relationships: Summary

- Correlation, r, measures the strength and direction of a linear relationship between two variables.
- The sign of r indicates the direction of the relationship: r > 0 for a positive association and r < 0 for a negative one.
- r always lies within [-1, 1] and indicates the strength of a relationship by how close it is to -1 or 1.

Examining Relationships: Correlation vs Causation

You may have already encountered the statement that "correlation does not imply causation". This is an important concept to grasp, because even a strong correlation between two variables is not enough to draw conclusions about causation. For instance, consider the following examples:

1. Do televisions increase life expectancy? There is a high positive correlation between the number of television sets per person in a country and life expectancy in that country. That is, nations with more TV sets per person have higher life expectancies. Does this imply that we could extend people's lives in a country just by shipping TVs to them? No, of course not. The correlation between these two variables stems from a nation's income: richer nations have more TVs per person than poorer ones. These nations also have access to better nutrition and health care.

2. Are big hospitals bad for you? A study has found positive correlation between the size of a hospital (measured by its number of beds) and the median number of days that patients remain in the hospital. Does this mean that you can shorten a hospital stay by choosing a small hospital?

3. Do firefighters make fires worse? A magazine has observed that "there's a strong positive correlation between the number of firefighters at a fire and the damage the fire does. So sending lots of firefighters just causes more damage." Is this reasoning flawed?

Examining Relationships: Reverse Causality

In addition to correlation feeding through a third (sometimes unobserved) variable, in economics we often run into reverse causality problems. Earlier, we showed that real GDP per capita and education (measured by average years of schooling) are positively correlated. This could be because:

1. Rich countries can afford more (and better) education. That is, an increase in GDP per capita causes an increase in schooling.
2. More (and better) education promotes innovation and productivity. That is, an increase in schooling causes an increase in GDP per capita.

The relationship between GDP per capita and education suffers from reverse causality. To reiterate, although we can make the statement that x and y are correlated, we do not know whether y is caused by x or vice versa. This is one of the central problems in empirical research in economics. In the course of the MSc, you will learn methods that allow you to identify the causal mechanisms in the relationship between y and x.