6. Heteroskedasticity (Violation of Assumption #B2)


1 6. Heteroskedasticity (Violation of Assumption #B2)
Assumption #B2: The error term u_i has constant variance for i = 1, ..., N, i.e. Var(u_i) = σ²
Terminology:
Homoskedasticity: constant variances of the u_i
Heteroskedasticity: non-constant variances of the u_i
100

2 Typical situations of heteroskedasticity:
Cross-section data sets covering household and regional data
Data with measurement errors following a trend
Financial market data (exchange-rate and asset-price returns)
Example: [I]
Rents for (business) real estate in distinct town quarters
101

3 Example: [II]
Variables:
y_i = rent for real estate in quarter i (EUR/m²)
x_i = distance to city center (in km)
Single-regressor model: y_i = α + β·x_i + u_i, i = 1, ..., 12
[Data table with the 12 RENT and DISTANCE observations not reproduced in this transcription]
102

4 [EViews output: OLS regression of RENT on C and DISTANCE, sample 1–12; coefficient estimates, standard errors, t-statistics and summary statistics are not reproduced in this transcription]
103

5 Example: [III]
Residual variation increases with the regressor ⇒ indication of heteroskedasticity
[Scatter plot of the residuals against DISTANCE not reproduced]
104

6 Issues:
Consequences of heteroskedasticity
Diagnostics (tests for heteroskedasticity)
Estimation and hypothesis testing in the presence of heteroskedasticity (weighted OLS estimation, Aitken estimator)
105

7 6.1 Consequences
Homoskedasticity vs. heteroskedasticity: [I]
Consider the linear regression model y = Xβ + u
Homoskedasticity means that (validity of Assumption #B3)
Cov(u) = σ²·I_N = diag(σ², σ², ..., σ²)
106

8 Homoskedasticity vs. heteroskedasticity: [II]
Heteroskedasticity means that Cov(u) = diag(σ_1², σ_2², ..., σ_N²) = σ²·Ω,
where Ω = diag(σ_1²/σ², σ_2²/σ², ..., σ_N²/σ²)
107

9 Example:
For the real-estate data set we could assume σ_i² = σ²·x_i, that is,
Ω = diag(x_1, x_2, ..., x_N)
⇒ Ω can be determined directly from the data
108

10 Now: Central result (proof and derivations are given below)
Theorem 6.1: (Consequences of heteroskedasticity)
In the presence of heteroskedasticity the OLS estimator β̂ = (X′X)⁻¹X′y is unbiased. However, the OLS estimator β̂ is no longer BLUE, i.e., there is another linear and unbiased estimator of β with a smaller covariance matrix.
109

11 Proof of unbiasedness:
We have β̂ = (X′X)⁻¹X′y = (X′X)⁻¹X′(Xβ + u) = β + (X′X)⁻¹X′u
It follows that E(β̂) = β + (X′X)⁻¹X′E(u)
From Assumption #B1 we have E(u) = 0_{N×1} and thus E(β̂) = β
(Assumption #B2 is not needed)
110

12 Now: Construction of a linear estimator of β that, in the presence of heteroskedasticity, is (1) unbiased and (2) more efficient than the OLS estimator β̂ = (X′X)⁻¹X′y
Procedure: [I]
We transform the heteroskedastic model y = Xβ + u such that
the parameter vector β remains unchanged
heteroskedasticity vanishes
the transformed model y* = X*β + u* satisfies all #A-, #B-, #C-assumptions
111

13 Procedure: [II]
New estimator of β: OLS estimator of the transformed model
β̂_GLS = [X*′X*]⁻¹ X*′y*
Example: [I]
Consider the single-regressor model y_i = α + β·x_i + u_i (i = 1, ..., N), where Var(u_i) = σ_i² = σ²·x_i
(cf. our real-estate example)
112

14 Example: [II]
Transformation: y_i/√x_i = α·(1/√x_i) + β·(x_i/√x_i) + u_i/√x_i
Define y_i* ≡ y_i/√x_i, z_i* ≡ 1/√x_i, x_i* ≡ x_i/√x_i, u_i* ≡ u_i/√x_i
⇒ transformed model: y_i* = α·z_i* + β·x_i* + u_i*
(multiple regression without intercept)
113

15 Example: [III]
We have
E(u_i*) = (1/√x_i)·E(u_i) = 0
Var(u_i*) = (1/√x_i)²·Var(u_i) = (1/x_i)·σ²·x_i = σ²
⇒ model is homoskedastic
114

16 Summary:
Transformed model is homoskedastic
Transformed model satisfies all #A-, #B-, #C-assumptions
y_i*, z_i*, x_i* can be obtained from the original (x_i, y_i)-values
⇒ OLS estimator of the transformed model is obtainable
115
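The steps on Slides 112–115 can be reproduced numerically. Below is a minimal Python sketch with hypothetical data (the values of the rent data set are not reproduced in this transcription): each observation is divided by √x_i, and ordinary OLS is applied to the transformed variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data mimicking the rent example: Var(u_i) = sigma^2 * x_i
N = 12
x = rng.uniform(1.0, 10.0, size=N)          # distance to the city center
u = rng.normal(scale=np.sqrt(2.0 * x))      # heteroskedastic errors
y = 10.0 - 0.8 * x + u                      # rent per square meter

# Transformed variables: divide every term by sqrt(x_i)
w = np.sqrt(x)
y_star = y / w
z_star = 1.0 / w                            # transformed intercept regressor
x_star = x / w

# OLS on the transformed model (no additional intercept)
X_star = np.column_stack([z_star, x_star])
coef, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
print("alpha_GLS =", coef[0], " beta_GLS =", coef[1])
```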

17 Generalization: [I]
Consider the heteroskedastic model y = Xβ + u, where (cf. Slide 107)
Cov(u) = E[uu′] = σ²·Ω
All elements of the diagonal matrix Ω = diag(σ_1²/σ², σ_2²/σ², ..., σ_N²/σ²) are positive
116

18 Generalization: [II]
Ω is a positive definite matrix
⇒ there is at least one regular (N × N) matrix P such that P′P = Ω⁻¹
(cf. Econometrics I, Slide 49)
117

19 Generalization: [III]
We transform the heteroskedastic model y = Xβ + u via the matrix P into Py = PXβ + Pu
Using the notation y* ≡ Py, X* ≡ PX, u* ≡ Pu, we obtain y* = X*β + u*
118

20 Generalization: [IV]
For the vector u* of the transformed model we have
E(u*) = E(Pu) = P·E(u) = 0_{N×1}
Cov(u*) = E{[Pu − E(Pu)][Pu − E(Pu)]′} = E[Puu′P′] = P·E[uu′]·P′ = σ²·PΩP′ = σ²·I_N
⇒ transformed model is homoskedastic
119

21 Remark:
The validity of PΩP′ = I_N follows from the equation P′P = Ω⁻¹ after multiplying from the left with PΩ and from the right with P⁻¹:
PΩP′PP⁻¹ = PΩΩ⁻¹P⁻¹ = I_N
120

22 Summary:
There always exists a transformation matrix P that transforms the heteroskedastic model y = Xβ + u with Cov(u) = σ²·Ω into the homoskedastic model y* = X*β + u* with Cov(u*) = σ²·I_N
The homoskedastic model satisfies all #A-, #B-, #C-assumptions
β remains unaffected by the transformation
121
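One concrete way to obtain such a matrix P is a Cholesky factorization of Ω⁻¹; for the diagonal Ω considered here this amounts to dividing the i-th observation by σ_i/σ. A small numerical check with an assumed diagonal Ω:

```python
import numpy as np

# Assumed example: a diagonal Omega with positive entries omega_i = sigma_i^2 / sigma^2
omega = np.array([0.5, 1.0, 2.0, 4.0])
Omega = np.diag(omega)

# Cholesky factor of Omega^{-1}: Omega^{-1} = L L'  =>  one valid choice is P = L'
L = np.linalg.cholesky(np.linalg.inv(Omega))
P = L.T

print(np.allclose(P.T @ P, np.linalg.inv(Omega)))         # P'P = Omega^{-1}
print(np.allclose(P @ Omega @ P.T, np.eye(len(omega))))   # P Omega P' = I_N
```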

23 Now: Potential estimator of β: OLS estimator of the transformed model (cf. Slide 112)
β̂_GLS = [X*′X*]⁻¹ X*′y*
= [(PX)′PX]⁻¹ (PX)′Py
= [X′P′PX]⁻¹ X′P′Py
= [X′Ω⁻¹X]⁻¹ X′Ω⁻¹y
122

24 Definition 6.2: (Generalized Least Squares estimator)
The OLS estimator of β obtained from the transformed homoskedastic model,
β̂_GLS = [X′Ω⁻¹X]⁻¹ X′Ω⁻¹y,
is called the Generalized Least Squares (GLS) estimator (also: Aitken estimator).
Theorem 6.3: (Properties of the GLS estimator)
The GLS estimator β̂_GLS = [X′Ω⁻¹X]⁻¹ X′Ω⁻¹y is linear and unbiased. Its covariance matrix is given by
Cov(β̂_GLS) = σ²·[X′Ω⁻¹X]⁻¹.
(Proof: see class)
123
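Definition 6.2 and Theorem 6.3 translate directly into a few lines of linear algebra. The following sketch assumes the pattern Ω = diag(x_i) of Slide 108 and hypothetical data; the true σ² is plugged into the covariance formula, whereas in practice it would have to be estimated.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 12
x = rng.uniform(1.0, 10.0, size=N)
sigma2 = 2.0
y = 10.0 - 0.8 * x + rng.normal(scale=np.sqrt(sigma2 * x))   # Var(u_i) = sigma^2 * x_i

X = np.column_stack([np.ones(N), x])
Omega_inv = np.diag(1.0 / x)       # Omega = diag(x_i)  =>  Omega^{-1} = diag(1/x_i)

XtOiX = X.T @ Omega_inv @ X
beta_gls = np.linalg.solve(XtOiX, X.T @ Omega_inv @ y)   # [X'Ω⁻¹X]⁻¹ X'Ω⁻¹y
cov_gls = sigma2 * np.linalg.inv(XtOiX)                  # σ² [X'Ω⁻¹X]⁻¹ (true σ² used here)

print("GLS estimates      :", beta_gls)
print("GLS standard errors:", np.sqrt(np.diag(cov_gls)))
```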

25 Remark:
Under homoskedasticity it follows that Ω = I_N and thus
β̂_GLS = [X′Ω⁻¹X]⁻¹ X′Ω⁻¹y = [X′I_N X]⁻¹ X′I_N y = [X′X]⁻¹ X′y = β̂
(GLS and OLS estimators coincide)
124
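A quick numerical confirmation of this remark with hypothetical homoskedastic data: plugging Ω = I_N into the GLS formula returns exactly the OLS estimate.

```python
import numpy as np

rng = np.random.default_rng(8)
N = 20
x = rng.uniform(1.0, 10.0, size=N)
y = 10.0 - 0.8 * x + rng.normal(size=N)     # homoskedastic errors
X = np.column_stack([np.ones(N), x])
Omega_inv = np.eye(N)                       # Omega = I_N

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
print(np.allclose(beta_ols, beta_gls))      # True: the two estimators coincide
```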

26 Obviously:
Both the GLS estimator β̂_GLS = [X′Ω⁻¹X]⁻¹ X′Ω⁻¹y and the OLS estimator β̂ = [X′X]⁻¹ X′y are linear and unbiased estimators
Question: Which estimator is more efficient?
125

27 Answer:
The transformed model y* = X*β + u* satisfies all #A-, #B-, #C-assumptions
⇒ The GLS estimator β̂_GLS = [X′Ω⁻¹X]⁻¹ X′Ω⁻¹y is BLUE of β (Gauss-Markov theorem)
⇒ The OLS estimator β̂ = [X′X]⁻¹ X′y cannot be efficient
126

28 Question:
What are the consequences of erroneously using the OLS estimator and its associated formulae for the standard errors in the presence of heteroskedasticity?
Answer:
We compare Cov(β̂) under heteroskedasticity (the true situation) with Cov(β̂) as computed under homoskedasticity (the untrue situation)
127

29 Comparison: [I]
Under heteroskedasticity we have
Cov(β̂) = E{[β̂ − E(β̂)][β̂ − E(β̂)]′}
= E{[β̂ − β][β̂ − β]′}
= E{(X′X)⁻¹X′u·[(X′X)⁻¹X′u]′}
= E{(X′X)⁻¹X′uu′X(X′X)⁻¹}
= (X′X)⁻¹X′·E[uu′]·X(X′X)⁻¹
= σ²·(X′X)⁻¹X′ΩX(X′X)⁻¹
128

30 Comparison: [II]
If we neglect heteroskedasticity we have Cov(β̂) = σ²·(X′X)⁻¹
Similarly, in the estimation of σ² we have:
Under heteroskedasticity: σ̂*² = û*′û*/(N − K − 1) = (Pû)′(Pû)/(N − K − 1)
Under the neglect of heteroskedasticity: σ̂² = û′û/(N − K − 1)
129

31 Summary:
Under the neglect of heteroskedasticity, estimation of β via the OLS estimator β̂ = (X′X)⁻¹X′y is unbiased, but inefficient
Under the neglect of heteroskedasticity, the ordinary estimators of the covariance matrix Cov(β̂) and of the error-term variance σ² are biased
⇒ statistics of t- and F-tests are based on biased estimators
⇒ t- and F-tests are very likely to be misleading
130
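The bias of the ordinary covariance estimator can be illustrated with a small Monte Carlo experiment (a sketch under an assumed design with Var(u_i) = σ²·x_i, not part of the original slides): the simulated sampling variation of the OLS slope is compared with the average "ordinary" standard error based on σ̂²(X′X)⁻¹.

```python
import numpy as np

rng = np.random.default_rng(2)
N, R = 100, 5000
x = rng.uniform(1.0, 10.0, size=N)
X = np.column_stack([np.ones(N), x])
XtX_inv = np.linalg.inv(X.T @ X)
sigma2 = 1.0

beta_hats, naive_var = [], []
for _ in range(R):
    u = rng.normal(scale=np.sqrt(sigma2 * x))      # heteroskedastic errors
    y = 10.0 - 0.8 * x + u
    b = XtX_inv @ X.T @ y                          # OLS estimate
    resid = y - X @ b
    s2 = resid @ resid / (N - 2)                   # ordinary sigma^2 estimate
    beta_hats.append(b)
    naive_var.append(s2 * np.diag(XtX_inv))        # sigma_hat^2 (X'X)^{-1}

beta_hats = np.asarray(beta_hats)
naive_se = np.sqrt(np.mean(naive_var, axis=0))
print("simulated std. dev. of the OLS slope:", beta_hats[:, 1].std())
print("average 'ordinary' std. error       :", naive_se[1])
```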

32 6.2 Diagnostics
Now: Statistical tests for heteroskedasticity
Basic structure of all tests:
H_0: Homoskedasticity vs. H_1: Heteroskedasticity
131

33 Consequence:
Non-rejection of H_0 ⇒ OLS results are unsuspicious
Rejection of H_0 ⇒ problematic OLS results (cf. Section 6.1) ⇒ application of alternative estimation procedures (cf. Section 6.3)
132

34 Problem of all tests:
Tests are based on different patterns of heteroskedasticity (e.g. σ_i² = σ²·x_ki, σ_i² = σ²·x_ki², etc.)
⇒ alternative tests for heteroskedasticity
1. The Goldfeld-Quandt test (special case)
Assumed pattern of heteroskedasticity:
Variances of the u_i are split into two groups:
σ_i² = σ_A² for all i belonging to group A (i ∈ A)
σ_i² = σ_B² for all i belonging to group B (i ∈ B)
133

35 Hypothesis test:
H_0: σ_A² = σ_B² versus H_1: σ_A² ≠ σ_B²
Test statistic: [I]
Notation:
N_A is the number of observations in group A
N_B is the number of observations in group B
S_ûû^A = Σ_{i∈A} û_i² is the sum of squared residuals in group A
S_ûû^B = Σ_{i∈B} û_i² is the sum of squared residuals in group B
134

36 Test statistic: [II]
Under Assumption #B4 it follows that
[S_ûû^A / (σ_A²·(N_A − K − 1))] / [S_ûû^B / (σ_B²·(N_B − K − 1))] ~ F_{N_A−K−1, N_B−K−1}
Under H_0: σ_A² = σ_B² we have
T = [S_ûû^A / (N_A − K − 1)] / [S_ûû^B / (N_B − K − 1)] ~ F_{N_A−K−1, N_B−K−1}
Reject H_0 at the significance level α if
T ∈ [0, F_{N_A−K−1, N_B−K−1; α/2}] ∪ [F_{N_A−K−1, N_B−K−1; 1−α/2}, +∞)
135
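A minimal sketch of the Goldfeld-Quandt statistic for two predefined groups; the data and the group split are hypothetical, and scipy supplies the F quantiles for the two-sided decision rule above.

```python
import numpy as np
from scipy import stats

def goldfeld_quandt(y_A, X_A, y_B, X_B, alpha=0.05):
    """Two-sided Goldfeld-Quandt test of sigma_A^2 = sigma_B^2."""
    def ssr_and_df(y, X):
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ b
        return resid @ resid, len(y) - X.shape[1]   # df = N_g - K - 1 (X includes the constant)
    ssr_A, df_A = ssr_and_df(y_A, X_A)
    ssr_B, df_B = ssr_and_df(y_B, X_B)
    T = (ssr_A / df_A) / (ssr_B / df_B)
    lo = stats.f.ppf(alpha / 2, df_A, df_B)
    hi = stats.f.ppf(1 - alpha / 2, df_A, df_B)
    return T, (T <= lo) or (T >= hi)                # statistic and rejection decision

# Hypothetical data split into a central (A) and a peripheral (B) group
rng = np.random.default_rng(3)
xA, xB = rng.uniform(0, 3, 15), rng.uniform(3, 10, 15)
yA = 10 - 0.8 * xA + rng.normal(scale=0.5, size=15)
yB = 10 - 0.8 * xB + rng.normal(scale=2.0, size=15)
XA = np.column_stack([np.ones(15), xA])
XB = np.column_stack([np.ones(15), xB])
print(goldfeld_quandt(yA, XA, yB, XB))
```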

37 Remarks: [I]
We can also test the one-sided alternative H_1: σ_A² > σ_B² via the statistic T
The critical region of this test at the α-level is given by [F_{N_A−K−1, N_B−K−1; 1−α}, +∞)
We test the reverse alternative H_1: σ_A² < σ_B² by interchanging the roles of the groups A and B
136

38 Remarks: [II]
The general Goldfeld-Quandt test can be used to test whether the σ_i²-values depend in a monotone way on a single exogenous variable x_ki (cf. Gujarati, 2003)
137

39 2. The Breusch-Pagan test
Assumed pattern of heteroskedasticity: [I]
Consider J exogenous variables z_1, ..., z_J and J coefficients α_1, ..., α_J
For i = 1, ..., N consider the transformation h(α_1·z_1i + α_2·z_2i + ... + α_J·z_Ji)
with h: R → R⁺ satisfying the following properties:
h is continuously differentiable
h(0) = 1
138

40 Assumed pattern of heteroskedasticity: [II]
We assume that the variances of the u_i are given by
σ_i² = σ²·h(α_1·z_1i + α_2·z_2i + ... + α_J·z_Ji)
Example: For h: R → R⁺ with h(x) = exp(x) we have
σ_i² = σ²·exp(α_1·z_1i + α_2·z_2i + ... + α_J·z_Ji)
(multiplicative heteroskedasticity)
139

41 Heteroskedasticity test:
Defining α ≡ [α_1 ... α_J]′, the testing problem is
H_0: α = 0_{J×1} versus H_1: α ≠ 0_{J×1}
Test statistic: [I]
There is a test that does not depend on the function h (Breusch-Pagan test)
140

42 Test statistic: [II]
Derivation (in its simplest form):
Estimate the model y = Xβ + u by OLS
Compute the residuals û = y − ŷ = y − Xβ̂
Estimate the model û_i² = α_0 + α_1·z_1i + ... + α_J·z_Ji + u_i (i = 1, ..., N) and find the coefficient of determination R²
141

43 Test statistic: [III]
Test statistic: T = N·R², with T asymptotically χ²_J distributed (chi-square distribution with J degrees of freedom)
Under H_0: α = 0_{J×1} the impact of z_1, ..., z_J on û_i² should be equal to zero
⇒ decision rule: Reject H_0 at the α-level if T = N·R² > χ²_{J; 1−α}
142
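The simple form of the Breusch-Pagan test derived above (auxiliary regression of û_i² on the z variables, then T = N·R²) can be sketched as follows; as an illustrative assumption, the single z variable is taken to be the regressor x itself.

```python
import numpy as np
from scipy import stats

def breusch_pagan(y, X, Z):
    """T = N * R^2 from the regression of squared OLS residuals on [1, Z]."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    u2 = (y - X @ b) ** 2                                   # squared OLS residuals
    Zc = np.column_stack([np.ones(len(y)), Z])
    g, *_ = np.linalg.lstsq(Zc, u2, rcond=None)
    r2 = 1.0 - np.sum((u2 - Zc @ g) ** 2) / np.sum((u2 - u2.mean()) ** 2)
    T = len(y) * r2
    J = Z.shape[1] if Z.ndim > 1 else 1
    return T, stats.chi2.sf(T, df=J)                        # statistic and asymptotic p-value

# Hypothetical example with a single z variable (= the regressor x)
rng = np.random.default_rng(4)
x = rng.uniform(1.0, 10.0, size=100)
y = 10.0 - 0.8 * x + rng.normal(scale=np.sqrt(x))
X = np.column_stack([np.ones(100), x])
print(breusch_pagan(y, X, x))
```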

44 Remark:
The Breusch-Pagan test is a Lagrange multiplier test (cf. the lecture "Advanced Statistics")
143

45 3. The White test
Special feature of the previous tests: Explicit structural form of heteroskedasticity
White test:
Allows for entirely unknown patterns of heteroskedasticity
Best-known heteroskedasticity test
Theoretical foundation: Eicker (1967), White (1980)
144

46 Preliminaries: [I]
Covariance matrix of the OLS estimator under heteroskedasticity (cf. Slide 128):
Cov(β̂) = σ²·(X′X)⁻¹X′ΩX(X′X)⁻¹
Question: Can we consistently estimate Cov(β̂) without any structural assumption on the σ_i²? (i.e. in the presence of heteroskedasticity of unknown form)
Answer: Yes, see White (1980)
145

47 Preliminaries: [II]
Consider the following partitioning of the X matrix into its rows:
X = [x_1′; x_2′; ...; x_N′], where x_i′ = [1 x_1i x_2i ... x_Ki]
Estimate the model y = Xβ + u by OLS
146

48 Preliminaries: [III]
Compute the residuals û = y − ŷ = y − Xβ̂
A consistent estimator of Cov(β̂) under heteroskedasticity of unknown form is given by
Ĉov(β̂) = (X′X)⁻¹ [Σ_{i=1}^N û_i²·x_i x_i′] (X′X)⁻¹
147

49 Definition 6.4: (Heteroskedasticity-robust standard errors)
The standard errors of the OLS estimator β̂ = (X′X)⁻¹X′y, which are given by the square roots of the diagonal elements of the estimated covariance matrix
Ĉov(β̂) = (X′X)⁻¹ [Σ_{i=1}^N û_i²·x_i x_i′] (X′X)⁻¹,
are called heteroskedasticity-robust standard errors or White standard errors.
148
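A sketch of the heteroskedasticity-robust (White) covariance matrix of Definition 6.4, again with hypothetical data; the "meat" term Σ û_i²·x_i x_i′ is built directly from the OLS residuals.

```python
import numpy as np

def white_cov(y, X):
    """OLS estimate and heteroskedasticity-robust covariance matrix."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u_hat = y - X @ beta
    # "meat" of the sandwich: sum_i u_hat_i^2 * x_i x_i'
    meat = (X * u_hat[:, None] ** 2).T @ X
    return beta, XtX_inv @ meat @ XtX_inv

rng = np.random.default_rng(5)
x = rng.uniform(1.0, 10.0, size=100)
y = 10.0 - 0.8 * x + rng.normal(scale=np.sqrt(x))
X = np.column_stack([np.ones(100), x])
beta, cov = white_cov(y, X)
print("coefficients          :", beta)
print("White standard errors :", np.sqrt(np.diag(cov)))
```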

50 Remarks:
White standard errors are available in EViews
White standard errors should be reported in empirical studies
149

51 [EViews output: the OLS regression of RENT on C and DISTANCE from Slide 103, reported twice — once with ordinary standard errors and once with White heteroskedasticity-consistent standard errors & covariance; the numerical values are not reproduced in this transcription]
150

52 Now: White test for heteroskedasticity of unknown form
Basis of the test: [I]
Comparison of the estimated covariance matrices of the OLS estimator β̂ = (X′X)⁻¹X′y under
homoskedasticity: Ĉov(β̂) = σ̂²·(X′X)⁻¹, where σ̂² = û′û/(N − K − 1)
heteroskedasticity: Ĉov(β̂) = (X′X)⁻¹ [Σ_{i=1}^N û_i²·x_i x_i′] (X′X)⁻¹ (the estimated White covariance matrix)
151

53 Basis of the test: [III]
Under homoskedasticity (H_0) both estimators should not differ substantially
⇒ test statistic of the White test
152

54 Test statistic of the White test: [I]
Estimate the model y = Xβ + u by OLS and compute the residuals û = y − ŷ = y − Xβ̂
Use the squared residuals û_i², the exogenous variables x_1i, ..., x_Ki, their squared values x_1i², ..., x_Ki², and all cross products x_ki·x_li (k = 1, ..., K, l = 1, ..., K, k ≠ l) to specify the model
û_i² = γ_0 + γ_1·x_1i + ... + γ_K·x_Ki + Σ_{k=1}^K Σ_{l=1}^K δ_kl·x_ki·x_li + u_i
153

55 Test statistic of the White test: [II]
Estimate the model by OLS and find the coefficient of determination R²
Under H_0 we have T = N·R² ~ (asymptotically) χ²_{K(K+1)}
Reject H_0 at the significance level α if T > χ²_{K(K+1); 1−α}
154
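A compact sketch of the White test with a single regressor, so that the auxiliary regression contains only x_i and x_i²; the degrees of freedom are taken as the number of distinct auxiliary regressors, which for K = 1 coincides with the K(K+1) = 2 above.

```python
import numpy as np
from scipy import stats

def white_test(y, X):
    """White test: regress squared OLS residuals on levels, squares and cross
    products of the regressors; X holds the regressors without a constant."""
    N, K = X.shape
    X_full = np.column_stack([np.ones(N), X])
    b, *_ = np.linalg.lstsq(X_full, y, rcond=None)
    u2 = (y - X_full @ b) ** 2
    # auxiliary regressors: levels, then squares and cross products (each once)
    aux = [X] + [X[:, [k]] * X[:, [l]] for k in range(K) for l in range(k, K)]
    W = np.column_stack([np.ones(N)] + aux)
    g, *_ = np.linalg.lstsq(W, u2, rcond=None)
    r2 = 1.0 - np.sum((u2 - W @ g) ** 2) / np.sum((u2 - u2.mean()) ** 2)
    T, df = N * r2, W.shape[1] - 1
    return T, stats.chi2.sf(T, df=df)

rng = np.random.default_rng(6)
x = rng.uniform(1.0, 10.0, size=100)
y = 10.0 - 0.8 * x + rng.normal(scale=np.sqrt(x))
print(white_test(y, x[:, None]))
```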

56 Example:
Test the data set on Slide 102 for heteroskedasticity via
the Goldfeld-Quandt test
the Breusch-Pagan test
the White test
(see class)
Interesting question: Which test should be preferred?
155

57 Remarks:
The White test is the most general test ⇒ often has low power
Whenever we realistically conjecture an explicit pattern of heteroskedasticity (e.g. σ_i² = σ²·x_ki²), we should
use the alternative tests (Goldfeld-Quandt test, Breusch-Pagan test)
use graphical tools to analyze the residuals in order to detect potential patterns of heteroskedasticity
156

58 6.3 Feasible Estimation Procedures
Result from Section 6.1:
For the heteroskedastic model y = Xβ + u, where Cov(u) = E[uu′] = σ²·Ω,
the GLS estimator β̂_GLS = [X′Ω⁻¹X]⁻¹ X′Ω⁻¹y is BLUE of β (cf. Slide 126)
157

59 Problem:
Frequently, the diagonal matrix Ω is not known
⇒ GLS estimator β̂_GLS cannot be computed
Resort:
Replace the unknown (true) Ω by an unbiased and/or consistent estimate Ω̂
⇒ feasible GLS estimator (FGLS)
158

60 Definition 6.5: (FGLS estimator)
Let Ω̂ be an unbiased and/or consistent estimator of the unknown covariance matrix Ω of the heteroskedastic model y = Xβ + u, where Cov(u) = σ²·Ω. The estimator
β̂_FGLS = [X′Ω̂⁻¹X]⁻¹ X′Ω̂⁻¹y
is called the feasible generalized least squares estimator of β.
159

61 Example: [I]
Consider the data set on Slide 102
Classify the u_i variances for the central (i = 1, ..., 5) and the periphery quarters (i = 6, ..., 12)
Consider the following model: y_i = α + β·x_i + u_i, where
σ_i² = σ_A² for i = 1, ..., 5
σ_i² = σ_B² for i = 6, ..., 12
160

62 Example: [II]
Transformation of the model:
y_i/σ_A = α·(1/σ_A) + β·(x_i/σ_A) + u_i/σ_A for i = 1, ..., 5
y_i/σ_B = α·(1/σ_B) + β·(x_i/σ_B) + u_i/σ_B for i = 6, ..., 12
⇒ variances of the error terms:
Var(u_i/σ_A) = (1/σ_A²)·Var(u_i) = σ_A²/σ_A² = 1 for i = 1, ..., 5
Var(u_i/σ_B) = (1/σ_B²)·Var(u_i) = σ_B²/σ_B² = 1 for i = 6, ..., 12
161

63 Example: [III]
Summary: y_i* = α·z_i* + β·x_i* + u_i* for i = 1, ..., 12, where
y_i* = y_i/σ_A, z_i* = 1/σ_A, x_i* = x_i/σ_A, u_i* = u_i/σ_A for i = 1, ..., 5
y_i* = y_i/σ_B, z_i* = 1/σ_B, x_i* = x_i/σ_B, u_i* = u_i/σ_B for i = 6, ..., 12
162

64 Example: [IV]
Variances σ_A², σ_B² are unknown
⇒ estimation of the variances via the respective regressions y_i = α + β·x_i + u_i for i = 1, ..., 5 and i = 6, ..., 12, with the respective estimators σ̂² = û′û/(N − K − 1)
163
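The two-group FGLS procedure of Slides 160–163 in a sketch with hypothetical data (the original rent observations are not reproduced here): estimate σ_A² and σ_B² from the group-wise regressions, rescale each observation by the respective estimated standard deviation, and run OLS on the transformed data.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical rent data: group A (central, i = 1..5), group B (periphery, i = 6..12)
xA, xB = rng.uniform(0, 2, 5), rng.uniform(2, 10, 7)
yA = 12 - 0.8 * xA + rng.normal(scale=0.4, size=5)
yB = 12 - 0.8 * xB + rng.normal(scale=1.5, size=7)

def group_sigma2(y, x):
    X = np.column_stack([np.ones(len(x)), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    return resid @ resid / (len(y) - 2)          # sigma^2_hat = u'u / (N_g - K - 1)

sA, sB = np.sqrt(group_sigma2(yA, xA)), np.sqrt(group_sigma2(yB, xB))

# Transformed model: divide every variable (incl. the constant) by the group sigma
w = np.concatenate([np.full(5, sA), np.full(7, sB)])
y_star = np.concatenate([yA, yB]) / w
Z_star = np.column_stack([1.0 / w, np.concatenate([xA, xB]) / w])

beta_fgls, *_ = np.linalg.lstsq(Z_star, y_star, rcond=None)
print("alpha_FGLS, beta_FGLS:", beta_fgls)
```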

65 [EViews output: the two group-wise OLS regressions of RENT on C and DISTANCE, sample 1–5 and sample 6–12; the numerical values are not reproduced in this transcription]
164

66 Example: [IV]
Estimated variances and standard deviations σ̂_A², σ̂_A, σ̂_B², σ̂_B (numerical values taken from the group-wise EViews output above; not reproduced in this transcription)
⇒ (estimated) transformation of the model (cf. Slides 161, 162)
FGLS estimates of the coefficients α̂_FGLS and β̂_FGLS (values not reproduced)
Estimate of the error-term variance of the transformed model, Var̂(u_i*) (value not reproduced)
165

67 [EViews output: OLS regression of the transformed variable RENT_TR on Z_TR and DISTANCE_TR (no intercept), sample 1–12; the numerical values are not reproduced in this transcription]
166

68 Example: [V]
However, we know that Var(u_i*) = 1
⇒ corrected standard errors of α̂_FGLS, β̂_FGLS (via the covariance matrix σ²(X*′X*)⁻¹ of the OLS estimator, evaluated with σ² = 1):
SE(α̂_FGLS) = 0.243, SE(β̂_FGLS) = (value not reproduced)
⇒ corrected t-values of α̂_FGLS, β̂_FGLS (values not reproduced)
167

69 Properties of FGLS estimators:
FGLS estimators are unbiased
Variances of FGLS estimators are lower than the variances of the OLS estimators
FGLS estimators are asymptotically efficient (approximation to the GLS estimator for N → ∞)
168
