AUTOCORRELATION IN REDUCED FORM RESIDUALS AND CORRECTION OF SIMULATION PATH OF A DYNAMIC MODEL*


BY H. UENO AND H. TSURUMI

I. Introduction

One of the objectives of a simultaneous equations model is to forecast the magnitudes of endogenous variables given the values of exogenous variables for the prediction period. It is desirable that the forecasted values of endogenous variables be free from any systematic bias. As pointed out in studies such as Houthakker and Taylor [9], the presence of autocorrelation in the error terms, when a model involves lagged endogenous variables, may imply a bias in the projection. A Monte Carlo study by Malinvaud [14], using distributed lag models, finds that the introduction of autocorrelation tends to increase the forecasting variance. We should therefore devise some way to remove from the projected values of endogenous variables the systematic bias caused by autocorrelation. The present paper makes a preliminary experiment in that direction. Dealing with a simultaneous dynamic equations model,1) we first point out that the reduced form residuals between actual and projected values may be autocorrelated despite consistent estimates of the parameters. We then experiment with a method to correct simulation paths by removing the autocorrelation bias.

II. The Problem of Autocorrelation in the Reduced Form of a Simultaneous Dynamic Model

In a simultaneous dynamic equations model with lagged endogenous variables, the reduced form error terms may not be serially independent, even if the error term in each structural relationship is serially independent. This was pointed out as early as 1944 by Hurwicz [10]. Let us assume, for the sake of convenience, that only one period lag of the endogenous variables appears in the system.
Then we can write the system as

(2.1) B_0 Y_t + B_1 Y_{t-1} + L Z_t = U_t,

where Y_t is a vector of endogenous variables, Y_{t-1} a vector of lagged endogenous variables, Z_t a vector of exogenous variables, and U_t a vector of structural error terms; B_0, B_1, and L are matrices of coefficients.

* This paper was completed while the first author was a visiting research professor at the University of Pennsylvania. Grateful acknowledgement is made to a National Science Foundation grant. The authors are indebted to Professor L. R. Klein for his helpful comments on an earlier draft of this paper, and to the referee for his suggestions. The computation for this paper was conducted at the Computer Center of the University of Pennsylvania.
1) Here by a dynamic model we mean a model involving lagged endogenous variables.

December 1967  H. Ueno and H. Tsurumi: Autocorrelation in Reduced Form Residuals

The reduced form of equation (2.1) is given by

(2.2) Y_t = -B_0^{-1} B_1 Y_{t-1} - B_0^{-1} L Z_t + B_0^{-1} U_t,

or

(2.3) Y_t = Π_1 Y_{t-1} + Π_2 Z_t + V_t,

where Π_1 = -B_0^{-1} B_1, Π_2 = -B_0^{-1} L, and V_t = B_0^{-1} U_t. The reduced form (2.3) is usually used for a simulation experiment by the final method. Given parameter values, values for the exogenous variables, and initial values for the lagged endogenous variables, the final method generates the values of the endogenous variables, the values of the lagged endogenous variables being replaced accordingly by their computed values. Suppose now that the structural disturbance terms, U_t, are serially and mutually independent, i.e.

(2.4) E(U_t U_{t'}') = 0 for t ≠ t', and E(U_t U_t') = D(σ_ii),

where 0 is a null matrix and D(σ_ii) is a diagonal matrix with σ_ii in the i-th row and i-th column, and suppose further that the parameters of the structural equations, B_0, B_1, and L, are estimated by some existing consistent estimation method, say two-stage least squares. Even in this case we will find that the reduced form disturbance terms may be autocorrelated, and consequently the simulated values of the endogenous variables, Y_t, may be subject to the systematic bias caused by autocorrelation. This can be demonstrated as follows. Equation (2.3) can be re-written as

(2.5) Y_t = Π_1 Y_{t-1} + Π_2 Z_t + V_t
          = Π_1 (Π_1 Y_{t-2} + Π_2 Z_{t-1} + V_{t-1}) + Π_2 Z_t + V_t
          = Π_1² Y_{t-2} + Π_1 Π_2 Z_{t-1} + Π_2 Z_t + Π_1 V_{t-1} + V_t
          = Π_1^t Y_0 + Σ_{i=0}^{t} Π_1^i Π_2 Z_{t-i} + Σ_{i=0}^{t} Π_1^i V_{t-i}.

Let us denote the composite disturbance term, Σ_{i=0}^{t} Π_1^i V_{t-i}, as R_t, and examine whether first-order autocorrelation exists. Taking the expected value of R_t R_{t-1}', we find, using (2.4),

(2.6) E(R_t R_{t-1}') = E[(V_t + Π_1 V_{t-1} + Π_1² V_{t-2} + … + Π_1^t V_0)(V_{t-1}' + V_{t-2}' Π_1' + … + V_0' Π_1^{t-1}')]
                       = Π_1 Ω + Π_1² Ω Π_1' + … + Π_1^t Ω Π_1^{t-1}',

where Ω = E[(B_0^{-1} U_t)(B_0^{-1} U_t)'] = B_0^{-1} D(σ_ii) B_0^{-1}'; thus the composite disturbance terms, R_t, are autocorrelated.
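The autocovariance in (2.6) is easy to verify numerically. The sketch below uses an arbitrary stable Π_1 (illustrative, not the paper's model): it simulates the composite disturbance R_t = Π_1 R_{t-1} + V_t with i.i.d. shocks V_t and shows that the sample E(R_t R_{t-1}') is clearly nonzero.

```python
import numpy as np

# Monte Carlo check of (2.6): even with i.i.d. reduced form shocks V_t,
# the composite final-method residual R_t = Pi1 @ R_{t-1} + V_t is
# autocorrelated.  Pi1 below is an arbitrary stable matrix, not the paper's.
rng = np.random.default_rng(0)
Pi1 = np.array([[0.6, 0.1],
                [0.0, 0.5]])
T, reps = 50, 20000

V = rng.standard_normal((T + 1, reps, 2))   # i.i.d. shocks across reps
R = np.zeros((reps, 2))
R_lag = np.zeros((reps, 2))
for t in range(T + 1):
    R_lag, R = R, R @ Pi1.T + V[t]          # R_t = Pi1 R_{t-1} + V_t, per rep

# sample average of R_t R_{t-1}' at t = T, taken over replications
E_RRlag = (R[:, :, None] * R_lag[:, None, :]).mean(axis=0)
print(np.round(E_RRlag, 2))                 # far from the zero matrix
```

The diagonal entries settle near Π_1 times the stationary variance of R_t, illustrating why final-method residuals inherit serial correlation from the lag structure alone.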
III. Correction of Systematic Biases in Simulation

In the previous section we found that in a simultaneous dynamic equations system the reduced form error terms in a final method simulation, R_t, may be autocorrelated even if the structural disturbance terms, U_t, are not. The question now becomes how we can correct this. One way would be to estimate the parameters, B_0, B_1, and L, such that the variance of R_t is minimized over the whole sample period. If we assume that U_t is multinormally distributed as N(0, σ²I), then it can be shown [cf. Anderson [2], p. 19] that V_t is also multinormally distributed, as N(0, σ²(B_0'B_0)^{-1}). Consequently, estimation of the unknown coefficients B_0, B_1, and L may be made by applying the maximum likelihood method to equation (2.5). However, equation (2.5) is a t-th order polynomial in the coefficients, and its solution will be extremely complicated. Some roots of a coefficient may lie close to each other in value, and rounding errors in computational iterations may become significantly large. As an effort in this direction, we may refer to Amemiya's work [1].

Another way to cope with the situation is to correct the parameters B_0, B_1, and L which are already estimated by some existing consistent method. If we denote the estimated parameters as B̂_0, B̂_1, and L̂, and their correcting factors as C_0, C_1, and C_3, then from equation (2.1),

(3.1) (B̂_0 + C_0)Y_t + (B̂_1 + C_1)Y_{t-1} + (L̂ + C_3)Z_t = U_t*.

C_0, C_1, and C_3 are to be found such that the residuals over the sample period are minimized. The search for C_0, C_1, and C_3 would be based on the maximum likelihood principle, which requires iterative computations. The utmost weakness of this method is that it depends upon the stability of the structural equations (2.1) and the significance of each estimated parameter.

The third possibility is somewhat more practical than the first two methods, but it is essentially designed to accomplish the same objective. This method attempts a correction of a simulation path by detecting autocorrelation of the composite reduced form disturbance terms, R_t. The autocorrelation is assumed to follow a first order Markov process. This method is presented here as an experiment. The experiment is carried out in the following fashion. Structural equations whose coefficients are already estimated by a suitable method of estimation are transformed to the reduced form.
Setting some initial conditions, the final method test generates the projected values of the endogenous variables. The projection residuals are first put through the Durbin-Watson test, and then they are examined by the test of runs. The parallel use of the test of runs and the Durbin-Watson statistic is meant to supplement the test of autocorrelation: as discussed elsewhere [5, 9, 13, 15], the Durbin-Watson statistic is biased if it is applied to an equation with a lagged dependent variable or to a simultaneous equations model. After these two tests, the coefficients of first order autocorrelation are estimated by ordinary least squares as well as by the scanning method. The correction of the projection is then made using the estimated coefficients, ρ̂_i. After the correction, we examine how much the Durbin-Watson statistic as well as the test of runs is improved. Furthermore, the improvement in predictability is checked by the Theil inequality coefficients [16, p. 32]. As the material for our experiment, the Klein-Goldberger model [12] has been chosen, mainly because of the availability of the data, which are given in the appendix of [12].
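As a sketch of the first screening step, the Durbin-Watson statistic on a residual series is straightforward to compute (the data below are illustrative, not the paper's projection residuals):

```python
import numpy as np

def durbin_watson(resid):
    """D-W statistic: sum of squared first differences of the residuals
    divided by their sum of squares; near 2 under no autocorrelation."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(1)
e = rng.standard_normal(200)          # white noise: D-W near 2
ar = np.zeros(200)
for t in range(1, 200):
    ar[t] = 0.9 * ar[t - 1] + e[t]    # strong positive autocorrelation: D-W near 0.2
print(round(durbin_watson(e), 2), round(durbin_watson(ar), 2))
```

Low values, as for the consumption residuals below, signal positive autocorrelation.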

Three structural equations are chosen on the basis of having reasonable Durbin-Watson test values, so that we can start with a case of no autocorrelation in the structural equations. These three structural equations are assumed to form an independent subset of the Klein-Goldberger model, and any endogenous variables other than the three chosen ones are treated as if they were exogenous. The three equations are [12, pp. 51-52] (standard errors in parentheses):

Consumption function
(3.2) C_t = -34.5 + .62(W_1 + W_2 - T_W)_t + .46(P - S_p - T_p)_t + .39(A - T_A)_t + .23 C_{t-1} + .024(L_1)_{t-1} + .36(N_p)_t
      (7.7)  (.04)  (.03)  (.025)  (.05)  (.02)  (0.8)
      d²/s² = 2.2; correction for the Durbin-Watson (D-W) test: d²/s² × (n-1)/n = 2.1

Investment function
(3.3) I_t = -16.8 + .76(P + A + D - T_p - T_A)_{t-1} - .14 K_{t-1} + .14(L_2)_{t-1}
      (4.5)  (.17)  (.08)  (.10)
      d²/s² = 2.3; D-W = 2.2

Corporate savings function
(3.4) S_t = -2.42 + 0.86(P_c - T_c)_t - .30(P_c - T_c - S_p)_{t-1} - .014 B_{t-1}
      (.81)  (.04)  (.20)  (.016)
      d²/s² = 1.9; D-W = 1.8

In the investment equation above, the lagged capital stock, K_{t-1}, can be treated as a lagged endogenous variable of I_t, since K_t = K_0 + Σ_{i=0}^{t-1} (I_{t-i} - D_{t-i}), where D_{t-i} is the depreciation at period t-i. In the above equations the notation is the same as in Klein-Goldberger [12]. The reduced form of the above equations is given by

(3.5) Y_t = -A^{-1}D Y_{t-1} - A^{-1}B X_t + V_t,

where the definitions and values of the coefficient matrices A, D, and B and the variable vectors Y_t, Y_{t-1}, and X_t are given in the appendix, and V_t is the vector of reduced form disturbance terms. Using equation (3.5), we conduct the simulation test by the final method, and the Durbin-Watson statistic (D-W) is computed on the residuals of each reduced form equation:

The Durbin-Watson Statistics on the Computed Residuals
                               D-W
Consumption function           .58
Investment function           1.44
Corporate savings function    1.42

We note that these values are considerably lower than their counterparts given in equations (3.2), (3.3), and (3.4). If we use the Durbin-Watson statistic tables (1951) as they stand, the D-W value for the consumption function, .58, suggests the existence of autocorrelation, while D-W ≈ 1.4 for investment and corporate savings falls in the indeterminate range. However, the D-W test is biased when lagged endogenous variables are used and when we deal with a system of dynamic equations. As a parsimonious measure, therefore, we may raise the acceptance values of the D-W test above those given in the 1951 tables.2) This consideration leads us to suspect the existence of autocorrelation in each of the three reduced form equations.

This somewhat opaque recognition of autocorrelation should be supplemented by another kind of test. Consequently, we adopt a test of runs to examine the randomness of the projection residuals. Here we use the Wallis-Moore test of runs [17], an approximate chi-square test based on expected and observed runs. As runs, we take runs of projected values above and below the actual values. We reject the hypothesis of randomness at the significance level α if

(3.6) g = Σ_{i=1}^{3} (o_i - e_i)² / e_i > χ_α²(5/2),

where o_1 is the number of runs of length 1, o_2 the number of runs of length 2, o_3 the number of runs of length greater than 2, e_1 = 5(n-3)/12, e_2 = 11(n-4)/60, and e_3 = (4n-21)/60; n is the number of observations and χ_α²(5/2) is the chi-square value at the α% significance level with 5/2 degrees of freedom. If we carry out the test of runs at the significance level α = 5%, we obtain the following result:

                          g
Consumption function     9.40
Investment function      5.07
Corporate savings        6.77

Since the approximate value of χ².05(5/2) is .227 by interpolation, we should reject the hypothesis of randomness for all three equations. With the recognition of autocorrelation from the D-W test as well as the test of runs, we estimate the coefficients of first order autocorrelation from the projection residuals:

(3.7) ε_{it} = ρ_i ε_{i,t-1} + e_{it},  i = C, I, S_p.
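The g statistic of (3.6) can be sketched as follows. The expected counts e_i are those given in the text; the run-counting convention (runs of constant sign of the residuals, with the first and last runs dropped, a common Wallis-Moore convention) is an assumption on my part:

```python
import numpy as np

def run_lengths(resid):
    """Lengths of maximal runs of constant sign in a residual series.
    Dropping the first and last run follows one common Wallis-Moore
    convention; whether the authors did so is an assumption here."""
    s = np.sign(np.asarray(resid, dtype=float))
    s = s[s != 0]
    lengths, run = [], 1
    for a, b in zip(s[:-1], s[1:]):
        if a == b:
            run += 1
        else:
            lengths.append(run)
            run = 1
    lengths.append(run)
    return lengths[1:-1]

def wallis_moore_g(lengths, n):
    """g of (3.6): chi-square-type sum over runs of length 1, 2, and >2,
    using the expected counts e_i given in the text."""
    o1 = sum(1 for d in lengths if d == 1)
    o2 = sum(1 for d in lengths if d == 2)
    o3 = sum(1 for d in lengths if d > 2)
    e1 = 5 * (n - 3) / 12
    e2 = 11 * (n - 4) / 60
    e3 = (4 * n - 21) / 60
    return sum((o - e) ** 2 / e for o, e in ((o1, e1), (o2, e2), (o3, e3)))

resid = [0.5, 0.8, -0.2, 0.4, -0.6, -0.1, 0.3, -0.7, 0.2, -0.4]
print(wallis_moore_g(run_lengths(resid), n=len(resid)))
```

Large values of g relative to the chi-square critical value indicate too many short runs, i.e. non-randomness of the residual signs.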
These equations are estimated by the scanning method, which is similar to Dhrymes' stepwise maximization [4]. We have also made parallel estimates by the method of ordinary least squares. The scanning method can be shown to be a maximum likelihood method. Let us prove this in a form more general than (3.7). Let

(3.8) y_t = β'x_t + u_t, where u_t = ρ u_{t-1} + e_t

and e_t is distributed N(0, σ_e²). Then

(3.9) e_t = (y_t - ρ y_{t-1}) - β'(x_t - ρ x_{t-1}), for t = 2, …, T.

Since e_t and e_{t'} (t ≠ t') are assumed to be independent, the likelihood function is given by

(3.10) L(β, ρ, σ_e²) = (2πσ_e²)^{-(T-1)/2} exp{ -(1/2σ_e²) Σ_{t=2}^{T} e_t² },

or

(3.11) log L = -((T-1)/2) log 2πσ_e² - (1/2σ_e²) Σ_{t=2}^{T} e_t².

Maximizing (3.11) with respect to (β, σ_e²) for given ρ, we obtain the "concentrated" likelihood function solely as a function of ρ:

(3.12) L*(ρ) = const - ((T-1)/2) log σ̂_e²(ρ),

where σ̂_e²(ρ) = (1/(T-1)) Σ_{t=2}^{T} ê_t² and β̂ is also a function of ρ. Therefore, to maximize L*, we are to minimize σ̂_e². Thus, setting -1 ≤ ρ ≤ 1 a priori, we estimate β for a given value ρ_0, ρ_0 being changed by a given step, and choose the ρ for which σ̂_e² is minimized. The variance of ρ̂ can be estimated by (3.13) [formula not reproduced in this transcription]. In our problem β is already given, and consequently the iteration is only to choose ρ̂_i, i = C, I, S_p. In estimating ρ̂_i we have iterated with a step of .025 over -1 ≤ ρ_i ≤ 1; in all three cases we have observed a well-behaved minimum of σ̂_e². A result of the iteration is illustrated in Table 1 for the investment function in the neighbourhood of its minimum, and its graph is given in Figure 1.

[Table 1: Illustration of Scanning: Investment Function]
[Figure 1: Search for Minimum Variance]

2) Houthakker and Taylor [9] use D-W values of 1.6 to 2.5 as the "acceptable" region, as a rule of thumb for a single equation with 3-4 independent variables and 29-60 observations.
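The scanning step itself reduces to a one-dimensional grid search over ρ. A minimal sketch, applied to simulated AR(1) residuals rather than the paper's projection residuals:

```python
import numpy as np

def scan_rho(resid, step=0.025):
    """Scanning estimate of the AR(1) coefficient: search rho over
    [-1, 1] in the given step, minimizing the residual variance of
    e_t = eps_t - rho * eps_{t-1}, as in the concentrated likelihood (3.12)."""
    resid = np.asarray(resid, dtype=float)
    best_rho, best_var = 0.0, np.inf
    for rho in np.arange(-1.0, 1.0 + step / 2, step):
        var = np.mean((resid[1:] - rho * resid[:-1]) ** 2)
        if var < best_var:
            best_rho, best_var = rho, var
    return best_rho, best_var

# illustrative AR(1) residuals with true rho = 0.7
rng = np.random.default_rng(2)
eps = np.zeros(500)
for t in range(1, 500):
    eps[t] = 0.7 * eps[t - 1] + rng.standard_normal()

rho_hat, var_hat = scan_rho(eps)
print(rho_hat)
```

With the structural coefficients already given, as in the paper, each grid point costs only one pass over the residuals, which is why the .025 step is cheap.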

The scanning procedure has yielded the following estimates of ρ_i (figures in parentheses are t-values):

        ρ̂_i            Variance    R²
C      .675 (6.360)     1.8055    .98
I      .175 (3.348)     9.9081    .98
S_p    .925 (11.644)     .7486    .98

Estimation of ρ_i by ordinary least squares has yielded the following results:

        ρ̂_i            Variance    R²
C      .740 (3.519)     1.7652    .85
I      .263 (1.049)    10.581     .82
S_p    .979 (11.342)     .915     .86

The values estimated by ordinary least squares are fairly close to those obtained by the scanning method. Using the values of ρ_i estimated either by ordinary least squares or by the scanning method, we correct the simulation path in the following manner. The reduced form is

(3.14) Y_t = -A^{-1}D Y_{t-1} - A^{-1}B X_t + V_t.

Multiplying by P, the diagonal matrix of autocorrelation coefficients with ρ_i in the i-th diagonal element, and lagging one period, equation (3.14) becomes

(3.15) P Y_{t-1} = -P A^{-1}D Y_{t-2} - P A^{-1}B X_{t-1} + P V_{t-1},

and deducting (3.15) from (3.14), we obtain

(3.16) Y_t - P Y_{t-1} = -A^{-1}D Y_{t-1} - A^{-1}B X_t + P A^{-1}D Y_{t-2} + P A^{-1}B X_{t-1} + V_t - P V_{t-1}.

Since E_t = V_t - P V_{t-1} is, by assumption, serially independent, we can correct the projection path by plugging the estimated values ρ̂_i into P. Hence the equation for correction is

(3.17) Y_t = -(A^{-1}D - P) Y_{t-1} - A^{-1}B X_t + P A^{-1}D Y_{t-2} + P A^{-1}B X_{t-1}.

The above equation can also be used for an ex-ante extrapolation. Using (3.17), we have corrected the simulation paths over the sample period and applied the Durbin-Watson test and the test of runs. Table II below presents the results. Examining Table II, we note a remarkable improvement in the values of the Durbin-Watson test as well as in the test of runs. The values of the test-of-runs statistic after correction are considerably reduced, owing to the fact that the number of runs of length one increased greatly. After the correction, the Durbin-Watson values for investment and corporate savings are large enough to accept the hypothesis of no autocorrelation.
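The corrected recursion (3.17) is mechanical once P is in hand. In the sketch below, the matrices standing in for A^{-1}D and A^{-1}B are made up for illustration (the paper's appendix values are not reproduced here); only the diagonal entries of P are the paper's scanning estimates for consumption and investment:

```python
import numpy as np

# Sketch of the corrected recursion (3.17):
#   Y_t = -(A^{-1}D - P) Y_{t-1} - A^{-1}B X_t + P A^{-1}D Y_{t-2} + P A^{-1}B X_{t-1}
# Ainv_D and Ainv_B are made-up illustrative matrices, not the paper's.
Ainv_D = np.array([[0.5, 0.1],
                   [0.0, 0.4]])
Ainv_B = np.array([[0.8],
                   [0.3]])
P = np.diag([0.675, 0.175])   # paper's scanning estimates of rho for C and I

def corrected_path(Y_m2, Y_m1, X):
    """Run (3.17) forward given two initial Y values and an exogenous path X."""
    T = X.shape[0]
    Y = np.zeros((T, 2))
    for t in range(T):
        X_lag = X[t - 1] if t > 0 else np.zeros(X.shape[1])
        Y[t] = (-(Ainv_D - P) @ Y_m1 - Ainv_B @ X[t]
                + P @ Ainv_D @ Y_m2 + P @ Ainv_B @ X_lag)
        Y_m2, Y_m1 = Y_m1, Y[t]
    return Y

path = corrected_path(np.array([1.0, 1.0]), np.array([1.0, 1.0]), np.ones((10, 1)))
print(path.shape)
```

Note that the recursion needs two initial conditions, Y_{t-1} and Y_{t-2}, unlike the uncorrected final method.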
However, the Durbin-Watson value for consumption is still low, although its improvement from .5 to 1.4-1.5 is remarkable. We find no marked differences between the scanning method and ordinary least squares estimation.

Table II: Durbin-Watson Test and the Test of Runs Conducted on the Corrected Simulation Residuals [table not reproduced in this transcription]

Our experiment shows that autocorrelation was removed to a significant degree from the final method residuals, but what can we say about the accuracy of prediction? Prediction accuracy over the sample period may be examined either by mean-square errors or by the Theil inequality coefficient. Since mean-square errors move in the same direction as the Theil inequality coefficient, and the Theil inequality coefficient is invariant under a change in the unit of measurement, we choose the Theil inequality coefficient as the criterion of accuracy. The Theil inequality coefficient t_i for the i-th variable y_i is given by

(3.18) t_i = [Σ_t (ŷ_{it} - y_{it})²]^{1/2} / ([Σ_t ŷ_{it}²]^{1/2} + [Σ_t y_{it}²]^{1/2}),

where ŷ_i denotes the computed value of y_i. The value of t_i is bounded between zero and unity,3) and the closer it is to zero, the more accurate the prediction. Table III gives the Theil inequality coefficients before and after the correction of autocorrelation.

Table III: Comparison of Theil Inequality Coefficients Before and After Correction of Autocorrelation [table not reproduced in this transcription]

We note that for all three variables the coefficients improved after the correction of autocorrelation; compared with investment and corporate savings, consumption shows an especially large improvement. This may be due to the fact that the Durbin-Watson statistic for consumption before correction strongly suggests positive autocorrelation, whereas the suspicion of autocorrelation is much weaker in the cases of investment and corporate savings. The corrections by the scanning (SCAN) estimates of ρ_i and by the ordinary least squares (OLS) estimates of ρ_i yield almost the same Theil inequality coefficients.

3) This can easily be proven by the Minkowski inequality: [Σ_t (ŷ_{it} - y_{it})²]^{1/2} ≤ [Σ_t ŷ_{it}²]^{1/2} + [Σ_t y_{it}²]^{1/2}, so that t_i ≤ 1; since each term is nonnegative, t_i ≥ 0.

The correction of the simulation paths discussed here is based on the assumption of a linear model. It is also assumed that the reduced form composite disturbances follow first order autocorrelation, while the error terms of the structural equations are assumed to be free from serial correlation. If this assumption is relaxed, the autocorrelation of the reduced form disturbances will bear more complicated relationships. The same is true where the error terms of the structural equations are mutually correlated.

Seikei University and Queen's University

APPENDIX

The reduced form (3.5) Y_t = -A^{-1}D Y_{t-1} - A^{-1}B X_t + V_t consists of the coefficient matrices A, D, and B and the variable vectors Y_t, Y_{t-1}, and X_t [values not reproduced in this transcription].
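The Theil inequality coefficient of (3.18) admits a direct sketch; its bounds follow from the Minkowski inequality, as in footnote 3:

```python
import numpy as np

def theil_u(pred, actual):
    """Theil inequality coefficient as in (3.18): RMS prediction error
    divided by the sum of the RMS of predictions and the RMS of actuals.
    Bounded between 0 (perfect) and 1, by Minkowski's inequality."""
    pred = np.asarray(pred, dtype=float)
    actual = np.asarray(actual, dtype=float)
    rmse = np.sqrt(np.mean((pred - actual) ** 2))
    return rmse / (np.sqrt(np.mean(pred ** 2)) + np.sqrt(np.mean(actual ** 2)))

print(theil_u([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))     # perfect prediction -> 0.0
print(theil_u([1.0, 2.0, 3.0], [-1.0, -2.0, -3.0]))  # sign-flipped -> 1.0
```

Using means instead of the sums of (3.18) changes nothing, since the common factor 1/T cancels in the ratio; the coefficient is likewise invariant to a change in the unit of measurement.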

REFERENCES

[1] Amemiya, T., "Specification Analysis in the Estimation of Parameters of a Simultaneous Equation Model with Autoregressive Residuals," Econometrica, Vol. 34, No. 2, April 1966.
[2] Anderson, T. W., An Introduction to Multivariate Statistical Analysis, Wiley, New York, 1958.
[3] Chow, G., "Tests of Equality between Sets of Coefficients in Two Linear Regressions," Econometrica, July 1960.
[4] Dhrymes, P. J., "A Model of Short Run Labour Adjustment," mimeographed paper, 1965.
[5] Durbin, J., "Testing for Serial Correlation in Systems of Simultaneous Regression Equations," Biometrika, December 1957.
[6] Durbin, J., and G. S. Watson, "Testing for Serial Correlation in Least Squares Regression," Biometrika, Vol. 37, 1950.
[7] Durbin, J., and G. S. Watson, "Testing for Serial Correlation in Least Squares Regression," Biometrika, Vol. 38, 1951.
[8] Goldberger, A. S., Econometric Theory, Wiley, New York, 1964.
[9] Houthakker, H. S., and L. D. Taylor, Consumer Demand in the United States, 1929-1970, Harvard, 1966.
[10] Hurwicz, L., "Stochastic Models of Economic Fluctuations," Econometrica, 1944.
[11] Johnston, J., Econometric Methods, McGraw-Hill, 1960.
[12] Klein, L. R., and A. S. Goldberger, An Econometric Model of the United States, 1929-1952, North-Holland, Amsterdam, 1955.
[13] Malinvaud, E., Méthodes statistiques de l'économétrie, Dunod, Paris, 1964.
[14] Malinvaud, E., "Estimation et prévision dans les modèles autorégressifs," Revue de l'Institut International de Statistique, Vol. 29, 1961.
[15] Nerlove, M., and K. F. Wallis, "Use of the Durbin-Watson Statistic in Inappropriate Situations," Econometrica, January 1966.
[16] Theil, H., Economic Forecasts and Policy, North-Holland, Amsterdam, 1961.
[17] Walsh, J., Handbook of Nonparametric Statistics, Princeton, 1962.