University of Victoria
Department of Economics

ECON 546: Themes in Econometrics

Lab. Exercise

Introduction

The purpose of this lab. exercise is to show you how to use EViews to estimate the parameters of a regression model by Maximum Likelihood, when the model is of some non-standard type. Specifically, you will learn how to estimate models of types that are not built in as a standard option in EViews. This will involve setting up the log-likelihood function for the model, based on the assumption of independent observations, and then maximizing this function numerically with respect to the unknown parameters.

First, to introduce the concepts and commands that are involved, we'll consider the standard linear multiple regression model with normal errors, for which we know that the MLE of the coefficient vector is just the same as the OLS estimator. This will give us a benchmark against which to check our understanding of what is going on. Then we can move on to some more general models.

Only part of this handout will be covered in the lab. class itself, which is why there is plenty of EViews output embedded in this document, so that you can explore the rest of the material by yourself.

Part 1

So, suppose that we have a linear multiple regression model, satisfying all of the usual assumptions:

    y = Xβ + ε ;  ε ~ N[0, σ²Iₙ]

where the regressors are non-random. The MLE for β is β̃ = (X'X)⁻¹X'y, and the MLE for σ² is σ̃² = (y − Xβ̃)'(y − Xβ̃)/n.

(a) Open the EViews workfile, S:\Social Sciences\Economics\ECON546\lab.wf1.

(b) Estimate an OLS regression model with Y as the dependent variable, and an intercept and X as the regressors. Save the results as EQ01.

So, our simple model is

    y_i = β1 + β2 x_i + ε_i ;  ε_i ~ iid N[0, σ²]

and the marginal data density for the i-th observation is

    p(y_i | β1, β2, σ) = (1 / (σ√(2π))) exp{ −(y_i − β1 − β2 x_i)² / (2σ²) } ;  i = 1, 2, 3, …, n.    (1)
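Before turning to EViews, it can be reassuring to verify the closed-form result independently. The Python sketch below is not part of the EViews exercise: it uses simulated data (not the workfile's Y and X) to check that the closed-form β̃ = (X'X)⁻¹X'y maximizes the normal log-likelihood, that it coincides with OLS, and that σ̃² divides the residual sum of squares by n.

```python
# Cross-check of the closed-form MLE for the normal linear model,
# using simulated (hypothetical) data rather than the course workfile.
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x])            # intercept and x
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)    # beta~ = (X'X)^(-1) X'y
resid = y - X @ beta_hat
sigma2_mle = resid @ resid / n                  # MLE divides the RSS by n

def loglik(beta, sigma2):
    """Normal log-likelihood: -n/2 log(2*pi*sigma^2) - e'e/(2*sigma^2)."""
    e = y - X @ beta
    return -0.5 * n * np.log(2 * np.pi * sigma2) - (e @ e) / (2 * sigma2)

ll_star = loglik(beta_hat, sigma2_mle)          # maximized value
```

Perturbing either β̃ or σ̃² in any direction should strictly lower the log-likelihood, which is an easy numerical confirmation that the closed form really is the maximizer.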
Given the independence of the data, to get the joint data density, and hence the Likelihood Function, we need to multiply together each of the expressions of the form (1), for all i:

    L(β1, β2, σ | y) = p(y | β1, β2, σ) = ∏_{i=1}^{n} p(y_i | β1, β2, σ).

So, to get the Log-Likelihood Function, we need to add the logarithms of the marginal data densities:

    Log[L(β1, β2, σ | y)] = Log[p(y | β1, β2, σ)] = Log[∏_{i=1}^{n} p(y_i | β1, β2, σ)] = Σ_{i=1}^{n} Log[p(y_i | β1, β2, σ)]    (2)

So, a typical term in the summation that appears in equation (2) is obtained by taking the logarithm of (1):

    Log[p(y_i | β1, β2, σ)] = −log(σ) − log(2π)/2 − ε_i²/(2σ²) , where ε_i = (y_i − β1 − β2 x_i).    (3)

(c) To get EViews to perform MLE, we have to supply a typical term of the form (3). This is done via the so-called LOGL object. In your workfile, click on the Object button, and choose the New Object option. Then, highlight LogL, as shown below. You can supply a name for this object, either now, or later on when you save it; it may be a good idea to call this new object LOGL01 at this stage.
When you click OK, this is what you will see next:

(d) You can now enter the formula for the i-th term of the Log-Likelihood Function into the empty Object Box:

    @logl LL1
    eps = y-c(1)-c(2)*x
    ll1 = -log(c(3))-(eps^2)/(2*c(3)^2)-0.5*log(2*3.14159)

The first line of code declares that we are constructing a log-likelihood function, and are going to call it LL1. (You can use any name you like.) The second line of code is introduced merely to make the expression in the third line a little simpler. Note that we are supplying the expression for just a single log-density. EViews will assume that the data are independent, and do the summing that we see in equation (2) above for us. Here, the coefficients c(1), c(2) and c(3) correspond to β1, β2, and σ respectively. The object box will now look like this:
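To see concretely what the LOGL object is doing, here is a small Python sketch (purely illustrative; the parameter values and data points are made up) that evaluates the single log-density term (3) for each observation and then sums over i, which is exactly the summation in equation (2) that EViews performs for us:

```python
import math

# Hypothetical parameter values and (y_i, x_i) pairs, for illustration only
b1, b2, sigma = 1.0, 2.0, 0.5
data = [(3.1, 1.0), (0.8, -0.1), (5.2, 2.0), (1.1, 0.0)]

def log_density(y_i, x_i):
    """The i-th term (3), mirroring the third line of the LOGL code."""
    eps = y_i - b1 - b2 * x_i                   # eps = y - c(1) - c(2)*x
    return (-math.log(sigma)
            - eps ** 2 / (2 * sigma ** 2)
            - 0.5 * math.log(2 * math.pi))      # the EViews code uses 2*3.14159

# EViews sums the supplied term over the sample, as in equation (2):
total_ll = sum(log_density(y_i, x_i) for y_i, x_i in data)
```

Summing the per-observation terms gives the same number as the usual "stacked" formula −(n/2)log(2πσ²) − Σε_i²/(2σ²), which is why supplying just one typical term is enough.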
(e) Now, press the Estimate button, and this is what you will see:
Notice that you have a choice of algorithms for maximizing the Log-Likelihood Function. In evaluating the derivatives, you should always choose accuracy over speed. The following results then emerge when you click OK:

The OLS results you saved as EQ01 are as follows:
(f) Why is the estimate of c(3) in the MLE output different from the standard error of regression in the OLS output? Why are the standard errors different? Verify that everything has actually been calculated correctly.

(g) Notice that the "Log likelihood" values are the same in each output; this is the value of the Log-Likelihood Function when the MLEs for the parameters are substituted into equation (2) above. It is the maximized value of the Log-Likelihood Function.

(h) Check that the Log-Likelihood Function has been properly maximized. In the LOGL01 output box, click on View, Gradients, then Summary:

The gradients in each direction of the parameter space are evaluated at each point in the sample. These values are summarized by taking the mean and sum of each gradient across the sample values. We see that the gradients are essentially zero, as they should be.

Note: In practice, you may need to edit the elements of the coefficient vector before you estimate a model by MLE, to make sure that you don't supply silly starting values for the maximization algorithm. For example, in this exercise, if we had not already altered the coefficient values by running the initial OLS regression, we would have had to make sure that c(3) did not start off with the (default) value of zero; can you see why? If you need to modify the code for the Log-Likelihood specification in the LOGL01 object box, just select View, then Likelihood Specification in that box's header bar, and then edit accordingly.

Now you are ready to estimate a non-standard model by MLE.

Part 2

Suppose that instead of assuming Normally distributed errors, you want to allow for fat tails (i.e., a higher probability of outliers) in the error distribution. Recall that the Student-t distribution has a density function with this property if the associated degrees of freedom are relatively small. The need to allow for fatter tails in the density may arise, for example, when modeling financial returns. The Student-t distribution has a finite first moment only if ν > 1, where ν is the degrees of freedom parameter. It has a finite second moment only if ν > 2, so probably the smallest value for the degrees of freedom that we should consider is ν = 3.

Also, recall from a class example that if the errors of our standard multiple linear regression model follow a multivariate Student-t distribution, then the MLE for the coefficient vector is just the OLS estimator. However, this result does not arise if the individual errors are independently Student-t distributed! This is the specification that we will follow next.

To set up the Log-Likelihood function we need to know the formula for the density function for a random variable that is Student-t distributed, with ν degrees of freedom. This density takes the form:

    p(ε_i | ν) = const · h^(1/2) [1 + h ε_i²/ν]^(−(ν+1)/2) ;  −∞ < ε_i < ∞    (4)

where h is a scale parameter and "const" is the normalizing constant that ensures that the density is proper; that is, that it integrates to unity. (In the case of the normal density, this is the role that the 1/√(2π) term plays.) For the Student-t density, this normalizing constant is:

    const = Γ[(ν + 1)/2] / ( √(νπ) Γ(ν/2) )    (5)

where the Gamma Function is defined as:

    Γ(x) = ∫₀^∞ e^(−t) t^(x−1) dt.    (6)
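As a quick sanity check on (4) and (5), done here in Python rather than EViews and with arbitrary illustrative values of ν and h, the density with this normalizing constant does integrate (numerically) to one:

```python
import math

def t_density(eps, v, h):
    """Student-t density (4) with scale h and normalizing constant (5)."""
    # math.lgamma (log-Gamma) is used instead of Gamma itself, for stability
    const = (math.exp(math.lgamma((v + 1) / 2) - math.lgamma(v / 2))
             / math.sqrt(v * math.pi))                        # equation (5)
    return const * math.sqrt(h) * (1 + h * eps ** 2 / v) ** (-(v + 1) / 2)

v, h = 3.0, 2.0   # arbitrary values, for illustration only
# Crude trapezoidal-style integration over a wide range; the truncated
# tails of the t(3) density beyond +/-100 are negligibly small here
a, b, m = -100.0, 100.0, 200000
step = (b - a) / m
area = step * sum(t_density(a + i * step, v, h) for i in range(m + 1))
```

The same function also reproduces known special cases, e.g. with ν = 1 and h = 1 (the Cauchy density) its value at zero is 1/π.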
Fortunately, EViews can evaluate Γ(x) for us via the @gamma(x) function. So, using (4), (5) and (6), we can build up a typical i-th term in the Log-Likelihood Function. Note from (4) that

    Log[p(ε_i | ν)] = log(const) + 0.5 log(h) − ((ν + 1)/2) log[1 + h ε_i²/ν].

(a) Create a new object and name it LOGL02.

(b) Use the following code to set up the Log-Likelihood Function for our simple regression model with independent Student-t errors:

    @logl LL2
    eps = y-c(1)-c(2)*x
    v=3
    const=@gamma((v+1)/2)/(@sqrt(v*3.14159)*@gamma(v/2))
    LL2 = log(const)+0.5*log(c(3))-((v+1)/2)*log(1+c(3)*(eps^2/v))

(c) You should now obtain the following MLE output:

(d) Check the gradients to make sure that the Log-Likelihood Function has been properly maximized.

(e) Are your results at all sensitive to the choice of initial values for the coefficients?

(f) Compare the estimates of the two coefficients with those obtained when normally distributed errors were assumed.

(g) Recall that as ν → ∞ the Student-t density becomes a normal density. So, what do you think will happen if you keep increasing the value assigned to ν in the LOGL02 code? When ν = 300, you should get the following results:
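For reference, the i-th term coded in LOGL02 can be transcribed into Python as a cross-check of the algebra (this is not something EViews needs; note that in LOGL02, c(3) plays the role of the scale parameter h, not of σ):

```python
import math

def ll2_term(eps, v, c3):
    """The i-th log-likelihood term from LOGL02; c3 is the scale h."""
    # const mirrors @gamma((v+1)/2)/(@sqrt(v*3.14159)*@gamma(v/2))
    const = (math.exp(math.lgamma((v + 1) / 2) - math.lgamma(v / 2))
             / math.sqrt(v * math.pi))
    return (math.log(const) + 0.5 * math.log(c3)
            - ((v + 1) / 2) * math.log(1 + c3 * eps ** 2 / v))
```

Exponentiating this term should recover the density (4) exactly, which is an easy way to catch a coding slip before handing the expression to the maximizer.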
(h) The variance of a Student-t distribution is (ν/h)/(ν − 2), which is defined if ν > 2. Using the results of this last output, verify that the estimated error variance is approximately the same as the error variance (not standard deviation) estimate that was obtained with normal errors.

Part 3

Let's now suppose that we want to generalize our last model even further. As well as allowing for an error distribution with fat tails, let's suppose that we want to allow for a particular form of heteroskedasticity:

    var(ε_i) = exp{α1 + α2 z_i} ;  i = 1, 2, 3, …, n

where α1 and α2 are unknown parameters, and z is another variable for which data are available. Note that the special case of homoskedastic errors arises if α2 = 0. If we equate this variance expression with the Student-t variance given at the top of this page, we obtain:

    (ν/h_i)/(ν − 2) = exp{α1 + α2 z_i},

or,

    h_i = ν / [ (ν − 2) exp{α1 + α2 z_i} ] ;  i = 1, 2, 3, …, n.

(a) Create a new object and name it LOGL03.

(b) Use the following code to set up the Log-Likelihood Function for our simple regression model with independent but heteroskedastic Student-t errors:

    @logl LL3
    eps = y-c(1)-c(2)*x
    v=3
    h=v/((v-2)*@exp(c(3)+c(4)*z))
    const=@gamma((v+1)/2)/(@sqrt(v*3.14159)*@gamma(v/2))
    LL3 = log(const)+0.5*log(h)-((v+1)/2)*log(1+h*(eps^2/v))
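The algebra linking h_i to the variance function can also be checked mechanically. In this Python sketch (the α1 and α2 values are arbitrary, chosen only for illustration), h_i is computed exactly as in the LOGL03 line h=v/((v-2)*@exp(c(3)+c(4)*z)), and the implied Student-t variance is confirmed to equal exp{α1 + α2 z_i}:

```python
import math

def h_of(z, v, a1, a2):
    """h_i as coded in LOGL03: h = v / ((v-2) * exp(a1 + a2*z))."""
    return v / ((v - 2) * math.exp(a1 + a2 * z))

v, a1, a2 = 3.0, 0.4, -0.7   # hypothetical values, for illustration only
z_values = (-1.0, 0.0, 2.5)

# Implied error variance at each z_i: (v / h_i) / (v - 2)
variances = [(v / h_of(z, v, a1, a2)) / (v - 2) for z in z_values]
```

Note that with this parameterization c(3) and c(4) estimate α1 and α2 directly, while the error variance itself varies observation by observation.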
(c) You should now obtain the following MLE output:

(d) Compare your results with those when homoskedasticity is assumed.

(e) Check the gradients to see that we have effectively maximized the Log-Likelihood Function.

(f) Looking at the last estimation results, is there any evidence of significant heteroskedasticity?
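One natural way to approach (f) is a likelihood ratio test of H0: α2 = 0, comparing the maximized log-likelihood from LOGL03 with that from the homoskedastic Student-t model in LOGL02. The numbers below are hypothetical placeholders (your own EViews output supplies the actual maximized log-likelihood values); the sketch just shows the mechanics of the test:

```python
# Hypothetical maximized log-likelihood values; replace with EViews output
ll_restricted = -210.3     # homoskedastic Student-t model (LOGL02)
ll_unrestricted = -204.1   # heteroskedastic model (LOGL03)

# The LR statistic is asymptotically chi-square with 1 degree of freedom
# under H0, since the restriction alpha2 = 0 removes one parameter
lr_stat = 2.0 * (ll_unrestricted - ll_restricted)
critical_5pct = 3.841      # chi-square(1) 5% critical value
reject_homoskedasticity = lr_stat > critical_5pct
```

Equivalently, you could look at the reported z-statistic on c(4) in the LOGL03 output; asymptotically the two approaches should point to the same conclusion.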