1 Another method of estimation: least squares


File: estim.tex, Dec 8, 2009, 6 p.m. (draft; typos/writos likely exist). Corrections, comments, suggestions welcome.

1.1 Least squares in general

Assume $Y_i$ is some rv with finite mean $\mu_{y_i}$ and variance $\sigma_y^2$, where we might, or might not, know the form of $f_{Y_i}(y:\theta)$. If one has a sample of $n$ observations from this population, the least-squares estimator(s) of $\theta$ are those, $\hat\theta$, that minimize

$$\sum_{i=1}^n \left(y_i - E[y_i : x_i;\theta]\right)^2$$

where the $x_i$ is a vector of observed explanatory variables, not random variables (fixed in repeated samples). (Footnote: Note that I did not say random sample. While a random sample would be nice, least-squares estimation is well defined even if the sample is not random. That said, the least-squares estimators might lack desirable properties if the sample is not random.)

Finding the least-squares estimate of $\theta$ requires that we specify the form of $E[y_i : x_i;\theta]$, but it does not require that we specify $f_{Y_i}(y_i; x_i;\theta)$. Note that maximum likelihood estimation typically requires that we specify $f_{Y_i}(y_i; x_i;\theta)$, which implies $E[y_i : x_i;\theta]$.

For example, consider the following common additive specification for $y_i$:

$$y_i = g(x_i:\theta) + \varepsilon_i \qquad i = 1, 2, \dots, n$$

where $\varepsilon$ is a rv with zero mean ($E[\varepsilon] = 0$) and finite variance $\sigma_y^2$. (Footnote: For a given $x_i$, all the randomness in $y_i$ is induced by the randomness in $\varepsilon$.) (Footnote: Note that here I am not being completely general. I am assuming the random component is additive, which is not required for l.s. estimation.)

We have a data set that consists of $n$ $\{y_i, x_i\}$ pairs. Since $E[y_i : x_i] = g(x_i:\theta)$, the least-squares estimator(s) of $\theta$ are those that minimize

$$\sum_{i=1}^n \left(Y_i - g(x_i:\theta)\right)^2 \equiv SSR$$

where SSR denotes the sum of squared residuals. Some books call it RSS (for example, Gujarati, page 171).
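To make this concrete, here is a minimal numerical sketch of my own (the data and names are made up, not part of the original notes): specify a form for $E[y_i : x_i;\theta]$, and take as the least-squares estimate whatever $\theta$ minimizes the sum of squared deviations. No assumption about the form of $f_{Y_i}$ is used anywhere.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)            # explanatory variable, treated as fixed
y = 2.0 + 0.5 * x + rng.normal(0, 1, 100)   # data generated with an additive error

def cond_mean(x, theta):
    """Assumed form of E[y | x; theta]; here linear, but any form would do."""
    return theta[0] + theta[1] * x

def ssr(theta):
    """Sum of squared residuals, the least-squares criterion."""
    return np.sum((y - cond_mean(x, theta)) ** 2)

theta_hat = minimize(ssr, x0=[0.0, 0.0]).x
print(theta_hat)   # roughly [2, 0.5]; no distributional assumption on the error was used
```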
Things to note about least-squares estimators if one is willing to assume

$$y_i = g(x_i:\theta) + \varepsilon_i \qquad i = 1, 2, \dots, n$$

where $\varepsilon_i$ is a rv with zero mean ($E[\varepsilon] = 0$) and finite variance $\sigma_y^2$:

One does not need to assume a specific distribution for $\varepsilon$ (normal or otherwise), but one needs to put the above few restrictions on $\varepsilon$. $g(x_i:\theta)$ does not have to be linear in the $\theta$, but that is the specification that you are most accustomed to. Some of the properties of the estimators of the $\theta$ will depend on what one assumes about the distribution of $\varepsilon$ (normal or otherwise) and/or whether one assumes the $Y_i$ in the sample are independent of one another.

An aside: Note that while we are not accustomed to thinking this way, all that one needs to do least squares is to assume that the rv of interest, $Y$, has a density function such that $E[Y]$ exists. For example, one could assume $Y$ has the Poisson distribution, $f_Y(y) = \frac{e^{-\lambda}\lambda^y}{y!}$ for $y = 0, 1, 2, 3, \dots$, and use least squares to estimate $\lambda$, the expected value of $Y$ (and also its variance). (Footnote: Note that here the random term is not additive. While assuming an additive term, $y_i = g(x_i:\theta) + \varepsilon_i$, $i = 1, 2, \dots, n$, is typical in least squares, it is, as I noted above, not necessary.) The least-squares estimator of $\lambda$, $\hat\lambda$, is that $\lambda$ that minimizes $\sum_{i=1}^n (Y_i - \lambda)^2$. (Footnote: We know from earlier that the maximum likelihood estimator of $\lambda$, $\hat\lambda_{ml}$, is the sample average. Is the least-squares estimator of $\lambda$ also the sample average? A numerical check appears just below.) A fun and instructive exercise would be to find the least-squares estimator(s) for a rv $Y$ assuming a few different forms for $f_Y(y:\theta)$. For example, could one proceed with least squares assuming $Y$ has a Bernoulli distribution? Try it and see what happens.

1.2 Revert to the standard assumption that $y_i = g(x_i:\theta) + \varepsilon_i$, but now be more restrictive: assume linearity and $x_i$ a scalar

$$g(x_i:\theta) = \alpha + \beta x_i$$

In which case

$$y_i = \alpha + \beta x_i + \varepsilon_i \qquad i = 1, 2, \dots, n$$

where $E[\varepsilon] = 0$ and $\varepsilon$ has finite variance.
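Before working with the linear specification just written down, here is a quick numerical check of the Poisson aside above (my own sketch, with a simulated sample): minimizing $\sum_{i=1}^n (Y_i - \lambda)^2$ numerically returns the sample average, the same answer as maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
y = rng.poisson(lam=3.0, size=200)                 # a sample from a Poisson(3) population

res = minimize_scalar(lambda lam: np.sum((y - lam) ** 2))
print(res.x, y.mean())                             # the two agree: the LS estimate of lambda is y-bar
```

The Bernoulli exercise suggested in the text can be tried the same way.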
The linear model just written down is called the linear regression model (MGB 485, 486). It has three parameters: $\alpha$, $\beta$ and $\sigma_Y^2$. Contrast the linear regression model with the classical linear regression model, which adds the assumption $\varepsilon \sim N(0, \sigma^2)$.

The least-squares estimates of $\alpha$ and $\beta$ are those estimates, $\hat\alpha$ and $\hat\beta$, that minimize

$$\sum_{i=1}^n \left(y_i - E[y : x_i]\right)^2 = \sum_{i=1}^n \left(y_i - (\alpha + \beta x_i)\right)^2$$

1.2.1 Let's find these estimates.

Minimize (let me know if you find any typos in the following derivations)

$$SSR = \sum_{i=1}^n \left(y_i - (\alpha + \beta x_i)\right)^2$$

wrt $\alpha$ and $\beta$. Since we have put no restrictions on the ranges of $\alpha$ and $\beta$, we are looking for an interior solution in terms of these.

$$\frac{\partial SSR}{\partial \alpha} = \sum_{i=1}^n 2\left(y_i - \alpha - \beta x_i\right)(-1) = -2\left[\sum_{i=1}^n y_i - n\alpha - \beta\sum_{i=1}^n x_i\right] = -2\left[n\bar y - n\alpha - n\beta\bar x\right]$$

Set this equal to zero, and solve for $\alpha$ to obtain

$$\hat\alpha = \bar y - \hat\beta\bar x$$

$$\frac{\partial SSR}{\partial \beta} = \sum_{i=1}^n 2\left(y_i - \alpha - \beta x_i\right)(-x_i) = -2\left[\sum_{i=1}^n y_i x_i - \alpha\sum_{i=1}^n x_i - \beta\sum_{i=1}^n x_i^2\right]$$
Set this equal to zero and solve for $\hat\beta$. (Footnote: Setting the derivative to zero implies $\sum_{i=1}^n (y_i - \hat\alpha - \hat\beta x_i)(-x_i) = 0 \Rightarrow \sum_{i=1}^n (y_i - \hat\alpha - \hat\beta x_i)(x_i) = 0 \Rightarrow \sum_{i=1}^n (y_i - \hat y_i)(x_i) = 0 \Rightarrow \sum_{i=1}^n \hat\varepsilon_i x_i = 0$. One could check that one's least-squares estimates imply this. It is a good check on your math.)

$$0 = \sum_{i=1}^n y_i x_i - \hat\alpha\sum_{i=1}^n x_i - \hat\beta\sum_{i=1}^n x_i^2$$

which implies

$$\hat\beta\sum_{i=1}^n x_i^2 = \sum_{i=1}^n y_i x_i - \hat\alpha\sum_{i=1}^n x_i = \sum_{i=1}^n y_i x_i - \hat\alpha\, n\bar x$$

so

$$\hat\beta = \frac{\sum_{i=1}^n y_i x_i - \hat\alpha\, n\bar x}{\sum_{i=1}^n x_i^2}$$

Substitute in $\bar y - \hat\beta\bar x$ for $\hat\alpha$ to obtain

$$\hat\beta = \frac{\sum_{i=1}^n y_i x_i - n\bar x(\bar y - \hat\beta\bar x)}{\sum_{i=1}^n x_i^2}$$

which implies

$$\hat\beta\sum_{i=1}^n x_i^2 = \sum_{i=1}^n y_i x_i - n\bar x\bar y + n\bar x^2\hat\beta$$

$$\hat\beta\left[\sum_{i=1}^n x_i^2 - n\bar x^2\right] = \sum_{i=1}^n y_i x_i - n\bar x\bar y$$

Note the following rearrangement of the lhs,

$$\sum_{i=1}^n x_i^2 - n\bar x^2 = \sum_{i=1}^n (x_i - \bar x)^2$$
so, replacing $\sum_{i=1}^n x_i^2 - n\bar x^2$ with $\sum_{i=1}^n (x_i - \bar x)^2$, one obtains

$$\hat\beta\sum_{i=1}^n (x_i - \bar x)^2 = \sum_{i=1}^n y_i x_i - n\bar x\bar y$$

which implies that

$$\hat\beta = \frac{\sum_{i=1}^n y_i x_i - n\bar x\bar y}{\sum_{i=1}^n x_i^2 - n\bar x^2} = \frac{\sum_{i=1}^n y_i x_i - n\bar x\bar y}{\sum_{i=1}^n (x_i - \bar x)^2}$$

This is the least-squares estimate for $\beta$ assuming $g(x_i:\theta) = \alpha + \beta x_i$. By substitution, the least-squares estimate for $\alpha$, $\hat\alpha$, is

$$\hat\alpha = \bar y - \hat\beta\bar x$$

Note that, in this case, $\hat\alpha = \hat\alpha_{ml}$ and $\hat\beta = \hat\beta_{ml}$, where $\hat\alpha_{ml}$ and $\hat\beta_{ml}$ are the maximum likelihood estimates assuming the classical linear regression model. That is, if one assumes a classical linear regression model, the ml estimators exist and are equal to the ls estimators, but if one assumes only the linear regression model (don't add the assumption that $\varepsilon \sim N(0, \sigma^2)$), the ls estimators exist, but not the ml estimators.

There are a number of different ways to write $\hat\beta$; they are all equal.
$$\hat\beta = \frac{\sum_{i=1}^n y_i x_i - n\bar x\bar y}{\sum_{i=1}^n x_i^2 - n\bar x^2} = \frac{\sum_{i=1}^n y_i x_i - n\bar x\bar y}{\sum_{i=1}^n (x_i - \bar x)^2} = \frac{\sum_{i=1}^n \tilde x_i y_i}{\sum_{i=1}^n \tilde x_i^2} = \frac{\sum_{i=1}^n \tilde x_i\tilde y_i}{\sum_{i=1}^n \tilde x_i^2}$$

where $\tilde x_i = x_i - \bar x$ and $\tilde y_i = y_i - \bar y$. One uses different characterizations in different situations, depending on what one wants to demonstrate.

1.2.2 There is no least-squares estimate of $\sigma_y^2$

Note that since $\sum_{i=1}^n (y_i - (\alpha + \beta x_i))^2$ is not a function of $\sigma_y^2$, there is not a least-squares estimator for $\sigma_y^2$. That is, what one minimizes to obtain the ls estimates is not a function of $\sigma_y^2$. However, given $\hat\alpha$ and $\hat\beta$, one can estimate $\sigma_y^2$ with

$$\hat\sigma^2 = \frac{\sum_{i=1}^n \left(y_i - (\hat\alpha + \hat\beta x_i)\right)^2}{n - 2}$$

The intuition for dividing by $n - 2$ is that one loses two degrees of freedom in the calculation of $\hat\alpha$ and $\hat\beta$. $\hat\sigma^2$ is not a least-squares estimator, but it is based on the least-squares estimators of $\alpha$ and $\beta$. It is possible to show that $E[\hat\sigma^2] = \sigma^2$. See, for example, Gujarati, Basic Econometrics, the appendix to the chapter on two-variable regression estimation.
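As a concrete illustration of the formulas derived above, here is a sketch of my own with made-up data (not from the notes): the deviations form of $\hat\beta$, the substitution $\hat\alpha = \bar y - \hat\beta\bar x$, and the $n-2$ divisor for $\hat\sigma^2$ computed directly and checked against a canned routine.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1.5, n)      # alpha = 1, beta = 2

x_dev, y_dev = x - x.mean(), y - y.mean()       # the "tilde" (deviation) variables
beta_hat = np.sum(x_dev * y_dev) / np.sum(x_dev ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()

resid = y - (alpha_hat + beta_hat * x)
sigma2_hat = np.sum(resid ** 2) / (n - 2)       # divide by n - 2, not n

print(alpha_hat, beta_hat, sigma2_hat)
print(np.polyfit(x, y, 1))                      # same slope and intercept (returned as [slope, intercept])
```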
Remember: least-squares estimators exist even if $g(x_i:\theta) \neq \alpha + \beta x_i$. That is, $g(x_i:\theta)$ can be nonlinear in $\theta$. For example, one could assume

$$g(x_i:\theta) = e^{\beta x_i}$$

so

$$\frac{\partial g}{\partial \beta} = x_i e^{\beta x_i}$$

which is highly nonlinear in $\beta$. (Footnote: In contrast, note that if one assumed, say, $y_i = \beta x_i^2 + \varepsilon_i$, it would still be linear least squares because the function is linear in $\beta$.) In which case,

$$y_i = e^{\beta x_i} + \varepsilon_i \qquad i = 1, 2, \dots, n$$

where $E[\varepsilon] = 0$ and $\varepsilon$ has finite variance, and the least-squares estimate of $\beta$ is that estimate, $\hat\beta$, that minimizes

$$SSR = \sum_{i=1}^n \left(y_i - e^{\beta x_i}\right)^2$$

This is an example of nonlinear least squares; a small numerical sketch follows below.
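As promised, a minimal nonlinear least-squares sketch (my own, with made-up numbers and a modest $\beta$ so that $e^{\beta x}$ stays well behaved): there is no closed form for $\hat\beta$ here, so the SSR is minimized numerically.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
x = rng.uniform(0, 5, 80)
beta_true = 0.4
y = np.exp(beta_true * x) + rng.normal(0, 0.3, 80)   # additive error around a mean nonlinear in beta

ssr = lambda b: np.sum((y - np.exp(b * x)) ** 2)     # the nonlinear least-squares criterion
beta_hat = minimize_scalar(ssr, bounds=(0, 2), method="bounded").x
print(beta_hat)    # close to 0.4; no closed form exists, hence "nonlinear least squares"
```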
1.3 Some properties of least-squares estimators of the model $y_i = \alpha + \beta x_i + \varepsilon_i$, $i = 1, 2, \dots, n$, where $E[\varepsilon] = 0$ and $\varepsilon$ has finite variance

Assume that the $Y_i$ in the sample are independent of one another (we have a random sample).

From above, and assuming the above linear form,

$$\hat\beta = \frac{\sum_{i=1}^n \tilde x_i y_i}{\sum_{i=1}^n \tilde x_i^2} = \sum_{i=1}^n \frac{\tilde x_i}{k}\,y_i = \sum_{i=1}^n w_i y_i, \qquad\text{where } k = \sum_{i=1}^n \tilde x_i^2 \text{ and } w_i = \frac{\tilde x_i}{k}.$$

In words, $\hat\beta$ is a linear combination (weighted sum) of the $n$ random variables $y_1, y_2, \dots, y_n$, where the weights are a function of the $x$'s. We call estimators with this property linear estimators. (Footnote: Looking ahead, this is part of the famous Gauss-Markov theorem.) Note that determining that it is a linear estimator did not require that $f(\varepsilon)$ have a particular form.

Given that $\hat\beta = \sum_{i=1}^n w_i y_i$, and given that, conditional on the $x_i$, the $y_i$ are independent,

$$E[\hat\beta] = E\left[\sum_{i=1}^n w_i y_i\right] = \sum_{i=1}^n w_i E[y_i]$$

since the $w_i$ are constants: they vary with $x$, but the $x$ are assumed fixed in repeated samples. Since $E[y_i] = \alpha + \beta x_i$,

$$E[\hat\beta] = \sum_{i=1}^n w_i(\alpha + \beta x_i) = \alpha\sum_{i=1}^n w_i + \beta\sum_{i=1}^n w_i x_i$$

Since $w_i = \tilde x_i/k$,

$$\sum_{i=1}^n w_i = \frac{\sum_{i=1}^n \tilde x_i}{k} = 0 \qquad\text{since } \sum_{i=1}^n \tilde x_i = \sum_{i=1}^n (x_i - \bar x) = 0$$

Because $\tilde x_i = x_i - \bar x$, we have $x_i = \tilde x_i + \bar x$, so

$$\sum_{i=1}^n w_i x_i = \frac{\sum_{i=1}^n \tilde x_i x_i}{k} = \frac{\sum_{i=1}^n \tilde x_i(\tilde x_i + \bar x)}{k} = \frac{\sum_{i=1}^n \tilde x_i^2 + \bar x\sum_{i=1}^n \tilde x_i}{k} = \frac{\sum_{i=1}^n \tilde x_i^2}{k} = 1$$

because $\sum_{i=1}^n \tilde x_i = 0$ and $k = \sum_{i=1}^n \tilde x_i^2$. That is,

$$E[\hat\beta] = \beta$$
In words, $\hat\beta$ is an unbiased estimator of $\beta$. Note that this proof did not require that we assume a specific distribution for $\varepsilon$. We need only the assumptions of the linear regression model and a random sample (independent $y_i$). Note that at this point we have demonstrated that $\hat\beta$ is a linear unbiased estimator, and this result does not depend on a normality assumption.

It is also possible to show that $E[\hat\alpha] = \alpha$. I leave that as an exercise for you.

In summary, the least-squares estimators of the parameters are linear and unbiased estimators. It is possible to show that $E[\hat\sigma^2] = \sigma^2$, but remember that $\hat\sigma^2 = \frac{\sum_{i=1}^n (y_i - (\hat\alpha + \hat\beta x_i))^2}{n-2}$ is not a least-squares estimate.
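A simulation sketch of the unbiasedness claim (my own, not part of the notes): the errors below are deliberately non-normal, and the average of $\hat\beta$ across repeated samples still sits on $\beta$.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, n, reps = 1.0, 2.0, 40, 5000
x = rng.uniform(0, 10, n)                     # x's held fixed in repeated samples

beta_hats = np.empty(reps)
for r in range(reps):
    eps = rng.exponential(1.0, n) - 1.0       # skewed, zero-mean, non-normal errors
    y = alpha + beta * x + eps
    x_dev = x - x.mean()
    beta_hats[r] = np.sum(x_dev * (y - y.mean())) / np.sum(x_dev ** 2)

print(beta_hats.mean())                        # approximately 2.0: unbiased without normality
```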
So, assuming the linear regression model and a random sample, $\hat\alpha$ and $\hat\beta$ are linear estimators and unbiased estimators. This is good. It is possible to further show that, in the class of linear unbiased estimators, the least-squares estimators have minimum variance. This earns them the adjective best. So, assuming the linear regression model and a random sample, $\hat\alpha$ and $\hat\beta$ are BLUE (best linear unbiased estimators). This is the Gauss-Markov theorem.

If one assumes $\varepsilon \sim N(0, \sigma^2)$, the ls estimators gain more desirable properties because they are also the ml estimators.
1.4.1 One can use $\hat\alpha$ and $\hat\beta$ to predict values of $y_j$ conditional on $x_j$

$$\hat y_j = \hat\alpha + \hat\beta x_j$$

$\hat y_j$ is a random variable that, for fixed $x$'s, will vary from sample to sample. Since $\hat\alpha$ and $\hat\beta$ are both unbiased estimates, $\hat y_j$ is an unbiased estimate of $E[y_j : x_j]$; that is, $E[\hat y_j] = E[y_j : x_j]$. Think about the sampling distribution of $\hat y_j$, which is conditioned on $x_j$.
1.5 The variances of the least-squares estimators

1.5.1 The variance of $\hat\beta$

The least-squares estimate of $\beta$ is a statistic and will vary from sample to sample, so $\hat\beta$ has a sampling distribution, $f_{\hat\beta}(v)$. An issue at hand is determining the variance of this sampling distribution. (Footnote: More generally, we would like to know the form of $f_{\hat\beta}(v)$.)

An important issue is whether we proceed assuming knowledge of $\sigma^2$, or only knowledge of its estimate, $\hat\sigma^2$. We will start assuming knowledge of $\sigma^2$, and afterwards discuss how the variance of $\hat\beta$ differs when it is expressed as a function of $\hat\sigma^2$ rather than $\sigma^2$. Knowing $\sigma^2$ is atypical, but easier, so we start there. To emphasize that we are conditioning on $\sigma^2$, in the short run denote $f_{\hat\beta}(v)$ more specifically as $f_{\hat\beta}(v \mid \sigma^2)$ and write $\mathrm{var}(\hat\beta \mid \sigma^2) = \sigma^2_{\hat\beta}$. (Footnote: In contrast to $f_{\hat\beta}(v \mid \hat\sigma^2)$ and $\mathrm{var}(\hat\beta \mid \hat\sigma^2) = \hat\sigma^2_{\hat\beta}$.)

Above we showed that $\hat\beta$ is a linear estimator. That is, it can be written $\hat\beta = \sum_{i=1}^n w_i y_i$ where the $w_i$ can be treated as constants. We also know that $\mathrm{var}(aX) = a^2\,\mathrm{var}(X)$ if $a$ is a constant. Combining these two pieces of information (and the independence of the $y_i$), along with knowledge of $\sigma^2$,

$$\mathrm{var}(\hat\beta \mid \sigma^2) = \sum_{i=1}^n w_i^2\,\sigma_y^2 = \sigma^2\sum_{i=1}^n w_i^2$$

Recollect that $y_i = \alpha + \beta x_i + \varepsilon_i$, where $E[\varepsilon] = 0$ and $\varepsilon$ has finite variance $\sigma^2$, so $\sigma_y^2 = \sigma^2$.
Proceeding,

$$\mathrm{var}(\hat\beta \mid \sigma^2) = \sigma^2\sum_{i=1}^n w_i^2 = \sigma^2\sum_{i=1}^n \frac{\tilde x_i^2}{k^2} = \frac{\sigma^2\sum_{i=1}^n \tilde x_i^2}{\left(\sum_{i=1}^n \tilde x_i^2\right)^2} = \frac{\sigma^2}{\sum_{i=1}^n \tilde x_i^2} = \frac{\sigma^2}{\sum_{i=1}^n (x_i - \bar x)^2}$$

since $k = \sum_{i=1}^n \tilde x_i^2$, and the standard error of $\hat\beta$ is

$$\mathrm{se}(\hat\beta) = \left[\mathrm{var}(\hat\beta \mid \sigma^2)\right]^{.5}$$

Notice that $\mathrm{var}(\hat\beta \mid \sigma^2)$ decreases as $\sum_{i=1}^n (x_i - \bar x)^2$ increases.

What did we assume to derive $\mathrm{var}(\hat\beta \mid \sigma^2) = \sigma^2/\sum_{i=1}^n \tilde x_i^2$? We assumed that $y_i = \alpha + \beta x_i + \varepsilon_i$ where $E[\varepsilon] = 0$ and $\varepsilon$ has finite variance, and that the $Y_i$ are independent. We did not need to assume that $\varepsilon$ has a specific distribution, such as the normal.

It is also possible to derive $\mathrm{var}(\hat\alpha \mid \sigma^2)$ as a function of $\sigma^2$:

$$\mathrm{var}(\hat\alpha \mid \sigma^2) = \left(\frac{\sigma^2}{n}\right)\frac{\sum_{i=1}^n x_i^2}{\sum_{i=1}^n \tilde x_i^2}$$

Note again that we cannot calculate $\mathrm{var}(\hat\alpha \mid \sigma^2)$ or $\mathrm{var}(\hat\beta \mid \sigma^2)$ unless we assume a specific value for $\sigma^2$.

Note that one can also calculate $\mathrm{cov}(\hat\alpha, \hat\beta)$. It is not 0 because both $\hat\alpha$ and $\hat\beta$ are functions of the same sample:

$$\mathrm{cov}(\hat\alpha, \hat\beta) = E\left[(\hat\alpha - E[\hat\alpha])(\hat\beta - E[\hat\beta])\right] = E\left[(\hat\alpha - \alpha)(\hat\beta - \beta)\right]$$
Note that $\mathrm{var}(\hat\beta \mid \sigma^2) = \sigma^2/\sum_{i=1}^n \tilde x_i^2$ is a function of the $x$'s in the sample, but not the $y$'s. Therefore, if one makes the typical assumption that the $x$'s are fixed in repeated samples, $\sigma^2_{\hat\beta}$ is not a random variable. By the same argument, neither is $\sigma^2_{\hat\alpha}$. This is because we are assuming knowledge of $\sigma^2$. This is an important point: given $\sigma^2$, $\sigma^2_{\hat\beta}$ and $\sigma^2_{\hat\alpha}$ are not statistics and not things that are estimated; they are calculated given knowledge of $\sigma^2$ and knowledge of the $x$ levels in the data. Said another way, while the least-squares estimates of $\alpha$ and $\beta$ will vary from sample to sample, $\sigma^2_{\hat\alpha}$ and $\sigma^2_{\hat\beta}$ do not vary from sample to sample (assuming the $x$'s are fixed in repeated samples). Soon we will consider the problem of estimating $\mathrm{var}(\hat\beta \mid \hat\sigma^2)$ and $\mathrm{var}(\hat\alpha \mid \hat\sigma^2)$. But first,

1.5.2 The variance of $\hat y_j$ as a function of $\sigma^2$

One can also show that (for example, Gujarati, Essentials, page 185)

$$\mathrm{var}(\hat y_j) = \sigma^2_{\hat y_j} = \sigma^2\left[\frac{1}{n} + \frac{(x_j - \bar x)^2}{\sum_{i=1}^n \tilde x_i^2}\right]$$

Note that this is a function of $\sigma^2$ and the $x$'s but not the $y$'s, so it is not something that is estimated. It is not a random variable.

(Continuing the covariance calculation from above: since $\hat\alpha = \bar y - \hat\beta\bar x$ and $\alpha = \bar y - \beta\bar x$,

$$\mathrm{cov}(\hat\alpha, \hat\beta) = E\left[\left(\bar y - \hat\beta\bar x - (\bar y - \beta\bar x)\right)(\hat\beta - \beta)\right] = E\left[\left(-\hat\beta\bar x + \beta\bar x\right)(\hat\beta - \beta)\right] = -\bar x\,E\left[(\hat\beta - \beta)^2\right] = \frac{-\bar x\,\sigma^2}{\sum_{i=1}^n (x_i - \bar x)^2} = \frac{-\bar x\,\sigma^2}{\sum_{i=1}^n \tilde x_i^2}$$

The covariance decreases as the variation in the $x$'s increases.)
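To tie the three formulas together, here is a Monte Carlo sketch of my own (made-up numbers, not from the notes) comparing $\mathrm{var}(\hat\beta)$, $\mathrm{var}(\hat\alpha)$, and $\mathrm{cov}(\hat\alpha, \hat\beta)$ computed from the formulas with their simulated counterparts, holding the $x$'s fixed across samples.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, beta, sigma2, n, reps = 1.0, 2.0, 4.0, 30, 20000
x = rng.uniform(0, 10, n)                       # fixed in repeated samples
x_dev = x - x.mean()

var_beta_theory = sigma2 / np.sum(x_dev ** 2)
var_alpha_theory = (sigma2 / n) * np.sum(x ** 2) / np.sum(x_dev ** 2)
cov_theory = -x.mean() * sigma2 / np.sum(x_dev ** 2)

a_hats, b_hats = np.empty(reps), np.empty(reps)
for r in range(reps):
    y = alpha + beta * x + rng.normal(0, np.sqrt(sigma2), n)
    b_hats[r] = np.sum(x_dev * (y - y.mean())) / np.sum(x_dev ** 2)
    a_hats[r] = y.mean() - b_hats[r] * x.mean()

print(var_beta_theory, b_hats.var())            # simulated variance matches sigma^2 / sum x-tilde^2
print(var_alpha_theory, a_hats.var())
print(cov_theory, np.cov(a_hats, b_hats)[0, 1]) # negative, and shrinking as the x's spread out
```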
1.6 What is implied if one adds to the above the assumption that $\varepsilon \sim N(0, \sigma^2)$?

If $y_i = \alpha + \beta x_i + \varepsilon_i$ where $\varepsilon \sim N(0, \sigma^2)$, then $y_i \sim N(\alpha + \beta x_i, \sigma^2)$. From earlier we know that $\hat\beta = \sum_{i=1}^n w_i y_i$, so $\hat\beta$ is a linear combination of normally distributed random variables, so it is normally distributed. Specifically,

$$\hat\beta \sim N\!\left(\beta,\ \frac{\sigma^2}{\sum_{i=1}^n \tilde x_i^2}\right)$$

By the same logic,

$$\hat\alpha \sim N\!\left(\alpha,\ \left(\frac{\sigma^2}{n}\right)\frac{\sum_{i=1}^n x_i^2}{\sum_{i=1}^n \tilde x_i^2}\right)$$

When $\sigma^2$ is known, neither $\hat\alpha$ nor $\hat\beta$ has a t distribution; both are normally distributed. I say this here because some people incorrectly believe that $\hat\alpha$ and $\hat\beta$ always have a t distribution.

If $\hat\beta \sim N(\beta, \sigma^2/\sum_{i=1}^n \tilde x_i^2)$, then

$$\frac{\hat\beta - \beta}{\left(\sigma^2/\sum_{i=1}^n \tilde x_i^2\right)^{.5}} = \frac{\hat\beta - \beta}{\sigma_{\hat\beta}} \sim N(0, 1)$$
So, if we assumed a value for $\sigma^2$, we could calculate $\sigma_{\hat\beta}$ (not estimate it) and then calculate a confidence interval for $\beta$ and test the null hypothesis that $\beta$ takes some specific value such as zero. For example, since $\frac{\hat\beta - \beta}{\sigma_{\hat\beta}} \sim N(0, 1)$,

$$\Pr\left(-1.96 \le \frac{\hat\beta - \beta}{\sigma_{\hat\beta}} \le 1.96\right) = .95$$
$$\Rightarrow \Pr\left(-1.96\,\sigma_{\hat\beta} \le \hat\beta - \beta \le 1.96\,\sigma_{\hat\beta}\right) = .95$$
$$\Rightarrow \Pr\left(-\hat\beta - 1.96\,\sigma_{\hat\beta} \le -\beta \le -\hat\beta + 1.96\,\sigma_{\hat\beta}\right) = .95$$
$$\Rightarrow \Pr\left(\hat\beta - 1.96\,\sigma_{\hat\beta} \le \beta \le \hat\beta + 1.96\,\sigma_{\hat\beta}\right) = .95$$

$\hat\beta - 1.96\,\sigma_{\hat\beta}$ to $\hat\beta + 1.96\,\sigma_{\hat\beta}$ is the 95% confidence interval for $\beta$ based on $\sigma^2$ and the assumption that $\varepsilon$ is normally distributed. How do we interpret this interval? Note that this interval depends on $\hat\beta$, which is a random variable, so the confidence interval is a random variable: 95% of these intervals will contain $\beta$. (Footnote: Note that one cannot say that there is a 95% chance that the true $\beta$ is between $\hat\beta - 1.96\,\sigma_{\hat\beta}$ and $\hat\beta + 1.96\,\sigma_{\hat\beta}$. Further note that since $\sigma_{\hat\beta}$ is not a random variable if the $x$'s are fixed in repeated samples, the position of this confidence interval is a random variable, but not its length.) (Footnote: Note that none of the above has anything to do with the t distribution.)

Assuming that $\varepsilon \sim N(0, \sigma^2)$, it follows that $\hat y_j$ is also normally distributed (it is a linear function of two normally distributed random variables, $\hat\alpha$ and $\hat\beta$). Specifically,

$$\hat y_j \sim N\!\left(\alpha + \beta x_j,\ \sigma^2\left[\frac{1}{n} + \frac{(x_j - \bar x)^2}{\sum_{i=1}^n \tilde x_i^2}\right]\right)$$

So one can also get a confidence interval for $y_j$ conditional on $x_j$.
1.7 However, we don't typically assume a value for $\sigma^2$ but estimate it with $\hat\sigma^2$

Continue, for now, to assume that $\varepsilon \sim N(0, \sigma^2)$, so assume the CLR model, but that we do not know $\sigma^2$, so we have to estimate it:

$$\hat\sigma^2 = \frac{\sum_{i=1}^n (y_i - \hat\alpha - \hat\beta x_i)^2}{(n - 2)}$$

and note the important distinction between $\hat\sigma^2$ and $\sigma^2$: the first is a rv, the second is a constant. The first thing to note, as we demonstrate below, is that even though $\frac{\hat\beta - \beta}{\sigma_{\hat\beta}} \sim N(0, 1)$,

$$\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}} \text{ is not } N(0, 1), \qquad\text{where } \hat\sigma_{\hat\beta} = \left(\frac{\hat\sigma^2}{\sum_{i=1}^n \tilde x_i^2}\right)^{.5}$$

Toooooo bad. Note that $\hat\beta \sim N(\beta, \sigma^2_{\hat\beta})$ because we are assuming $\varepsilon \sim N(0, \sigma^2)$.
Since it is not normal, what distribution does $\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}}$ have? Let's try to demonstrate that it has a t distribution. The following is a bit difficult; think of it as walking backwards from the end of the trail back to your car, forgetting where you started. What I am doing is deriving the distribution of $\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}}$. Remember that $\frac{\hat\beta - \beta}{\sigma_{\hat\beta}} \sim N(0, 1)$.
Now define another random variable, $G$ (remember, we are going backwards), such that

$$G = \frac{(n-2)\hat\sigma^2}{\sigma^2}$$

Note that I have defined a function that is a linear function of the ratio $\hat\sigma^2/\sigma^2$. Then

$$G = \frac{(n-2)\,\dfrac{\sum_{i=1}^n (Y_i - \hat\alpha - \hat\beta x_i)^2}{(n-2)}}{\sigma^2} \qquad\text{(the reason for the $(n-2)$ above was so it would cancel here)}$$

$$= \frac{\sum_{i=1}^n (Y_i - \hat\alpha - \hat\beta x_i)^2}{\sigma^2} = \frac{\sum_{i=1}^n \left(Y_i - \hat E[Y_i \mid x_i]\right)^2}{\sigma^2} = \frac{\sum_{i=1}^n \left(Y_i - \hat E[Y_i \mid x_i]\right)^2}{\sigma_y^2} = \frac{\sum_{i=1}^n \left(Y_i - E[Y_i \mid x_i]\right)^2}{\sigma_y^2}$$

because $\hat y_j = \hat E[Y_i \mid x_i]$ is an unbiased estimate of $E[Y_i \mid x_i]$. Note that $\hat\sigma^2$ does not explicitly appear in this last term; we started with it, but it disappeared. Further note that

$$\frac{y_i - E[y_i \mid x_i]}{\sigma_y} \sim N(0, 1) \qquad\text{because } y_i \sim N\!\left(E[y_i \mid x_i],\ \sigma_y^2\right).$$
Note, and this is critical: our created random variable,

$$G = \frac{\sum_{i=1}^n \left(Y_i - E[Y_i \mid x_i]\right)^2}{\sigma_y^2},$$

is the sum of the squares of a bunch of standard normal random variables. That means it has a $\chi^2$ distribution. (Footnote: See Gujarati page 114 and MGB. Theorem 7 of MGB states that if random variables $X_i$, $i = 1, 2, \dots, k$, are normally distributed with means $\mu_i$ and variances $\sigma_i^2$, then $U = \sum_{i=1}^k \left(\frac{X_i - \mu_i}{\sigma_i}\right)^2$ has a chi-square distribution with parameter $k$ ($k$ degrees of freedom). A corollary is that if $X_1, X_2, \dots, X_n$ is a random sample from a normal distribution with mean $\mu$ and variance $\sigma^2$, then $\sum_{i=1}^n \left(\frac{X_i - \mu}{\sigma}\right)^2$ has a chi-square distribution with $n$ degrees of freedom. A special case is that $\left(\frac{X_i - \mu}{\sigma}\right)^2$ has a chi-square distribution with 1 degree of freedom.)

The important thing to remember at this point is that we have created a random variable $G$ that is a linear function of the ratio $\hat\sigma^2/\sigma^2$, and we know its density function. You want to learn what you can about the chi-squared distribution (keep in mind, saying $k$ is the degrees of freedom of the distribution is just another way of saying the density function has one parameter, $k$). Specifically,

$$G = \frac{\sum_{i=1}^n \left(Y_i - E[Y_i \mid x_i]\right)^2}{\sigma_y^2} = \frac{(n-2)\hat\sigma^2}{\sigma^2} \sim \chi^2_{n-2}$$

It is $(n-2)$ because the parameter (the number of degrees of freedom) is not the number of terms in the sum, but the number of independent terms in the sum, which is $(n-2)$ because we lose two degrees of freedom to get $\hat E[Y_i \mid x_i] = \hat\alpha + \hat\beta x_i$. That is, $G = \frac{(n-2)\hat\sigma^2}{\sigma^2}$ is a rv with a chi-squared distribution with parameter $(n-2)$. (The bottom line is that someone worked backward and figured out a rv that was a function of $\hat\sigma^2$ and $\sigma^2$, and that had a chi-square distribution.) Note that neither $\hat\sigma^2$ nor $\sigma^2$ is a parameter in the chi-square, which is important.
So what do we know at this point?

$$\frac{\hat\beta - \beta}{\left(\sigma^2/\sum_{i=1}^n \tilde x_i^2\right)^{.5}} \sim N(0, 1) \qquad\text{and}\qquad \frac{(n-2)\hat\sigma^2}{\sigma^2} \sim \chi^2_{n-2}$$

So, now let's mention the t distribution. MGB tell us

$$\frac{N(0, 1)}{\left(\dfrac{\chi^2_{n-2}}{n-2}\right)^{.5}} \sim t_{n-2}$$

That is, a standard normal rv divided by the square root of a $\chi^2$ rv (itself divided by its parameter) has a t distribution with that parameter. (Footnote: Theorem 10 of MGB states that if the rv $Z$ has a standard normal distribution, if the rv $U$ has a chi-squared distribution with degrees of freedom $k$, and if $Z$ and $U$ are independent, then $\frac{Z}{(U/k)^{.5}}$ has a Student t distribution with parameter $k$ (degrees of freedom). A relevant corollary follows it in MGB.)
So, let's divide and see what simplifies. Define the rv $W$:

$$W = \frac{N(0, 1)}{\left(\dfrac{\chi^2_{n-2}}{n-2}\right)^{.5}} = \frac{\dfrac{\hat\beta - \beta}{\left(\sigma^2/\sum_{i=1}^n \tilde x_i^2\right)^{.5}}}{\left(\dfrac{(n-2)\hat\sigma^2/\sigma^2}{n-2}\right)^{.5}} = \frac{\dfrac{\hat\beta - \beta}{\left(\sigma^2/\sum_{i=1}^n \tilde x_i^2\right)^{.5}}}{\left(\hat\sigma^2/\sigma^2\right)^{.5}} = \frac{\hat\beta - \beta}{\left(\hat\sigma^2/\sum_{i=1}^n \tilde x_i^2\right)^{.5}} = \frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}}$$

Note that $\sigma^2$ cancels out; this is critical since we don't know it. So

$$\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}} \sim t_{n-2} \qquad\text{if } y_i = \alpha + \beta x_i + \varepsilon_i \text{ where } \varepsilon \sim N(0, \sigma^2).$$
So, to say it explicitly, we have determined that $\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}}$ has a t distribution with parameter $(n-2)$. (Footnote: This is close to, but different from, saying that $\hat\beta$ has a t distribution.) It took a lot of what we have learned to derive this.

Consider an example. If $n = 32$, (Footnote: If $\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}} \sim t_{n-2}$, then $E\left[\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}}\right] = 0$ and its variance is $\frac{n-2}{n-4}$. In explanation, all t distributions have a mean of zero, and $\frac{k}{k-2}$ is the variance of a t distribution with $k$ degrees of freedom.)

$$\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}} \sim t_{30}$$

In which case $\Pr(t_{30} > 2.04) = .025$ and $\Pr(t_{30} < -2.04) = .025$ from the t table. So,

$$\Pr\left(-2.04 < \frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}} < 2.04\right) = .95 \iff \Pr\left(\hat\beta - 2.04\,\hat\sigma_{\hat\beta} < \beta < \hat\beta + 2.04\,\hat\sigma_{\hat\beta}\right) = .95$$

The interval $\hat\beta - 2.04\,\hat\sigma_{\hat\beta}$ to $\hat\beta + 2.04\,\hat\sigma_{\hat\beta}$ is the 95% confidence interval for $\beta$ based on $\hat\sigma^2$ rather than $\sigma^2$. This interval is a random variable; 95% of these intervals will include $\beta$. Contrast this confidence interval with

$$\Pr\left(\hat\beta - 1.96\,\sigma_{\hat\beta} < \beta < \hat\beta + 1.96\,\sigma_{\hat\beta}\right) = .95$$

which we derived earlier.

A hypothesis test: how would one determine whether one can reject the null hypothesis that $\beta = 4$? One can derive the confidence interval for $\beta$ and see whether it includes 4. Alternatively, one can directly use $\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}} \sim t_{n-2}$. If $\beta = 4$, the null is correct and

$$\frac{\hat\beta - 4}{\hat\sigma_{\hat\beta}} \sim t_{n-2}$$
Note that since a value of $\beta$ is assumed, this is a calculable number. For example, if $n = 32$, $\hat\beta = 8$, and $\hat\sigma_{\hat\beta} = 2$, then

$$\frac{\hat\beta - 4}{\hat\sigma_{\hat\beta}} = \frac{8 - 4}{2} = 2$$

If one chooses a two-tailed test (.025 in each tail), the critical value of t is 2.04. In this case, 2.0 < 2.04 and one fails to reject the null hypothesis that $\beta = 4$.

[Figure: the density function of $\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}}$, which has a t distribution.]

Most basic OLS regression packages print out the t values corresponding to the null hypothesis $\beta = 0$. Be aware that these t statistics don't mean much unless you are willing to assume that $\varepsilon$ is normally distributed. (Footnote: Note that these t values make no sense if one does not assume $\varepsilon \sim N(0, \sigma^2)$. That is, if one does not adopt this assumption, the random variable $\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}}$ does not have a t distribution.) Said a different way, if you are unwilling to assume $\varepsilon \sim N(0, \sigma^2)$, you had better not be paying any attention to the t values your OLS package printed out.
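The confidence interval and the test of $\beta = 4$ can be reproduced mechanically. The sketch below is my own, with illustrative values for $\hat\beta$ and $\hat\sigma_{\hat\beta}$ that are not from the notes; it uses scipy's t distribution to get the 2.04 critical value for 30 degrees of freedom.

```python
import numpy as np
from scipy.stats import t

n = 32
df = n - 2
t_crit = t.ppf(0.975, df)                      # about 2.04 for 30 degrees of freedom

# made-up estimates, purely for illustration
beta_hat, se_beta_hat = 8.0, 2.0

ci = (beta_hat - t_crit * se_beta_hat, beta_hat + t_crit * se_beta_hat)
print(t_crit, ci)                              # 95% confidence interval based on sigma-hat

# test H0: beta = 4 against a two-sided alternative
t_stat = (beta_hat - 4.0) / se_beta_hat
p_value = 2 * (1 - t.cdf(abs(t_stat), df))
print(t_stat, p_value, abs(t_stat) > t_crit)   # |t| = 2.0 < 2.04, so fail to reject H0
```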
Now derive the 95% confidence interval for $\sigma^2$, assuming $n = 32$. We are still assuming the CLR model and no knowledge of $\sigma^2$. Earlier we showed that

$$G = \frac{(n-2)\hat\sigma^2}{\sigma^2} \sim \chi^2_{n-2}$$

Using the $\chi^2$ table one can determine that $\Pr(\chi^2_{30} > 46.98) = .025$ and $\Pr(\chi^2_{30} < 16.79) = .025$. In the density function for $\chi^2_{30}$, 2.5% of the area is to the left of 16.79 and 2.5% is to the right of 46.98.

[Figure: the density function of $G$, which has a chi-squared distribution with 30 degrees of freedom.]

So

$$\Pr\left(16.79 \le \frac{30\,\hat\sigma^2}{\sigma^2} \le 46.98\right) = .95$$
$$\Rightarrow \Pr\left(\frac{16.79}{30\,\hat\sigma^2} \le \frac{1}{\sigma^2} \le \frac{46.98}{30\,\hat\sigma^2}\right) = .95$$
$$\Rightarrow \Pr\left(\frac{30\,\hat\sigma^2}{46.98} \le \sigma^2 \le \frac{30\,\hat\sigma^2}{16.79}\right) = .95$$
$$\Rightarrow \Pr\left(.638\,\hat\sigma^2 \le \sigma^2 \le 1.786\,\hat\sigma^2\right) = .95$$

So, we have derived a confidence interval on the population parameter $\sigma^2$ as a function of $\hat\sigma^2$. Note that the confidence interval, $.638\,\hat\sigma^2$ to $1.786\,\hat\sigma^2$, is a random variable; 95% of these intervals will include $\sigma^2$.

If one wanted to test the null hypothesis that $\sigma^2$ takes some specific value, e.g. $\sigma^2 = 4$, one can either see whether 4 is in the interval $.638\,\hat\sigma^2$ to $1.786\,\hat\sigma^2$, or one can directly use the fact that

$$\frac{(n-2)\hat\sigma^2}{\sigma^2} \sim \chi^2_{n-2}$$

Plugging in $\sigma^2 = 4$ and $n = 32$:

$$\frac{30\,\hat\sigma^2}{4} = 7.5\,\hat\sigma^2 \sim \chi^2_{30}$$

From above, for a two-tailed test at the .05 significance level, the critical values of $\chi^2_{30}$ are 16.79 and 46.98. So if $\hat\sigma^2 \ge 46.98/7.5 = 6.26$, one would reject the null hypothesis that $\sigma^2 = 4$. One would also reject this null hypothesis if $\hat\sigma^2 \le 16.79/7.5 = 2.24$.
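A short sketch of my own that reproduces the chi-squared quantiles, the interval multipliers .638 and 1.786, and the test of $\sigma^2 = 4$; the value of $\hat\sigma^2$ used is made up for illustration.

```python
from scipy.stats import chi2

n = 32
df = n - 2
lo, hi = chi2.ppf(0.025, df), chi2.ppf(0.975, df)
print(lo, hi)                        # about 16.79 and 46.98

# multipliers for the interval on sigma^2 as a function of sigma-hat^2
print(df / hi, df / lo)              # about 0.638 and 1.786

# test H0: sigma^2 = 4: reject if (n-2)*sigma2_hat/4 falls outside (16.79, 46.98)
sigma2_hat = 3.0                     # made-up estimate for illustration
g = df * sigma2_hat / 4.0
print(g, lo <= g <= hi)              # 22.5 lies inside the interval, so fail to reject
```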
How about a confidence interval for $y_j$, conditional on $x_j$, assuming $\varepsilon \sim N(0, \sigma^2)$ and no knowledge of $\sigma^2$? From above we know that

$$\hat y_j \sim N\!\left(\alpha + \beta x_j,\ \sigma^2\left[\frac{1}{n} + \frac{(x_j - \bar x)^2}{\sum_{i=1}^n \tilde x_i^2}\right]\right)$$

If we replace $\sigma^2$ with $\hat\sigma^2$, the standardized quantity no longer has a normal distribution. But, by the same argument as above,

$$\frac{\hat y_j - E[y_j \mid x_j]}{\hat\sigma_{\hat y_j}} \sim t_{n-2}$$

This implies, still assuming $n = 32$,

$$\Pr\left(-2.04 < \frac{\hat y_j - E[y_j \mid x_j]}{\hat\sigma_{\hat y_j}} < 2.04\right) = .95$$
$$\Rightarrow \Pr\left(\hat y_j - 2.04\,\hat\sigma_{\hat y_j} < E[y_j \mid x_j] < \hat y_j + 2.04\,\hat\sigma_{\hat y_j}\right) = .95$$

So, 95% of the intervals $\left(\hat y_j - 2.04\,\hat\sigma_{\hat y_j},\ \hat y_j + 2.04\,\hat\sigma_{\hat y_j}\right)$ will contain the true $y_j$ conditional on $x_j$, that is, $E[y_j \mid x_j]$. Note that this interval takes its minimum width when $x_j = \bar x$, and the width decreases as $x_j \to \bar x$. How do I know this?
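A sketch of the interval for $E[y_j \mid x_j]$ with made-up data (my own, not from the notes); it follows the variance formula above, and its width is smallest when $x_j = \bar x$.

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(6)
n = 32
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 2.0, n)

x_dev = x - x.mean()
beta_hat = np.sum(x_dev * (y - y.mean())) / np.sum(x_dev ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()
sigma2_hat = np.sum((y - alpha_hat - beta_hat * x) ** 2) / (n - 2)

x_j = 7.0                                               # point at which to predict
y_j_hat = alpha_hat + beta_hat * x_j
se_y_j = np.sqrt(sigma2_hat * (1 / n + (x_j - x.mean()) ** 2 / np.sum(x_dev ** 2)))

t_crit = t.ppf(0.975, n - 2)                            # about 2.04
print(y_j_hat - t_crit * se_y_j, y_j_hat + t_crit * se_y_j)
```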
1.7.1 What if I don't know the distribution of $\varepsilon$ but am willing to assume $E[\varepsilon] = 0$ and that $\varepsilon$ has a finite but unknown variance?

We are now assuming a LRM, but not a CLRM. In this case we still can do OLS estimation, and, as we saw, $\hat\alpha$, $\hat\beta$, and $\hat y_j$ are BLUE estimators. We can also calculate

$$\hat\sigma^2 = \frac{\sum_{i=1}^n (y_i - \hat\alpha - \hat\beta x_i)^2}{(n-2)} \qquad\text{and}\qquad \hat\sigma^2_{\hat\beta} = \frac{\hat\sigma^2}{\sum_{i=1}^n \tilde x_i^2}$$

To do hypothesis tests or interval estimation on $\beta$, we need to determine the distribution of $\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}}$. Note that we cannot assume that $\frac{\hat\beta - \beta}{\sigma_{\hat\beta}} \sim N(0, 1)$. If $\varepsilon$ were normal, one could determine (as above) that $\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}} \sim t_{n-2}$, but now we can't determine the distribution. To do so we need to know the distribution of $\varepsilon$, which we do not.
1.7.2 What if we know the distribution of $\varepsilon$ and it is not normal?

Now we are assuming a LRM and knowledge of $f(\varepsilon)$, which is not normal. So we are not assuming the CLR model. Assume, for example, $\varepsilon \sim S(0, \sigma^2)$, where $S$ denotes the Snerd distribution, and the Snerd is not the normal distribution; to start, assume you know $\sigma^2$. In this case, is $\frac{\hat\beta - \beta}{\sigma_{\hat\beta}} \sim S(0, 1)$? That is, does it have a standardized Snerd distribution? The answer is sometimes but not always. (Footnote: For example, if $\varepsilon$ had a t distribution, $\hat\beta$ would not have a t distribution. But we know that if $\varepsilon$ is normal then $\hat\beta$ is normal.) If you could show that $\frac{\hat\beta - \beta}{\sigma_{\hat\beta}} \sim S(0, 1)$, one could do confidence intervals and hypothesis tests for assumed values of $\sigma^2$. If one replaces $\sigma^2$ with $\hat\sigma^2$, $\frac{\hat\beta - \beta}{\hat\sigma_{\hat\beta}}$ will definitely not have a Snerd distribution or a Student t distribution. In theory, one could figure out the distribution of this rv (along the lines we did it assuming normality) and then do hypothesis tests and confidence intervals. This could be tough.

Now again assume you know $\sigma^2$, continuing to assume $\varepsilon$ has a Snerd distribution. To simulate estimated confidence intervals for $\hat\alpha$ and $\hat\beta$, one might proceed as follows. Assume the data-generating process for your real-world population of interest is the LRM with $\varepsilon \sim S(0, \sigma^2)$, where the value of $\sigma^2$ is known, so $S(0, \sigma^2)$ is completely specified. Estimate $\alpha$ and $\beta$ for this sample. Then assume $\hat\alpha$ and $\hat\beta$ are the population parameters; that is, your pseudo data-generating process is $y_i = \hat\alpha + \hat\beta x_i + \varepsilon_i$ where $\varepsilon \sim S(0, \sigma^2)$. For the vector of $x$, $x_1, x_2, \dots, x_i, \dots, x_n$, generate $S$ different random samples of size $n$ based on the pseudo data-generating process; make $S$ a large number. For each sample $s$, estimate $\hat\alpha_s$ and $\hat\beta_s$. Plot the distribution of the $S$ $\hat\alpha_s$ and the distribution of the $S$ $\hat\beta_s$. The former is an estimated sampling distribution for $\hat\alpha$, centered on $\hat\alpha$; the latter is an estimated sampling distribution for $\hat\beta$, centered on $\hat\beta$. A 95% confidence interval for each can be estimated by lopping off the top and bottom 2.5% of each of these estimated distributions.
Note, these estimated confidence intervals are a function of the initial random sample from your population, the assumption that one has a LRM, the assumption that $\varepsilon \sim S(0, \sigma^2)$, the fact that $\sigma^2$ is known, and $n$: they are definitely a function of the Snerd assumption and $\sigma^2$. The larger $n$, the shorter the confidence interval. Note, one does not need to theoretically derive either $f(\hat\alpha)$ or $f(\hat\beta)$: each was derived by simulation.

Now continue to assume $\varepsilon$ has a Snerd distribution, but now assume the value of $\sigma^2$ is unknown. To simulate estimated confidence intervals for $\hat\alpha$, $\hat\beta$, and $\hat\sigma^2$, one might proceed as follows. Assume the data-generating process for your real-world population of interest is the LRM with $\varepsilon \sim S(0, \sigma^2)$, where the value of $\sigma^2$ is unknown. Estimate $\alpha$ and $\beta$ for this sample, and use these to estimate $\sigma^2$ with $\hat\sigma^2$. Then assume $\hat\alpha$, $\hat\beta$ and $\hat\sigma^2$ are the population parameters; that is, your pseudo data-generating process is $y_i = \hat\alpha + \hat\beta x_i + \varepsilon_i$ where $\varepsilon \sim S(0, \hat\sigma^2)$. For the vector of $x$, $x_1, x_2, \dots, x_i, \dots, x_n$, generate $S$ different random samples of size $n$ based on the pseudo data-generating process; make $S$ a large number. For each sample $s$, estimate $\hat\alpha_s$ and $\hat\beta_s$, and then use them to estimate $\hat\sigma^2_s$. Plot the distribution of the $S$ $\hat\alpha_s$, the distribution of the $S$ $\hat\beta_s$, and the distribution of the $S$ $\hat\sigma^2_s$. The first is an estimated sampling distribution for $\hat\alpha$, centered on $\hat\alpha$; the second is an estimated sampling distribution for $\hat\beta$, centered on $\hat\beta$; and the third is the estimated sampling distribution of $\hat\sigma^2$, centered on $\hat\sigma^2$. A 95% confidence interval for each can be estimated by lopping off the top and bottom 2.5% of each of these estimated distributions.

Note, these estimated confidence intervals are a function of the initial random sample from your population, the assumption that one has a LRM, the assumption that $\varepsilon \sim S(0, \sigma^2)$, and $n$: they are definitely a function of the Snerd assumption. They are not a function of the value of $\sigma^2$. The larger $n$, the shorter these confidence intervals. Note, one does not need to theoretically derive either $f(\hat\beta)$ or $f(\hat\sigma^2)$: each was derived by simulation.
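The simulation recipe above can be coded directly. Since the Snerd distribution is fictional, the sketch below (my own) stands in a zero-mean Laplace distribution for it; everything else follows the steps in the text for the unknown-$\sigma^2$ case, lopping off the top and bottom 2.5% of each simulated distribution.

```python
import numpy as np

rng = np.random.default_rng(7)

def ls_fit(x, y):
    """Closed-form least-squares estimates of alpha, beta, and sigma^2 (n - 2 divisor)."""
    x_dev = x - x.mean()
    beta = np.sum(x_dev * (y - y.mean())) / np.sum(x_dev ** 2)
    alpha = y.mean() - beta * x.mean()
    sigma2 = np.sum((y - alpha - beta * x) ** 2) / (len(y) - 2)
    return alpha, beta, sigma2

# one "real-world" sample (made up here), with Laplace standing in for the Snerd errors
n = 40
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.laplace(0.0, 1.5, n)
alpha_hat, beta_hat, sigma2_hat = ls_fit(x, y)

# pseudo data-generating process: treat the estimates as the population parameters
S = 5000
draws = np.empty((S, 3))
scale = np.sqrt(sigma2_hat / 2.0)                 # a Laplace(0, b) rv has variance 2 b^2
for s in range(S):
    y_s = alpha_hat + beta_hat * x + rng.laplace(0.0, scale, n)
    draws[s] = ls_fit(x, y_s)

# lop off the top and bottom 2.5% of each simulated sampling distribution
for name, col in zip(["alpha", "beta", "sigma2"], draws.T):
    lo, hi = np.percentile(col, [2.5, 97.5])
    print(name, lo, hi)
```

As the text notes, these intervals depend on the original sample, the LRM assumption, the assumed error distribution, and $n$; nothing here required deriving the sampling distributions analytically.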