University of British Columbia
Department of Economics
Macroeconomics (Econ 502)
Prof. Amartya Lahiri

Handout 7: Business Cycles

We now use the methods that we have introduced to study modern business cycle theory. The prevailing paradigm in business cycle theory is known as Real Business Cycle (RBC) theory. The name owes its origins to the fact that the original proponents of this view asserted the importance of real shocks to the production function itself as the primary driver of business cycle fluctuations. This stood in sharp contrast to the traditional focus on monetary and fiscal policies as big determinants of business cycle movements. This distinctive aspect of RBC models has become more blurry in recent years as economists have begun to reintroduce monetary shocks, fiscal shocks as well as preference shocks in order to understand business cycle fluctuations.

1 A Simple RBC Model

We shall present the basic RBC theory by using the simplest possible model. Thus, consider the same closed economy model that we studied earlier with an infinitely lived representative agent. As before, the agent's preferences are given by

\[ W = E \sum_{t=0}^{\infty} \beta^t u(c_t) \]

The key modification to the previous structure is that we now add a stochastic element (a shock) to technology. In particular, suppose the production technology is now given by

\[ y_t = z_t f(k_t) \]

where $z_t = \exp(\varepsilon_t)$, with $\varepsilon_t$ distributed i.i.d. with mean zero and variance $\sigma^2$. Hence, $z$ is a random shock that hits the production function every period. Since $E(\varepsilon_t) = 0$, the expected value of $z$ is one.
To make matters even simpler and more concrete, let us make three additional assumptions. First, let $u(c) = \ln c$. Second, let $f(k) = k^\alpha$. Lastly, assume that $\delta = 1$, i.e., there is full depreciation. These simplifications will allow us to analytically characterize the solution and make progress in illustrating the typical tests of this model. Under these assumptions the agent maximizes her welfare subject to

\[ c_t + k_{t+1} = z_t k_t^\alpha. \]

The timing of events in any period $t$ is as follows: agents start the period with some inherited $k_t$. At the beginning of the period the shock $z_t$ is revealed to everyone. Armed with this information, agents then make their optimal consumption and investment decisions.

The first step is to set up the value function for this problem using the same methods and steps we learnt earlier. In this problem there are two state variables: $k$ and $z$. Of course, $k$ evolves endogenously as a response to individual saving and investment choices while $z$ evolves exogenously. The Bellman equation for this problem is

\[ V(k_t, z_t) = \max \left\{ \ln c_t + \beta E_t V(k_{t+1}, z_{t+1}) \right\} \]

where the expectation is calculated based on the probability distribution of $z$. The key optimality condition for this problem is (and you should verify this by putting pen on pad)

\[ \frac{1}{c_t} = \beta E_t \frac{\alpha z_{t+1} k_{t+1}^{\alpha-1}}{c_{t+1}} \]

This is just the stochastic version of our old friend the Euler equation. It says the same thing as always: at an optimum the agent will equate the marginal utility cost of a foregone unit of consumption today on account of saving in capital with the discounted expected marginal utility gain from the resulting additional output produced by this additional capital.

So how do we proceed from here? Our goal is to characterize the solution to the model completely. One way to do so is to guess a solution form for the policy functions for $c_t$ and $k_{t+1}$. Suppose we use the guesses

\[ c_t = \theta z_t k_t^\alpha \]
\[ k_{t+1} = (1-\theta) z_t k_t^\alpha \]

Substituting these guesses into the Euler equation above gives

\[ \frac{1}{\theta z_t k_t^\alpha} = \beta E_t \frac{\alpha z_{t+1} \left( (1-\theta) z_t k_t^\alpha \right)^{\alpha-1}}{\theta z_{t+1} \left( (1-\theta) z_t k_t^\alpha \right)^{\alpha}} \]

Solving this gives

\[ 1 - \theta = \alpha\beta \]

which implies that $\theta = 1 - \alpha\beta$. Hence, the optimal policy functions (or decision rules as they are sometimes called) are

\[ c_t = (1 - \alpha\beta) z_t k_t^\alpha \]
\[ k_{t+1} = \alpha\beta z_t k_t^\alpha. \]

Our next point of interest is to evaluate the potential of this model in matching the business cycle properties that are observed in the data. The standard moments that people studying business cycles are interested in are the volatility of the variables of interest, their persistence, and their cross-correlations with other variables.

Let us examine the persistence issue first. This model has one endogenous state variable. So, the fundamental difference equation that governs all the dynamics of this model is represented by

\[ \ln k_{t+1} = \ln(\alpha\beta) + \alpha \ln k_t + \varepsilon_t \qquad (1) \]

This is a first-order stochastic difference equation with autocorrelation coefficient $\alpha$. Hence, the persistence of any shock $\varepsilon$ is directly related to $\alpha$. As we have pointed out before, under our assumed production technology $\alpha$ is the share of income accruing to capital. The standard estimates of this share range between 0.3 and 0.4. Hence, $\alpha$ is not very large. The direct implication of this is that business cycles will not be very persistent in this model as shocks will tend to die out quite quickly.
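The closed-form decision rules make the model trivial to simulate. The following sketch iterates the difference equation (1) and checks that the first-order autocorrelation of log capital is close to the capital share, as the persistence argument above claims. The parameter values (alpha = 0.36, beta = 0.96, sigma = 0.02) are illustrative assumptions, not numbers from the handout.

```python
import numpy as np

# Illustrative parameter choices (assumptions, not calibrated in the handout)
alpha, beta, sigma = 0.36, 0.96, 0.02

rng = np.random.default_rng(0)
T = 200_000
eps = rng.normal(0.0, sigma, T)               # i.i.d. technology shocks
lnk = np.empty(T)
lnk[0] = np.log(alpha * beta) / (1 - alpha)   # deterministic steady state of (1)
for t in range(T - 1):
    # ln k_{t+1} = ln(alpha*beta) + alpha * ln k_t + eps_t   -- equation (1)
    lnk[t + 1] = np.log(alpha * beta) + alpha * lnk[t] + eps[t]

# The sample first-order autocorrelation of ln k should be close to alpha
x = lnk - lnk.mean()
rho_hat = (x[1:] @ x[:-1]) / (x @ x)
print(round(float(rho_hat), 2))
```

With the capital share near 0.36, three periods are enough to wipe out roughly 95 percent of a shock's effect, which is the sense in which the model generates little persistence.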
What about the other moments of interest? In order to calculate the variances of the endogenous variables we need to represent them as functions of past and present shocks and then compute the theoretical moments from them. Thus, ignoring the constant term (which leaves variances unchanged), one can solve backwards the equilibrium difference equation of this model as follows. Let $L$ denote the lag operator so that $L^i x_{t+i} = x_t$ for all $i \geq 0$. Using this we can write the difference equation for $\ln k$ as

\[ \ln k_{t+1} = \alpha L \ln k_{t+1} + \varepsilon_t \]

Hence,

\[ \ln k_{t+1} = \frac{\varepsilon_t}{1 - \alpha L} = \sum_{i=0}^{\infty} \alpha^i \varepsilon_{t-i} \]

It is straightforward then to see that

\[ \text{var}(\ln k_{t+1}) = \frac{\sigma^2}{1 - \alpha^2} \]

Since consumption and output both depend on capital, their variances follow directly from the volatility of $k$ and $z$. Note that

\[ \ln c_t = \ln(1 - \alpha\beta) + \alpha \ln k_t + \varepsilon_t \]

Hence,

\[ \text{var}(\ln c_t) = \alpha^2 \, \text{var}(\ln k_t) + \sigma^2 = \frac{\alpha^2 \sigma^2}{1 - \alpha^2} + \sigma^2 = \frac{\sigma^2}{1 - \alpha^2} \]

With respect to output note that

\[ \ln y_t = \alpha \ln k_t + \varepsilon_t \]

Hence, the volatility of the log of output is identical to the variance of $\ln k_t$ and $\ln c_t$: they all equal $\sigma^2 / (1 - \alpha^2)$. At this point you might be worried that the model predicts that the variances of investment, consumption and output are identical whereas in the data they are not. In particular,
investment is more volatile while consumption is less volatile than output in the data. This however is not too damning an indictment of this model. All we need is to move away from 100 percent depreciation of capital to more realistic numbers like 5 percent and the model shall begin to reproduce this feature of the data. Intuitively, the higher investment induced by a positive shock to output now has a long-lived effect on capital as the additional capital doesn't disappear in a period. This raises output and investment in future periods as well. Hence, the overall volatility of investment rises. For the same given volatility of output this implies that the volatility of consumption must decline, relative to the benchmark case we studied above.

A method that is often used to illustrate the implications of a model is to plot the impulse response functions for the variables of interest. An impulse response function for $k$, for example, would plot $k_t, k_{t+1}, k_{t+2}, \ldots$ on the vertical axis and time $t, t+1, t+2, \ldots$ on the horizontal axis. The way to do so would be to use equation (1) and hit it with, say, a one standard deviation shock to $\varepsilon$ at some initial date $t$. We would then use the difference equation to recursively derive successive solutions for $k$ over time assuming no further shocks to $\varepsilon$. This method is often extremely useful to illustrate the persistence and response pattern in general of variables to shocks at business cycle frequencies.

Clearly, the business cycle properties of the model depend crucially on the properties of the shocks hitting the production technology as well as key parameters such as the capital share. So, how do we go about determining the properties of, say, the technology shock $z$? One way of representing the shock process is to write it as a first-order autoregressive process:

\[ \varepsilon_t = \rho \varepsilon_{t-1} + u_t, \qquad u_t \sim \text{i.i.d.}(0, \sigma_u^2) \qquad (2) \]

where the persistence coefficient $\rho$ is constrained to be less than one in order for the series to be convergent.
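The impulse response exercise just described takes only a few lines to carry out for equation (1). The sketch below starts at the steady state, applies a one standard deviation shock at date 0, and lets the difference equation run with no further shocks. The parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the handout)
alpha, beta, sigma = 0.36, 0.96, 0.02

H = 10                                         # horizon of the impulse response
lnk_ss = np.log(alpha * beta) / (1 - alpha)    # steady state of equation (1)

lnk = np.full(H + 1, lnk_ss)
# One standard deviation shock at date 0, no shocks afterwards
lnk[1] = np.log(alpha * beta) + alpha * lnk[0] + sigma
for t in range(1, H):
    lnk[t + 1] = np.log(alpha * beta) + alpha * lnk[t]

irf = lnk - lnk_ss        # deviation of ln k from its steady state
# The response decays geometrically at rate alpha: irf[h] = sigma * alpha**(h-1)
print(np.round(irf[1:5], 5))
```

The response dies out at rate alpha per period, so with a capital share near 0.35 the impulse response is essentially back at zero within three or four periods, illustrating the weak persistence discussed above.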
Clearly,

\[ \varepsilon_t = \frac{u_t}{1 - \rho L} = \sum_{i=0}^{\infty} \rho^i u_{t-i} \]
Hence,

\[ \text{var}(\varepsilon_t) = \frac{\sigma_u^2}{1 - \rho^2} \]

We could estimate the properties of the shock process directly by estimating the Solow residuals from the production function (which you can do if you have data on $y$, $k$ and $l$). One can then get estimates for both $\rho$ and $\sigma_u^2$ by regressing current values of the Solow residual on its first lag. OLS regressions like that yield pretty high estimates for the persistence parameter $\rho$. You should check to see the effect of a productivity process like

\[ \varepsilon_t = 0.9 \varepsilon_{t-1} + u_t \]

where $u$ is white noise, on the persistence and volatility properties of the key variables of our model.

Instead of estimating the shock process from the Solow residuals you could also choose to identify it from the output process observed in the data. To see how to do this, suppose the technology shock process is given by equation (2). Hence, output is given by

\[ \ln y_t = \alpha \ln k_t + \varepsilon_t = \alpha \ln k_t + \rho \varepsilon_{t-1} + u_t = \alpha \ln k_t + \rho \left[ \ln y_{t-1} - \alpha \ln k_{t-1} \right] + u_t \]

Since $\ln k_{t+1} = \ln(\alpha\beta) + \alpha \ln k_t + \varepsilon_t$ and $\ln y_t = \alpha \ln k_t + \varepsilon_t$, we must have $\ln(\alpha\beta) = \ln k_{t+1} - \ln y_t$. Hence, $\ln k_t = \ln y_{t-1} + \ln(\alpha\beta)$ and $\ln k_{t-1} = \ln y_{t-2} + \ln(\alpha\beta)$. Substituting these in the above gives

\[ \ln y_t = \alpha (1 - \rho) \ln(\alpha\beta) + (\alpha + \rho) \ln y_{t-1} - \alpha\rho \ln y_{t-2} + u_t \]

Note that if the shock process has no persistence then $\rho = 0$ and the reduced form representation for the log of output reduces to a simple first-order autoregressive process with persistence parameter $\alpha$. Clearly, one can estimate the income process described above and the coefficients on the first and second lags of log output can be used to identify $\alpha$ and $\rho$. If one proceeds with a prior estimate for $\alpha$ (say from the capital income shares) then the coefficients would identify $\rho$.
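This identification argument can be checked by simulation. The sketch below (all parameter values assumed for illustration) generates log output from the model with rho = 0.9, runs the OLS regression of log output on its first two lags, and backs out alpha and rho from the lag coefficients, which are the roots of $x^2 - (\alpha+\rho)x + \alpha\rho = 0$.

```python
import numpy as np

# Assumed "true" parameters for the simulation (illustrative only)
alpha, beta, rho, sigma_u = 0.36, 0.96, 0.9, 0.01

rng = np.random.default_rng(1)
T = 100_000
u = rng.normal(0.0, sigma_u, T)

eps = np.zeros(T)
lnk = np.zeros(T)
lny = np.zeros(T)
lnk[0] = np.log(alpha * beta) / (1 - alpha)      # deterministic steady state
for t in range(T):
    eps[t] = (rho * eps[t - 1] if t > 0 else 0.0) + u[t]   # equation (2)
    lny[t] = alpha * lnk[t] + eps[t]                       # ln y_t
    if t < T - 1:
        lnk[t + 1] = np.log(alpha * beta) + lny[t]         # ln k_{t+1}

# OLS of ln y_t on a constant and its first two lags
Y = lny[2:]
X = np.column_stack([np.ones(T - 2), lny[1:-1], lny[:-2]])
b = np.linalg.lstsq(X, Y, rcond=None)[0]
b1, b2 = b[1], b[2]     # estimates of (alpha + rho) and -(alpha * rho)

# Recover alpha and rho as the roots of x^2 - b1*x - b2 = 0
alpha_hat, rho_hat = sorted(np.real(np.roots([1.0, -b1, -b2])))
print(round(float(alpha_hat), 2), round(float(rho_hat), 2))
```

Note that the two lag coefficients alone pin down only the unordered pair (alpha, rho); a prior estimate of the capital share is what lets one attach the labels, exactly as the text says.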
So, how good is the RBC model in explaining business cycle fluctuations? It predicts that investment is more volatile than output (as long as depreciation is less than 100 percent), which squares with the data. However, the model doesn't generate much persistence. The entire persistence properties of the model are controlled by $\alpha$ (and $\rho$) since the model doesn't have a strong endogenous mechanism for propagating shocks. This is typically viewed as a negative. Lastly, the model predicts that real wages should be procyclical. This can be seen from the fact that real wages are just $(1-\alpha)y$. Hence, real wages co-move positively with output. In the data real wages are acyclical (or very weakly procyclical, at best). This is another negative of the model.

How have economists gone about reformulating the model in order to address these problems? First, people have introduced elastic labor supply. This feature implies labor supply can respond to changes in incentives. Second, people have added fiscal shocks like government spending shocks. An increase in government spending typically raises the interest rate while also raising expected future labor taxes. The latter effect can arise due to the necessity for the government to balance the budget. The higher future taxes can raise current labor supply. Since an increase in labor supply will, ceteris paribus, reduce wages, a combination of standard productivity shocks and fiscal shocks could make wages relatively acyclical.

2 Testing the model with GMM

Having described the basic RBC model we now turn to the equally important issue of testing the model. Of course, one way to evaluate the strengths/weaknesses of the model is to compare the moments from the model with the corresponding moments in the data. While informative, this has the drawback that it is not always clear which moments one should compare and which to leave out. Moreover, there is no well defined statistical sense of how good the fit is either.
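As an example of the moment-comparison approach, one can verify the analytical variance formula of Section 1 against a simulated sample moment. The parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumed, not from the handout)
alpha, beta, sigma = 0.36, 0.96, 0.02

rng = np.random.default_rng(3)
T = 500_000
eps = rng.normal(0.0, sigma, T)

lnk = np.empty(T)
lnk[0] = np.log(alpha * beta) / (1 - alpha)
for t in range(T - 1):
    lnk[t + 1] = np.log(alpha * beta) + alpha * lnk[t] + eps[t]
lny = alpha * lnk + eps                  # ln y_t = alpha*ln k_t + eps_t
lnc = np.log(1 - alpha * beta) + lny     # ln c_t: same variance, shifted by a constant

theory = sigma**2 / (1 - alpha**2)       # analytical variance of ln y (and ln c, ln k)
print(round(float(np.var(lny) / theory), 2))   # ratio should be close to 1
```

The sample variance of log consumption matches that of log output exactly here, since the two series differ only by a constant, which is precisely the counterfactual equality of volatilities criticized in the text.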
An alternative way to evaluate a model is called the Generalized Method of Moments (GMM). This method uses the optimality conditions to estimate the
parameters of the model and then tests the fit of the model to the data. Recall that with log utility preferences for our representative agent the Euler equation is

\[ \frac{1}{c_t} = \beta E_t \frac{1 + r_{t+1}}{c_{t+1}} \]

Hence, any forecast of the right hand side using a set of regressors should equal the left hand side. To make this formal, define an error term by the relation

\[ \frac{1}{c_t} = \beta \frac{1 + r_{t+1}}{c_{t+1}} + \varepsilon_t \]

where $\varepsilon_t$ is an error term. If expectations are rational then we should have two additional restrictions on the error term:

\[ E \varepsilon_t = 0, \qquad E(\varepsilon_t x_t) = 0 \]

where $x_t$ is any variable known at time $t$, i.e., the error should be zero on average and it should be uncorrelated with known variables at date $t$. These $x$'s are known as instruments and are the variables that are used for forecasting the right hand side. The GMM method uses the sample counterparts of these two restrictions. In particular, with a finite length of data we would have

\[ \sum_{t=0}^{T} \varepsilon_t = 0, \qquad \sum_{t=0}^{T} \varepsilon_t x_t = 0. \]

The method works by first picking, say, $\beta$ until the first condition holds exactly. The second condition can now be used as a test since the system is overidentified. The test involves checking whether the same value of $\beta$ can satisfy the second condition. This is often called a test of overidentifying restrictions, since there are more moment conditions than parameters. Since there are more conditions than parameters, the moment conditions cannot all be satisfied exactly. Instead we examine the closeness of the
moments to zero, with the closeness being made precise through the use of standard errors. Since both $c$ and $r$ are endogenous variables, note that the regression to forecast the right hand side will be an instrumental variables regression with the instruments being the vector $x$.

GMM allows one to test the model without necessarily solving the entire model or having to derive the entire time paths of the endogenous variables of the model. This permits the researcher to diagnose problems with a model specification relatively quickly and thereby facilitates quick model respecification.
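A minimal sketch of the just-identified step, using data simulated from the Section 1 model (all parameter values assumed for illustration). Because the first moment condition is linear in beta, choosing beta so that the sample mean of the errors is exactly zero has a closed form; the second condition, here with instrument x_t = ln y_t, is then available as an overidentifying check.

```python
import numpy as np

# Simulate the Section 1 model under assumed parameters (illustrative values)
alpha, beta_true, sigma = 0.36, 0.96, 0.02
rng = np.random.default_rng(2)
T = 50_000

z = np.exp(rng.normal(0.0, sigma, T))
k = np.empty(T)
y = np.empty(T)
k[0] = (alpha * beta_true) ** (1 / (1 - alpha))   # deterministic steady state
for t in range(T - 1):
    y[t] = z[t] * k[t] ** alpha
    k[t + 1] = alpha * beta_true * y[t]           # decision rule for capital
y[-1] = z[-1] * k[-1] ** alpha
c = (1 - alpha * beta_true) * y                   # decision rule for consumption
gross_r = alpha * z[1:] * k[1:] ** (alpha - 1)    # 1 + r_{t+1} under full depreciation

# Just-identified step: choose beta so that the sample mean of
# eps_t = 1/c_t - beta*(1 + r_{t+1})/c_{t+1} is exactly zero
m = gross_r / c[1:]
beta_hat = np.mean(1 / c[:-1]) / np.mean(m)

# Overidentifying check: eps_t should also be uncorrelated with date-t variables
eps = 1 / c[:-1] - beta_hat * m
x = np.log(y[:-1])                                # instrument known at date t
print(round(float(beta_hat), 3), float(np.mean(eps * x)))
```

One caveat: in this special log-utility, full-depreciation model the Euler equation holds exactly ex post, so the errors are numerically zero and both conditions are satisfied at the true beta. With actual data the errors are genuine forecast errors, and the closeness of the second sample moment to zero is what provides the test.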