# Predicting the Present with Bayesian Structural Time Series


Figure 2: The interface to Google Correlate, showing the top 10 correlates to the data in Figure 1, and a plot of the initial claims series against the query for "Michigan Unemployment."

…are available for any time series one might wish to nowcast. The 600 verticals from Google Trends will be the same regardless of the target series being modeled. The 100 individual queries from Google Correlate will be specific to the time series being modeled.

## 3 The Model

Our nowcasting model has two components. A time series component captures the general trend and seasonal patterns in the data. A regression component captures the impact of the Google search query data. Because both components are additive, we can easily estimate the joint model using Bayesian methods. Section 3.1 explains the time series component of our model. Section 3.2 discusses the regression component.

### 3.1 Structural time series

Let $y_t$ denote observation $t$ in a real-valued time series. A structural time series model can be described by a pair of equations relating $y_t$ to a vector of latent state variables $\alpha_t$:

$$y_t = Z_t^T \alpha_t + \epsilon_t \qquad \epsilon_t \sim N(0, H_t) \tag{1}$$

$$\alpha_{t+1} = T_t \alpha_t + R_t \eta_t \qquad \eta_t \sim N(0, Q_t). \tag{2}$$

Equation (1) is called the observation equation, because it links the observed data $y_t$ with the unobserved latent state $\alpha_t$. Equation (2) is called the transition equation because it defines how the latent state evolves over time. The model matrices $Z_t$, $T_t$, and $R_t$ typically contain a mix of known values (often 0 and 1) and unknown parameters. The transition matrix $T_t$ is square, but $R_t$ can be rectangular if a portion of the state transition is deterministic. Having $R_t$ in equation (2) allows the modeler to work with a full-rank variance matrix $Q_t$, because any linear dependencies in the state vector can be moved from $Q_t$ to $R_t$. In our application $H_t$ is a positive scalar.

A model that can be described by equations (1) and (2) is said to be in state space form. A very large class of models can be expressed in state space form, including all ARIMA and VARMA models. State space models are attractive in part because they are modular. Independent state components can be combined by concatenating their observation vectors $Z_t$ and arranging the other model matrices as elements in a block-diagonal matrix. This gives the modeler considerable flexibility to choose components for modeling trend, seasonality, regression effects, and potentially other state components that may be necessary.

For example, one useful model can be obtained by adding a regression component to the popular basic structural model. This model can be written

$$
\begin{aligned}
y_t &= \mu_t + \tau_t + \beta^T x_t + \epsilon_t \\
\mu_t &= \mu_{t-1} + \delta_{t-1} + u_t \\
\delta_t &= \delta_{t-1} + v_t \\
\tau_t &= -\sum_{s=1}^{S-1} \tau_{t-s} + w_t
\end{aligned} \tag{3}
$$

where $\eta_t = (u_t, v_t, w_t)$ contains independent components of Gaussian random noise. Though the model matrices in a structural time series model are permitted to depend on $t$, in this case $Q_t$ is a constant diagonal matrix with diagonal elements $\sigma_u^2$, $\sigma_v^2$, and $\sigma_w^2$, and $H_t$ is a constant $\sigma_\epsilon^2$. This model contains trend, seasonal, and regression components.
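Equation (3) is straightforward to simulate directly. The sketch below (in Python with NumPy; the paper's own software is C++ with an R interface) generates data from the local-linear-trend plus seasonal plus regression model, with illustrative variances and a $\beta$ treated as known:

```python
import numpy as np

rng = np.random.default_rng(0)

n, S = 200, 4                 # series length and number of seasons (illustrative)
sigma_eps, sigma_u, sigma_v, sigma_w = 1.0, 0.1, 0.01, 0.5
beta = np.array([2.0, -1.0])  # regression coefficients, treated as known here
X = rng.normal(size=(n, 2))   # stand-in for the search-query predictors x_t

mu, delta = 0.0, 0.05         # initial level and slope of the trend
tau = np.zeros(S - 1)         # holds (tau_{t-1}, ..., tau_{t-S+1})
y = np.empty(n)
for t in range(n):
    # seasonal dummies constrained to zero expectation over a cycle of S seasons
    tau_t = -tau.sum() + rng.normal(0.0, sigma_w)
    y[t] = mu + tau_t + X[t] @ beta + rng.normal(0.0, sigma_eps)
    # level evolves with the slope; the slope is a random walk
    mu, delta = mu + delta + rng.normal(0.0, sigma_u), delta + rng.normal(0.0, sigma_v)
    tau = np.concatenate(([tau_t], tau[:-1]))  # shift the seasonal state
```

The variance values and predictors here are made up for illustration; only the recursion structure follows equation (3).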
The current level of the trend is $\mu_t$; the current slope of the trend is $\delta_t$. The seasonal component $\tau_t$ can be thought of as a set of $S$ dummy variables with dynamic coefficients constrained to have zero expectation over a full cycle of $S$ seasons. We will provide more details on the regression component in Section 3.2, but for the remainder of this section one may consider $\beta$ to be known. The parameters in equation (3) are the variances $\sigma_\epsilon^2$, $\sigma_u^2$, $\sigma_v^2$, $\sigma_w^2$ and the regression coefficients $\beta$. In our context, the vector $x_t$ is a contemporaneous set of search queries or trends verticals, including any desired lags or other transformations. Of course $x_t$ can be extended to include other factors as well.

The primary tools for working with state space models are the Kalman filter (Kalman, 1960; Harvey, 1989), the Kalman smoother, and Bayesian data augmentation. Denote $y_{1:t} = y_1, \dots, y_t$.
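As a minimal illustration of Kalman filtering (not the full smoother and data-augmentation machinery the model requires), the sketch below filters the simplest structural model, the local level model $y_t = \mu_t + \epsilon_t$, $\mu_{t+1} = \mu_t + u_t$. The function name and arguments are ours:

```python
import numpy as np

def kalman_filter_local_level(y, sigma_eps2, sigma_u2, a0=0.0, p0=1e6):
    """Kalman filter for the local level model: returns E(mu_t | y_1:t) for each t.

    A sketch with a diffuse-ish initial variance p0; sigma_eps2 and sigma_u2
    are the observation and level-innovation variances."""
    a, p = a0, p0            # predictive mean and variance of mu_t given y_1:t-1
    filtered = []
    for yt in y:
        f = p + sigma_eps2            # one-step prediction variance of y_t
        k = p / f                     # Kalman gain
        a = a + k * (yt - a)          # posterior mean of mu_t given y_1:t
        p = p * (1.0 - k) + sigma_u2  # update, then predict mu_{t+1}
        filtered.append(a)
        # transition mean is mu_{t+1} = mu_t, so 'a' carries over unchanged
    return np.array(filtered)

filtered = kalman_filter_local_level([5.0] * 50, sigma_eps2=1.0, sigma_u2=0.1)
```

On a constant series the filtered level converges to that constant, as expected.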

…coefficients that are constant through time.

### Prior specification and elicitation

Because there are so many potential Google search queries, and economic time series are typically short, the set of predictors can be much larger than the set of observations. However, we expect a high degree of sparsity, in the sense that the coefficients for the vast majority of predictors will be zero. The natural way to represent sparsity in the Bayesian paradigm is through a spike-and-slab prior on the regression coefficients. Let $\gamma_k = 1$ if $\beta_k \neq 0$, and $\gamma_k = 0$ if $\beta_k = 0$. Let $\beta_\gamma$ denote the subset of elements of $\beta$ where $\beta_k \neq 0$. A spike-and-slab prior may be written

$$p(\beta, \gamma, \sigma_\epsilon^2) = p(\beta_\gamma \mid \gamma, \sigma_\epsilon^2)\, p(\sigma_\epsilon^2 \mid \gamma)\, p(\gamma). \tag{4}$$

The marginal distribution $p(\gamma)$ is the spike, so named because it places positive probability mass at zero. In principle, $p(\gamma)$ can be specified to enforce best practices like the hierarchical principle (lower-order terms must be present if higher-order interactions are included). In practice it is convenient to simply use an independent Bernoulli prior

$$\gamma \sim \prod_{k=1}^K \pi_k^{\gamma_k} (1 - \pi_k)^{1 - \gamma_k}. \tag{5}$$

Equation (5) is often further simplified by assuming all the $\pi_k$ are the same value $\pi$. This is a practical consideration, because setting a different value for each $\pi_k$ can be a burden, but it can be justified on the basis of prior exchangeability. A natural way to elicit $\pi$ is to ask the analyst for an expected model size, so that if one expects $p$ nonzero predictors then $\pi = p/K$, where $K$ is the dimension of $x_t$. In certain instances it can also be useful to set $\pi_k = 0$ or $1$ for specific values of $k$, to force certain variables to be excluded or included. Another strategy that one could pursue (but we have not) is to subjectively segment predictors into groups based on how likely they would be to enter the model. One could then assign all the elements of each group the same subjectively determined prior inclusion probability.
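The elicitation rule above is simple to code. A sketch (variable names are ours) that turns an expected model size into a uniform $\pi$, then forces one variable in and one out, and draws $\gamma$ from the independent Bernoulli prior of equation (5):

```python
import numpy as np

rng = np.random.default_rng(1)

K = 100                              # dimension of x_t (number of candidate predictors)
expected_size = 5                    # analyst's expected number of nonzero coefficients
pi = np.full(K, expected_size / K)   # pi = p / K for every predictor

pi[0] = 1.0                          # force variable 0 into the model
pi[1] = 0.0                          # force variable 1 out of the model

gamma = rng.random(K) < pi           # one draw from the Bernoulli prior (5)
```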
For a symmetric matrix $\Omega^{-1}$, let $\Omega_\gamma^{-1}$ denote the rows and columns of $\Omega^{-1}$ corresponding to $\gamma_k = 1$. Then the conditional priors $p(1/\sigma_\epsilon^2 \mid \gamma)$ and $p(\beta_\gamma \mid \sigma_\epsilon, \gamma)$ can be expressed as the conditionally conjugate pair

$$\beta_\gamma \mid \sigma_\epsilon^2, \gamma \sim N\!\left(b_\gamma,\; \sigma_\epsilon^2 \left(\Omega_\gamma^{-1}\right)^{-1}\right) \qquad \frac{1}{\sigma_\epsilon^2} \,\Big|\, \gamma \sim Ga\!\left(\frac{\nu}{2}, \frac{ss}{2}\right), \tag{6}$$

where $Ga(r, s)$ denotes the gamma distribution with mean $r/s$ and variance $r/s^2$. Equation (6) is the slab, because one can choose the prior parameters to make it only very weakly informative (close to flat), conditional on $\gamma$. As with equation (5), there are reasonable default values that can be used to simplify equation (6). It is very common to assume the vector of prior means $b$ is

zero. However, it is easy to specify an informative prior if one believes that certain predictors will be particularly helpful. The values $ss$ and $\nu$ (interpretable as a prior sum of squares and a prior sample size) can be set by asking the user for an expected $R^2$ from the regression, and a number of observations worth of weight ($\nu$) to be given to their guess. Then $ss/\nu = (1 - R^2)\, s_y^2$, where $s_y^2$ is the marginal variance of the response. Scaling by $s_y^2$ is a minor violation of the Bayesian paradigm, because it means our prior is data-determined. However, it is a useful device, and we have identified no practical negative consequences from using it.

The highest dimensional parameter in equation (6) is the full-model prior information matrix $\Omega^{-1}$. Let $X$ denote the design matrix, obtained in the standard way by stacking the predictors so that $x_t$ is row $t$. The likelihood for an ordinary regression model has information matrix $X^T X / \sigma_\epsilon^2$, so taking $\Omega^{-1} = \kappa X^T X / n$ would place $\kappa$ observations worth of weight on the prior mean $b$. This is known as Zellner's g-prior (Zellner, 1986; Chipman et al., 2001; Liang et al., 2008). Some care must be exercised in practice to guard against perfect collinearity in the columns of $X$. Full rank can be guaranteed by setting $\Omega^{-1} = \kappa\left(w X^T X + (1 - w)\,\mathrm{diag}(X^T X)\right)/n$, where $\mathrm{diag}(X^T X)$ is the diagonal matrix with diagonal elements matching those of $X^T X$. By default we set $w = 1/2$ and $\kappa = 1$.

To recap, the spike-and-slab prior developed here offers ample opportunity to express prior opinions through the prior parameters $\pi_k$, $b$, $\Omega^{-1}$, $ss$, and $\nu$. For the analyst who prefers simplicity at the cost of some reasonable assumptions, useful prior information can be reduced to an expected model size, an expected $R^2$, and a sample size $\nu$ determining the weight given to the guess at $R^2$.
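The full-rank construction can be sketched as follows (function name ours). Mixing $X^T X$ with its own diagonal makes the prior information matrix positive definite even when columns of $X$ are perfectly collinear:

```python
import numpy as np

def prior_information(X, kappa=1.0, w=0.5):
    """Omega^{-1} = kappa * (w * X'X + (1 - w) * diag(X'X)) / n.

    The diagonal term is positive definite, so the average is full rank
    even if X'X is singular."""
    n = X.shape[0]
    xtx = X.T @ X
    return kappa * (w * xtx + (1.0 - w) * np.diag(np.diag(xtx))) / n

# A deliberately rank-deficient design: the third column duplicates the second.
X = np.column_stack([np.ones(50), np.arange(50.0), np.arange(50.0)])
omega_inv = prior_information(X)   # full rank despite the collinearity
```

With the defaults $w = 1/2$ and $\kappa = 1$, `omega_inv` places about one observation's worth of weight on the prior mean.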
For the analyst who wishes to avoid thinking about priors altogether, our software supplies default values $R^2 = .5$, $\nu = .01$, and $\pi_k = $ …

### The conditional posterior of $\beta$ and $\sigma_\epsilon^2$ given $\gamma$

Let $y_t^* = y_t - Z_t^{*T} \alpha_t$, where $Z_t^*$ is the observation matrix from equation (1) with $\beta^T x_t$ set to zero. Let $y^* = y^*_{1:n}$, so that $y^*$ is $y$ with the time series component subtracted out. Conditional on $\gamma$, the joint posterior distribution for $\beta$ and $\sigma_\epsilon^2$ is available from standard conjugacy formulas (e.g. Gelman et al., 2002):

$$\beta_\gamma \mid \sigma_\epsilon, \gamma, y^* \sim N\!\left(\tilde\beta_\gamma,\; \sigma_\epsilon^2 (V_\gamma^{-1})^{-1}\right) \qquad 1/\sigma_\epsilon^2 \mid \gamma, y^* \sim Ga\!\left(\frac{N}{2}, \frac{SS_\gamma}{2}\right) \tag{7}$$

where the sufficient statistics can be written

$$
\begin{aligned}
V_\gamma^{-1} &= (X^T X)_\gamma + \Omega_\gamma^{-1} & N &= \nu + n \\
\tilde\beta_\gamma &= (V_\gamma^{-1})^{-1} \left(X_\gamma^T y^* + \Omega_\gamma^{-1} b_\gamma\right) & SS_\gamma &= ss + y^{*T} y^* + b_\gamma^T \Omega_\gamma^{-1} b_\gamma - \tilde\beta_\gamma^T V_\gamma^{-1} \tilde\beta_\gamma.
\end{aligned}
$$
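Given $\gamma$, a draw of $(\beta_\gamma, \sigma_\epsilon^2)$ from equation (7) reduces to computing the sufficient statistics above. A sketch (function name ours), using the text's notation with $y^*$ as the regression residual series:

```python
import numpy as np

def draw_beta_sigma(X, ystar, gamma, b, omega_inv, ss, nu, rng):
    """One draw of (beta_gamma, sigma_eps^2) from equation (7), given gamma."""
    idx = np.flatnonzero(gamma)
    Xg = X[:, idx]
    og = omega_inv[np.ix_(idx, idx)]          # Omega^{-1}_gamma
    bg = b[idx]
    V_inv = Xg.T @ Xg + og                    # V^{-1}_gamma
    beta_tilde = np.linalg.solve(V_inv, Xg.T @ ystar + og @ bg)
    N = nu + len(ystar)
    SS = ss + ystar @ ystar + bg @ og @ bg - beta_tilde @ V_inv @ beta_tilde
    precision = rng.gamma(N / 2.0, 2.0 / SS)  # 1/sigma^2 ~ Ga(N/2, SS/2)
    sigma2 = 1.0 / precision
    beta_g = rng.multivariate_normal(beta_tilde, sigma2 * np.linalg.inv(V_inv))
    return beta_g, sigma2

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 3))
ystar = 2.0 * X[:, 0] + rng.normal(size=30)
gamma = np.array([True, False, True])
beta_g, sigma2 = draw_beta_sigma(X, ystar, gamma, np.zeros(3), np.eye(3),
                                 ss=1.0, nu=1.0, rng=rng)
```

Note that `rng.gamma` takes a shape and a *scale* parameter, so the rate $SS_\gamma/2$ becomes the scale $2/SS_\gamma$.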

The expression for $SS_\gamma$ can be written in several ways. The preceding form is computationally convenient. Other equivalent representations emphasize its interpretation as a sum of squared residuals that is obviously positive.

### The marginal posterior of $\gamma$

Because of conjugacy, one can analytically marginalize over $\beta_\gamma$ and $1/\sigma_\epsilon^2$ to obtain

$$\gamma \mid y^* \sim C(y^*)\, \frac{\bigl|\Omega_\gamma^{-1}\bigr|^{\frac{1}{2}}}{\bigl|V_\gamma^{-1}\bigr|^{\frac{1}{2}}}\, \frac{p(\gamma)}{SS_\gamma^{\frac{N}{2}-1}}, \tag{8}$$

where $C(y^*)$ is a normalizing constant that depends on $y^*$ but not on $\gamma$. The MCMC algorithms used to fit the model do not require $C(y^*)$ to be computed explicitly. Equation (8) is inexpensive to evaluate, because the only matrix that needs to be inverted is $V_\gamma^{-1}$, which is of low dimension if the models being investigated are sparse. Also notice that, unlike $L_1$-based methods for sparse modeling (e.g. the lasso), equation (8) places positive probability (as opposed to probability density) on coefficients being zero. Thus the sparsity in this model is a feature of the full posterior distribution, and not simply the value at the mode.

## 4 Markov chain Monte Carlo

### 4.1 Parameter learning

Let $\theta$ denote the set of model parameters other than $\beta$ and $\sigma_\epsilon^2$. The posterior distribution of the model described in Section 3 can be simulated by a straightforward Markov chain Monte Carlo algorithm with the following steps.

1. Simulate the latent state $\alpha$ from $p(\alpha \mid y, \theta, \beta, \sigma_\epsilon^2)$ using the simulation smoother from Durbin and Koopman (2002).
2. Simulate $\theta \sim p(\theta \mid y, \alpha, \beta, \sigma_\epsilon^2)$.
3. Simulate $\beta$ and $\sigma_\epsilon^2$ from a Markov chain with stationary distribution $p(\beta, \sigma_\epsilon^2 \mid y, \alpha, \theta)$.

Let $\phi = (\theta, \beta, \sigma_\epsilon^2, \alpha)$. Repeatedly cycling through the three steps given above yields a sequence of draws $\phi^{(1)}, \phi^{(2)}, \dots$ from a Markov chain with stationary distribution $p(\phi \mid y)$, the posterior distribution of $\phi$ given $y$. The draw of $\theta$ in step 2 depends on which state components are present in the model, but it is often trivial.
For example, the model in equation (3) has three variance parameters, all of which have conditionally independent inverse gamma full conditional distributions (assuming independent inverse gamma priors).
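Step 3 repeatedly evaluates equation (8), which is cheapest on the log scale. A sketch (function name ours) that drops $\log C(y^*)$ and works only with the low-dimensional included submatrices:

```python
import numpy as np

def log_post_gamma(X, ystar, gamma, b, omega_inv, ss, nu, pi):
    """Unnormalised log of equation (8): log p(gamma | y*) up to log C(y*).

    pi is the vector of prior inclusion probabilities, assumed to lie
    strictly between 0 and 1 here."""
    N = nu + len(ystar)
    log_prior = np.sum(np.where(gamma, np.log(pi), np.log1p(-pi)))
    idx = np.flatnonzero(gamma)
    if idx.size == 0:                       # empty model: determinants drop out
        return log_prior - (N / 2.0 - 1.0) * np.log(ss + ystar @ ystar)
    Xg, bg = X[:, idx], b[idx]
    og = omega_inv[np.ix_(idx, idx)]
    V_inv = Xg.T @ Xg + og
    beta_tilde = np.linalg.solve(V_inv, Xg.T @ ystar + og @ bg)
    SS = ss + ystar @ ystar + bg @ og @ bg - beta_tilde @ V_inv @ beta_tilde
    return (0.5 * np.linalg.slogdet(og)[1]
            - 0.5 * np.linalg.slogdet(V_inv)[1]
            + log_prior
            - (N / 2.0 - 1.0) * np.log(SS))

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 3))
ystar = X[:, 2] + rng.normal(size=40)        # variable 2 is the true predictor
pi = np.full(3, 0.5)
lp_true = log_post_gamma(X, ystar, np.array([False, False, True]),
                         np.zeros(3), np.eye(3), 1.0, 1.0, pi)
lp_null = log_post_gamma(X, ystar, np.array([False, False, False]),
                         np.zeros(3), np.eye(3), 1.0, 1.0, pi)
```

On this synthetic data the model containing the true predictor should receive a higher unnormalised log posterior than the empty model.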

The draw in step 3 can be done using the stochastic search variable selection (SSVS) algorithm from George and McCulloch (1997). SSVS is a Gibbs sampling algorithm in which each element of $\gamma$ is drawn from its full conditional distribution, proportional to equation (8). The full conditional distributions are straightforward because each $\gamma_k$ has only two possible values. After one sweep through all the variables (which we do in random order), $\beta_\gamma$ and $\sigma_\epsilon^2$ are drawn from their closed-form full conditional distributions given in equation (7). Improvements to SSVS have been proposed by several authors, notably Ghosh and Clyde (2011). However, the basic SSVS algorithm has performed adequately for our needs.

### 4.2 Forecasting

As is typical in Bayesian data analysis, forecasts from our model are based on the posterior predictive distribution. It is trivial to simulate from the posterior predictive distribution given draws of model parameters and state from their posterior distribution. Let $\tilde y$ denote the set of values to be forecast. The posterior predictive distribution of $\tilde y$ is

$$p(\tilde y \mid y) = \int p(\tilde y \mid \phi)\, p(\phi \mid y)\, d\phi. \tag{9}$$

If $\phi^{(1)}, \phi^{(2)}, \dots$ are a set of random draws from $p(\phi \mid y)$, then one samples from $p(\tilde y \mid y)$ by sampling $\tilde y^{(g)} \sim p(\tilde y \mid \phi^{(g)})$, which is done by simply iterating equations (1) and (2) forward from $\alpha_n^{(g)}$, with parameters $\theta^{(g)}$, $\beta^{(g)}$, and $\sigma_\epsilon^{2(g)}$. Because different elements of $\beta$ will be zero in different Monte Carlo draws, the draws from the posterior predictive distribution automatically account for sparsity and model uncertainty, and thus benefit from Bayesian model averaging.

This method of forecasting produces a sample of draws from the posterior predictive distribution $p(\tilde y \mid y)$. The draws can be summarized, for example by their mean (which is a Monte Carlo estimate of $E(\tilde y \mid y)$). Multivariate summaries, like sets of interesting quantiles, are also appropriate.
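For instance, with a plain local level model (no seasonal or regression terms) and hypothetical posterior draws, iterating equations (1) and (2) one step forward per draw looks like this. All parameter values below are made up for illustration; in practice the three arrays would come from the MCMC output:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical posterior draws: level mu_n at time n, level-innovation sd
# sigma_u, and observation sd sigma_eps, one value per MCMC draw g.
G = 4000
mu_n = rng.normal(10.0, 0.2, size=G)
sigma_u = np.abs(rng.normal(0.3, 0.05, size=G))
sigma_eps = np.abs(rng.normal(1.0, 0.1, size=G))

# One step of the transition equation (2), then the observation equation (1),
# per posterior draw; each draw uses its own parameter values.
alpha_next = mu_n + rng.normal(0.0, sigma_u)
y_tilde = alpha_next + rng.normal(0.0, sigma_eps)

nowcast = y_tilde.mean()   # Monte Carlo estimate of E(y_tilde | y)
```

Because each predictive draw carries its own parameter draw, the spread of `y_tilde` reflects parameter uncertainty as well as innovation noise.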
Our preference is to avoid summarizing the draws, instead reporting them using graphical methods such as histograms, kernel density estimates, or the dynamic density plots used in Section 5. One can forecast an arbitrary number of periods ahead. The examples presented below focus on one-step-ahead forecasts because they are the most relevant to the nowcasting problem.

## 5 Examples

### 5.1 Weekly initial claims for unemployment

We used Google Correlate to produce the 100 search terms with values most correlated to the initial claims data from Figure 1. We then ran the MCMC algorithm for 5000 iterations and discarded the first 1000 as burn-in. The computation took about 105 seconds on a 3.4GHz Linux workstation with ample memory. The software was coded in C++, with an R interface. We used the default

priors in our software, except we set the expected model size to 5, which corresponds to setting the prior inclusion probability to $5/100 = .05$. The time series component of our model matched equation (3), in that it included both a local linear trend component and an annual seasonal component with $S = 52$ weeks. Initial claims for unemployment benefits for the previous week are released on Thursdays, while Google Trends data is available with a two-day lag. This allows us to estimate the previous week's values 4-5 days before the data release. Initial claims data are subsequently revised, and we use only the final numbers in this analysis.

Figure 3 shows the predictors to which the MCMC algorithm assigned posterior inclusion probability greater than .1. The posterior inclusion probability is a marginal of equation (8), which cannot be computed directly because the sum over model space is intractable. However, marginal inclusion probabilities can be trivially estimated from the Monte Carlo sample by computing the proportion of Monte Carlo draws with $\beta_k \neq 0$. The three predictors with the highest posterior inclusion probabilities were economically meaningful, and the gap in inclusion probabilities between them and the other predictors is substantial. Of the 100 top predictors from Google Correlate, 14 were queries for unemployment in a specific state. This model run settled on Idaho, but the other states are all highly correlated with one another (correlations range between .67 and .92). Repeated runs with different lengths and seeds confirmed that the model does prefer the information from Idaho to that from other states, all of which are selected for inclusion much less often (the second most predictive state was California, which only appeared in about 2% of the models). The sampling algorithm did generate sparse models, with 67% of models having at most 3 predictors, and 98% having at most 5.
Figures 4 and 5 show the posterior distribution of the latent state at each time point. Figure 4 plots the combined state ($Z_t^T \alpha_t$), representing the smoothed value of the series in the absence of observation noise. Figure 5 shows the contribution of each component. Figures 4 and 5 represent the posterior distribution using a dynamic distribution plot. The pointwise posterior median is colored black, and each 1% quantile away from the median is shaded slightly lighter, until the 99th and 1st percentiles are shaded white. Figure 5 is a visual representation of how much variation in initial claims is being explained by the trend, seasonal, and regression components. The trend component shows a double dip in initial claims. Both the seasonal and the regression components exhibit substantially more variation than the trend.

To better understand the value of the Google Trends data, we fit a pure time series model equivalent to equation (3), but with no regression component. Figure 6 plots the cumulative sum of the absolute values of the one-step-ahead prediction errors for the two models. The most striking feature of the plot is the jump in cumulative absolute error that the pure time series model experiences during the financial crisis. The model that uses Google Trends data generally accumulates errors at a slightly lower rate than the pure time series model, but the pure time series model experiences large errors near the start of the recession. The model based on the …

[Figure 3, panel (a) data: unemployment.office 0.94, filing.for.unemployment 0.47, idaho.unemployment 0.14, sirius.internet.radio 0.11, sirius.internet …]

Figure 3: Panel (a) shows the posterior inclusion probabilities for all the variables with inclusion probability greater than 0.1. Bars are shaded on a continuous [0, 1] scale in proportion to the probability of a positive coefficient, so that negative coefficients are black, positive coefficients are white, and gray indicates indeterminate sign. Panel (b) shows a time series plot comparing the standardized initial claims series with the standardized search term series for the terms with inclusion probabilities of at least 0.1. The shading of the lines in panel (b) is weighted by the marginal inclusion probability.

Figure 4: Posterior state for the model based on initial claims data. The dots represent the raw observations.

[Figure 6 series: "pure time series" and "with Google Trends"]

Figure 6: The top panel shows cumulative absolute errors for equivalent time series models with and without Google Trends data. The models are based on the initial claims data in the bottom panel.

[Figure 7 legend: all, all.deseasonal, econ, econ.deseasonal, correlate, correlate.deseasonal, pure.time.series]

Figure 7: Cumulative absolute one-step-ahead forecast errors for the suite of models described in Section 5.2, based on the retail sales data in the bottom panel.

[Figure 8, panel (a) predictors: X.Real.Estate.Home.Financing, X.Society.Social.Services.Welfare...Unemployment, X.Entertainment.Movies.Movie.Rentals...Sales, X.Beauty...Personal.Care.Weight.Loss, X.Computers...Electronics.Electronics...Electrical, X.Real.Estate.Rental.Listings...Referrals, msn.horoscope, X.Shopping.Apparel.Watches...Accessories, X.Automotive.Vehicle.Brands.Mazda, X.Business.Business.Schools...Training, X.Real.Estate.Home.Insurance, X.Automotive.Hybrid...Alternative.Vehicles, X.Automotive.Off.Road...Recreational.Vehicles, X.Automotive.Motorcycles, X.Automotive.Vehicle.Brands.Nissan, X.Finance...Insurance.Insurance, wellsfargocom, X.Automotive.Vehicle.Brands.Chevrolet]

Figure 8: (a) Posterior inclusion probabilities for the most likely predictors of the retail sales data. The bars are shaded according to the probability the coefficient is positive: white bars correspond to positive coefficients, and black to negative coefficients. Predictors starting with "X." are Trends verticals. (b) Posterior distribution of model size.

[Figure 9 panels: trend, regression]

Figure 9: Components of state for the retail sales data.

[Figure 10 legends, panel (a): Retail Sales, Home Financing, Unemployment; panel (b): Retail Sales, msn.horoscope, wellsfargo]

Figure 10: Retail sales and the predictors with the highest marginal inclusion probabilities from (a) the economically relevant subset of Google Trends verticals, and (b) Google Correlate.

…nearly as well, but they do exhibit violent shocks near the same time points as the target series.

## 6 Conclusion

We have presented a system useful for nowcasting economic time series based on Google Trends and Google Correlate data. The system combines structural time series models with Bayesian spike-and-slab regression to average over a subset of the available predictors. The model averaging that automatically comes with spike-and-slab priors and Markov chain Monte Carlo helps hedge against selecting the wrong set of predictors.

The spike-and-slab prior helps protect against spurious regressors, but no such protection can be perfect. For example, one of the strongest predictors in the full data set for the retail sales application was the Science/Scientific Equipment vertical. The vertical exhibited a spike near the beginning of the financial crisis, attributable to a series of news stories about the Large Hadron Collider, which went online at that time, and its potential for creating black holes that could destroy the earth. This is a spurious predictor in that it would be unlikely to exhibit similar behavior in periods of future economic distress. Some amount of subjective judgment is needed to remove such egregious anomalies from the training data, and to restrict the candidate variables to those that have at least a plausible economic justification.

One open question that we plan to investigate further is whether and how the predictor variables should be adjusted, either prior to entering the candidate pool or as part of the MCMC. It is common to whiten predictors by removing a seasonal pattern, and possibly a trend, but

which seasonal pattern and which trend (its own, or that of the response) is a matter for further investigation.

## References

Banbura, M., Giannone, D., and Reichlin, L. (2011). Nowcasting. In M. P. Clements and D. F. Hendry, eds., Oxford Handbook of Economic Forecasting, chap. 7. Oxford University Press, Oxford.

Carter, C. K. and Kohn, R. (1994). On Gibbs sampling for state space models. Biometrika 81.

Chan, J. and Jeliazkov, I. (2009). Efficient simulation and integrated likelihood estimation in state space models. International Journal of Mathematical Modelling and Numerical Optimisation 1.

Chipman, H., George, E., McCulloch, R., Clyde, M., Foster, D., and Stine, R. (2001). The practical implementation of Bayesian model selection. Lecture Notes-Monograph Series.

Choi, H. and Varian, H. (2009). Predicting the present with Google Trends. Tech. rep., Google.

Choi, H. and Varian, H. (2012). Predicting the present with Google Trends. Economic Record 88, 2-9.

Cleveland, R. B., Cleveland, W. S., McRae, J. E., and Terpenning, I. (1990). STL: A seasonal-trend decomposition procedure based on loess. Journal of Official Statistics 6.

Cowell, R. G., Dawid, A. P., Lauritzen, S. L., and Spiegelhalter, D. J. (1999). Probabilistic Networks and Expert Systems. Springer.

de Jong, P. and Shepard, N. (1995). The simulation smoother for time series models. Biometrika 82.

Dukic, V., Polson, N. G., and Lopes, H. (2012). Tracking flu epidemics using Google Flu Trends data and a state-space SEIR model. Journal of the American Statistical Association.

Durbin, J. and Koopman, S. J. (2001). Time Series Analysis by State Space Methods. Oxford University Press.

Durbin, J. and Koopman, S. J. (2002). A simple and efficient simulation smoother for state space time series analysis. Biometrika 89.

Forni, M. and Reichlin, L. (1998). Let's get real: A factor analytical approach to disaggregated business cycle dynamics. Review of Economic Studies 65.

Forni, M., Hallin, M., Lippi, M., and Reichlin, L. (2000). The generalized dynamic factor model: Identification and estimation. Review of Economics and Statistics 82.

Frühwirth-Schnatter, S. (1995). Bayesian model discrimination and Bayes factors for linear Gaussian state space models. Journal of the Royal Statistical Society, Series B: Methodological 57.

Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2002). Bayesian Data Analysis (2nd ed.). Chapman & Hall.

George, E. I. and McCulloch, R. E. (1997). Approaches for Bayesian variable selection. Statistica Sinica 7.

Ghosh, J. and Clyde, M. A. (2011). Rao-Blackwellization for Bayesian variable selection and model averaging in linear and binary regression: A novel data augmentation approach. Journal of the American Statistical Association 106.

Google.org (2012).

Harvey, A. C. (1989). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University Press.

Hoeting, J. A., Madigan, D., Raftery, A. E., and Volinsky, C. T. (1999). Bayesian model averaging: A tutorial (with discussion). Statistical Science 14.

Kalman, R. (1960). A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82.

Liang, F., Paulo, R., Molina, G., Clyde, M. A., and Berger, J. O. (2008). Mixtures of g-priors for Bayesian variable selection. Journal of the American Statistical Association 103.

Madigan, D. and Raftery, A. E. (1994). Model selection and accounting for model uncertainty in graphical models using Occam's window. Journal of the American Statistical Association 89.

McCausland, W. J., Miller, S., and Pelletier, D. (2011). Simulation smoothing for state-space models: A computational efficiency analysis. Computational Statistics and Data Analysis 55.

Rue, H. (2001). Fast sampling of Gaussian Markov random fields. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 63.

Shaman, J. and Karspeck, A. (2012). Forecasting seasonal outbreaks of influenza.
Proceedings of the National Academy of Sciences.


### , then the form of the model is given by: which comprises a deterministic component involving the three regression coefficients (

Multiple regression Introduction Multiple regression is a logical extension of the principles of simple linear regression to situations in which there are several predictor variables. For instance if we

: Table of Contents... 1 Overview of Model... 1 Dispersion... 2 Parameterization... 3 Sigma-Restricted Model... 3 Overparameterized Model... 4 Reference Coding... 4 Model Summary (Summary Tab)... 5 Summary

### More details on the inputs, functionality, and output can be found below.

Overview: The SMEEACT (Software for More Efficient, Ethical, and Affordable Clinical Trials) web interface (http://research.mdacc.tmc.edu/smeeactweb) implements a single analysis of a two-armed trial comparing

### ANNUITY LAPSE RATE MODELING: TOBIT OR NOT TOBIT? 1. INTRODUCTION

ANNUITY LAPSE RATE MODELING: TOBIT OR NOT TOBIT? SAMUEL H. COX AND YIJIA LIN ABSTRACT. We devise an approach, using tobit models for modeling annuity lapse rates. The approach is based on data provided

### Penalized regression: Introduction

Penalized regression: Introduction Patrick Breheny August 30 Patrick Breheny BST 764: Applied Statistical Modeling 1/19 Maximum likelihood Much of 20th-century statistics dealt with maximum likelihood

### Analysis of algorithms of time series analysis for forecasting sales

SAINT-PETERSBURG STATE UNIVERSITY Mathematics & Mechanics Faculty Chair of Analytical Information Systems Garipov Emil Analysis of algorithms of time series analysis for forecasting sales Course Work Scientific

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 2. x n. a 11 a 12 a 1n b 1 a 21 a 22 a 2n b 2 a 31 a 32 a 3n b 3. a m1 a m2 a mn b m

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +

### Analysis of Bayesian Dynamic Linear Models

Analysis of Bayesian Dynamic Linear Models Emily M. Casleton December 17, 2010 1 Introduction The main purpose of this project is to explore the Bayesian analysis of Dynamic Linear Models (DLMs). The main

### Big Data, Statistics, and the Internet

Big Data, Statistics, and the Internet Steven L. Scott April, 4 Steve Scott (Google) Big Data, Statistics, and the Internet April, 4 / 39 Summary Big data live on more than one machine. Computing takes

### Multiple regression - Matrices

Multiple regression - Matrices This handout will present various matrices which are substantively interesting and/or provide useful means of summarizing the data for analytical purposes. As we will see,

### Bayesian networks - Time-series models - Apache Spark & Scala

Bayesian networks - Time-series models - Apache Spark & Scala Dr John Sandiford, CTO Bayes Server Data Science London Meetup - November 2014 1 Contents Introduction Bayesian networks Latent variables Anomaly

### Statistical Machine Learning

Statistical Machine Learning UoC Stats 37700, Winter quarter Lecture 4: classical linear and quadratic discriminants. 1 / 25 Linear separation For two classes in R d : simple idea: separate the classes

### Principle Component Analysis and Partial Least Squares: Two Dimension Reduction Techniques for Regression

Principle Component Analysis and Partial Least Squares: Two Dimension Reduction Techniques for Regression Saikat Maitra and Jun Yan Abstract: Dimension reduction is one of the major tasks for multivariate

### SYSTEMS OF REGRESSION EQUATIONS

SYSTEMS OF REGRESSION EQUATIONS 1. MULTIPLE EQUATIONS y nt = x nt n + u nt, n = 1,...,N, t = 1,...,T, x nt is 1 k, and n is k 1. This is a version of the standard regression model where the observations

### A Proven Approach to Stress Testing Consumer Loan Portfolios

A Proven Approach to Stress Testing Consumer Loan Portfolios Interthinx, Inc. 2013. All rights reserved. Interthinx is a registered trademark of Verisk Analytics. No part of this publication may be reproduced,

### Co-movement of State s Mortgage Default Rates: A Dynamic Factor Analysis Approach

Co-movement of State s Mortgage Default Rates: A Dynamic Factor Analysis Approach Yaw Owusu-Ansah 1 Latest Edition: June 10 th, 2012 Abstract Underestimated default correlations of the underlying assets

### Statistical Models in Data Mining

Statistical Models in Data Mining Sargur N. Srihari University at Buffalo The State University of New York Department of Computer Science and Engineering Department of Biostatistics 1 Srihari Flood of

### Combining information from different survey samples - a case study with data collected by world wide web and telephone

Combining information from different survey samples - a case study with data collected by world wide web and telephone Magne Aldrin Norwegian Computing Center P.O. Box 114 Blindern N-0314 Oslo Norway E-mail:

### MISSING DATA TECHNIQUES WITH SAS. IDRE Statistical Consulting Group

MISSING DATA TECHNIQUES WITH SAS IDRE Statistical Consulting Group ROAD MAP FOR TODAY To discuss: 1. Commonly used techniques for handling missing data, focusing on multiple imputation 2. Issues that could

### Component Ordering in Independent Component Analysis Based on Data Power

Component Ordering in Independent Component Analysis Based on Data Power Anne Hendrikse Raymond Veldhuis University of Twente University of Twente Fac. EEMCS, Signals and Systems Group Fac. EEMCS, Signals

### CHAPTER 2 Estimating Probabilities

CHAPTER 2 Estimating Probabilities Machine Learning Copyright c 2016. Tom M. Mitchell. All rights reserved. *DRAFT OF January 24, 2016* *PLEASE DO NOT DISTRIBUTE WITHOUT AUTHOR S PERMISSION* This is a

### Linear Threshold Units

Linear Threshold Units w x hx (... w n x n w We assume that each feature x j and each weight w j is a real number (we will relax this later) We will study three different algorithms for learning linear

### Chapter 6: The Information Function 129. CHAPTER 7 Test Calibration

Chapter 6: The Information Function 129 CHAPTER 7 Test Calibration 130 Chapter 7: Test Calibration CHAPTER 7 Test Calibration For didactic purposes, all of the preceding chapters have assumed that the

### Predict the Popularity of YouTube Videos Using Early View Data

000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

### Large Time-Varying Parameter VARs

Large Time-Varying Parameter VARs Gary Koop University of Strathclyde Dimitris Korobilis University of Glasgow Abstract In this paper we develop methods for estimation and forecasting in large timevarying

### Probabilistic Forecasting of Medium-Term Electricity Demand: A Comparison of Time Series Models

Fakultät IV Department Mathematik Probabilistic of Medium-Term Electricity Demand: A Comparison of Time Series Kevin Berk and Alfred Müller SPA 2015, Oxford July 2015 Load forecasting Probabilistic forecasting

### Gamma Distribution Fitting

Chapter 552 Gamma Distribution Fitting Introduction This module fits the gamma probability distributions to a complete or censored set of individual or grouped data values. It outputs various statistics

### Statistics in Retail Finance. Chapter 6: Behavioural models

Statistics in Retail Finance 1 Overview > So far we have focussed mainly on application scorecards. In this chapter we shall look at behavioural models. We shall cover the following topics:- Behavioural

### Estimating Industry Multiples

Estimating Industry Multiples Malcolm Baker * Harvard University Richard S. Ruback Harvard University First Draft: May 1999 Rev. June 11, 1999 Abstract We analyze industry multiples for the S&P 500 in

### Chapter 1. Vector autoregressions. 1.1 VARs and the identi cation problem

Chapter Vector autoregressions We begin by taking a look at the data of macroeconomics. A way to summarize the dynamics of macroeconomic data is to make use of vector autoregressions. VAR models have become

Statistics Graduate Courses STAT 7002--Topics in Statistics-Biological/Physical/Mathematics (cr.arr.).organized study of selected topics. Subjects and earnable credit may vary from semester to semester.

### Lecture - 32 Regression Modelling Using SPSS

Applied Multivariate Statistical Modelling Prof. J. Maiti Department of Industrial Engineering and Management Indian Institute of Technology, Kharagpur Lecture - 32 Regression Modelling Using SPSS (Refer

### Outline: Demand Forecasting

Outline: Demand Forecasting Given the limited background from the surveys and that Chapter 7 in the book is complex, we will cover less material. The role of forecasting in the chain Characteristics of

### Chapter 4: Vector Autoregressive Models

Chapter 4: Vector Autoregressive Models 1 Contents: Lehrstuhl für Department Empirische of Wirtschaftsforschung Empirical Research and und Econometrics Ökonometrie IV.1 Vector Autoregressive Models (VAR)...

### CHAPTER 3 EXAMPLES: REGRESSION AND PATH ANALYSIS

Examples: Regression And Path Analysis CHAPTER 3 EXAMPLES: REGRESSION AND PATH ANALYSIS Regression analysis with univariate or multivariate dependent variables is a standard procedure for modeling relationships

### Time Series Analysis

Time Series Analysis Forecasting with ARIMA models Andrés M. Alonso Carolina García-Martos Universidad Carlos III de Madrid Universidad Politécnica de Madrid June July, 2012 Alonso and García-Martos (UC3M-UPM)

### A Primer on Mathematical Statistics and Univariate Distributions; The Normal Distribution; The GLM with the Normal Distribution

A Primer on Mathematical Statistics and Univariate Distributions; The Normal Distribution; The GLM with the Normal Distribution PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 4: September

### MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

### Introduction to Statistical Computing in Microsoft Excel By Hector D. Flores; hflores@rice.edu, and Dr. J.A. Dobelman

Introduction to Statistical Computing in Microsoft Excel By Hector D. Flores; hflores@rice.edu, and Dr. J.A. Dobelman Statistics lab will be mainly focused on applying what you have learned in class with

### Regression Using Support Vector Machines: Basic Foundations

Regression Using Support Vector Machines: Basic Foundations Technical Report December 2004 Aly Farag and Refaat M Mohamed Computer Vision and Image Processing Laboratory Electrical and Computer Engineering

### BNG 202 Biomechanics Lab. Descriptive statistics and probability distributions I

BNG 202 Biomechanics Lab Descriptive statistics and probability distributions I Overview The overall goal of this short course in statistics is to provide an introduction to descriptive and inferential

### Introduction to Regression and Data Analysis

Statlab Workshop Introduction to Regression and Data Analysis with Dan Campbell and Sherlock Campbell October 28, 2008 I. The basics A. Types of variables Your variables may take several forms, and it

### USE OF ARIMA TIME SERIES AND REGRESSORS TO FORECAST THE SALE OF ELECTRICITY

Paper PO10 USE OF ARIMA TIME SERIES AND REGRESSORS TO FORECAST THE SALE OF ELECTRICITY Beatrice Ugiliweneza, University of Louisville, Louisville, KY ABSTRACT Objectives: To forecast the sales made by

### Data Mining Chapter 6: Models and Patterns Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University

Data Mining Chapter 6: Models and Patterns Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University Models vs. Patterns Models A model is a high level, global description of a

### Module 3: Correlation and Covariance

Using Statistical Data to Make Decisions Module 3: Correlation and Covariance Tom Ilvento Dr. Mugdim Pašiƒ University of Delaware Sarajevo Graduate School of Business O ften our interest in data analysis

### MAN-BITES-DOG BUSINESS CYCLES ONLINE APPENDIX

MAN-BITES-DOG BUSINESS CYCLES ONLINE APPENDIX KRISTOFFER P. NIMARK The next section derives the equilibrium expressions for the beauty contest model from Section 3 of the main paper. This is followed by

### Introduction to time series analysis

Introduction to time series analysis Margherita Gerolimetto November 3, 2010 1 What is a time series? A time series is a collection of observations ordered following a parameter that for us is time. Examples

### 15.062 Data Mining: Algorithms and Applications Matrix Math Review

.6 Data Mining: Algorithms and Applications Matrix Math Review The purpose of this document is to give a brief review of selected linear algebra concepts that will be useful for the course and to develop

### Bayesian Variable Selection in Normal Regression Models

Institut für Angewandte Statistik Johannes Kepler Universität Linz Bayesian Variable Selection in Normal Regression Models Masterarbeit zur Erlangung des akademischen Grades Master der Statistik im Masterstudium

### Parallelization Strategies for Multicore Data Analysis

Parallelization Strategies for Multicore Data Analysis Wei-Chen Chen 1 Russell Zaretzki 2 1 University of Tennessee, Dept of EEB 2 University of Tennessee, Dept. Statistics, Operations, and Management

### Lecture 4: Seasonal Time Series, Trend Analysis & Component Model Bus 41910, Time Series Analysis, Mr. R. Tsay

Lecture 4: Seasonal Time Series, Trend Analysis & Component Model Bus 41910, Time Series Analysis, Mr. R. Tsay Business cycle plays an important role in economics. In time series analysis, business cycle

### Forecast covariances in the linear multiregression dynamic model.

Forecast covariances in the linear multiregression dynamic model. Catriona M Queen, Ben J Wright and Casper J Albers The Open University, Milton Keynes, MK7 6AA, UK February 28, 2007 Abstract The linear

### Current Standard: Mathematical Concepts and Applications Shape, Space, and Measurement- Primary

Shape, Space, and Measurement- Primary A student shall apply concepts of shape, space, and measurement to solve problems involving two- and three-dimensional shapes by demonstrating an understanding of:

### Introduction to Matrix Algebra

Psychology 7291: Multivariate Statistics (Carey) 8/27/98 Matrix Algebra - 1 Introduction to Matrix Algebra Definitions: A matrix is a collection of numbers ordered by rows and columns. It is customary

### 3. Regression & Exponential Smoothing

3. Regression & Exponential Smoothing 3.1 Forecasting a Single Time Series Two main approaches are traditionally used to model a single time series z 1, z 2,..., z n 1. Models the observation z t as a

### A Semi-parametric Approach for Decomposition of Absorption Spectra in the Presence of Unknown Components

A Semi-parametric Approach for Decomposition of Absorption Spectra in the Presence of Unknown Components Payman Sadegh 1,2, Henrik Aalborg Nielsen 1, and Henrik Madsen 1 Abstract Decomposition of absorption

### Applications of R Software in Bayesian Data Analysis

Article International Journal of Information Science and System, 2012, 1(1): 7-23 International Journal of Information Science and System Journal homepage: www.modernscientificpress.com/journals/ijinfosci.aspx

### ECON4202/ECON6201 Advanced Econometric Theory and Methods

Business School School of Economics ECON4202/ECON6201 Advanced Econometric Theory and Methods (SIMULATION BASED ECONOMETRIC METHODS) Course Outline Semester 2, 2015 Part A: Course-Specific Information

### Predictor Coef StDev T P Constant 970667056 616256122 1.58 0.154 X 0.00293 0.06163 0.05 0.963. S = 0.5597 R-Sq = 0.0% R-Sq(adj) = 0.

Statistical analysis using Microsoft Excel Microsoft Excel spreadsheets have become somewhat of a standard for data storage, at least for smaller data sets. This, along with the program often being packaged

### Econometric Analysis of Cross Section and Panel Data Second Edition. Jeffrey M. Wooldridge. The MIT Press Cambridge, Massachusetts London, England

Econometric Analysis of Cross Section and Panel Data Second Edition Jeffrey M. Wooldridge The MIT Press Cambridge, Massachusetts London, England Preface Acknowledgments xxi xxix I INTRODUCTION AND BACKGROUND

### Multivariate Analysis of Ecological Data

Multivariate Analysis of Ecological Data MICHAEL GREENACRE Professor of Statistics at the Pompeu Fabra University in Barcelona, Spain RAUL PRIMICERIO Associate Professor of Ecology, Evolutionary Biology

### SAS Software to Fit the Generalized Linear Model

SAS Software to Fit the Generalized Linear Model Gordon Johnston, SAS Institute Inc., Cary, NC Abstract In recent years, the class of generalized linear models has gained popularity as a statistical modeling

### An EM algorithm for the estimation of a ne state-space systems with or without known inputs

An EM algorithm for the estimation of a ne state-space systems with or without known inputs Alexander W Blocker January 008 Abstract We derive an EM algorithm for the estimation of a ne Gaussian state-space

### Practical Numerical Training UKNum

Practical Numerical Training UKNum 7: Systems of linear equations C. Mordasini Max Planck Institute for Astronomy, Heidelberg Program: 1) Introduction 2) Gauss Elimination 3) Gauss with Pivoting 4) Determinants

### Bayesian modeling of inseparable space-time variation in disease risk

Bayesian modeling of inseparable space-time variation in disease risk Leonhard Knorr-Held Laina Mercer Department of Statistics UW May 23, 2013 Motivation Area and time-specific disease rates Area and

### Big Data, Socio- Psychological Theory, Algorithmic Text Analysis, and Predicting the Michigan Consumer Sentiment Index

Big Data, Socio- Psychological Theory, Algorithmic Text Analysis, and Predicting the Michigan Consumer Sentiment Index Rickard Nyman *, Paul Ormerod Centre for the Study of Decision Making Under Uncertainty,

### EC 6310: Advanced Econometric Theory

EC 6310: Advanced Econometric Theory July 2008 Slides for Lecture on Bayesian Computation in the Nonlinear Regression Model Gary Koop, University of Strathclyde 1 Summary Readings: Chapter 5 of textbook.

### A Introduction to Matrix Algebra and Principal Components Analysis

A Introduction to Matrix Algebra and Principal Components Analysis Multivariate Methods in Education ERSH 8350 Lecture #2 August 24, 2011 ERSH 8350: Lecture 2 Today s Class An introduction to matrix algebra

### Segmentation of 2D Gel Electrophoresis Spots Using a Markov Random Field

Segmentation of 2D Gel Electrophoresis Spots Using a Markov Random Field Christopher S. Hoeflich and Jason J. Corso csh7@cse.buffalo.edu, jcorso@cse.buffalo.edu Computer Science and Engineering University