# Forecast covariances in the linear multiregression dynamic model.


Catriona M. Queen, Ben J. Wright and Casper J. Albers
The Open University, Milton Keynes, MK7 6AA, UK

February 28, 2007

## Abstract

The linear multiregression dynamic model (LMDM) is a Bayesian dynamic model which preserves any conditional independence and causal structure across a multivariate time series. The conditional independence structure is used to model the multivariate series by separate (conditional) univariate dynamic linear models, where each series has contemporaneous variables as regressors in its model. Calculating the forecast covariance matrix (which is required for calculating forecast variances in the LMDM) is not always straightforward in its current formulation. In this paper we introduce a simple algebraic form for calculating LMDM forecast covariances. Calculation of the covariance between model regression components can also be useful, and we present a simple algebraic method for calculating these component covariances. In the LMDM formulation, certain pairs of series are constrained to have zero forecast covariance. We also introduce a possible method to relax this restriction.

Keywords: multivariate time series, dynamic linear model, conditional independence, forecast covariance matrix, component covariances

## 1 Introduction

A multiregression dynamic model (MDM) (Queen and Smith, 1993) is a non-Gaussian multivariate state space time series model. The MDM has been designed for forecasting time series for which, at any one time period, a conditional independence structure across the time series and a causal drive through the system can be hypothesized. Such series can be represented by a directed acyclic graph (DAG) which, in addition to giving a useful pictorial representation of the structure of the multivariate series, is used to decompose the multivariate time series model into simpler (conditional) univariate dynamic models (West and Harrison, 1997).
As such, the MDM is a graphical model (see, for example, Cowell et al. (1999)) for multivariate time series, which simplifies computation in the model through local computation. Like simultaneous equation models, in the MDM each univariate series has contemporaneous variables as regressors in its model. When these

regressions are linear, we have a linear MDM (LMDM), and in this case we have a set of univariate regression dynamic linear models (DLMs) (West and Harrison, 1997, Section 9.2). Like a non-time-series graphical model, the conditional univariate models in an MDM are computationally simple to work with (in this case univariate DLMs), while the joint distribution can be arbitrarily complex. Also, because the model is a collection of univariate dynamic models, the MDM avoids the difficult problem of eliciting an observation covariance matrix for the multivariate problem. It also avoids the alternative of learning about the covariance matrix on-line via the matrix normal DLM (West and Harrison (1997), Section 16.4), whose use is restricted to problems in which all the individual series are similar.

Many multivariate time series may be suitable for use with an LMDM. For example, Queen (1994) uses the LMDM to model monthly brand sales in a competitive market. In this application, the competition in the market is the causal drive within the system and is used to define a conditional independence structure across the time series. Queen (1997) and Queen et al. (1994) focus on how DAGs may be elicited for market models. DAGs are also used to represent brand relationships when forecasting time series of brand sales in Goldstein et al. (1993), Farrow et al. (1997) and Farrow (2003). As another example, Whitlock and Queen (2000) and Queen et al. (2007) use the LMDM to model hourly vehicle counts at various points in a traffic network. Here, as in Sun et al. (2006), the direction of traffic flow produces the causal drive in the system and the possible routes through the network are used to define a conditional independence structure across the time series. The LMDM then produces a set of regression DLMs where contemporaneous traffic flows at upstream links in the network are used as linear regressors. Tebaldi et al.
(2002) also use regression DLMs when modelling traffic flows. Like Queen et al. (2007), they use traffic flows at upstream links in the network as linear regressors. However, whereas the vehicle counts in Queen et al. (2007) are for one-hour intervals, those in Tebaldi et al. (2002) are for one-minute intervals, and so regression on lagged flows (rather than contemporaneous flows) is required. Fosen et al. (2006) use similar ideas to the LMDM when proposing a dynamical graphical model combining the additive hazard model and classical path analysis to analyse a trial of cancer patients with liver cirrhosis. Like the LMDM, their model uses linear regression with parents from the DAG as regressors. The focus, however, is to investigate the treatment effects on the variates over time, rather than to forecast the variables directly.

There are many other potential application areas for the LMDM, including problems in economics (modelling various economic indicators such as energy consumption and GDP), environmental problems (such as water and other resource management problems), industrial problems (such as product distribution flow problems) and medical problems (such as patient physiological monitoring).

When using the LMDM it is important to be able to calculate the forecast covariance matrix for the series. Not only is this of interest in its own right, it is also required for calculating the forecast variances for the individual series. Queen and Smith (1993) presented a recursive form for the covariance matrix, but this is not in an algebraic form which is always simple to use in practice. In this paper we introduce a simple algebraic method for calculating forecast covariances in the LMDM.

Following the superposition principle (see West and Harrison (1997), p. 98), each conditional univariate DLM in an LMDM can be thought of as the sum of individual model components. For example, a DLM may have a trend component, a regression component, and so on. In an LMDM it can be useful to calculate the covariance between individual DLM regression components for different series, and here we introduce a simple algebraic method for calculating such component covariances.

The paper is structured as follows. In the next section the LMDM is described and its use is illustrated using a bivariate time series of brand sales. Section 3 presents a simple algebraic method for calculating the LMDM one-step and k-step ahead forecast covariance matrices. Component covariances are introduced in Section 4, along with a simple method to calculate them. In the LMDM formulation, certain pairs of series are constrained to have zero forecast covariance, and Section 5 introduces a possible method to relax this restriction. Finally, Section 6 gives some concluding remarks.
## 2 The Linear Multiregression Dynamic Model

We have a multivariate time series $\mathbf{Y}_t = (Y_t(1), \ldots, Y_t(n))^T$. Suppose that the series is ordered and that the same conditional independence and causal structure is defined across the series through time, so that at each time $t = 1, 2, \ldots$ we have

$$Y_t(r) \perp\!\!\!\perp \{\{Y_t(1), \ldots, Y_t(r-1)\} \setminus pa(Y_t(r))\} \mid pa(Y_t(r)) \quad \text{for } r = 2, \ldots, n,$$

which reads "$Y_t(r)$ is independent of $\{Y_t(1), \ldots, Y_t(r-1)\} \setminus pa(Y_t(r))$ given $pa(Y_t(r))$" (Dawid, 1979), where the notation $\setminus$ reads "excluding" and $pa(Y_t(r)) \subseteq \{Y_t(1), \ldots, Y_t(r-1)\}$.

Each variable in the set $pa(Y_t(r))$ is called a parent of $Y_t(r)$, and $Y_t(r)$ is known as a child of each variable in the set $pa(Y_t(r))$. We shall call any series for which $pa(Y_t(\cdot)) = \emptyset$ a root node, and list all root nodes before any children in the ordered series $\mathbf{Y}_t$. The conditional independence relationships at each time point $t$ can be represented by a DAG, where there is a directed arc to $Y_t(r)$ from each of its parents in $pa(Y_t(r))$. To illustrate, Figure 1 shows a DAG for five time series at time $t$, where $pa(Y_t(2)) = \emptyset$, $pa(Y_t(3)) = \{Y_t(1), Y_t(2)\}$, $pa(Y_t(4)) = \{Y_t(3)\}$ and $pa(Y_t(5)) = \{Y_t(3), Y_t(4)\}$. Note that both $Y_t(1)$ and $Y_t(2)$ are root nodes.

Figure 1: DAG representing five time series at time $t$.

Suppose further that there is a conditional independence and causal structure defined for the processes so that, if $\mathbf{Y}^t(r) = (Y_1(r), \ldots, Y_t(r))^T$,

$$\mathbf{Y}^t(r) \perp\!\!\!\perp \{\{\mathbf{Y}^t(1), \ldots, \mathbf{Y}^t(r-1)\} \setminus pa(\mathbf{Y}^t(r))\} \mid pa(\mathbf{Y}^t(r)), \mathbf{Y}^{t-1}(r) \quad \text{for } r = 2, \ldots, n.$$

Denote the information available at time $t$ by $D_t$. An LMDM has the following $n$ observation equations and system equation for all times $t = 1, 2, \ldots$:

Observation equations:
$$Y_t(r) = F_t(r)^T \theta_t(r) + v_t(r), \qquad v_t(r) \sim N(0, V_t(r)), \qquad 1 \le r \le n,$$

System equation:
$$\theta_t = G_t \theta_{t-1} + w_t, \qquad w_t \sim N(0, W_t),$$

Initial information:
$$(\theta_0 \mid D_0) \sim N(m_0, C_0).$$

The vector $F_t(r)$ contains the parents $pa(Y_t(r))$ and possibly other known variables (which may include $Y_{t-1}(r)$ and $pa(Y_{t-1}(r))$); $\theta_t(r)$ is the parameter vector for $Y_t(r)$; $V_t(1), \ldots, V_t(n)$ are the scalar observation variances; $\theta_t^T = (\theta_t(1)^T, \ldots, \theta_t(n)^T)$; and the matrices $G_t$, $W_t$ and $C_0$ are all block diagonal. The error vectors $v_t^T = (v_t(1), \ldots, v_t(n))$ and $w_t^T = (w_t(1)^T, \ldots, w_t(n)^T)$ are such that $v_t(1), \ldots, v_t(n)$ and $w_t(1), \ldots, w_t(n)$ are mutually independent, and $\{v_t, w_t\}_{t \ge 1}$ are mutually independent through time.
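To make the structure concrete, here is a minimal sketch (not from the paper; all parameter values are hypothetical) that simulates a single time step of an LMDM obeying the Figure 1 DAG: the two root series are generated first, and each child is then generated from its observed parents through a linear regression with additive Gaussian noise.

```python
import numpy as np

# Sketch: simulate one time step of an LMDM for the five-series DAG of
# Figure 1.  The parameter values in `theta` are made-up illustrative
# numbers; each child regresses linearly on its parents plus noise.
rng = np.random.default_rng(0)

def simulate_step(theta, V):
    y = {}
    y[1] = theta[1] + rng.normal(0, np.sqrt(V[1]))  # root node
    y[2] = theta[2] + rng.normal(0, np.sqrt(V[2]))  # root node
    # children: parents enter as contemporaneous linear regressors
    y[3] = theta[3][0] * y[1] + theta[3][1] * y[2] + rng.normal(0, np.sqrt(V[3]))
    y[4] = theta[4] * y[3] + rng.normal(0, np.sqrt(V[4]))
    y[5] = theta[5][0] * y[3] + theta[5][1] * y[4] + rng.normal(0, np.sqrt(V[5]))
    return y

theta = {1: 10.0, 2: 5.0, 3: (0.6, 0.4), 4: 0.7, 5: (0.3, 0.5)}
V = {r: 1.0 for r in range(1, 6)}
print(simulate_step(theta, V))
```

Setting every $V_t(r)$ to zero makes the step deterministic, which is a convenient way to check the regression structure.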
The LMDM therefore uses the conditional independence structure to model the multivariate time series by $n$ separate univariate models: one for $Y_t(1)$ and one for each $Y_t(r) \mid pa(Y_t(r))$,

$r = 2, \ldots, n$. For those series with parents, each univariate model is simply a regression DLM with the parents as linear regressors. For root nodes, any suitable univariate DLM may be used. For example, consider the DAG in Figure 1. As $Y_t(1)$ and $Y_t(2)$ are both root nodes, each of these series can be modelled separately in an LMDM using any suitable univariate DLM. $Y_t(3)$, $Y_t(4)$ and $Y_t(5)$ all have parents, and so these would be modelled by (separate) univariate regression DLMs: with the two regressors $Y_t(1)$ and $Y_t(2)$ for $Y_t(3)$'s model, the single regressor $Y_t(3)$ for $Y_t(4)$'s model, and the two regressors $Y_t(3)$ and $Y_t(4)$ for $Y_t(5)$'s model.

As long as $\theta_t(1), \theta_t(2), \ldots, \theta_t(n)$ are mutually independent a priori, each $\theta_t(r)$ can be updated separately in closed form from $Y_t(r)$'s (conditional) univariate model. A forecast for $Y_t(1)$ and the conditional forecasts for $Y_t(r) \mid pa(Y_t(r))$, $r = 2, \ldots, n$, can also be found separately using established DLM results (see West and Harrison (1997) for details). For example, in the context of the DAG in Figure 1, forecasts can be found separately (using established DLM results) for $Y_t(1)$, $Y_t(2)$, $Y_t(3) \mid Y_t(1), Y_t(2)$, $Y_t(4) \mid Y_t(3)$ and $Y_t(5) \mid Y_t(3), Y_t(4)$. However, $Y_t(r)$ and $pa(Y_t(r))$ are observed simultaneously, so the marginal forecast for each $Y_t(r)$, without conditioning on the values of its parents, is required. Unfortunately, the marginal forecast distributions for $Y_t(r)$, $r = 2, \ldots, n$, will not generally be of a simple form. However (under quadratic loss), the mean and variance of the marginal forecast distributions are adequate for forecasting purposes, and these can be calculated. So, returning to the context of the DAG in Figure 1, the marginal forecast means and variances for $Y_t(3)$, $Y_t(4)$ and $Y_t(5)$ need to be calculated.
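The separate conditional updating described above uses only standard univariate DLM recursions. As an illustrative sketch (the usual Kalman-style DLM update of West and Harrison (1997), not code from the paper; all the numbers below are hypothetical), one update of a child series' regression DLM, with the observed parent values sitting in $F_t(r)$, might look like:

```python
import numpy as np

# One updating step of a univariate regression DLM (standard recursions):
# the observed parents pa(Y_t(r)) enter as known values in F.
def dlm_update(m, C, F, G, W, V, y):
    a = G @ m                       # prior mean      a_t = G_t m_{t-1}
    R = G @ C @ G.T + W             # prior variance  R_t = G_t C_{t-1} G_t^T + W_t
    f = F @ a                       # one-step (conditional) forecast mean
    Q = F @ R @ F + V               # one-step (conditional) forecast variance
    A = R @ F / Q                   # adaptive vector
    m_new = a + A * (y - f)         # posterior mean
    C_new = R - np.outer(A, A) * Q  # posterior variance
    return m_new, C_new, f, Q

# Hypothetical numbers: child Y_t(4) with single parent regressor Y_t(3) = 8.0
m, C = np.array([0.5]), np.array([[0.1]])
F = np.array([8.0])                 # F_t(4) holds the observed parent value
G, W = np.eye(1), 0.01 * np.eye(1)
m1, C1, f, Q = dlm_update(m, C, F, G, W, V=1.0, y=5.6)
print(f, Q)
```

Because the parents are observed at the same time as the child, the forecast mean and variance returned here are conditional on the parents; Section 3 deals with removing that conditioning.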
Note that as $Y_t(1)$ and $Y_t(2)$ do not have any parents, their marginal forecasts have already been calculated from DLM theory. For calculating the marginal forecast variance of $Y_t(r)$, the marginal forecast covariance matrix for $pa(Y_t(r))$ is required. Returning to the example DAG in Figure 1 to illustrate, the marginal forecast variance of $Y_t(5)$, for example, requires the forecast covariance of $Y_t(3)$ and $Y_t(4)$ without conditioning on either of their parents $Y_t(1)$ and $Y_t(2)$; that is, the marginal covariance of $Y_t(3)$ and $Y_t(4)$. The marginal forecast covariance matrix also provides information about the structure of the multivariate series and, as such, is of interest in its own right. Queen and Smith (1993) gave a recursive form for calculating the marginal forecast covariance matrix. However, this is not always

easy to use in practice. Section 3 presents a simple algebraic form for calculating marginal forecast covariances. First, we shall illustrate how the LMDM works in practice.

### 2.1 Example: forecasting a bivariate time series of brand sales

We shall illustrate using a simple LMDM for a bivariate series of monthly brand sales. The series consist of 34 months of data (supplied by Unilever Research) for two brands, which we shall call B1 and B2, which compete with each other in a product market. Denote the sales at each time $t$ for brands B1 and B2 by $X_t(1)$ and $X_t(2)$, respectively. Let $Y_t$ be the total number of sales of B1 and B2 in month $t$ (so that $Y_t = X_t(1) + X_t(2)$). Then $Y_t$ can be modelled by a Poisson distribution with some mean $\mu_t$, denoted $Po(\mu_t)$. Suppose that for each individual purchase of this product in month $t$, $P(\text{B1 purchased} \mid \text{B1 or B2 purchased}) = \theta_t$. Then $X_t(1)$, the number of B1 purchased in month $t$, can be modelled by a binomial distribution with sample size $Y_t$, the total number of B1 and B2 purchased in month $t$, and parameter $\theta_t$; that is, $X_t(1) \mid Y_t \sim Bi(Y_t, \theta_t)$. These distributions for $Y_t$ and $X_t(1)$ can be represented by the DAG in Figure 2. $X_t(2)$ is a logical function of its parents and is known once its parents are known. Following the terminology of the WinBUGS software, we shall call this a logical variable and denote it on the DAG by a double oval. Note that if we had defined a conditional binomial model for $X_t(2) \mid Y_t$ instead, $X_t(1)$ would have been the logical variable.

Figure 2: DAG representing monthly sales of brands B1 and B2 ($X_t(1)$ and $X_t(2)$, respectively) and their sum ($Y_t$).

Approximating the Poisson distribution for $Y_t$ and the conditional binomial distribution for $X_t(1) \mid Y_t$ to normality, the observation equations in an LMDM for the DAG in

Figure 2 are of the following form:

$$Y_t = \mu_t + v_t(y), \qquad v_t(y) \sim N(0, V_t(y)),$$
$$X_t(1) = Y_t \theta_t + v_t(1), \qquad v_t(1) \sim N(0, V_t(1)),$$

and $X_t(2) = Y_t - X_t(1)$. The exact form of the observation equation for $Y_t$ can be far more complicated, to account for trend, seasonality and so on; whatever its form, its mean at time $t$ is $\mu_t$. The time series plot of $Y_t$ suggests that in fact a linear growth model is appropriate, and this has been implemented here. A plot of $X_t(1)$ against $Y_t$ is given in Figure 3. For these data, using $Y_t$ as a linear regressor for $X_t(1)$ clearly seems sensible. If there were a non-linear relationship between $Y_t$ and $X_t(1)$, then $Y_t$ and/or $X_t(1)$ could be transformed so that the linear regression model implied by the LMDM is appropriate, or the more general MDM could be used.

Figure 3: Plot of $X_t(1)$ against $Y_t$.

For simplicity, in this illustration the observation variances $V_t(y)$ and $V_t(1)$ were fixed throughout, and were simply estimated by fitting simple linear regression models for the 34 observations of $Y_t$ and $X_t(1)$ (with regressor $Y_t$), respectively. Discount factors of 0.8 and 0.71 were used for $Y_t$'s and $X_t(1)$'s models, respectively, and were chosen so as to minimise the mean squared error (MSE) and the mean absolute deviation (MAD). The first two observations for each series were used to calculate initial values for the prior means of the parameters, while their initial prior variances were each set to be large (10800 for the two parameters for $Y_t$ and 1 for the parameter for $X_t(1)$) to allow forecasts to adapt quickly.
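For readers unfamiliar with discounting, the evolution variance $W_t$ is not chosen directly: a discount factor $\delta$ inflates the prior parameter variance at each step. The sketch below shows this standard construction (West and Harrison (1997), Section 6.3); the matrices are illustrative numbers, not the fitted values from this example.

```python
import numpy as np

# Discounting: with discount factor delta in (0, 1], the prior variance
# at time t is R_t = P_t / delta, where P_t = G C_{t-1} G^T.  This is
# equivalent to choosing W_t = P_t (1 - delta) / delta.
def discounted_prior_var(C_prev, G, delta):
    P = G @ C_prev @ G.T
    return P / delta

G = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # linear growth evolution matrix
C_prev = np.diag([2.0, 0.5])        # hypothetical posterior variance at t-1
R = discounted_prior_var(C_prev, G, delta=0.8)
W_implied = R - G @ C_prev @ G.T    # the evolution variance this implies
print(R)
```

A smaller $\delta$ discards historical information faster, which is why $\delta$ can be tuned against MSE/MAD as done above.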

To get an idea of how well the LMDM is performing for these series, a standard multivariate DLM (MV DLM) was also used, using linear growth models for both $X_t(1)$ and $X_t(2)$. In order to try to get a fair comparison, the observation covariance matrix was assumed fixed and was calculated in exactly the same way as the observation variances were for the LMDM. As with the LMDM, initial estimates of prior means for the parameters were calculated using the first two observations of the series, the associated prior variances were set to be large to allow quick adaptation in the model, and discount factors were chosen (0.85 for $X_t(1)$, 0.70 for $X_t(2)$) so as to minimise the MSE/MAD.

Figure 4: Plots of the one-step forecasts (joined dots) and $\pm 1.96$ forecast standard deviation error bars (dotted lines) for $X_t(1)$ and $X_t(2)$ calculated using the LMDM, together with the data (filled dots).

Figure 4 shows plots of the time series for $X_t(1)$ and $X_t(2)$, together with the (marginal) one-step forecasts and $\pm 1.96$ (marginal) forecast standard deviation error bars calculated using both the MV DLM and the LMDM. From these plots, it can be seen that the LMDM is performing better than the MV DLM. This is also clearly reflected in the MSE/MAD values for the one-step ahead forecasts for the two models, shown in Table 1. Also shown in Table 1 are the MSE/MAD values for the two- and three-step ahead forecasts using

both models. Clearly, the LMDM is also performing better with respect to these k-step ahead forecasts.

Table 1: MSE and MAD values for the MV DLM and LMDM.

In addition to giving a better forecast performance, the LMDM also has the advantage that intervention can be simpler to implement. For example, suppose that total sales of the brands were expected to increase suddenly. Then to accommodate this information into the model, only intervention for $Y_t$ would be required in the LMDM, whereas the MV DLM would require intervention for both $X_t(1)$ and $X_t(2)$. The LMDM also has the advantage that only observation variances need to be elicited, whereas the MV DLM has the additional difficult task of eliciting the observation covariance between $X_t(1)$ and $X_t(2)$ as well.

## 3 Simple calculation of marginal forecast covariances

From Queen and Smith (1993), the marginal forecast covariance between $Y_t(i)$ and $Y_t(r)$, $i < r$, $r = 2, \ldots, n$, can be calculated recursively using

$$\text{cov}(Y_t(i), Y_t(r) \mid D_{t-1}) = E(Y_t(i)\, E(Y_t(r) \mid Y_t(1), \ldots, Y_t(r-1), D_{t-1}) \mid D_{t-1}) - E(Y_t(i) \mid D_{t-1})\, E(Y_t(r) \mid D_{t-1}). \tag{3.1}$$

In this paper we shall use this to derive a simple algebraic form for calculating these forecast covariances. In what follows, let $a_t(r)$ be the prior mean of $\theta_t(r)$.

Theorem 1. In an LMDM, let $Y_t(j_1), \ldots, Y_t(j_{m_r})$ be the $m_r$ parents of $Y_t(r)$. Then for $i < r$, the one-step ahead forecast covariance between $Y_t(i)$ and $Y_t(r)$ can be calculated using

$$\text{cov}(Y_t(i), Y_t(r) \mid D_{t-1}) = \sum_{l=1}^{m_r} \text{cov}(Y_t(i), Y_t(j_l) \mid D_{t-1})\, a_t(r)^{(j_l)},$$

where $a_t(r)^{(j_l)}$ is the element of $a_t(r)$ associated with the parent regressor $Y_t(j_l)$; i.e. $a_t(r)^{(j_l)}$ is the prior mean of the parameter for regressor $Y_t(j_l)$.

Proof. Consider Equation 3.1. From the observation equations for the LMDM,

$$E(Y_t(r) \mid Y_t(1), \ldots, Y_t(r-1), D_{t-1}) = F_t(r)^T a_t(r).$$

Also, using the result that for two random variables $X$ and $Y$, $E(Y) = E(E(Y \mid X))$,

$$E(Y_t(r) \mid D_{t-1}) = E(F_t(r)^T a_t(r) \mid D_{t-1}) = E(F_t(r)^T \mid D_{t-1})\, a_t(r).$$

So

$$\text{cov}(Y_t(i), Y_t(r) \mid D_{t-1}) = E(Y_t(i) F_t(r)^T \mid D_{t-1})\, a_t(r) - E(Y_t(i) \mid D_{t-1})\, E(F_t(r)^T \mid D_{t-1})\, a_t(r) = \text{cov}(Y_t(i), F_t(r)^T \mid D_{t-1})\, a_t(r).$$

Now $Y_t(r)$ has the $m_r$ parents $Y_t(j_1), \ldots, Y_t(j_{m_r})$, so

$$F_t(r)^T = (\, Y_t(j_1) \;\; \cdots \;\; Y_t(j_{m_r}) \;\; x_t(r)^T \,),$$

where $x_t(r)^T$ is a vector of known variables. Then $\text{cov}(Y_t(i), x_t(r)^T \mid D_{t-1})$ is simply a vector of zeros, and so

$$\text{cov}(Y_t(i), Y_t(r) \mid D_{t-1}) = \sum_{l=1}^{m_r} \text{cov}(Y_t(i), Y_t(j_l) \mid D_{t-1})\, a_t(r)^{(j_l)},$$

as required.

The marginal forecast covariance between $Y_t(i)$ and $Y_t(r)$ is therefore simply a weighted sum of the covariances between $Y_t(i)$ and each of the parents of $Y_t(r)$. Consequently it is simple to calculate the one-step forecast covariance matrix recursively.

Corollary 1. In an LMDM, let $Y_t(j_1), \ldots, Y_t(j_{m_r})$ be the $m_r$ parents of $Y_t(r)$. Suppose that the series have been observed up to, and including, time $t$. Define $a_t(r, k) = E(\theta_{t+k}(r) \mid D_t)$, so that $a_t(r, k) = G_{t+k}(r)\, a_t(r, k-1)$ with $a_t(r, 0) = m_t(r)$, the posterior mean of $\theta_t(r)$ at time $t$. Denote the parameter associated with parent regressor $Y_t(j_l)$ by $\theta_t(r)^{(j_l)}$, and $E(\theta_{t+k}(r)^{(j_l)} \mid D_t)$ by $a_t(r, k)^{(j_l)}$, the associated element of $a_t(r, k)$.

Then, for $i < r$ and $k \ge 1$, the $k$-step ahead forecast covariance between $Y_{t+k}(i)$ and $Y_{t+k}(r)$ can be calculated recursively using

$$\text{cov}(Y_{t+k}(i), Y_{t+k}(r) \mid D_t) = \sum_{l=1}^{m_r} \text{cov}(Y_{t+k}(i), Y_{t+k}(j_l) \mid D_t)\, a_t(r, k)^{(j_l)}.$$

Proof. Let $X_{t+k}(r)^T = (\, Y_{t+k}(1) \;\; \cdots \;\; Y_{t+k}(r-1) \,)$. Using the result that $E(XY) = E[X\, E(Y \mid X)]$,

$$\text{cov}(X_{t+k}(r), Y_{t+k}(r) \mid D_t) = E(X_{t+k}(r)\, E(Y_{t+k}(r) \mid Y_{t+k}(1), \ldots, Y_{t+k}(r-1), D_t) \mid D_t) - E(X_{t+k}(r) \mid D_t)\, E(Y_{t+k}(r) \mid D_t). \tag{3.2}$$

From DLM theory (see, for example, West and Harrison (1997), pp. 106-107),

$$E(Y_{t+k}(r) \mid Y_{t+k}(1), \ldots, Y_{t+k}(r-1), D_t) = F_{t+k}(r)^T a_t(r, k), \tag{3.3}$$

where $a_t(r, k) = G_{t+k}(r)\, a_t(r, k-1)$ with $a_t(r, 0) = m_t(r)$, the posterior mean of $\theta_t(r)$ at time $t$. Also,

$$E(Y_{t+k}(r) \mid D_t) = E[E(Y_{t+k}(r) \mid Y_{t+k}(1), \ldots, Y_{t+k}(r-1), D_t) \mid D_t] = E[F_{t+k}(r)^T a_t(r, k) \mid D_t]$$

by Equation 3.3. Thus Equation 3.2 becomes

$$\text{cov}(X_{t+k}(r), Y_{t+k}(r) \mid D_t) = \text{cov}(X_{t+k}(r), F_{t+k}(r)^T \mid D_t)\, a_t(r, k),$$

and the $i$th row of this gives us

$$\text{cov}(Y_{t+k}(i), Y_{t+k}(r) \mid D_t) = \sum_{l=1}^{m_r} \text{cov}(Y_{t+k}(i), Y_{t+k}(j_l) \mid D_t)\, a_t(r, k)^{(j_l)},$$

as required.

Corollary 2. For two root nodes $Y_t(i)$ and $Y_t(r)$ under the LMDM, $\text{cov}(Y_t(i), Y_t(r) \mid D_{t-1}) = 0$.

Proof. From the proof of Theorem 1,

$$\text{cov}(Y_t(i), Y_t(r) \mid D_{t-1}) = \text{cov}(Y_t(i), F_t(r)^T \mid D_{t-1})\, a_t(r).$$

Since $Y_t(r)$ is a root node, $F_t(r)$ only contains known variables, so that $\text{cov}(Y_t(i), F_t(r)^T \mid D_{t-1}) = 0$. The result then follows directly.

To illustrate calculating a one-step forecast covariance using Theorem 1 and Corollary 2, consider the following example.

Example 1. Consider the DAG in Figure 1. For $r = 2, \ldots, 5$ and $j = 1, \ldots, 4$, let $a_t(r)^{(j)}$ be the prior mean for parent regressor $Y_t(j)$ in $Y_t(r)$'s model. The forecast covariance between $Y_t(1)$ and $Y_t(5)$, for example, is then calculated as follows:

$$\text{cov}(Y_t(1), Y_t(5) \mid D_{t-1}) = \text{cov}(Y_t(1), Y_t(4) \mid D_{t-1})\, a_t(5)^{(4)} + \text{cov}(Y_t(1), Y_t(3) \mid D_{t-1})\, a_t(5)^{(3)},$$
$$\text{cov}(Y_t(1), Y_t(4) \mid D_{t-1}) = \text{cov}(Y_t(1), Y_t(3) \mid D_{t-1})\, a_t(4)^{(3)},$$
$$\text{cov}(Y_t(1), Y_t(3) \mid D_{t-1}) = \text{cov}(Y_t(1), Y_t(1) \mid D_{t-1})\, a_t(3)^{(1)} + \text{cov}(Y_t(1), Y_t(2) \mid D_{t-1})\, a_t(3)^{(2)}.$$

$Y_t(1)$ and $Y_t(2)$ are both root nodes and so their covariance is 0. So

$$\text{cov}(Y_t(1), Y_t(5) \mid D_{t-1}) = \text{var}(Y_t(1) \mid D_{t-1})\, a_t(3)^{(1)} \left( a_t(4)^{(3)} a_t(5)^{(4)} + a_t(5)^{(3)} \right),$$

where $\text{var}(Y_t(1) \mid D_{t-1})$ is the marginal forecast variance of $Y_t(1)$. This is both simple and fast to calculate.

When a DAG has a logical node, the logical node will be some (logical) function of its parents. If the function is linear, then the covariance between $Y_t(i)$ and a logical node $Y_t(r)$ will simply be a linear function of the covariances between $Y_t(i)$ and the parents of $Y_t(r)$. For example, for the DAG in Figure 5, suppose that $Y_t(4) = Y_t(3) - Y_t(2)$. Then

$$\text{cov}(Y_t(1), Y_t(4) \mid D_{t-1}) = \text{cov}(Y_t(1), Y_t(3) \mid D_{t-1}) - \text{cov}(Y_t(1), Y_t(2) \mid D_{t-1}).$$

The covariances can then be found simply by applying Theorem 1.

Figure 5: DAG with a logical node $Y_t(4)$.
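Theorem 1 and Corollary 2 turn the covariance calculation into a short recursion over the DAG. The sketch below applies the recursion to the Figure 1 DAG (all prior means and the forecast variance of $Y_t(1)$ are made-up illustrative numbers) and checks it against the closed form derived in Example 1.

```python
# One-step forecast covariances via the Theorem 1 recursion, for the DAG
# of Figure 1.  Hypothetical values: a[(r, j)] plays the role of
# a_t(r)^{(j)}, the prior mean for parent regressor Y_t(j) in Y_t(r)'s model.
parents = {1: [], 2: [], 3: [1, 2], 4: [3], 5: [3, 4]}
a = {(3, 1): 0.6, (3, 2): 0.4, (4, 3): 0.7, (5, 3): 0.3, (5, 4): 0.5}
var_root = {1: 2.0, 2: 1.5}         # marginal forecast variances of the roots

def cov(i, r):
    if i == r:
        return var_root[i]          # only reached for root nodes in this DAG
    if i > r:
        i, r = r, i
    if not parents[r]:
        return 0.0                  # two root nodes: Corollary 2
    return sum(cov(i, j) * a[(r, j)] for j in parents[r])

lhs = cov(1, 5)
# Closed form from Example 1:
rhs = var_root[1] * a[(3, 1)] * (a[(4, 3)] * a[(5, 4)] + a[(5, 3)])
print(lhs, rhs)
```

The recursion and the closed form agree, illustrating why the algebraic form is both simple and fast to evaluate.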

## 4 Component covariances

Suppose that $Y_t(r)$ has the $m_r$ parents $Y_t(r_1), \ldots, Y_t(r_{m_r})$ for $r = 2, \ldots, n$. Write the observation equation for each $Y_t(r)$ as the sum of regression components as follows:

$$Y_t(r) = \sum_{l=1}^{m_r} Y_t(r, r_l) + Y_t(r, x_t(r)) + v_t(r), \qquad v_t(r) \sim N(0, V_t(r)), \tag{4.1}$$

with

$$Y_t(r, r_l) = Y_t(r_l)\, \theta_t(r)^{(r_l)}, \qquad Y_t(r, x_t(r)) = x_t(r)^T \theta_t(r)^{(x_t(r))},$$

where $\theta_t(r)^{(r_l)}$ is the parameter associated with the parent regressor $Y_t(r_l)$ and $\theta_t(r)^{(x_t(r))}$ is the vector of parameters for the known variables $x_t(r)$. It can sometimes be helpful to find the covariance between two components $Y_t(i, i')$ and $Y_t(r, r')$ from the models for $Y_t(i)$ and $Y_t(r)$ respectively. Call $\text{cov}(Y_t(i, i'), Y_t(r, r'))$ the component covariance for $Y_t(i, i')$ and $Y_t(r, r')$. We shall illustrate why component covariances may be useful by considering two examples.

Example 2 (Component covariances in traffic networks). Queen et al. (2007) consider the problem of forecasting hourly vehicle counts at various points in a traffic network. The possible routes through the network are used to elicit a DAG for use with an LMDM. It may be useful to learn about driver route choice probabilities in such a network, i.e. the probability that a vehicle starting at a certain point A will travel to destination B. Unfortunately, these are not always easy to estimate from vehicle count data. However, component covariances could be useful in this respect, as illustrated by the following hypothetical example.

Consider the simple traffic network illustrated in Figure 6. There are five data collection sites, each of which records the hourly count of vehicles passing that site. There are four possible routes through the system: A to C, A to D, B to C and B to D. Because all traffic from A and B to C and D is counted at site 3, it can be difficult to learn about driver route choices using the time series of vehicle counts alone. Let $Y_t(r)$ be the vehicle count for hour $t$ at site $r$.
A suitable DAG representing the conditional independence relationships between $Y_t(1), \ldots, Y_t(5)$ is given in Figure 7. (For details on how this DAG can be elicited, see Queen et al. (2007).) Notice that all vehicles

at site 3 flow to sites 4 or 5, so that conditional on $Y_t(3)$ and $Y_t(4)$, $Y_t(5)$ is a logical node with $Y_t(5) = Y_t(3) - Y_t(4)$.

Figure 6: Hypothetical traffic network for Example 2 with starting points A and B and destinations C and D. The grey diamonds are the data collection sites, each of which is numbered. White arrows indicate the direction of traffic flow.

Figure 7: DAG representing the conditional independence relationships between $Y_t(1), \ldots, Y_t(5)$ in the traffic network of Figure 6.

From Figure 6, $Y_t(3)$ receives all its traffic from sites 1 and 2, while $Y_t(4)$ receives all its traffic from site 3. So possible LMDM observation equations for $Y_t(3)$ and $Y_t(4)$ are given by

$$Y_t(3) = Y_t(1)\, \theta_t(3)^{(1)} + Y_t(2)\, \theta_t(3)^{(2)} + v_t(3), \qquad v_t(3) \sim N(0, V_t(3)),$$
$$Y_t(4) = Y_t(3)\, \theta_t(4)^{(3)} + v_t(4), \qquad v_t(4) \sim N(0, V_t(4)),$$

where $0 \le \theta_t(3)^{(1)}, \theta_t(3)^{(2)}, \theta_t(4)^{(3)} \le 1$. Then

$$Y_t(1)\, \theta_t(3)^{(1)} = Y_t(3, 1) = \text{number of vehicles travelling from site 1 to 3 in hour } t,$$
$$Y_t(3)\, \theta_t(4)^{(3)} = Y_t(4, 3) = \text{number of vehicles travelling from site 3 to 4 in hour } t.$$

So $\text{cov}(Y_t(3, 1), Y_t(4, 3))$ is informative about the use of route A to C. A high correlation between $Y_t(3, 1)$ and $Y_t(4, 3)$ indicates a high probability that a vehicle at A travels to

C, and a small correlation indicates a low probability. Of course, the actual driver route choice probabilities still cannot be estimated from these data. However, having some idea of the relative magnitudes of the choice probabilities can still be very useful.

Example 3 (Accommodating changes in the DAG). In many application areas the DAG representing the multivariate time series may change over time, either temporarily or permanently. For example, in a traffic network a temporary diversion may change the DAG temporarily, or a change in the road layout may change the DAG permanently. Because of the structure of the LMDM, much of the posterior information from the original DAG can be used to help form priors in the new DAG (see Queen et al. (2007) for an example illustrating this). In this respect, component covariances may be informative about covariances in the new DAG. We shall illustrate this using a simple example.

Figure 8 shows part of a DAG representing four time series $Y_t(1), \ldots, Y_t(4)$ (the DAG continues with children of $Y_t(3)$ and $Y_t(4)$, but we are only interested in $Y_t(1), \ldots, Y_t(4)$ here). Using an LMDM and Equation 4.1, suppose we have the following observation equations for $Y_t(3)$ and $Y_t(4)$:

$$Y_t(3) = Y_t(3, 1) + v_t(3), \qquad v_t(3) \sim N(0, V_t(3)), \tag{4.2}$$
$$Y_t(4) = Y_t(4, 1) + Y_t(4, 2) + v_t(4), \qquad v_t(4) \sim N(0, V_t(4)). \tag{4.3}$$

Figure 8: Part of a DAG for Example 3 for four time series $Y_t(1), \ldots, Y_t(4)$.

Now suppose that the DAG is changed so that a new series, $X_t$, is introduced which lies between $Y_t(1)$ and $Y_t(4)$ in Figure 8. Suppose further that $Y_t(1)$ and $Y_t(4)$ are no longer observed. The new DAG is given in Figure 9 (again the DAG continues, this time with children of $Y_t(3)$, $X_t$ and $Y_t(2)$). The component $Y_t(3, 1)$ from Equation 4.2 is informative about $Y_t(3)$ in the new model, and the component $Y_t(4, 1)$ from Equation 4.3 is informative about $X_t$ in the new model.
Thus the component covariance $\text{cov}(Y_t(3, 1), Y_t(4, 1))$ is informative about $\text{cov}(Y_t(3), X_t)$ in the new model.

Figure 9: New DAG for Example 3 (adapting the DAG in Figure 8), where $X_t$ is introduced into the DAG and $Y_t(1)$ and $Y_t(4)$ are no longer observed.

The following theorem presents a simple method for calculating component covariances.

Theorem 2. Suppose that $Y_t(i')$ is a parent of $Y_t(i)$ and $Y_t(r')$ is a parent of $Y_t(r)$, with $i < r$. Then

$$\text{cov}(Y_t(i, i'), Y_t(r, r') \mid D_{t-1}) = \text{cov}(Y_t(i'), Y_t(r') \mid D_{t-1})\, a_t(i)^{(i')} a_t(r)^{(r')},$$

where $a_t(i)^{(i')}$ and $a_t(r)^{(r')}$ are, respectively, the prior means for the regressor $Y_t(i')$ in $Y_t(i)$'s model and the regressor $Y_t(r')$ in $Y_t(r)$'s model.

Proof. For each $r$, let

$$Z_t^{(r)T} = (\, Y_t(r, r_1) \;\; Y_t(r, r_2) \;\; \cdots \;\; Y_t(r, r_{m_r}) \;\; Y_t(r, x_t(r)) \;\; v_t(r) \,)$$

and

$$Z_t(r)^T = (\, Z_t^{(1)T} \;\; Z_t^{(2)T} \;\; \cdots \;\; Z_t^{(r-1)T} \,).$$

Using the result that for two random variables $X$ and $Y$, $E(XY) = E(X\, E(Y \mid X))$, we have, for a specific $r' \in \{r_1, \ldots, r_{m_r}\}$,

$$\text{cov}(Z_t(r), Y_t(r, r') \mid D_{t-1}) = E(Z_t(r)\, E(Y_t(r, r') \mid Z_t(r), D_{t-1}) \mid D_{t-1}) - E(Z_t(r) \mid D_{t-1})\, E(Y_t(r, r') \mid D_{t-1}). \tag{4.4}$$

Now, from Equation 4.1,

$$E(Y_t(r, r') \mid Z_t(r), D_{t-1}) = E(Y_t(r, r') \mid Y_t(1), \ldots, Y_t(r-1), D_{t-1}) = Y_t(r')\, a_t(r)^{(r')}.$$

Also,

$$E(Y_t(r, r') \mid D_{t-1}) = E[E(Y_t(r, r') \mid Z_t(r), D_{t-1})] = E(Y_t(r')\, a_t(r)^{(r')} \mid D_{t-1}).$$

So Equation 4.4 becomes

$$\text{cov}(Z_t(r), Y_t(r, r') \mid D_{t-1}) = E(Z_t(r) Y_t(r')\, a_t(r)^{(r')} \mid D_{t-1}) - E(Z_t(r) \mid D_{t-1})\, E(Y_t(r')\, a_t(r)^{(r')} \mid D_{t-1}) = \text{cov}(Z_t(r), Y_t(r') \mid D_{t-1})\, a_t(r)^{(r')}.$$

The single row of $Z_t(r)$ corresponding to $Y_t(i, i')$ then gives us

$$\text{cov}(Y_t(i, i'), Y_t(r, r') \mid D_{t-1}) = \text{cov}(Y_t(i, i'), Y_t(r') \mid D_{t-1})\, a_t(r)^{(r')}. \tag{4.5}$$

Let $X_t(r, i)^T = (\, Y_t(1) \;\; Y_t(2) \;\; \cdots \;\; Y_t(i-1) \;\; Y_t(i+1) \;\; \cdots \;\; Y_t(r-1) \,)$. Then

$$\text{cov}(X_t(r, i), Y_t(i, i') \mid D_{t-1}) = E(X_t(r, i)\, E(Y_t(i, i') \mid X_t(r, i), D_{t-1}) \mid D_{t-1}) - E(X_t(r, i) \mid D_{t-1})\, E(Y_t(i, i') \mid D_{t-1}). \tag{4.6}$$

Now $E(Y_t(i, i') \mid X_t(r, i), D_{t-1}) = Y_t(i')\, a_t(i)^{(i')}$, and Equation 4.6 becomes

$$\text{cov}(X_t(r, i), Y_t(i, i') \mid D_{t-1}) = \text{cov}(X_t(r, i), Y_t(i') \mid D_{t-1})\, a_t(i)^{(i')}.$$

So, taking the individual row of $X_t(r, i)$ corresponding to $Y_t(r')$, we get

$$\text{cov}(Y_t(r'), Y_t(i, i') \mid D_{t-1}) = \text{cov}(Y_t(r'), Y_t(i') \mid D_{t-1})\, a_t(i)^{(i')}.$$

Substituting this into Equation 4.5 gives

$$\text{cov}(Y_t(i, i'), Y_t(r, r') \mid D_{t-1}) = \text{cov}(Y_t(i'), Y_t(r') \mid D_{t-1})\, a_t(i)^{(i')} a_t(r)^{(r')},$$

as required.

Theorem 2 allows the simple calculation of component covariances. For example, in Example 2,

$$\text{cov}(Y_t(3, 1), Y_t(4, 3) \mid D_{t-1}) = \text{cov}(Y_t(1), Y_t(3) \mid D_{t-1})\, a_t(3)^{(1)} a_t(4)^{(3)},$$

and $\text{cov}(Y_t(1), Y_t(3) \mid D_{t-1})$ is simple to calculate using Theorem 1.
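Numerically, Theorems 1 and 2 combine in a couple of lines. A sketch for the route A-to-C component covariance of Example 2 (all prior means and $\text{var}(Y_t(1) \mid D_{t-1})$ are illustrative numbers, not estimates from data):

```python
# Component covariance cov(Y_t(3,1), Y_t(4,3) | D_{t-1}) for Example 2.
a3_1, a4_3 = 0.6, 0.7               # a_t(3)^{(1)} and a_t(4)^{(3)} (hypothetical)
var1 = 2.0                          # var(Y_t(1) | D_{t-1}) (hypothetical)

# Theorem 1 (with Corollary 2 giving zero covariance between the root
# nodes Y_t(1) and Y_t(2)):
cov_1_3 = var1 * a3_1               # cov(Y_t(1), Y_t(3) | D_{t-1})

# Theorem 2:
cov_comp = cov_1_3 * a3_1 * a4_3    # cov(Y_t(3,1), Y_t(4,3) | D_{t-1})
print(cov_comp)
```

A large value relative to the component variances would suggest heavy use of route A to C, in the qualitative sense discussed above.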

Corollary 3 Suppose that $Y_t(i')$ is a parent of $Y_t(i)$ and $Y_t(r')$ is a parent of $Y_t(r)$, with $i < r$. Then, using the notation presented in Corollary 1, for $i < r$ and $k \geq 1$, the $k$-step ahead forecast covariance between components $Y_{t+k}(i, i')$ and $Y_{t+k}(r, r')$ can be calculated recursively using
$$\text{cov}(Y_{t+k}(i, i'), Y_{t+k}(r, r') \mid D_t) = \text{cov}(Y_{t+k}(i'), Y_{t+k}(r') \mid D_t)\,a_t^{(i,k)}(i')\,a_t^{(r,k)}(r').$$

Proof. This is proved using the same argument as that used in the proof of Theorem 2, where $t$ is replaced by $t+k$, noting that $E(\theta_{t+k}^{(i)}(i') \mid D_t) = a_t^{(i,k)}(i')$ and $E(\theta_{t+k}^{(r)}(r') \mid D_t) = a_t^{(r,k)}(r')$.

5 Covariance between root nodes

Recall from Corollary 2 that the covariance between two root nodes is zero in the LMDM. However, this is not always appropriate. For example, consider the traffic network in Example 2. Both $Y_t(1)$ and $Y_t(2)$ are root nodes which may in fact be highly correlated: they may have similar daily patterns with the same peak times, and they may be affected in a similar way by external events such as weather conditions.

One possible way to introduce non-zero covariances between root nodes is to add an extra node as a parent of all root nodes in the DAG. This extra node represents any variables which may account for the correlation between the root nodes.

Example 4 In the traffic network in Example 2, suppose that $X_t$ is a vector of variables which can account for the correlation between $Y_t(1)$ and $Y_t(2)$. So, $X_t$ might include such variables as total traffic volume entering the system, hourly rainfall, temperature, and so on. Then $X_t$ can be introduced into the DAG as a parent of $Y_t(1)$ and $Y_t(2)$, as in Figure 10.

Figure 10: DAG representing time series of vehicle counts in Example 4, with an extra node $X_t$ to account for any correlation between $Y_t(1)$ and $Y_t(2)$.

The observation equations for $Y_t(1)$ and $Y_t(2)$ now both have $X_t$ as regressors and

applying Theorem 1 we get,
$$\text{cov}(Y_t(1), Y_t(2) \mid D_{t-1}) = a_t^{(1)}(X_t)^T\,\text{var}(X_t \mid D_{t-1})\,a_t^{(2)}(X_t),$$
where $a_t^{(1)}(X_t)$ and $a_t^{(2)}(X_t)$ are the prior mean vectors for the regressors $X_t$ in $Y_t(1)$'s and $Y_t(2)$'s models.

6 Concluding remarks

In this paper we have presented a simple algebraic form for calculating the one-step ahead covariance matrix in LMDMs. We have also introduced a simple method for calculating covariances between regression components of different DLMs within the LMDM. Component covariances may be of interest in their own right, and may also prove useful for forming informative priors following any changes in the DAG for the LMDM. Their use in practice now needs to be investigated in further research.

One of the problems with the LMDM is the imposition of zero covariance between root nodes. To allow non-zero covariance between root nodes we have proposed introducing $X_t$ into the model as a parent of all the root nodes, where $X_t$ is a set of variables which may help to explain the correlation between the root nodes. Further research is now required to investigate how well this might work in practice.

Acknowledgements

We would like to thank the reviewer for their very useful comments on an earlier version of this paper, and also Unilever Research for supplying the data.

References

Cowell, R.G., Dawid, A.P., Lauritzen, S.L. and Spiegelhalter, D.J. (1999) Probabilistic Networks and Expert Systems. Springer-Verlag: New York.

Dawid, A.P. (1979) Conditional independence in statistical theory (with discussion). Journal of the Royal Statistical Society, 41.

Farrow, M. (2003) Practical building of subjective covariance structures for large complicated systems. The Statistician, 52, No. 4.

Farrow, M., Goldstein, M. and Spiropoulos, T. (1997) Developing a Bayes linear decision support system for a brewery. In The Practice of Bayesian Analysis (eds S. French and J. Q. Smith). London: Arnold.

Fosen, J., Ferkingstad, E., Borgan, Ø. and Aalen, O.O. (2006) Dynamic path analysis - a new approach to analyzing time-dependent covariates. Lifetime Data Analysis, 12 (2).

Goldstein, M., Farrow, M. and Spiropoulos, T. (1993) Prediction under the influence: Bayes linear influence diagrams for prediction in a large brewery. The Statistician, 42, No. 4.

Queen, C.M. (1994) Using the multiregression dynamic model to forecast brand sales in a competitive product market. The Statistician, 43, No. 1.

Queen, C.M. (1997) Model elicitation in competitive markets. In The Practice of Bayesian Analysis (eds S. French and J. Q. Smith). London: Arnold.

Queen, C.M., Smith, J.Q. and James, D.M. (1994) Bayesian forecasts in markets with overlapping structures. International Journal of Forecasting, 10.

Queen, C.M. and Smith, J.Q. (1993) Multiregression dynamic models. Journal of the Royal Statistical Society, Series B, 55.

Queen, C.M., Wright, B.J. and Albers, C.J. (2007) Eliciting a directed acyclic graph for a multivariate time series of vehicle counts in a traffic network. Australian and New Zealand Journal of Statistics. To appear.

Sun, S.L., Zhang, C.S. and Yu, G.Q. (2006) A Bayesian network approach to traffic flow forecasting. IEEE Transactions on Intelligent Transportation Systems, 7 (1).

Tebaldi, C., West, M. and Karr, A.F. (2002) Statistical analyses of freeway traffic flows. Journal of Forecasting, 21.

West, M. and Harrison, P.J. (1997) Bayesian Forecasting and Dynamic Models (2nd edition). Springer-Verlag: New York.

Whitlock, M.E. and Queen, C.M. (2000) Modelling a traffic network with missing data. Journal of Forecasting, 19 (7).
