TOP-DOWN OR BOTTOM-UP FORECASTING?
Peter Wanke*
Centre for Logistics Studies
The COPPEAD Graduate School of Business
Federal University of Rio de Janeiro (UFRJ)
Rio de Janeiro, RJ
[email protected]

Eduardo Saliby
The COPPEAD Graduate School of Business
Federal University of Rio de Janeiro (UFRJ)
Rio de Janeiro, RJ
[email protected]

* Corresponding author

Received February 2007; accepted September 2007

Abstract

The operations literature remains inconclusive as to the most appropriate sales forecasting approach (Top-Down or Bottom-Up) for the determination of safety inventory levels. This paper presents analytical results for the variance of the sales forecasting errors during the lead-time under both approaches. The forecasting method used was Simple Exponential Smoothing, and the results led to the identification of two supplementary impacts upon the forecasting error variance, and consequently upon safety inventory levels: the Portfolio Effect and the Anchoring Effect. The first depends upon the correlation coefficient of demand between two individual items; the latter depends upon the smoothing constant and upon the participation of the individual item in total sales. It is also analysed under which conditions these variables would favour one forecasting approach over the other.

Keywords: forecasting approach; exponential smoothing; safety inventory levels.
Pesquisa Operacional, v.27, n.3, Setembro a Dezembro de 2007
1. Introduction

The objective of this paper is to analyse the behaviour of the variance of the sales forecasting error under the Top-Down and Bottom-Up approaches, in order to identify under which conditions one approach would be preferred over the other in terms of lower safety inventory levels. In previous studies, analyses were carried out using the Simple Exponential Smoothing method on actual sales data (see, for example, Kahn, 1998 and Lapide, 1998). They led to different conclusions about the adequacy of these approaches for different levels of the correlation coefficient, the sales variance and the share of a given product in total (aggregate) sales. In this research, the expressions for the variance of the forecasting errors during the lead-time, assumed to be discrete, were analytically derived, and the two main effects that may favour the Top-Down approach to the detriment of the Bottom-Up were identified: the Portfolio Effect and the Anchoring Effect. Unlike previous research, it was considered that sales forecasts were frozen (kept the same value) during the lead-time. The results indicate that a product with a small participation in total sales and negatively correlated with the aggregate sales of the remaining products (or individual items) tends to present lower forecasting error variance under the Top-Down approach, due to the compensation of part of its variance with the aggregate variance of the remaining items. Even with a higher participation in total sales and a higher correlation coefficient, the Top-Down approach may present better results than Bottom-Up if the product sales variance is sufficiently greater than the variance of the aggregate sales of the remaining products, given that a less than proportional portion of the total variance is added to the product variance.
The remainder of the paper is structured as follows: sections 2 and 3 are dedicated to the review of the Top-Down and Bottom-Up approaches and to the Simple Exponential Smoothing concepts, respectively. In section 4, the expressions for the variance of the sales forecasting errors during the lead-time are analytically derived for these two approaches, and the impacts of the forecasting errors on safety inventories are discussed. Readers should recall that sales forecasts were considered to be frozen during the lead-time. Finally, in section 5, the adequacy of those approaches to different classes of products or items is discussed.

2. Top-Down and Bottom-Up Approaches

There is broad consensus amongst authors about the conceptualisation and operationalisation of the Top-Down (TD) and Bottom-Up (BU) sales forecasting approaches. For example, according to Lapide (1998), under the TD approach, sales forecasting is done first by aggregating all individual items, and then by disaggregating the aggregate forecast into individual items again, generally based on the historical percentage of each item within the total group. In this sense, Schwarzkopf et al. (1988) point out that in the TD approach the aggregate total is forecasted first, and the subsequent disaggregation is done based on the historical proportions of each individual item. As regards the BU approach, each one of the individual items is forecasted separately, and then all the forecasts are summed up in case an aggregate forecast for the group is deemed necessary (Lapide, 1998). In other words, under the BU approach, the forecaster first prepares the forecasts for each individual item, aggregating them thereafter at the level of interest for the analysis (Jain, 1995).
Several authors have tried to relate the adequacy of the TD and BU approaches to different characteristics of the historical sales data, such as: the correlation coefficient between the sales of the individual item being studied and the aggregate sales of the remaining items (ρ), the share or proportion of the item being studied in the aggregate total sales (f), and the ratio between the sales standard deviation of the individual item being studied and that of the aggregate sales of the remaining items (k). An implicit premise in some of the surveyed papers is related to the Portfolio Effect, a concept initially defined by Zinn et al. (1989) to evaluate the impact of inventory centralization upon the total sales variance. According to the Portfolio Effect, inventory centralization minimizes the total variance whenever the correlation coefficient of sales between the markets is −1 and the ratio between the sales variances of the markets is 1. Generally speaking, the rationale behind the variance reduction, according to the authors, is the compensation of sales fluctuations between two markets: when the sales of one market go up, the sales of the other market go down by the same amount. One example of that is Kahn's (1998) paper, according to which, in the TD approach, the peaks and valleys inherent to each item are cancelled out by the aggregation. A negative correlation among the individual items would reduce the aggregate sales variance. Also corroborating the main conclusions of the Portfolio Effect, Schwarzkopf et al. (1988) point out that estimates based on aggregate data are more precise than those based on individual forecasts when the individual items present independent sales patterns (null correlation).
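The Portfolio Effect described above follows directly from the variance of a sum of correlated variables, V(A + B) = σ_A² + σ_B² + 2·ρ·σ_A·σ_B. A minimal sketch (the function name and the numbers are ours, chosen only for illustration):

```python
def aggregate_variance(sd_a, sd_b, rho):
    """Variance of the aggregate A + B for a correlation rho between A and B."""
    return sd_a**2 + sd_b**2 + 2 * rho * sd_a * sd_b

# Negative correlation: fluctuations partly offset, so the aggregate
# variance falls below the sum of the individual variances (500).
print(aggregate_variance(10, 20, -0.4))  # ≈ 340
# Independent items (null correlation): variances simply add.
print(aggregate_variance(10, 20, 0.0))   # ≈ 500
```

With ρ = −1 and equal standard deviations, the aggregate variance vanishes entirely, which is the extreme case contemplated by Zinn et al. (1989).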
However, Lapide (1998) suggests that, as a rule of thumb, the TD approach makes sense if and only if all the individual items' sales are growing, decreasing or remaining stable, thus characterizing a positive correlation amongst the sales of different items. The author goes on to assert that a family of products frequently comprises items that potentially cannibalise each other, as in the case of a family with new and old products. For those items, the sales patterns are very different, given that some items grow to the detriment of the others (negative correlation), which would render the BU approach preferable. Gordon et al. (1997) and Gelly (1999) discuss the adequacy of the TD and BU approaches for other characteristics of the historical sales data. More specifically, these authors studied more than 15,000 aggregate and disaggregate historical data series, generating forecasts with Triple Exponential Smoothing. The BU approach yielded more precise forecasts in 75% of the series, and greater accuracy was verified for individual items with strong positive correlation and whenever they represented a large proportion of the aggregate total sales. On the other hand, when the data were negatively correlated, the TD approach showed itself more accurate regardless of the item's participation in the aggregate total sales. Finally, in the case study presented by Gelly (1999), the TD approach revealed itself to be more adequate for individual items that present a more predictable sales pattern over time, for example a small sales coefficient of variation (CV). That small CV could result from a large participation of the individual item in the aggregate total sales. It could also result from a small ratio between the sales variance of the individual item and the variance of the aggregate sales of the remaining items. The characteristics that favour the TD and BU approaches are summarized in Table 1.
Table 1 – Characteristics that favour the TD and BU approaches.

- Kahn (1998): TD – negative correlation (ρ).
- Lapide (1998): TD – positive correlation; BU – negative correlation.
- Gelly (1999): TD – small coefficient of variation (CV), high participation in total sales (f), low ratio between the sales variance of the individual item and the aggregate sales variance of the remaining items (k).
- Gordon et al. (1997): TD – negative correlation, any participation in total sales; BU – positive correlation, high participation in total sales.
- Schwarzkopf et al. (1988): TD – null correlation.

3. Simple Exponential Smoothing

According to Gijbels et al. (1999), Simple Exponential Smoothing (SES) is the most commonly used model in sales forecasting. Its main advantages are related to the fact that it is a non-parametric model based on a simple algebraic formula that quickly updates the estimate of the local level of the sales data. In the last twenty years, several studies were carried out to better comprehend and describe the SES and its extensions from a statistical perspective. For example, Chatfield et al. (2001) compare a variety of potential Exponential Smoothing models derived from autoregressive moving averages, structural models and non-linear dynamic models, and explain why the SES and its extensions are robust even under changes in the variance of the historical data. Blackburn et al. (1995) show that the SES may introduce spurious autocorrelations in series from which the trend component has been removed, and that these autocorrelations would depend upon the average age of the data and the value of the smoothing constant. Finally, Gijbels et al. (1999) compare the SES with Kernel Regression, enabling a better understanding of the equivalence and relative adequacy of both approaches. The SES and its extensions were developed in the late 1950s by Brown, Winters, and Holt, amongst other authors (Chatfield et al., 2001).
Among its main premises and limitations, it is worth highlighting that the SES does not account for growth or decline trends, seasonal fluctuations or cyclical variations. The sales forecast for a random variable X with SES is as follows:

F_t = α·X_{t−1} + (1 − α)·F_{t−1},

where F_t is the forecast of X for period t, X_{t−1} is the actual sales of X in period t−1, F_{t−1} is the forecast of X for period t−1, and α is the smoothing constant, which ranges from 0 to 1.
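The recursion above is straightforward to implement. The sketch below is illustrative (the function name and the initialisation choice are ours; seeding the level with the first observation is one common convention):

```python
def ses_forecast(sales, alpha, f0=None):
    """Simple Exponential Smoothing: F_t = alpha * X_{t-1} + (1 - alpha) * F_{t-1}.

    Returns the one-step-ahead forecast made after the last observation.
    """
    if not 0 < alpha <= 1:
        raise ValueError("alpha must lie in (0, 1]")
    f = sales[0] if f0 is None else f0   # seed the level estimate
    for x in sales:
        f = alpha * x + (1 - alpha) * f  # update the local level
    return f

# A constant series forecasts itself, whatever the smoothing constant:
print(ses_forecast([100, 100, 100], alpha=0.2))  # 100.0
```

With α = 1 the recursion reduces to the naive forecast (last observed value); small α gives a long memory and a smoother forecast.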
4. Forecasting Errors During the Lead-Time

This research is primarily motivated by the sampling errors that are intrinsic to corporate sales and inventory planning systems. When all the historical data are available, estimates for the mean and standard deviation may be determined at the desired significance level. Inferences about the probability distribution of the sales random variable may also be made. However, many corporate systems do not use or do not have all the historical data, which generates uncertainty about whether the forecasting errors result from random noise or from a change in one or more of the parameter values of the underlying demand model. Unfortunately, it is not possible to clarify those questions about the actual sales level a priori; they can only be resolved a posteriori (Silver et al., 2002). Many forecasting models do not use all available historical data: moving averages use only the last n data points, and the SES attributes declining weights to past data (Silver & Peterson, 1985). In these circumstances, according to the authors, the best average sales estimate is simply the sales forecast for the next period. As regards estimating the sales variance, the variance of the forecasting error should be used. According to Silver & Peterson (1985) and Greene (1997), the forecasting error variance and the sales variance are not equal, and the former is the appropriate quantity when forecasts are used for sales estimation. For instance, the safety inventory level must be determined to protect against variations in the sales forecasting errors. In general, the forecasting error variance tends to be greater than the sales variance (Silver et al., 2002). This is due to the additional sampling error introduced by the forecasting models when using only part of the available historical data.
According to Harrison (1967) and Johnston & Harrison (1986), a more complex situation occurs when sales forecasting is considered during the replenishment lead-time (L), assumed here to be discrete. In this case, the variance of the forecasting error during the lead-time must be estimated. According to Silver & Peterson (1985), the exact relation between the forecasting error variance and the variance of the forecasting error during the lead-time depends upon complicated relations between the sales pattern, the forecast review procedures and the value of n used in the moving average, or the value of the smoothing constant used in the SES (see Harrison, 1967). According to the authors, one of the reasons for such complexity is that the recurrence procedure in the SES introduces a certain degree of dependence between the forecast errors at different periods of time separated by the lead-time. More recently, Snyder et al. (2004) established formulae for calculating means and variances of sales forecasts during the lead-time, considering several exponential smoothing models. Like Johnston & Harrison (1986), these authors considered that sales forecasts could vary during the lead-time. Using frozen sales forecasts during the lead-time, however, is a very common managerial practice, since it tends to be worthless to revise sales forecast values while replenishment orders are still being processed (Greene, 1997). Unless it is possible for the decision-makers to change previously placed orders during the replenishment lead-time, there is little value in revising sales forecasts during the time interval between the placement and the receipt of an order. This research differs from previous studies in considering the sales forecasts frozen during the lead-time. In other words, for the purpose of calculating the forecast error means and variances, sales forecasts are assumed not to change during the lead-time.
Let the following variables be introduced (source: the authors):

- A_t: product sales at time period t;
- B_t: aggregate sales of the remaining products (excluding A_t) at t;
- T_t: total aggregate sales (A_t + B_t) at t;
- μ_A: expected value of A_t (assumed to be constant over t);
- μ_B: expected value of B_t (assumed to be constant over t);
- μ_T: expected value of T_t (assumed to be constant over t);
- σ_A: standard deviation of A_t (assumed to be constant over t); V(A): variance of A_t;
- σ_B: standard deviation of B_t (assumed to be constant over t); V(B): variance of B_t;
- ρ_AB: correlation coefficient between A_t and B_t, also assumed constant;
- σ_T: standard deviation of T_t; V(T): variance of T_t, V(T) = σ_A² + σ_B² + 2·ρ_AB·σ_A·σ_B;
- k: ratio between the standard deviations, σ_A/σ_B;
- f: historical share of product A sales, μ_A/μ_T;
- F_{A,t}: sales forecast of A_t; F_{T,t}: sales forecast of T_t;
- L: replenishment lead-time, assumed here to be discrete; μ_L: expected value of L; σ_L: standard deviation of L; V(L): variance of L;
- A_L: aggregated sales of A during the lead-time;
- F_{A,L}: aggregated sales forecast of A during the lead-time;
- E_A: sales forecast error of A during the lead-time, E_A = A_L − F_{A,L}.

In the remaining text, as a rule, E(·) denotes the expected value and V(·) denotes the variance of a given random variable. Also, whenever a variable is constant over time, the index t is omitted.

4.1 Bottom-Up Approach

In this case, the results are derived for an individual item (A). From the above definitions, it follows that:

(a) aggregated sales of A during the lead-time:

A_L = Σ_{t=1}^{L} A_t,   (1)
(b) frozen sales forecast of A during the lead-time:

F_{A,L} = L·F_{A,t},   (2)

(c) sales forecast error of A during the lead-time:

E_A = A_L − F_{A,L}.   (3)

Substituting (1) and (2) into (3), we have:

E_A = Σ_{t=1}^{L} A_t − L·F_{A,t}.   (4)

Taking the expected values in equation (4), and supposing that L and F_{A,t} are not correlated, we have E(E_A) = E(A_L) − E(L)·E(F_{A,t}). But E(A) = E(F_A), as shown in Appendix 1, and E(A_L) = E(L)·E(A), so that E(E_A) = 0.

Taking the variance of E_A in equation (4), and supposing that L and A_t are not correlated, just as L and F_{A,t}, but that A_L and L·F_{A,t} present a correlation given by ρ_{A_L,L·F_A}, we have:

V(E_A) = V(A_L) + V(L·F_A) − 2·ρ_{A_L,L·F_A}·√[V(A_L)·V(L·F_A)].   (5)

But ρ_{A_L,L·F_A} is equal to:

ρ_{A_L,L·F_A} = [E(A_L·L·F_A) − E(A_L)·E(L·F_A)] / √[V(A_L)·V(L·F_A)],

that is:

ρ_{A_L,L·F_A} = E(A_t)²·V(L) / √[V(A_L)·V(L·F_A)].   (6)

Substituting (6) into (5), we have:

V(E_A) = V(A_L) + V(L·F_A) − 2·E(A_t)²·V(L).   (7)

But, as indicated in Appendix 2:

V(A_L) = E(L)·V(A) + E(A)²·V(L).   (8)
And according to what is indicated in Appendix 3:

V(L·F_A) = E(F_A)²·V(L) + E(L)²·V(F_A) + V(F_A)·V(L).   (9)

Thus, substituting (8) and (9) into (7) and observing that E(F_A) = E(A), we have:

V(E_A) = E(L)·V(A) + E(L)²·V(F_A) + V(F_A)·V(L).   (10)

Finally, as shown in Appendix 4:

V(F_A) = [α/(2 − α)]·V(A).   (11)

Substituting (11) into (10), we have:

V(E_A) = V(A)·{E(L) + [α/(2 − α)]·[E(L)² + V(L)]}.

That is:

V(E_A) = σ_A²·{μ_L + [α/(2 − α)]·(μ_L² + σ_L²)}.   (12)

This result is identical to that obtained by Eppen & Martin (1988) for the variance of the total forecast error during the lead-time, considering sales to be essentially a constant process and the use of simple exponential smoothing (see their equations (18) and (19)).

4.2 Top-Down Approach

In the BU approach, the variance of the forecasting error during the lead-time is given by equation (12). Assuming that f is nearly constant, it must be taken into consideration that the forecast of A_t in the TD approach depends upon the forecast of T_t, and is given by:

F_{A,t} = f·F_{T,t}.   (13)

As such, taking the variance of F_{A,t} in equation (13), we have:

V(F_A) = f²·V(F_T).   (14)

In a similar way to equation (11), and supposing the same smoothing constant:

V(F_T) = [α/(2 − α)]·V(T).   (15)

Substituting (15) into (14), we have:

V(F_A) = f²·[α/(2 − α)]·V(T).   (16)
Thus, expanding the term V(T) in (16), we have:

V(F_A) = f²·[α/(2 − α)]·[V(A) + V(B) + 2·ρ_AB·√(V(A)·V(B))].   (17)

Substituting (17) into (10), and also considering that k = σ_A/σ_B, we have:

V(E_A) = V(A)·{E(L) + f²·[α/(2 − α)]·(1 + 1/k² + 2·ρ_AB/k)·[E(L)² + V(L)]}.

That is:

V(E_A) = σ_A²·{μ_L + f²·[α/(2 − α)]·(1 + 1/k² + 2·ρ_AB/k)·(μ_L² + σ_L²)}.   (18)

As a validation procedure, spreadsheet Monte-Carlo simulations were performed to confirm the results of equations (6), (8), (9), (12) and (18). Long time series (more than 1,000 observations) were randomly generated, and the forecasts were summed up during the lead-time, which was also randomly generated. Specifically, A and B sales were generated from correlated bivariate normal distributions, and lead-times were supposed to be discrete and uniformly distributed. These distributions were chosen for the sake of simplicity, since the abovementioned results are non-parametric. Both forecasting approaches were applied when generating these data. The average error after 60 replications (randomly generated runs) was about 1% of the theoretical results given by these formulae.

It is worth emphasizing that equations (12) and (18) are the key results for the analysis that follows. Thus, equating (12) and (18) and then isolating k in order to express it in terms of f and ρ_AB (since the remaining terms cancel out during this operation), one gets the critical value of k (k_critical), which equates the variances of the forecasting error during the lead-time in the TD and BU approaches:

k_critical = f / [−f·ρ_AB + √(f²·(ρ_AB² − 1) + 1)].   (19)

Finally, a closer look into equations (12) and (18) shows that if k > k_critical, the variance of the forecasting error during the lead-time is smaller under the TD approach.

4.3 Discussion of Results

According to the results, the smaller the values of ρ_AB and E(f), the greater the chances that the TD approach will minimize the forecasting error variance, and consequently, the safety inventory levels.
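Equations (12), (18) and (19) are cheap to evaluate numerically. The sketch below (function names are ours) computes both error variances and the critical ratio; at k = k_critical the two variances coincide, and for k above it the TD approach wins:

```python
import math

def bu_error_variance(sigma_a, alpha, mu_l, sigma_l):
    """Equation (12): V(E_A) = sigma_A^2 * (mu_L + alpha/(2 - alpha) * (mu_L^2 + sigma_L^2))."""
    return sigma_a**2 * (mu_l + alpha / (2 - alpha) * (mu_l**2 + sigma_l**2))

def td_error_variance(sigma_a, alpha, mu_l, sigma_l, f, rho, k):
    """Equation (18); the bracket is V(T)/sigma_A^2 expressed through k = sigma_A/sigma_B."""
    v_t_ratio = 1 + 1 / k**2 + 2 * rho / k
    return sigma_a**2 * (mu_l + f**2 * alpha / (2 - alpha) * v_t_ratio * (mu_l**2 + sigma_l**2))

def k_critical(f, rho):
    """Equation (19): the k at which the TD and BU error variances are equal."""
    return f / (-f * rho + math.sqrt(f**2 * (rho**2 - 1) + 1))

# Example with f = 0.30 and rho = -0.40 (unit deterministic lead-time):
print(round(bu_error_variance(10, 0.10, 1, 0), 2))                    # 105.26
print(round(td_error_variance(10, 0.10, 1, 0, 0.30, -0.40, 0.5), 2))  # 101.61
print(round(k_critical(0.30, -0.40), 2))                              # 0.28
```

Since k = 0.5 exceeds k_critical ≈ 0.28 in this example, the TD approach yields the lower lead-time forecast-error variance.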
The Portfolio Effect explains part of the result: an individual item negatively correlated with the aggregate sales of the remaining items presents a lower forecasting error variance under the TD approach, due to the compensation of part of its variance with the aggregate variance of the remaining items.
Even for higher values of ρ_AB and E(f), the TD approach may present better results than the BU if the value of k is sufficiently high, that is, if σ_A is sufficiently higher than σ_B. That result is fully explained by the Anchoring Effect (given that the Portfolio Effect is not present): a less than proportional portion of the total variance is added to the variance of product A. In Figure 1, the Portfolio Effect and the Anchoring Effect are illustrated with numerical examples for the particular case where μ_L = 1 and σ_L = 0 (equations (12) and (18)).

Portfolio and Anchoring Effects combined: let σ_A = 10, σ_B = 20, E(f) = 0.30, ρ_AB = −0.40, and α = 0.10. The total variance (340) is smaller than the sum of the variances of A and B (500) due to the Portfolio Effect generated by the negative correlation. BU variance: 2·σ_A²/(2 − α) = 200/(2 − 0.10) = 105.26. TD variance: σ_A² + [α·E(f)²/(2 − α)]·σ_T² = 100 + [0.10 × 0.09/(2 − 0.10)] × 340 = 101.61. The Anchoring Effect supplements the Portfolio Effect, incorporating into the variance of A a less than proportional portion of the total variance, α·E(f)²/(2 − α). Here k_critical = 0.28; that is, the TD approach is chosen because k = 0.50 > k_critical.

Anchoring Effect only: let σ_A = 10, σ_B = 2, E(f) = 0.80, ρ_AB = +0.40, and α = 0.10. The total variance (120) is greater than the sum of the variances of A and B (104) due to their positive correlation: there is no Portfolio Effect. BU variance: 2·σ_A²/(2 − α) = 200/(2 − 0.10) = 105.26. TD variance: σ_A² + [α·E(f)²/(2 − α)]·σ_T² = 100 + [0.10 × 0.64/(2 − 0.10)] × 120 = 104.04. A less than proportional portion of the total variance is incorporated into the variance of A, α·E(f)²/(2 − α), due to the Anchoring Effect. Here k_critical = 2.22, and the TD approach is chosen because k = 5 > k_critical.

Figure 1 – Portfolio and Anchoring Effects.

In Figure 2, the results of equation (19) are generalized, and the indifference lines between the TD and BU approaches for different values of E(f), k and ρ_AB are presented. Essentially, if k > k_critical, the TD approach must be chosen; otherwise, the BU approach.
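The validation described in section 4 can be replicated in a few lines. The sketch below is our own code, not the authors' original spreadsheet: it simulates the first example of Figure 1 with a deterministic unit lead-time (μ_L = 1, σ_L = 0), so the estimated BU error variance should approach 105.26 and the TD one 101.61. The means μ_A = 30 and μ_B = 70 are illustrative choices that reproduce f = 0.30.

```python
import math
import random
import statistics

def simulate_error_variance(approach, n=200_000, mu_a=30.0, mu_b=70.0,
                            sd_a=10.0, sd_b=20.0, rho=-0.40, alpha=0.10, seed=1):
    """Monte-Carlo estimate of the forecast-error variance under the
    BU (item-level SES) or TD (share of the total's SES) approach."""
    rng = random.Random(seed)
    f = mu_a / (mu_a + mu_b)            # historical share, held constant
    fc_a, fc_t = mu_a, mu_a + mu_b      # SES levels, seeded at the true means
    errors = []
    for _ in range(n):
        # correlated bivariate normal draws for A and B
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        a = mu_a + sd_a * z1
        b = mu_b + sd_b * (rho * z1 + math.sqrt(1 - rho**2) * z2)
        forecast = fc_a if approach == "BU" else f * fc_t
        errors.append(a - forecast)     # frozen forecast over a unit lead-time
        fc_a = alpha * a + (1 - alpha) * fc_a
        fc_t = alpha * (a + b) + (1 - alpha) * fc_t
    return statistics.variance(errors)

print(round(simulate_error_variance("BU"), 1))  # close to 105.3
print(round(simulate_error_variance("TD"), 1))  # close to 101.6
```

Because both runs share the same random seed, the comparison is paired, so the ordering TD < BU emerges clearly even with moderate sample sizes.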
In other words, given a pair (E(f), ρ_AB), if the actual value of k is greater than the value of k_critical associated with that pair, the TD approach must be chosen over the BU approach under Simple Exponential Smoothing.
Figure 2 – Indifference lines (k_critical as a function of ρ_AB, for several values of E(f)) and recommended choice between the TD (k > k_critical) and BU (k < k_critical) approaches.

One notices from Figure 2 that, the smaller the values of ρ_AB and E(f), the greater the chances (area of the graph above the indifference lines) that the TD approach minimizes the forecasting error variance with SES. That result is explained by the joint presence of the Portfolio and Anchoring Effects. However, even for higher values of E(f) and ρ_AB, the TD approach may present a lower forecasting error variance if the value of k is sufficiently high, due to the Anchoring Effect term α·E(f)²/(2 − α).

5. Conclusions and Implications

In this paper, the impacts of the Top-Down and Bottom-Up approaches upon the variance of the forecasting errors during the lead-time, and consequently on the safety inventory levels, were analytically derived. In both approaches, greater values of the smoothing constant and of the lead-time (expected value and variance) increase the error variance by the same amount. The Portfolio and Anchoring Effects explain why the Top-Down approach may minimize the variance of the forecasting errors during the lead-time. Under the Portfolio Effect, present when the correlation is negative, the total variance is lower due to the offsetting of the individual item's variance against the variance of the remaining items. Under the Anchoring Effect, which occurs even when the correlation is positive, the term
α·E(f)²/(2 − α) implies the incorporation of a less than proportional portion of the total variance into the variance of the individual item, given that E(f) and α are lower than 1. When the analytical expressions for the variance of the forecasting errors during the lead-time under the Top-Down and Bottom-Up approaches are set equal, the terms involving α, μ_L and σ_L cancel out. The choice of the lower-variance approach depends upon the comparison of the value of k with the k_critical given by equation (19). That is, the lead-time and the smoothing constant do not influence the determination of the more adequate forecasting approach.

Practitioners may also benefit from the results, since the flexibility often sought in the sales forecasting and inventory management process is warranted. First, the results presented enable one to determine the approach that leads to the lower variance of the forecasting errors during the lead-time with relatively small computational effort. Second, the presented results may be used to segment the sales forecasting process for the purpose of determining safety inventory levels, as shown in Table 2. For example, C items typically present a lower participation in total sales and a greater coefficient of variation (Zinn & Croxton, 2005). Given that the E(f) value is small, the Anchoring Effect is as relevant as the Portfolio Effect even when the correlation is positive, as indicated by the practically straight indifference lines for small E(f) values in Figure 2. In those circumstances it is very likely that the value of k is greater than the value of k_critical, and the TD approach would lead to a lower variance of the forecasting errors during the lead-time. On the other hand, since A items typically present a greater participation in total sales and a lower coefficient of variation, they should be individually forecasted (BU approach), mainly if they are positively correlated with the aggregate sales of the remaining items.
When the correlation is negative, the TD approach may be more adequate if the value of k is sufficiently high (see Figure 2).

Table 2 – Forecasting approach per product class.

- Negative ρ_AB: Portfolio and Anchoring Effects combined; positive ρ_AB: Anchoring Effect only.
- A items (high f, low k): Bottom-Up (if k < k_critical)*.
- C items (low f, high k): Top-Down (if k > k_critical).

* When ρ_AB tends to −1, k_critical tends to be smaller than k, therefore favouring the Top-Down approach.
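The segmentation in Table 2 can be operationalised directly from equation (19). A sketch (the function name is ours; the rule is the paper's k versus k_critical comparison, and the item profiles are illustrative):

```python
import math

def recommend_approach(f, rho, k):
    """Choose TD when k exceeds the critical ratio of equation (19), BU otherwise."""
    k_crit = f / (-f * rho + math.sqrt(f**2 * (rho**2 - 1) + 1))
    return "Top-Down" if k > k_crit else "Bottom-Up"

# A typical C item: small share of total sales, high relative variability.
print(recommend_approach(f=0.05, rho=0.20, k=1.5))  # Top-Down
# A typical A item: large share, low relative variability, positive correlation.
print(recommend_approach(f=0.80, rho=0.40, k=0.5))  # Bottom-Up
```

In practice f, ρ_AB and k would be estimated from the item's sales history before applying the rule.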
Finally, suggestions for future research involve the behaviour of the forecasting error variance under these two approaches when the standard deviation of f is not assumed to be zero. Some pertinent issues are therefore raised. For example, would this assumption favour a given forecasting approach? Specifically, under which conditions of α, ρ_AB and k is it a good approximation to assume f constant? What are the biases incurred in that approximation? What are the impacts of the covariance between T and f on both forecasting approaches? What are the implications for sales forecasting and for the determination of safety inventory levels?

References

(1) Blackburn, K.; Orduna, F. & Sola, M. (1995). Exponential smoothing and spurious correlation: a note. Applied Economics Letters, 2.
(2) Brown, R.G. (1982). Advanced Service Parts Inventory Control. Materials Management Systems Inc., Vermont.
(3) Chatfield, C.; Koehler, A.B. & Snyder, R.D. (2001). A new look at models for exponential smoothing. The Statistician, 50.
(4) Eppen, G. & Martin, K. (1988). Determining safety stock in the presence of stochastic lead time and demand. Management Science, 34.
(5) Gelly, P. (1999). Managing bottom-up and top-down approaches: Ocean Spray's experiences. The Journal of Business Forecasting, 18, 3-6.
(6) Gijbels, I.; Pope, A. & Wand, M.P. (1999). Understanding exponential smoothing via kernel regression. Journal of the Royal Statistical Society, Series B, 61.
(7) Gordon, T.P.; Morris, J.S. & Dangerfield, B.J. (1997). Top-down or bottom-up: which is the best approach to forecasting? The Journal of Business Forecasting, 16.
(8) Greene, J.H. (1997). Production and Inventory Control Handbook. McGraw-Hill, New York.
(9) Harrison, P.J. (1967). Exponential smoothing and short-term sales forecasting. Management Science, 13.
(10) Jain, C.L. (1995). How to determine the approach to forecasting. The Journal of Business Forecasting, 14, 2-3.
(11) Johnston, F.R. & Harrison, P.J. (1986). The variance of lead-time demand.
Journal of the Operational Research Society, 31.
(12) Kahn, K.B. (1998). Revisiting top-down versus bottom-up forecasting. The Journal of Business Forecasting, 17.
(13) Lapide, L. (1998). A simple view of top-down versus bottom-up forecasting. The Journal of Business Forecasting, 17, 8-9.
(14) Mentzer, J.T. & Krishnan, R. (1988). The effect of the assumption of normality on inventory control/customer service. Journal of Business Logistics, 6.
(15) Schwarzkopf, A.B.; Tersine, R.J. & Morris, J.S. (1988). Top-down versus bottom-up forecasting strategies. International Journal of Production Research, 26.
(16) Silver, E.A. & Peterson, R. (1985). Decision Systems for Inventory Management and Production Planning. Wiley & Sons, New York.
(17) Silver, E.A.; Pyke, D. & Peterson, R. (2002). Decision Systems for Inventory Management and Production Planning and Scheduling. Wiley & Sons, New York.
(18) Snyder, R.; Koehler, A.; Hyndman, R. & Ord, J. (2004). Exponential smoothing models: means and variances for lead-time demand. European Journal of Operational Research, 158.
(19) Zinn, W. & Croxton, K.L. (2005). Inventory considerations in network design. Journal of Business Logistics, 26.
(20) Zinn, W.; Levy, M. & Bowersox, D.J. (1989). Measuring the effect of inventory centralization/decentralization on aggregate safety stock: the square root law revisited. Journal of Business Logistics, 10.
(21) Zwillinger, D. & Kokoska, S. (2000). Standard Probability and Statistics Tables and Formulae. Chapman & Hall, New York.

Appendix 1 – Establishing E(F_A)

Let the forecast of A with the SES be given by (A1):

F_{A,t} = α·A_{t−1} + (1 − α)·F_{A,t−1}.   (A1)

Taking the expected values, assumed to be constant over time, we have:

E(F_A) = α·E(A) + (1 − α)·E(F_A), hence E(F_A) = E(A).

Appendix 2 – Establishing V(A_L)

The variance of the random sum of product sales during the discrete lead-time was determined with the factorial moment generating function (Zwillinger & Kokoska, 2000), thus corroborating the results previously presented by Mentzer & Krishnan (1988). Let Φ_{A_L}(t) represent the factorial moment generating function of the random sum of A during the lead-time. Its variance is obtained by evaluating the following expression at t = 1:

V(A_L) = Φ''_{A_L}(1) + Φ'_{A_L}(1) − [Φ'_{A_L}(1)]²,   (B1)

where Φ'_{A_L}(t) and Φ''_{A_L}(t) are the first and second derivatives of Φ_{A_L}(t), respectively.
Let Φ_A(t) and P_T(t) be the factorial moment generating functions of the random variables A (continuous) and T (discrete), respectively, and let t be a supporting variable:
Φ_A(t) = ∫ t^A f(A) dA,   (B 2)

P_T(t) = Σ_T t^T p(T).   (B 3)

Φ(t) is obtained by substituting (B 2) into (B 3):

Φ(t) = Σ_T ( ∫ t^A f(A) dA )^T p(T).   (B 4)

Finally, differentiating (B 4) with respect to t and substituting the results into (B 1), we have:

V(Σ_{t=1..T} A_t) = E(T)·V(A) + E(A)²·V(T).

Appendix 3: Establishing V(T·F_A)

The variance of the product of the random variables F_A and T yields a result identical to the one presented by Brown (1982). The result

V(T·F_A) = E(F_A)²·V(T) + E(T)²·V(F_A) + V(F_A)·V(T)

is equivalent to calculating the variance of the frozen sales forecast during the lead-time from equation (C 1):

V(T·F_A) = E(T²·F_A²) - E(T·F_A)².   (C 1)

Appendix 4: Establishing V(F_A)

Let the forecast of A with the SES be given by (D 1):

F_{A,t} = α·A_{t-1} + (1-α)·F_{A,t-1}.   (D 1)

Let (D 2) be the expansion of the recursive terms in (D 1):

F_{A,t} = α·A_{t-1} + α(1-α)·A_{t-2} + α(1-α)²·A_{t-3} + α(1-α)³·A_{t-4} + ...   (D 2)

Taking the variances of (D 2), we have:

V(F_A) = α²·{1 + (1-α)² + (1-α)⁴ + (1-α)⁶ + ...}·V(A).   (D 3)

Finally, solving the sum to infinity of the geometric progression with ratio (1-α)² in (D 3), we have:

V(F_A) = α·V(A) / (2-α).
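The closed-form results of the appendices can be checked numerically. The sketch below is a Monte Carlo verification, not part of the original paper; the demand and lead-time distributions (A normally distributed, T uniform on a small set of integers) are illustrative assumptions chosen only so that E(A), V(A), E(T) and V(T) are known exactly.

```python
import random

random.seed(7)

# Illustrative assumptions: demand A ~ Normal(50, 10) per period, lead time T
# uniform on {1,...,5}, so E(A)=50, V(A)=100, E(T)=3, V(T)=(5**2-1)/12=2.
MU_A, SD_A = 50.0, 10.0
E_T, V_T = 3.0, 2.0

def variance(xs):
    """Sample variance with the n-1 correction."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# --- Appendix 2 result: V(sum of A over T periods) = E(T)V(A) + E(A)^2 V(T) ---
lead_time_sums = []
for _ in range(200_000):
    T = random.randint(1, 5)
    lead_time_sums.append(sum(random.gauss(MU_A, SD_A) for _ in range(T)))
v_sum = variance(lead_time_sums)
v_sum_analytical = E_T * SD_A**2 + MU_A**2 * V_T  # 3*100 + 2500*2 = 5300

# --- Appendix 1 and 4 results: E(F_A) = E(A), V(F_A) = alpha*V(A)/(2-alpha) ---
alpha = 0.3
f = MU_A
forecasts = []
for _ in range(300_000):
    # SES recursion F_t = alpha*A_{t-1} + (1-alpha)*F_{t-1} on i.i.d. demand
    f = alpha * random.gauss(MU_A, SD_A) + (1 - alpha) * f
    forecasts.append(f)
forecasts = forecasts[1_000:]          # drop warm-up so the series is in steady state
m_f = sum(forecasts) / len(forecasts)  # should approach E(A) = 50
v_f = variance(forecasts)
v_f_analytical = alpha / (2 - alpha) * SD_A**2  # 0.3/1.7 * 100, about 17.65
```

With these sample sizes both simulated variances land within a few percent of the analytical values, which is a quick sanity check on the reconstructed formulas rather than a proof.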