Forecasting in supply chains

Role of demand forecasting

Effective transportation system or supply chain design is predicated on the availability of accurate inputs to the modeling process. One of the most important inputs is the set of demands placed on the system. Forecasting techniques are used to predict, in the face of uncertainty, what the demands on the system will be in the future so that appropriate designs and operating plans can be devised.

What else might we forecast?

Demands are not the only uncertain parameters that may require forecasting when building decision support models for supply chain and transportation systems. Operational parameters on the supply side may be just as important to forecast. For example, suppose one would like to build a model for routing trucks between various customer locations, where the objective is to determine minimum travel time tours. Clearly, travel time parameters between customer locations and depots are required, and forecasting techniques can be used to estimate them.

Ideas to remember

- Forecasts are usually wrong. (Do not treat them as known information!)
- Forecasts must include analysis of their potential errors.
- The longer the forecast horizon, the less accurate the forecast will generally be.
- Benefits can often be obtained from transforming a forecasted quantity into a known quantity (when this can be done, however, it comes at a cost).
- Systems that are agile in responding to change are less dependent on forecast accuracy.

Forecasting Methodology Tree

Reference: Armstrong, J.S., Long-range Forecasting, Second Edition.

Forecasting time series data

Frequently, problems in forecasting for logistics systems require the analysis of univariate time series data; often we are interested in the evolution of customer demand for a single product over time, and in what the future demand will be for that product. If demand for that product is independent of demand for other products, univariate techniques can be utilized.
(Recall: univariate means of a single variable, like demand.) Suppose that the observed demand values are given by X_t, t = 1, 2, .... Our goal is to develop a forecasted demand value Y_t for a number of future periods of interest.
Figure 1: Forecasting methodology tree.

Historical Data

Assume that we have collected a number of historical observations X_t of a value we wish to forecast, for t = 1, 2, ..., n periods up to and including the present. Further, suppose that these observations have been made at uniform intervals; for example, once a day, once a week, or once a month.

Forecast Notation

Suppose that we are currently in some period τ - 1 and we wish to develop a forecast for the next period. We will use the notation Y_τ to represent that forecast. Additionally, we may sometimes have to generate multiple-period-ahead forecasts. In this case, we will use the notation Y_T(τ - 1) to represent the forecast for period T determined in period τ - 1.

Evaluating quality

Any forecasting method can and should be evaluated by analyzing the errors produced by the forecasts; this is easily done once observations are made. Suppose that e_t are the errors over a set of n time periods:

    e_t = Y_t - X_t,   t = 1, 2, ..., n

Errors should be compared against any assumptions made in the modeling process; for example, mean zero is common. In regression models, we might assume that the errors are normal; this should be verified.
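The summary error measures defined in the next few paragraphs (MSE, MAD, and MAPE) are straightforward to compute once forecasts and observations are in hand. A minimal sketch in Python (function names are my own; the sample numbers are the 3-period moving-average forecasts and observed demands from the worked example later in these notes):

```python
# Error metrics for a set of forecasts Y_t against observations X_t,
# with e_t = Y_t - X_t as defined above.
def forecast_errors(forecasts, actuals):
    return [y - x for y, x in zip(forecasts, actuals)]

def mse(errors):
    # mean squared error; units are (something)^2
    return sum(e * e for e in errors) / len(errors)

def mad(errors):
    # mean absolute deviation; same units as the data
    return sum(abs(e) for e in errors) / len(errors)

def mape(errors, actuals):
    # mean absolute percentage error; unitless
    return 100.0 * sum(abs(e / x) for e, x in zip(errors, actuals)) / len(errors)

Y = [208, 204, 195, 232, 272]    # forecasts (3-period MA, worked example below)
X = [186, 225, 285, 305, 190]    # observed demands
e = forecast_errors(Y, X)        # [22, -21, -90, -73, 82]
```

The square root of mse(e) then serves as the standard error of the forecast.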
Definition 1 (Unbiased Errors). A set of errors is said to be unbiased if and only if E[e_t] = 0.

The errors can also give a measure of the quality of the forecast, though this may be confounded with the uncertainty in the quantity to be forecast. One way to do this is to calculate the mean squared error (MSE):

    MSE = (1/n) Σ_{i=1}^{n} e_i^2

Note of course that this value has units (something^2), and thus the performance of a method for forecasting demand in barrels of oil is not directly comparable to forecasting number of microprocessors. We will refer to the quantity √MSE as the standard error of the forecast. Another common measure of quality is the mean absolute deviation (MAD), which is useful since its units are (something) rather than (something^2):

    MAD = (1/n) Σ_{i=1}^{n} |e_i|

Also useful is a relative performance measure for a set of forecasts that is unitless. One such measure is the mean absolute percentage error (MAPE), which yields the average relative error of a set of forecasts:

    MAPE = [ (1/n) Σ_{i=1}^{n} |e_i / X_i| ] × 100

Error Normality Assumption and its Implications

In most situations, we will assume that the errors from a set of forecasts are independent and identically distributed according to a normal random variable with mean 0 and variance σ_e^2 = MSE: e_t ~ N(0, σ_e^2). This should make sense intuitively. The idea behind forecasting is to create a statistical model of the data to be forecast, and we intend that the errors of that statistical model exhibit reasonable behavior. Errors with mean zero indicate that our model is unbiased, and constant-variance errors without any correlation over time indicate that we have captured most of the signal of the data in our model.

Extrapolation forecasting for time series data

One of the simplest and most prevalent means for forecasting time series data is through the use of extrapolation-based techniques. Assume for now that for our time series data, we have collected a number of historical observations X_t, for t = 1, 2, ..., n periods up to
and including the present. Further, suppose that these observations have been made at uniform intervals; for example, once a day, once a week, or once a month.

Constant-pattern (stationary) time-series data

In some cases, you will be forecasting quantities that you expect to remain relatively constant over time. Since these quantities are still likely to fluctuate somewhat around a mean, and since that mean might move from one level to another over time, forecasting techniques have been developed for constant-pattern data. An appropriate statistical model for the observed quantity at time t might be:

    X_t = µ + ɛ_t

where µ is some unknown mean, and the ɛ_t are i.i.d. random variables with mean zero and variance σ^2. When it is reasonable to model data in this way, but we expect that µ might be varying slowly over time, it is appropriate to use moving-average or exponential smoothing methods.

m-period Moving Average

m-period moving averages are the simplest type of extrapolation forecast, and are useful for constant-pattern data. An m-period moving average is simply the arithmetic mean of the m most recent observations. The forecast made in period t - 1 for period t is thus:

    Y_t = (1/m) Σ_{i=t-m}^{t-1} X_i

In moving average models, an implicit assumption is that the data have some constant mean. Thus, the multiple-step-ahead forecast Y_T(t - 1) is also equal to the single-step-ahead forecast: Y_T(t - 1) = Y_t for all T > t.

Pros:
- Simple to calculate and implement
- Easy update procedure: Y_{t+1} = Y_t + (1/m)(X_t - X_{t-m})
- Outlier observations are eliminated from consideration after m periods

Cons:
- If the data contains a trend, this method will always lag behind it.

Simple exponential smoothing
Another prevalent and popular forecasting method is exponential smoothing, in which the most recent forecast is combined with the most recent observation to generate a new forecast for a stationary series:

    Y_t = αX_{t-1} + (1 - α)Y_{t-1}
    Y_t = Y_{t-1} + α(X_{t-1} - Y_{t-1}) = Y_{t-1} - αe_{t-1}

where α ∈ [0, 1] is a smoothing constant. The idea behind exponential smoothing is that your previous forecast was probably pretty good, but now you have one new piece of observed data that you can use to update your forecast. The smoothing constant will control this update. Large values of α will place more weight on the most recent observation (a reactive step), and less weight on the previous forecast. Of course, if α = 1, your new forecast is simply the observed value of demand in the previous period!

Exponential smoothing methods with α < 1 essentially use all previous periods of observed demand in generating the forecast. Since the specification equation in this case defines a recursion, we can expand it:

    Y_t = αX_{t-1} + (1 - α)αX_{t-2} + (1 - α)^2 Y_{t-2}
    Y_t = Σ_{i=0}^{∞} α(1 - α)^i X_{t-i-1}

Hence, each previous demand observation is included to produce the newest forecast, but the weights given to these observations decrease exponentially (hence the name):

    α > α(1 - α) > α(1 - α)^2 > ...

Pros:
- Simple to calculate and implement
- Easy update procedure
- Essentially infinite degree of control over data age using α

Cons:
- If the data contains a trend, this method will always lag behind it.
- Outliers will always have some impact

Exponential smoothing methods will still lag a trend, but less so, since the average age of the data used in the forecast can be made younger with an appropriate choice of α.
An exponential smoothing example

Example: Weekly orders at the Atlanta Gateway CountryStore for a computer model: 200, 250, 175, 186, 225, 285, 305, 190. The 3-period moving averages in periods 4, 5, 6, 7, 8 are: 208, 204, 195, 232, 272. Errors: 22, -21, -90, -73, 82.

Now, let's try ES with α = 0.3. Suppose that the initial forecast is right on the money, Y_1 = 200. The forecast for period 2 is then (0.3)(200) + (0.7)(200) = 200. Period 3 is (0.3)(250) + (0.7)(200) = 215. Period 4: (0.3)(175) + (0.7)(215) = 203. Periods 5, 6, 7, 8 are 198, 206, 230, 252.

Age of Data Comparison Between MA and ES

One way to compare MA and ES is to consider the average age of the data included in the forecast. With the data from period t - 1 being 1 period old, from t - 2 two periods old, and so on, the average data age for an m-period moving average is:

    (1/m) Σ_{i=1}^{m} i = m(m + 1)/(2m) = (m + 1)/2

The average age of the data in exponential smoothing models can be calculated by assuming we have an infinite history of historical observations. Note from the recursive formula developed earlier that the weight given to data that is i periods old is α(1 - α)^{i-1}. Therefore, we can calculate the average age as follows:

    Σ_{i=1}^{∞} i α(1 - α)^{i-1} = 1/α

Thus, exponential smoothing and moving average methods use data with the same average age when:

    (m + 1)/2 = 1/α

Suppose we would like to tune our smoothing constant so that an exponential smoothing method uses data with the same average age as an m-period moving average. We would simply let α = 2/(m + 1) to accomplish this. For example, ES(0.4) is equivalent, from a data-age point of view, to a 4-period moving average, MA(4).
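The worked example above can be reproduced with a short script. A sketch (function and variable names are my own):

```python
def moving_average(X, m):
    """m-period MA forecasts for periods m+1 .. n (1-indexed)."""
    return [sum(X[t - m:t]) / m for t in range(m, len(X))]

def exponential_smoothing(X, alpha, Y1):
    """Simple ES: Y_t = alpha*X_{t-1} + (1-alpha)*Y_{t-1}.
    Returns the forecasts Y_1 .. Y_n for the observed periods."""
    Y = [Y1]
    for x in X[:-1]:
        Y.append(alpha * x + (1 - alpha) * Y[-1])
    return Y

X = [200, 250, 175, 186, 225, 285, 305, 190]        # weekly orders
ma = [round(y) for y in moving_average(X, 3)]        # forecasts, periods 4..8
es = [round(y) for y in exponential_smoothing(X, 0.3, 200)]
print(ma)   # [208, 204, 195, 232, 272]
print(es)   # [200, 200, 215, 203, 198, 206, 230, 252]
```

Rounding to whole units reproduces the figures quoted in the example.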
Forecasting Data with a Trend: Double Exponential Smoothing (Holt's Method)

As described earlier, simple exponential smoothing and moving averages are suitable for forecasting data that varies around a constant mean. If the mean changes every once in a while, these methods will also do a good job of forecasting around the new mean once they catch up to it. However, if the demand data to be forecast includes a positive or negative growth trend, MA and ES methods will tend to produce forecasts that always lag behind the trend, giving poor forecast errors. Fortunately, exponential smoothing methods have been developed to deal specifically with trended data.

An extension of exponential smoothing, known as Holt's method, is designed for data with trends. In Holt's method, we will calculate two values over time, and use them to create our forecast. The first, R_t, is an estimate of the expected level of the series in period t. The second, G_t, is an estimate of the trend (or slope) in period t. Given these two values, the standard next-period forecast is:

    Y_t = R_{t-1} + G_{t-1}

The idea here is to add our best estimate of the slope to our best estimate of the level (at period t - 1) to create the forecast. The slope estimate represents the change in demand we expect from one period to the next; if we imagine that our trend remains constant for the next τ time periods, the multiple-step-ahead forecast for period t + τ is:

    Y_{t+τ}(t - 1) = R_{t-1} + (1 + τ)G_{t-1}

Let's now consider how to update our estimates R_t and G_t each period after the data X_t becomes known. We will now employ two smoothing constants, α ∈ [0, 1] for the level and β ∈ [0, 1] for the slope.
After observing the demand X_t, we first update the estimate of the level by combining X_t with the previous period's forecast of demand:

    R_t = αX_t + (1 - α)(R_{t-1} + G_{t-1}) = αX_t + (1 - α)Y_t

Next, we re-estimate the slope by combining a new slope estimate, given by the difference between the current level and the previous level, with our previous slope estimate:

    G_t = β(R_t - R_{t-1}) + (1 - β)G_{t-1}

Note in this case that we do not use (X_t - X_{t-1}) as the new slope estimate, since the smoothed value (R_t - R_{t-1}) is likely a better estimate of the true slope. (Based on what you know about random fluctuations, you should think about this claim.)

To use Holt's method, it is necessary to choose values for the two parameters α and β, and to choose initial values for R_1 and G_1 (assuming that the first period for which data is available is period 1). In contrast, to initialize simple exponential smoothing, it is necessary only to choose values for α and Y_1. A reasonable choice for R_1 might be X_1,
and a reasonable choice for G_1 might be (X_N - X_1)/N, where N is the latest period for which we have already observed data, or perhaps another earlier period selected so that the trend over [1, N] is representative and constant.

Equivalence of Holt's Method to Simple Exponential Smoothing when β = 0 and G_1 = 0

Holt's method reduces to simple exponential smoothing when the parameter β is set to zero and the initial slope G_1 = 0. This should be clear; in each period, G_t will now be zero. Therefore, Y_t = R_{t-1} = αX_{t-1} + (1 - α)Y_{t-1}, which is the simple exponential smoothing model.

Simple linear regression methods

When we suspect a trend in a time series, we might postulate that the time series data fits a linear model, as follows:

    X_t = a + bt + ɛ_t

where a and b are unknown constants, and the ɛ_t are error random variables that we assume are i.i.d. normal random variables with mean 0 and variance σ^2. Given, therefore, some estimates â and b̂, we can use this type of model for forecasting:

    Y_t = â + b̂t

with the multiple-step-ahead forecast given by:

    Y_{t+τ}(t - 1) = â + b̂(t + τ)

Given a time series X_i, i = 1, 2, ..., t - 1, we can create a prediction for the next period t by determining reasonable estimates â and b̂. The most logical way to do so is to use the set of the m most recent observations of X_i, going as far back as you believe the current trend holds. It can be shown that the least-squares method can be employed with this data to develop good, unbiased estimates of a and b. In this method, we attempt to minimize the sum of the squared errors in our data. Let X̂_i be the predicted value for observation i, X̂_i = â + b̂i. The goal then is to minimize:

    g(â, b̂) = Σ_{i=t-m}^{t-1} (X_i - X̂_i)^2 = Σ_{i=t-m}^{t-1} [X_i - (â + b̂i)]^2

It turns out this is easy to do by simply setting the partial derivatives of this function with respect to â and b̂ equal to zero and solving simultaneously. The details of this process are quite simple, and will be skipped in this section.
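The skipped derivation leads to the familiar closed-form least-squares estimates; a sketch of the resulting computation (the function name and the noise-free sample series are my own illustration):

```python
def fit_trend_line(X, start=1):
    """Least-squares estimates (a_hat, b_hat) for the model X_t = a + b*t,
    where X[0] is the observation for period `start`."""
    n = len(X)
    ts = list(range(start, start + n))
    t_bar = sum(ts) / n                 # mean of the period indices
    x_bar = sum(X) / n                  # mean of the observations
    b_hat = (sum((t - t_bar) * (x - x_bar) for t, x in zip(ts, X))
             / sum((t - t_bar) ** 2 for t in ts))
    a_hat = x_bar - b_hat * t_bar
    return a_hat, b_hat

# a noise-free series X_t = 12 + 3t is recovered exactly
a_hat, b_hat = fit_trend_line([15, 18, 21, 24, 27])
y_next = a_hat + b_hat * 6      # one-step-ahead forecast for period 6
```

On real data, restricting X to the m most recent observations implements the "as far back as the trend holds" advice above.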
Forecasting with Seasonality
Often, time series data display behavior that is seasonal. Seasonality is defined to be the tendency of time-series data to exhibit behavior that repeats itself every q periods. We use the term season to represent the period of time before behavior begins to repeat itself; q is therefore the season length in periods.

For example, air travel is known to exhibit several types of seasonal behavior. If we examine passenger travel demand week after week, we may see that certain days of the week always exhibit relatively higher demands than others; for example, travel on Fridays and Sundays might be highest, followed by Mondays, followed by Thursdays, and finally Saturdays, Tuesdays, and Wednesdays. In this case, the season length is 7 periods (days). As another example, suppose we look at monthly demand for PCs. This may also exhibit a seasonal pattern in which the months of August (back-to-school), October, November, and December (winter holidays) exhibit strong demand, while January and February exhibit especially weak demand. In this example, the season length is 12 periods (months).

Forecasting Seasonal Data with Constant Multiplicative Factors

In many cases, it may make sense to model seasonal behavior with a multiplicative factor. For example, perhaps a statement like "air travel demand is 20% greater than average on Fridays" is reasonable. Note that implicit in this statement is that the seasonal effect on demand is relative; if overall demand grows (due to a positive trend), then the multiplicative seasonal factor would predict 20% more demand than the new average. If we do not believe that seasonal factors change over time, one simple method of dealing with seasonality is to forecast using a deseasonalized time series, X̄_i, i = 1, 2, ..., t - 1, defined as follows:

    X̄_i = X_i / S_j,   j = i mod q

where X_i is the original time series, and S_j is the seasonal factor associated with the j-th period in each season.
Note that some method must be developed to determine the set of seasonal factors S_j, j = 1, ..., q. Good practice also dictates that these factors have an average value of 1; therefore:

    Σ_{j=1}^{q} S_j = q

To estimate a set of factors using multiple (at least 2) seasons of historical data, you should first calculate a deseasonalized level X̄_t for each period. If the data contains no trend, the deseasonalized level might just be the average value over all periods. If the data contains a trend, you might fit a simple linear regression model to the data, and use it to generate predicted values for each period; the predicted value for period t would be a good estimate for X̄_t. Each factor S_j can then be estimated by averaging the ratios of observation to level over like periods of each season for which data is available:

    S_j = average( X_j / X̄_j, X_{j+q} / X̄_{j+q}, X_{j+2q} / X̄_{j+2q}, ... )
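A sketch of this estimation for the no-trend case (the function name and sample data are hypothetical; the data is assumed to contain only whole seasons):

```python
def seasonal_factors(X, q):
    """Estimate constant multiplicative seasonal factors from whole
    seasons of data (len(X) must be a multiple of q), assuming no trend
    so the deseasonalized level is just the overall mean."""
    n_seasons = len(X) // q
    level = sum(X) / len(X)
    ratios = [x / level for x in X]              # observation / level
    # average like periods across the seasons
    S = [sum(ratios[j::q]) / n_seasons for j in range(q)]
    # renormalize so the q factors sum to q (average value 1)
    scale = q / sum(S)
    return [s * scale for s in S]

demand = [80, 120, 90, 110, 80, 120, 90, 110]    # two seasons, q = 4
S = seasonal_factors(demand, 4)                  # ~[0.8, 1.2, 0.9, 1.1]
# deseasonalize with j = i mod q (0-indexed here)
deseasonalized = [x / S[i % 4] for i, x in enumerate(demand)]
```

With a trend present, the `level` line would be replaced by fitted values from a trend regression, as the text describes.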
Given a set of factors, we can forecast using moving averages, exponential smoothing, or Holt's method on the deseasonalized series exactly as before. Then, if Ȳ_t is the forecast generated for period t using the deseasonalized series, the true forecast for t is:

    Y_t = Ȳ_t S_j,   where j = t mod q

Triple Exponential Smoothing with Multiplicative Seasonal Factors: Winters' Method

For some problems, we may wish to continually update our seasonal factors as we receive new data. Winters' method uses a triple exponential smoothing approach to extend Holt's method to additionally update seasonal factors. In this method, we will calculate three values over time: an estimate of the deseasonalized level R_t, an estimate of the trend G_t, and an estimate of a multiplicative seasonal factor S_t. Suppose that the season length is q (for example, if a season is a week of days, then q = 7). For notational purposes, we will now explicitly record a seasonal factor S_t for each period t, rather than just for the first q seasonal periods. Given these three values, our forecast for the next period is:

    Y_t = (R_{t-1} + G_{t-1}) S_{t-q}

In this expression, we simply multiply the deseasonalized forecast (given by Holt's equation) by the seasonal factor to yield the new forecast. Note that we use the best estimate of the seasonal factor for this time period in the season, which was last updated q periods ago. To make multiple-step-ahead forecasts (for τ < q), the following expression should be used:

    Y_{t+τ}(t - 1) = (R_{t-1} + (1 + τ)G_{t-1}) S_{t+τ-q}

Now let's look at how to update the level, trend, and seasonal factor each time period. First, the current deseasonalized level is updated as follows:

    R_t = α (X_t / S_{t-q}) + (1 - α)(R_{t-1} + G_{t-1})

Note that this update is identical to Holt's, except that we use a deseasonalized demand observation in the first term, and our deseasonalized forecast in the second term.
Next, we update the trend:

    G_t = β(R_t - R_{t-1}) + (1 - β)G_{t-1}

Again, we blend a deseasonalized trend estimate with our previous trend estimate. Finally, we update our estimate of the seasonal factor. Here we introduce a new smoothing constant γ ∈ [0, 1], and combine the most recently observed seasonal factor, given by the demand X_t divided by the deseasonalized series level estimate R_t, with the previous best seasonal factor estimate for this time period:
    S_t = γ (X_t / R_t) + (1 - γ)S_{t-q}

Since seasonal factors represent deviations above and below the average, the average of any q consecutive seasonal factors should always be 1. Thus, after estimating S_t, it is good practice (but not necessary) to renormalize the q most recent seasonal factors such that:

    Σ_{i=t-q+1}^{t} S_i = q

Variations on Winters' Method

First, it should be clear that when γ = 0, q = 1, and the first seasonal factor S_1 is set equal to one, Winters' method is identical to Holt's method. However, it is sometimes useful to create a forecast that uses seasonality as a component but does not include a trend. This can also be easily modeled in Winters' method, by simply setting β = 0 and G_1 = 0. The model now reduces to a regular exponential smoothing model, with multiplicative seasonal factors!

Advanced: Initializing Winters' Method with Multiplicative Factors

Unlike simple exponential smoothing and Holt's method, Winters' method requires more data for initialization. As a general rule of thumb, a minimum of two full seasons (or 2q periods) of historical data is needed to initialize a set of seasonal factors. Suppose the current time period is period 0, and we have two seasons of historical data (if you want to initialize with more than two seasons of historical data, a similar process can easily be developed). Let the historical data be given by X_{-2q+1}, X_{-2q+2}, ..., X_0. Here are the steps to initialize Winters' method for period 0:

1. Calculate the averages for the two separate seasons of historical data:

    V_1 = (1/q) Σ_{i=-2q+1}^{-q} X_i
    V_2 = (1/q) Σ_{i=-q+1}^{0} X_i

2. Define the initial slope estimate G_0 to be (V_2 - V_1)/q. This is the slope of the line generated by placing V_1 at the center of the first season's periods and V_2 at the center of the second season's periods, and connecting the two points.

3.
Let R_0, the estimated series level in period 0, be given by V_2 + G_0 (q - 1)/2, so that the fitted line passes through V_2 at the center of the second season.

(a) Given R_0 and G_0, the estimated series level at any time t during the two previous seasons is given by R_0 + tG_0, t = -2q + 1, -2q + 2, ..., 0 (e.g.,
for period -1 the value would be R_0 - G_0, for period -2, R_0 - 2G_0, etc.). The initial seasonal factors are generated by taking the observed demand in period t and dividing by the estimated level:

    S_t = X_t / (R_0 + tG_0),   t = -2q + 1, -2q + 2, ..., 0

(b) Now we average the factors for the two corresponding periods in each season:

    S_{-q+1} = (S_{-2q+1} + S_{-q+1}) / 2,  ...,  S_0 = (S_{-q} + S_0) / 2

(c) Finally, we normalize the seasonal factors:

    S_t ← S_t [ q / Σ_{i=-q+1}^{0} S_i ],   t = -q + 1, -q + 2, ..., 0
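The initialization steps and the per-period updates might be sketched as follows. This is my own reading of the notes above; in particular, the level R_0 is set so that the fitted line passes through V_2 at the center of the most recent season, i.e. R_0 = V_2 + G_0(q - 1)/2:

```python
def winters_init(X_hist, q):
    """Initialize Winters' method from two full seasons of history.
    X_hist has length 2q; X_hist[-1] is the period-0 observation."""
    assert len(X_hist) == 2 * q
    V1 = sum(X_hist[:q]) / q                     # first-season average
    V2 = sum(X_hist[q:]) / q                     # second-season average
    G0 = (V2 - V1) / q                           # initial slope estimate
    R0 = V2 + G0 * (q - 1) / 2                   # level at period 0
    # raw factor for period t = i - 2q + 1: observation / fitted level
    raw = [X_hist[i] / (R0 + (i - 2 * q + 1) * G0) for i in range(2 * q)]
    # average like periods across the two seasons, then normalize to sum to q
    S = [(raw[j] + raw[j + q]) / 2 for j in range(q)]
    scale = q / sum(S)
    return R0, G0, [s * scale for s in S]

def winters_update(x, R, G, S, alpha, beta, gamma):
    """One period of the triple-smoothing update. S holds the q most
    recent factors; S[0] is the factor for the current period (last
    updated q periods ago)."""
    R_new = alpha * x / S[0] + (1 - alpha) * (R + G)    # level
    G_new = beta * (R_new - R) + (1 - beta) * G         # trend
    S_new = gamma * x / R_new + (1 - gamma) * S[0]      # seasonal factor
    return R_new, G_new, S[1:] + [S_new]
```

The next-period forecast is then (R + G) * S[0], matching the Winters forecast equation above.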
Causal Forecasting with Linear Regression Models

The time series extrapolation methods that we've discussed up to this point are all reasonable forecasting models, and they can be used to effectively model stationary, trend, and seasonal data. However, there are often additional factors that may play an important role in the demand seen in any given period, and we would like a forecasting methodology that can capture these factors. Typically, we have a number of seasonal factors that need to be incorporated. For example, daily demand for trucking services may have day-of-week and week-of-month effects, which could be captured with multiple seasonal factors. It may also have day-of-year effects that reflect certain holidays; these may not repeat in a strictly seasonal way, since the number of days between holidays changes.

Linear regression models provide a useful framework for developing causal demand forecasts that allow us to capture multiple seasonal effects as well as other causal effects. As you already know, a linear regression model is a statistical model that assumes that a value to be estimated (such as a forecast) can be decomposed into a weighted sum of a number of causal factors, plus a random error term:

    Y_t = A_0 + A_1 f_{1,t} + A_2 f_{2,t} + ... + A_n f_{n,t} + ɛ_t
        = A_0 + Σ_{j=1}^{n} A_j f_{j,t} + ɛ_t

In this expression, the values A_1, ..., A_n are the weights (coefficients) given to the specific factors f, ɛ_t is the random error term, and A_0 is an additive constant. It will be assumed that the error in each time period is independently and identically distributed according to a normal random variable with mean 0 and some variance σ_ɛ^2. Each factor f_{j,t} represents a piece of known data that is available in period t that might be useful in creating a forecast.
Estimating Coefficients in Regression Models

Before we discuss some factors that might be useful in demand forecasting models, we should first review how to estimate the coefficients of a regression model. Here, unlike the case with the extrapolation models presented previously, we can explicitly use the forecast errors during a historical calibration period to estimate values for the coefficients that give the best fit to the data. In essence, we are using an analytical process to tune the parameters of the model such that our forecasts over the historical calibration period have errors that are as small as possible.

You should recall from your statistics courses that the least-squares method can be employed to develop estimates for the values A. In least squares, we use calculus to minimize the sum of the squared errors over our historical calibration period. For each period, the actual error value e_t is given by e_t = Y_t - X_t, where X_t is the observed value of demand. Thus, the squared error in period t is e_t^2 = (Y_t - X_t)^2. Plugging in our formula for
Y_t yields:

    e_t^2 = ( A_0 + Σ_{j=1}^{n} A_j f_{j,t} - X_t )^2

Now suppose we have N periods of historical data for use in estimation, and suppose for simplicity that these periods are numbered 1, ..., N. To determine the best estimates for the coefficients A, we can minimize the sum of squared errors over this historical data:

    min_A SSE = min_A Σ_{t=1}^{N} e_t^2 = min_A Σ_{t=1}^{N} ( A_0 + Σ_{j=1}^{n} A_j f_{j,t} - X_t )^2

Without developing the mathematics formally, it is a simple exercise in calculus and linear algebra to determine the values of A_0, A_1, ..., A_n that minimize this expression. The details will be skipped here; you can consult your statistics notes for more information. The important point is that the least-squares estimates for the coefficients are easy to determine, and they stem from minimizing the sum of the squared errors observed over a calibration period.

Choice of Factors in Regression Models

Given that regression might be a useful technique for developing a forecasting model, what might be some useful factors to include?

1. Model Seasonal Effects with Indicator Variables: Seasonal effects are best modeled using indicator variables. For example, suppose you think the month of the year has an important effect on demand. For each month that you think is significantly different from average, you could introduce an indicator variable that is equal to one if the period is in that month, and 0 if not. Note: if you use indicator variables in this way, and you plan on using an indicator for each month, do not use an indicator for the last month, since it is not necessary and may confound the estimation procedure (because of linear dependence). For example, if you had an indicator variable for every month of the year except December, it is clear that if a period had zeros for all indicators, it would have to be in December!

2.
Model Other Important Exceptional Periods with Indicator Variables: Sometimes exceptional periods occur, from a forecasting point of view, that are not necessarily seasonal but still have an effect on demand. Model such effects with indicator variables. For example, perhaps demand for accounting services spikes in the two weeks prior to April 15 each year. Or perhaps passenger vehicle-miles traveled spike on holiday weekends.

3. Capturing Moving Average Effects: In regression models, you may think that past-period demands should still be factored into forecasts with a proper coefficient.
This is simple to do with a regression model. However, when doing so, you must be careful to verify that the errors are not serially correlated, since they must remain independent over time.

4. Capturing Trend: Trends in the data over time can be captured by including a factor t.

Evaluating Regression Model Quality

Again, this is not a course on statistics, but I want you to think back to what you've learned regarding linear regression models. Recall that the R^2 value (the ratio of the sum of squares of the regression terms to the total sum of squares of the data) of a regression model gives you an idea of the quality of the model; as R^2 approaches 1, more and more of the total variation in the model is explained by the factors, and less is explained by simple random noise. Thus, larger values of R^2 tend to indicate better fit. However, remember that using more predictive factors will always increase the value of R^2. It is equally important to check the statistical significance of the estimated coefficients; they should each have t-values indicating a high probability that they are significantly different from zero.

Finally, you should ensure that there is no inherent bias in your regression forecasting model by analyzing your errors (residuals). For example, you should verify that the errors appear to be approximately normally distributed (using a normal probability plot or a statistical test). Also, you should look for errors that are homoskedastic and do not exhibit any curvilinear patterns when plotted against (1) time, (2) the predicted values, and (3) the values of the factors. Homoskedastic errors exhibit a constant variance or spread when plotted against a dependent variable.
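As an illustration of least-squares calibration with a trend factor and an indicator variable, here is a sketch using NumPy (the data is synthetic and noise-free, so the fitted coefficients recover the true values exactly; variable names are my own):

```python
import numpy as np

t = np.arange(1, 25)                       # 24 months of calibration history
dec = (t % 12 == 0).astype(float)          # indicator: 1 in December, else 0
X_obs = 100 + 3 * t + 40 * dec             # synthetic demand: trend + Dec spike

# factor matrix: constant A_0, trend factor t, December indicator
F = np.column_stack([np.ones_like(t, dtype=float), t, dec])
A, *_ = np.linalg.lstsq(F, X_obs, rcond=None)   # least-squares coefficients
# A is approximately [100, 3, 40], recovering the true model
```

On real data the fit would not be exact, and the residuals F @ A - X_obs should be checked for the normality, constant-variance, and serial-correlation issues discussed above.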
Baseline Forecasting With Exponential Smoothing Models By Hans Levenbach, PhD., Executive Director CPDF Training and Certification Program, URL: www.cpdftraining.org Prediction is very difficult, especially
Index Section A. Planning, Budgeting and Forecasting Section A.2 Forecasting techniques... 1 EduPristine CMA - Part I Page 1 of 11 Section A. Planning, Budgeting and Forecasting Section A.2 Forecasting
AP Physics 1 and 2 Lab Investigations Student Guide to Data Analysis New York, NY. College Board, Advanced Placement, Advanced Placement Program, AP, AP Central, and the acorn logo are registered trademarks
s Prepared by JOHN S. LOUCKS St. Edward s University 2002 South-Western/Thomson Learning 1 Chapter 18 Forecasting Time Series and Time Series Methods Components of a Time Series Smoothing Methods Trend
A Comparative Study of the Pickup Method and its Variations Using a Simulated Hotel Reservation Data Athanasius Zakhary, Neamat El Gayar Faculty of Computers and Information Cairo University, Giza, Egypt
Regression Analysis Prof. Soumen Maity Department of Mathematics Indian Institute of Technology, Kharagpur Lecture - 2 Simple Linear Regression Hi, this is my second lecture in module one and on simple
PRODUCTION PLANNING AND CONTROL CHAPTER 2: FORECASTING Forecasting the first step in planning. Estimating the future demand for products and services and the necessary resources to produce these outputs
1) The S & P/TSX Composite Index is based on common stock prices of a group of Canadian stocks. The weekly close level of the TSX for 6 weeks are shown: Week TSX Index 1 8480 2 8470 3 8475 4 8510 5 8500
4. Simple regression QBUS6840 Predictive Analytics https://www.otexts.org/fpp/4 Outline The simple linear model Least squares estimation Forecasting with regression Non-linear functional forms Regression
Agenda Managing Uncertainty in the Supply Chain TIØ485 Produkjons- og nettverksøkonomi Lecture 3 Classic Inventory models Economic Order Quantity (aka Economic Lot Size) The (s,s) Inventory Policy Managing
1 Linear Regression 1.1 Simple Linear Regression Model The linear regression model is applied if we want to model a numeric response variable and its dependency on at least one numeric factor variable.
www.sjm06.com Serbian Journal of Management 10 (1) (2015) 3-17 Serbian Journal of Management FOCUS FORECASTING IN SUPPLY CHAIN: THE CASE STUDY OF FAST MOVING CONSUMER GOODS COMPANY IN SERBIA Abstract Zoran
Indian School of Business Forecasting Sales for Dairy Products Contents EXECUTIVE SUMMARY... 3 Data Analysis... 3 Forecast Horizon:... 4 Forecasting Models:... 4 Fresh milk - AmulTaaza (500 ml)... 4 Dahi/
CHAPTER 19 TIME SERIES ANALYSIS & FORECASTING Basic Concepts 1. Time Series Analysis BASIC CONCEPTS AND FORMULA The term Time Series means a set of observations concurring any activity against different
RELEVANT TO ACCA QUALIFICATION PAPER P3 Studying Paper P3? Performance objectives 7, 8 and 9 are relevant to this exam Business forecasting and strategic planning Quantitative data has always been supplied
Integrated Resource Plan March 19, 2004 PREPARED FOR KAUA I ISLAND UTILITY COOPERATIVE LCG Consulting 4962 El Camino Real, Suite 112 Los Altos, CA 94022 650-962-9670 1 IRP 1 ELECTRIC LOAD FORECASTING 1.1
Notation: Notation and Equations for Regression Lecture 11/4 m: The number of predictor variables in a regression Xi: One of multiple predictor variables. The subscript i represents any number from 1 through
MULTIPLE REGRESSION AND ISSUES IN REGRESSION ANALYSIS MSR = Mean Regression Sum of Squares MSE = Mean Squared Error RSS = Regression Sum of Squares SSE = Sum of Squared Errors/Residuals α = Level of Significance
Numerical Summarization of Data OPRE 6301 Motivation... In the previous session, we used graphical techniques to describe data. For example: While this histogram provides useful insight, other interesting
4 C H A P T E R Forecasting DISCUSSION QUESTIONS 1. Qualitative models incorporate subjective factors into the forecasting model. Qualitative models are useful when subjective factors are important. When
Chapter 340 Principal Components Regression Introduction is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates
CTL.SC1x -Supply Chain & Logistics Fundamentals Causal Forecasting Models MIT Center for Transportation & Logistics Causal Models Used when demand is correlated with some known and measurable environmental
A Primer on Forecasting Business Performance There are two common approaches to forecasting: qualitative and quantitative. Qualitative forecasting methods are important when historical data is not available.
Univariate Regression Correlation and Regression The regression line summarizes the linear relationship between 2 variables Correlation coefficient, r, measures strength of relationship: the closer r is
Demand Management Where Practice Meets Theory Elliott S. Mandelman 1 Agenda What is Demand Management? Components of Demand Management (Not just statistics) Best Practices Demand Management Performance
Calculating Chapter 7 (Chatfield) Monika Turyna & Thomas Hrdina Department of Economics, University of Vienna Summer Term 2009 Terminology An interval forecast consists of an upper and a lower limit between
Exhibit 2: Promotional Forecast Demonstration Consider the problem of forecasting for a proposed promotion that will start in December 1997 and continues beyond the forecast horizon. Assume that the promotion
16 : Demand Forecasting 1 Session Outline Demand Forecasting Subjective methods can be used only when past data is not available. When past data is available, it is advisable that firms should use statistical
Copyright c 2005 by Karl Sigman Portfolio mean and variance Here we study the performance of a one-period investment X 0 > 0 (dollars) shared among several different assets. Our criterion for measuring
1 Forecasting Women s Apparel Sales Using Mathematical Modeling Celia Frank* 1, Balaji Vemulapalli 1, Les M. Sztandera 2, Amar Raheja 3 1 School of Textiles and Materials Technology 2 Computer Information
Elements of Revenue Forecasting II: the Elasticity Approach and Projections of Revenue Components Fiscal Analysis and Forecasting Workshop Bangkok, Thailand June 16 27, 2014 Joshua Greene Consultant IMF-TAOLAM
Simple Linear Regression Inference 1 Inference requirements The Normality assumption of the stochastic term e is needed for inference even if it is not a OLS requirement. Therefore we have: Interpretation
Tarigan Statistical Consulting & Coaching statistical-coaching.ch Doctoral Program in Computer Science of the Universities of Fribourg, Geneva, Lausanne, Neuchâtel, Bern and the EPFL Hands-on Data Analysis
Analysis of Bayesian Dynamic Linear Models Emily M. Casleton December 17, 2010 1 Introduction The main purpose of this project is to explore the Bayesian analysis of Dynamic Linear Models (DLMs). The main
Chapter 4: Vector Autoregressive Models 1 Contents: Lehrstuhl für Department Empirische of Wirtschaftsforschung Empirical Research and und Econometrics Ökonometrie IV.1 Vector Autoregressive Models (VAR)...
PITFALLS IN TIME SERIES ANALYSIS Cliff Hurvich Stern School, NYU The t -Test If x 1,..., x n are independent and identically distributed with mean 0, and n is not too small, then t = x 0 s n has a standard
Rational Equations Overview of Objectives, students should be able to: 1. Solve rational equations with variables in the denominators.. Recognize identities, conditional equations, and inconsistent equations.
Multiple Linear Regression in Data Mining Contents 2.1. A Review of Multiple Linear Regression 2.2. Illustration of the Regression Process 2.3. Subset Selection in Linear Regression 1 2 Chap. 2 Multiple
Multiple regression Introduction Multiple regression is a logical extension of the principles of simple linear regression to situations in which there are several predictor variables. For instance if we
1 Forecasting Women s Apparel Sales Using Mathematical Modeling Celia Frank* 1, Balaji Vemulapalli 1, Les M. Sztandera 2, Amar Raheja 3 1 School of Textiles and Materials Technology 2 Computer Information
Using simulation to calculate the NPV of a project Marius Holtan Onward Inc. 5/31/2002 Monte Carlo simulation is fast becoming the technology of choice for evaluating and analyzing assets, be it pure financial
Chicago Booth BUSINESS STATISTICS 41000 Final Exam Fall 2011 Name: Section: I pledge my honor that I have not violated the Honor Code Signature: This exam has 34 pages. You have 3 hours to complete this
CURVE FITTING LEAST SQUARES APPROXIMATION Data analysis and curve fitting: Imagine that we are studying a physical system involving two quantities: x and y Also suppose that we expect a linear relationship
03-Mentzer (Sales).qxd 11/2/2004 11:33 AM Page 73 3 Time Series Forecasting Techniques Back in the 1970s, we were working with a company in the major home appliance industry. In an interview, the person
Exponential Smoothing with Trend As we move toward medium-range forecasts, trend becomes more important. Incorporating a trend component into exponentially smoothed forecasts is called double exponential
Time Series Analysis and Forecasting Math 667 Al Nosedal Department of Mathematics Indiana University of Pennsylvania Time Series Analysis and Forecasting p. 1/11 Introduction Many decision-making applications
ISSUES IN UNIVARIATE FORECASTING By Rohaiza Zakaria 1, T. Zalizam T. Muda 2 and Suzilah Ismail 3 UUM College of Arts and Sciences, Universiti Utara Malaysia 1 firstname.lastname@example.org, 2 email@example.com and 3
Chapter 2 Maintenance Strategic and Capacity Planning 2.1 Introduction Planning is one of the major and important functions for effective management. It helps in achieving goals and objectives in the most
JUST THE MATHS UNIT NUMBER 1.8 ALGEBRA 8 (Polynomials) by A.J.Hobson 1.8.1 The factor theorem 1.8.2 Application to quadratic and cubic expressions 1.8.3 Cubic equations 1.8.4 Long division of polynomials
COMP6053 lecture: Time series analysis, autocorrelation firstname.lastname@example.org Time series analysis The basic idea of time series analysis is simple: given an observed sequence, how can we build a model that
Time Series Analysis Forecasting with ARIMA models Andrés M. Alonso Carolina García-Martos Universidad Carlos III de Madrid Universidad Politécnica de Madrid June July, 2012 Alonso and García-Martos (UC3M-UPM)
IBM SPSS Forecasting 22 Note Before using this information and the product it supports, read the information in Notices on page 33. Product Information This edition applies to version 22, release 0, modification
TIME SERIES ANALYSIS Ramasubramanian V. I.A.S.R.I., Library Avenue, New Delhi- 110 012 email@example.com 1. Introduction A Time Series (TS) is a sequence of observations ordered in time. Mostly these
4. Forecasting Trends: Exponential Smoothing Introduction...2 4.1 Method or Model?...4 4.2 Extrapolation Methods...6 4.2.1 Extrapolation of the mean value...8 4.2.2 Use of moving averages... 10 4.3 Simple
Basic Statistics and Data Analysis for Health Researchers from Foreign Countries Volkert Siersma firstname.lastname@example.org The Research Unit for General Practice in Copenhagen Dias 1 Content Quantifying association
6.0 - Chapter Introduction In this chapter, you will learn to use moving averages to estimate and analyze estimates of contract cost and price. Single Moving Average. If you cannot identify or you cannot
PART V ANALYZING RELATIONSHIPS CHAPTER 14 Analyzing Linear Relationships, Two or More Variables INTRODUCTION In the previous chapter, we introduced Kate Cameron, the owner of Woodbon, a company that produces
Inferential Statistics Sampling and the normal distribution Z-scores Confidence levels and intervals Hypothesis testing Commonly used statistical methods Inferential Statistics Descriptive statistics are
CTL.SC1x -Supply Chain & Logistics Fundamentals Time Series Analysis MIT Center for Transportation & Logistics Demand Sales By Month What do you notice? 2 Demand Sales by Week 3 Demand Sales by Day 4 Demand
TIME SERIES ANALYSIS A time series is a sequence of data indexed by time, often comprising uniformly spaced observations. It is formed by collecting data over a long range of time at a regular time interval
Smoothing methods Marzena Narodzonek-Karpowska Prof. Dr. W. Toporowski Institut für Marketing & Handel Abteilung Handel What Is Forecasting? Process of predicting a future event Underlying basis of all
AP Statistics 2002 Scoring Guidelines The materials included in these files are intended for use by AP teachers for course and exam preparation in the classroom; permission for any other use must be sought
Bivariate Regression Analysis The beginning of many types of regression TOPICS Beyond Correlation Forecasting Two points to estimate the slope Meeting the BLUE criterion The OLS method Purpose of Regression
Causal Leading Indicators Detection for Demand Forecasting Yves R. Sagaert, El-Houssaine Aghezzaf, Nikolaos Kourentzes, Bram Desmet Department of Industrial Management, Ghent University 13/07/2015 EURO
Forecast Forecast is the linear function with estimated coefficients T T + h = b0 + b1timet + h Compute with predict command Compute residuals Forecast Intervals eˆ t = = y y t+ h t+ h yˆ b t+ h 0 b Time
Your consent to our cookies if you continue to use this website.
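As a minimal illustration of the univariate setting above, the sketch below produces a forecast Y for the next period from the observed demands X_1, ..., X_t. A simple moving average is used here purely as an example technique (the notes have not yet committed to a specific method); it assumes demand has no strong trend or seasonality.

```python
def moving_average_forecast(demand, window=3):
    """Forecast next-period demand Y_{t+1} as the mean of the last
    `window` observed demands X_{t-window+1}, ..., X_t.

    A basic univariate technique; reasonable only when demand is
    roughly level (no strong trend or seasonal component).
    """
    if len(demand) < window:
        raise ValueError("need at least `window` observations")
    return sum(demand[-window:]) / window

# Observed demands X_1, ..., X_6 (illustrative values only)
demand = [120, 135, 128, 140, 150, 145]

# Forecast Y_7 from the last three observations (140, 150, 145)
forecast = moving_average_forecast(demand, window=3)
```

Because forecasts are usually wrong, in practice this point forecast would be accompanied by an error analysis (for example, tracking the forecast errors X_t - Y_t over time) rather than treated as known information.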