The Economic Impact of Software Development Process Choice - Cycle-time Analysis and Monte Carlo Simulation Results
Troy Magennis
[email protected]

Abstract

IT executives initiate software development process methodology change in the faith that it will lower development cost, decrease time-to-market and increase quality. Anecdotes and success stories from agile practitioners and vendors provide evidence that other companies have succeeded by following a newly chosen doctrine. Quantitative evidence is scarcer than these stories and, when available, often unverifiable. This paper introduces a quantitative approach to assessing software process methodology change. It proposes working from the perspective of impact on cycle-time performance (the time from the start of an individual piece of work until its completion) before and after a process change. The paper introduces the history and theoretical basis of this analysis, and then presents a commercial case study. The case study demonstrates how the economic value of a process change initiative was quantified to understand its success and payoff. Cycle-time is a convenient metric for comparing proposed and ongoing process improvements because it is easy to capture and applicable to all processes. Poor cycle-time analysis can lead to teams being held to erroneous service level expectations. Properly comparing the impact of proposed process change scenarios, modeled using historical or estimated cycle-time performance, helps isolate the bottom-line impact of process changes with quantitative rigor.

1. Introduction

Company IT leadership often initiates software development process improvements due to dissatisfaction with time to market and development cost. Leaders responsible for product direction have many ideas, but feel these ideas take too long to reach their customers' hands. These leaders grow frustrated that competitors are (or appear) faster to market than their organization, and look for ways to compete more equitably.
Process change proponents and vendors offer a variety of the latest tools, books, training, and certification as immediate relief for the vague "too slow" and "too costly" symptoms. XP, Agile, Lean, Scrum, and Kanban are some of the well-known processes that have risen to the top of the popularity charts, each with case studies (often just one) showing great impact when applied correctly by the inventors. The final choice appears to fall along faith-based lines, with many organizations moving from one process to the next in search of nirvana. A quantitative framework for estimating and assessing true impact is needed for informed decisions.

Measuring the quantitative impact of a software development process change is hard. Measurable change takes weeks or months to evolve, and there is little in the way of a control group: a change is implemented, and the outcome had that change not been made is neither observable nor easily discernible. This paper presents one technique for quantitatively estimating the potential economic outcomes both before and after a change has been implemented. The basis of the method described here is probabilistically simulating the impact of changes in cycle-time samples, from a prior project to a project completed using the new methodology. To estimate the potential payoff of a new process, existing cycle-time samples can be discounted by a fixed percentage to simulate the financial return for hypothetical reductions (10% or 25%, for example). Once the change has occurred, actual results can be compared against the predicted data to validate the difference and improve modeling efforts for future initiatives.

This paper also looks at the impact of process change on cycle-time over the last 40 years to determine whether there is a pattern. Understanding the forces causing changes in cycle-time distribution data across process improvements over this period hints at which management strategies will give the biggest return on investment.
This knowledge helps choose the contextually correct process strategy in a compelling and confident way. The goal of this paper is two-fold:

1. Discuss the cycle-time distribution patterns and parameters for different known software development processes.
2. Demonstrate the use of cycle-time data as a comparative tool for calculating the potential and actual economic benefit of process change.
2. Structure of this paper

This paper consists of three major sections:

1. Cycle-time distribution analysis, and how a development process can manipulate the shape and uncertainty of this distribution.
2. Probabilistic modelling of software processes using cycle-time for delivery estimation, and how to formulate an economic model for process change.
3. Case study: practical application of cycle-time analysis as presented in an economic benefit project for an industry client.

The first part of this paper introduces cycle-time in software development processes and develops the theory behind why cycle-time value distributions consistently follow very similar patterns across teams and organizations. The second part shows how, through statistical modelling, the likely improvement from process changes can be simulated and an economic argument used to justify the effort of process change. The third section demonstrates the application of a cycle-time comparative technique used in industry.

3. Cycle-time distribution analysis

3.1. Defining cycle-time

Cycle-time is defined as the time between the starting date and completion date of a work item. In software development this is most often measured in days, but can be measured in hours for teams with small work items that are started and completed in the same day. This paper always presents cycle-times in days.

No consistent definition of work item start and completion has emerged as standard in IT process. Definitions often differ between departments and teams within an organization. For the purposes of this paper, any definition of start and complete dates is acceptable as long as it is consistently applied when capturing cycle-time data.

Cycle-time is a good candidate metric for prediction and comparison because it is an elapsed time measurement that is not expressed in abstract units (such as the function points, story points and velocity common in agile).
Increasingly, electronic tools are used to manage the software development process, and automatic capture of cycle-time data is commonplace. The richness of this data has not been surfaced beyond scatter plots and, in some tools, a histogram as eye candy. This paper describes how this data can be leveraged for predictive purposes.

If no policy or process is currently in place for capturing cycle-time data, the following guidance is offered. These are also the definitions used in this paper:

1. Capture the date a work item is first created as an idea. This data is useful for understanding whether there is a decrease in the time from first idea to customer delivery. This is referred to as the Created Date in this paper.
2. Capture the date a work item is prioritized for work to begin. This is the date a company has committed to delivering the work item, or has to put effort into creating the intended result of the work item. This is referred to as the Started Date in this paper.
3. Capture the date work is ready for delivery to a customer. This is referred to as the Completed Date in this paper.
4. Capture the date a work item is first exposed to customers in a usable way. This is referred to as the In Production Date in this paper.

Given these definitions, the following measures are defined in this paper:

(1) Lead-time = In Production Date − Created Date
(2) Cycle-time = Completed Date − Started Date

Lead-time (1) is the time between when a work description is created and when that work is delivered to production. Cycle-time (2) is the time between when hands-on work begins on a piece of work and when it is considered complete. This paper focuses on cycle-time in its examples and case study. It is feasible, and likely, that the same probability distribution factors are at play at the lead-time level; this will be the subject of future work.

3.2. Cycle-time probability distribution: analysis of industry data
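Before turning to industry data, the two measures just defined can be computed directly from the four captured dates. A minimal sketch in Python; the dates below are illustrative, not from any real data set:

```python
from datetime import date

def lead_time_days(created: date, in_production: date) -> int:
    """Lead-time (1): first idea to first customer exposure."""
    return (in_production - created).days

def cycle_time_days(started: date, completed: date) -> int:
    """Cycle-time (2): hands-on start to ready for delivery."""
    return (completed - started).days

# One illustrative work item.
created = date(2014, 1, 6)        # Created Date
started = date(2014, 2, 3)        # Started Date
completed = date(2014, 2, 17)     # Completed Date
in_production = date(2014, 3, 3)  # In Production Date

print(cycle_time_days(started, completed))     # 14
print(lead_time_days(created, in_production))  # 56
```

Because both measures are plain elapsed days, they can be captured consistently regardless of which tracking tool or estimation units a team uses.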
Norden [1] in 1963, then Putnam and Myers [2] in 1978, empirically established that software projects follow a Rayleigh density distribution in staffing estimation and effort. Kan [3] in 2002 extended the use of the Rayleigh model to defect removal rates and remaining defect prediction. The Rayleigh distribution is a specific form of the Weibull distribution (shape parameter of 2.0), and this paper proposes that cycle-time follows the Weibull probability distribution with lower shape parameters, from 1.0 to 1.6, depending on process and inter-team dependency factors.

Cycle-time histograms obtained during many commercial client engagements established the prevalence of right-skewed density distributions (mode to the left, long tail to the right). The lack of this pattern in software development cycle-time is often an indicator of poor data capture. Although it is not essential to know the exact distribution curve for modelling an existing process when actual data samples can be used in simulation [13], curiosity drove my initial attempts to determine which distribution fits, and why. Candidate probability distributions are Gamma, Weibull, Rayleigh and Log-Normal, as shown in Figure 1 fitted to data from a typical agile team in a commercial context.

Figure 1: Potential probability distribution fits (Gamma (3P), Lognormal, Rayleigh, Weibull) against a typical commercial data-set

When faced with multiple potentially correct distributions fitting historical data, the safest is chosen. For forecasting dates, longer is safer: if a project is still a valid investment delivering later, then it is certainly a valid investment delivered early. The Weibull distribution is the wiser choice, having a heavier tail, meaning the decay at higher values is slower (higher values have higher probability) compared with Log-Normal. It also proved a successful candidate when used as an initial estimate on prior commercial projects. Written evidence suggesting Weibull is appropriate for modeling similar problems was found by Fente, Schexnayder, and Knutson [5], who describe Weibull as a good candidate for modeling construction and engineering durations, and by McCombs, Elam and Pratt [4], who chose Weibull as the most appropriate distribution for modeling task durations in PERT analysis.

McCombs et al [4] and Fente et al [5] outlined the desirable attributes of an acceptable distribution for modeling task durations:

1. It is continuous.
2. It has finite endpoints.
3. It has a defined mode between its endpoints.
4. It is capable of describing both skewed and symmetric activity time distributions.

The Weibull distribution is a good choice for all but point 2, finite endpoints. It could be argued that this is not a necessarily desirable attribute for a cycle-time distribution; long delays of very low likelihood do occur in the real software development world. Extremely large values have correspondingly small probabilities, and when used in a probabilistic prediction, their influence should they occur is minimal. McCombs et al [4] noted that although it is not a mathematical property of the Weibull distribution, commercial practicalities eliminate (ridiculously long) non-finite endpoints: tasks taking far longer than anticipated will more often be identified and resolved with resourcing and attention to causes.

Fitting actual commercial data and finding confirming prior research increases confidence that the Weibull distribution is the most likely candidate. Confirming this with a plausible simulation of a typical real-world software development process is valuable for understanding its limitations, ramifications and potential utilization.

3.3. Process factors causing skewed distribution

The software development process involves taking a description of ideas and creating a manifestation of those ideas in working computer code. Large ideas are meticulously broken down into smaller ideas and implemented incrementally. Each work item builds on the previously completed computer code. XP, Scrum, Kanban, and Agile all decompose work items into smaller batches of work for delivery, but differ in approach. These batches of work go by various names: user stories, cards and work items are commonly used (this paper always uses the name work items, for no reason other than consistency). Work items pass through a variety of steps from idea to solution.
It is tempting to think of a software development process like a factory production line where work follows a pre-defined step-by-step process. The key difference is that every work item differs functionally from the others: each work item solves a different problem. Rather than building many copies of the same thing as in factory production, software development builds consecutive one-offs. Efforts to model software development along the lines of traditional manufacturing production line thinking fail to embrace this key difference.

Traditional repetitive manufacturing process cycle-times lead to a Gaussian response. Work items flow through a fixed sequence of assembly steps with some bounded variance, and this process yields a Normal distribution due to the Central Limit Theorem. This expected Gaussian result is ingrained in quality management practices. Shewhart [7, 8] and Deming [9, 10] make good use of the anticipated Normal distribution in manufacturing for statistically managing quality. For example, work output is closely monitored for process deviation, often through policies that keep variability within certain ranges of sigma (standard deviation) through standardization. Driving for standardization in creative pursuits often leads to erosion of quality and inventiveness as teams strive to meet pre-set control limits at great effort and at any cost. Acemoglu et al [14] study this tension between standardization and innovation. For work that is constant innovation, the cycle-time of each work item will be inherently variable and unlikely to form a Normal distribution.

The software development process has the following traits that set it apart from a standard manufacturing process and need to be modelled specifically:

1. Work items follow one of many possible sequential processes, not one fixed process for all items. Some work items skip entire steps.
2. The effort of each process step differs for each work item. Some are harder, some easier, depending on the mix of new and known concepts.
3. The delays each work item might encounter differ and are often unrelated to the work item itself (a key person on vacation, or waiting for another item to finish).
4. Completing some work items identifies new work. Missed requirements, unidentified complexity and defects are examples.
5. The uncertainty of effort and delays for each work item means the system continuously operates in an unstable state, where constraints move, emerge and resolve.

To simulate the simplest process that causes asymmetry in cycle-time, a simple Monte Carlo model can be built.
This model explains how initially known work and a combination of potential, but not guaranteed, delays probabilistically convolve into a Weibull distribution. The hypothetical model simulates a fixed amount of work items being constructed with some uniform variance. Impacting their on-time completion is the possibility of 1 to 5 delays occurring, each with a 25% chance of occurrence. These delays are simulated as an amount of extra work if they occur, and have no impact if they do not. Delays in software are a combination of waiting time and discovered work, but this model simply increases work item time by adding work.

Figure 2 shows the result as delays are added, in histogram form. The Y-axis is count (probability) and the X-axis is time (earlier to the left, later to the right). Peaks emerge and mix based on the binary permutations of none, some or all delays occurring. When there are no delays, the result is a Normal distribution (or would be, if modelled with more cycles). When the first delay is added, 75% of the outcomes remain unaffected and 25% incur the delay, as the binomial logic would suggest. When two delays are possible, there are three peaks: no delay occurs (left-most), one or the other delay occurs (middle), or both delays occur (right-most, with a chance of 6.25%). This pattern continues until around 4 delays, when it is more likely than not that at least one delay will impact every work item, and this becomes the mode position. By 5 delays, the Weibull shape is clearly forming. In the real world, the blank spaces between the peaks of this simulation would be populated by the uncertainty of the delay times (not all identical) and the probabilities (not all 25%).

Figure 2: Simulation of the impact of an increasing number of identical delays, each with probability of occurring p = 0.25
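The delay model above can be sketched as a short Monte Carlo simulation. This is a minimal illustration, not the exact model behind Figure 2: the base-effort parameters and delay size are invented, and a Normal base is used in place of the uniform variance for simplicity:

```python
import random
import statistics

def simulate_cycle_times(n_items=10_000, n_delays=5, p=0.25,
                         base_mu=10.0, base_sigma=1.0, delay_size=5.0,
                         seed=7):
    """Base effort is roughly Normal; each of n_delays independent delays
    adds delay_size units of extra work with probability p, or nothing."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_items):
        t = max(0.0, rng.gauss(base_mu, base_sigma))
        t += sum(delay_size for _ in range(n_delays) if rng.random() < p)
        times.append(t)
    return times

times = simulate_cycle_times()
# A right-skewed result: the mean is pulled above the median by the
# minority of items that attract several delays.
print(round(statistics.mean(times), 1), round(statistics.median(times), 1))
```

Histogramming `times` reproduces the mixing peaks described above: each extra possible delay shifts probability mass rightward until the Weibull-like shape emerges.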
Although this is a simplistic model, it helps explain and visualize the process of uncertain work effort and delays that are unrelated to the work item. Often these delays are staffing related: work items stall awaiting a specialist staff skill to become free. Or they are system related: work items stall waiting for a test environment to free up, or for another team to deliver a dependent piece of work. These delays have little to do with the work item task at hand; they are just bad luck for some work items and not others. The combination of just a few of these delays impacting some work items but not others causes the asymmetry. It is unlikely that every delay factor hits a given work item; it is also unlikely that no delay hits an individual work item. There will most often be a mix of a few delays. The exact percentage likelihood of each risk cannot be accurately estimated in advance, but it can be measured historically. Analysis of these probabilities and impacts should drive process selection decisions and the budget to solve the most common and impactful delays. Heuristics to isolate the most impactful delays are a vibrant area of commercial interest.

3.4. Historically observed cycle-time reduction

A supporting factor for Weibull being the correct distribution for cycle-time is its consistent prevalence over the last 40 years. The distribution shape common in the 1970s was wider (larger shape parameter), but the distribution is also common in 2014 with a narrower shape. The hypothesis of this paper is that process improvements are correlated with the Weibull distribution's shape parameter.

During the 1970s to 90s a general development process called Waterfall was common. The characteristics of this development pattern were upfront design, followed by coding, followed by testing. Projects were inhibited from exiting one phase until the prior phase was completed. This was the era in which Norden [1], Putnam [2] and Kan [3] documented the Rayleigh distribution.
They showed empirical evidence that this was the case during projects they oversaw in that era. Figure 3 shows a density function for the Rayleigh distribution. Rayleigh is part of the Weibull family, its specific characteristic being a shape parameter of 2.0.

Agile methodologies (XP, Scrum, Kanban) became prevalent from the late 1990s to the current time. For these processes the Weibull distribution moves left and narrows: the Weibull shape parameter is decreasing. Figure 4 shows a Weibull distribution with a shape parameter of 1.5, in the center of the 1.3 to 1.6 range seen in cycle-time analysis available to the author. Further research is needed to confirm that other sources see similar results.

Figure 3: Rayleigh distribution (shape = 2.0) ~ Waterfall process
Figure 4: Weibull distribution (shape = 1.5) ~ Agile processes
Figure 5: Weibull distribution (shape = 1.0) ~ Lean processes

The lowest limit seen in commercial practice nears an Exponential distribution, as shown in Figure 5. The Exponential distribution is also of the Weibull family, its shape parameter being 1.0. It has been seen in teams that have few or no external dependencies and whose work is mostly repetitive: operations teams, and teams responsible for releasing code to production. The hypothesis is that these teams experience few or no external delays, and their work involves little inventiveness. This is consistent with the simulation in Figure 2, where none, one or two delays result in a distribution with a left mode that is more exponentially distributed. It is likely that for these teams, measuring in hours or minutes would zoom in on the leading edge of the Weibull that is invisible at day granularity. Teams in this area have often laser-focused on eradicating delays and queuing time using principles from the Lean manufacturing world (reducing WIP, eliminating waste, etc.). Kanban teams have been observed approaching this ideal when in a predominately operational capacity.

The narrowing of the shape parameter has important implications for the variability of forecasting using cycle-time data. Teams with a lower shape parameter (narrower curve) have less variance, and will be able to forecast and promise with higher certainty (relative to the unit they estimate in). The hypothesis is that some iterative practices or Agile techniques reduce the scale of the distribution through iterative delivery (cycle-times are lower), and that other Agile practices decrease the shape parameter by eliminating uncertainty. The ability to put a dollar figure on this reduction in cycle-time and uncertainty is paramount to justifying the expense of implementing any change in agile practice.

3.5. Implications of Weibull-distributed cycle-times

3.5.1. Errors in control chart control limits

The most visible implication of a non-Gaussian cycle-time distribution is in the setting of service level agreements (the time development organizations promise to deliver within) or other control limits. Shewhart [7, 8] and Deming [9, 10] introduce and discuss the use of control charts to maintain quality and determine when intervention in a process is needed (beyond normal variation). One popular chart is Shewhart's control chart, a scatterplot with horizontal upper and lower bounds statistically calculated. When samples from production move beyond these limits, action is warranted. Traditional calculations for these limits assume normality, which is correct for general repetitive manufacturing but not for software development cycle-times, as shown. The computed control limits shown in graphs within almost all commercial task management systems are incorrect for cycle-time. Teams are being held to service levels that are erroneous.
To demonstrate the impact of this error on team expectations, random numbers were generated from a Weibull distribution with a shape parameter of 1.5 and a scale parameter of 30. Table 1 shows the resulting target intervals computed three different ways. The actual Weibull calculation (row 1) is based on the Weibull formula and is the correct value if the data truly conforms to the specified Weibull parameters (these would be close in our example, but not exact: generating perfect random numbers is difficult). Row 2 shows the erroneous Standard Deviation calculation (when applied to Weibull data) that adds the standard deviation (σ) to the mean (µ) 1, 2 and 3 times. The lower limit calculations below the mean (subtracting σ from µ) have been omitted because they go below zero. Some major commercial tool vendors have plotted the lower control limit below zero without noticing this is invalid: cycle-times cannot go below zero, until time can be reversed.

Table 1: Target intervals x(p) on Weibull data (shape ≈ 1.5, scale ≈ 30). Columns: µ + 1σ (target p = 0.683), µ + 2σ (p = 0.954), µ + 3σ (p = 0.997). Rows: (1) Weibull formula, (2) using StdDev in Excel, (3) using Percentile in Excel.

The formula using standard deviation (row 2) is widely incorrect at +1 and +3 sigma (σ). If a team were expected to perform within a µ + 3σ upper limit, typical off-the-shelf charting controls (and Microsoft Excel) would calculate 79 days as the target. The actual performance, solved using the Weibull distribution, is 94 days. The team would be at a 15-day disadvantage due to poor mathematics. The alternative is to compute the percentiles directly; the results are shown in row 3. This is a simple remedy for the issue, and acceptable with the computing power we have today (but not in Shewhart's [7, 8] time, circa 1939, when standard deviation and percentile values were computed using printed tables, not computers).
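The gap between the normality-based limits and the true percentiles can be reproduced with simulated data. A sketch using Python's standard library; the sample count and seed are arbitrary, so values will differ slightly from Table 1, and the percentile function is a nearest-rank approximation rather than Excel's interpolating PERCENTILE.INC:

```python
import random
import statistics

# Simulated cycle-times: Weibull with shape 1.5, scale 30, as in the text.
# Note random.weibullvariate takes the scale (alpha) first, then shape (beta).
random.seed(1)
samples = [random.weibullvariate(30, 1.5) for _ in range(100_000)]

mu = statistics.mean(samples)
sigma = statistics.stdev(samples)

def percentile(data, p):
    """Nearest-rank percentile, roughly comparable to PERCENTILE.INC."""
    s = sorted(data)
    return s[round(p * (len(s) - 1))]

for k, p in [(1, 0.683), (2, 0.954), (3, 0.997)]:
    erroneous = mu + k * sigma       # row 2: assumes normality
    direct = percentile(samples, p)  # row 3: percentile computed directly
    print(f"mu+{k}sigma: {erroneous:5.1f}  vs  {p:.1%} percentile: {direct:5.1f}")
```

At +3σ the normality-based limit undershoots the true 99.7th percentile by roughly the 15-day margin discussed above.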
The formulas used for the closer approximations seen in Table 1, in Microsoft Excel, are:

=PERCENTILE.INC([cycle-time data], 0.683)
=PERCENTILE.INC([cycle-time data], 0.954)
=PERCENTILE.INC([cycle-time data], 0.997)

3.5.2. Process decision implications

Figure 2 presented a simple model showing the impact of delays on delivery time. The uncertainty of the time it would take to create the work items with no delays quickly shifts to the right. Depending on the final width of the cycle-time Weibull distribution (its shape parameter) and its stretch (its scale parameter), different process decisions might be appropriate. The first step required to give this advice is determining the actual shape and scale parameters given historical cycle-time data. Many commercial packages exist to estimate the Weibull shape and scale parameters from a set of samples (EasyFit, for example). R [6] is a freely available statistical application and has libraries that will estimate the best-fit parameters, as shown in Listing 1.

R code:
require(MASS)
fit <- fitdistr([cycle-time vector], "weibull")$estimate
fit
Output:
shape scale

Listing 1: R code to estimate Weibull parameters from historical samples

Having determined the shape and scale parameters, likely traits and process advice can be broadly stated. For example, higher shape parameters are indicative of teams being impacted by more delays, and for longer (as seen in Figure 2, the more delays, the fatter the distribution). External team dependencies have commonly been seen as a major root cause for these teams. Teams with low shape parameter values approaching 1.0 and low scale parameters do many smaller items fast. This is typical of operations and support teams who undertake critical but repetitive work; automating the repetitive and critical tasks is prudent for these teams. Figure 7 shows some high-level traits and process advice based on a matrix of shape and scale parameters. Advice is very contextual, and this is not an exhaustive list; further research and confirmation are needed.

Figure 7: Suggested process advice based on Weibull scale and shape parameters

In theory, the shape parameter of the Weibull should not be below 1.0 in the software development estimation context. In software development, there seems to be good logic expressing that the longer a work item remains incomplete, the more likely it is to stay that way: once a work item has been impeded and the team takes on other work whilst waiting, pressure and focus are on the new work. Based on the hazard function of the Weibull distribution (related to survival analysis), shape parameters below 1.0 would imply an increasing hazard of completion (reverse logic: the hazard in this case is work finishing). Work items would have a higher chance of being completed the longer they are impeded. Stories of this occurring in real teams seem rare, but it is not impossible, and future work will investigate what process circumstances might cause it.

4. Software process impact modelling

Having established that software development practices shape the variability of cycle-time, computing the likely impact of this cycle-time in financial terms can provide evidence of when process change is prudent. The first step is to model software projects using cycle-time to forecast a baseline delivery date for a prior project that matches what actually occurred (the actual delivery). Modeling a prior software project using cycle-time requires the following data:

1. The known start date of the project.
2. The known end date of the project.
3. The number of work items delivered.
4. The cycle-time values for each work item.

Inputs 1 and 2 are known from team knowledge, and inputs 3 and 4 are obtained from the electronic tracking system used by the organization. The cycle-time samples are checked for obvious erroneous outliers, and for following a Weibull distribution, as a data quality confidence measure. Figure 8 shows the basic process for generating a probabilistic forecast based on the historical cycle-times. The goal is, given the known start date, to tune the model so that the forecasted release date at a given confidence level (0.85 is common) matches the known actual release. The forecast produced is in elapsed days, and this target is calculated as the number of days between actual release and actual start using the logic:

(3) Target forecast days = Known actual release date − Known actual start date

Cycle-time samples are combined with the known amount of work completed using the Monte Carlo process described in Figure 8. Random samples of
historical cycle-time are bootstrapped (sampled with replacement), as described by Efron and Tibshirani, 1994 [11]. Sets of these samples, the same size as the completed work-item count, are summed. This is repeated many times, often thousands of trials. Some trials happen to randomly choose all lower values from the samples, other trials get all higher values, but most get a mix. By analyzing the results, the probability of any one outcome can be computed as the percentage of trials that were equal to or less than the desired value.

Figure 8: Cycle-time Monte Carlo process for forecasting project completion

More than one work item was in progress at a time, and this is modeled by dividing the total summed cycle-times in each trial by an average amount of parallel effort. Average parallel work effort is computed experimentally by increasing or decreasing an initial guess until the forecasted number of days to complete matches the Target forecast days of formula (3) at a given certainty (0.85 is common). The certainty is the value at the 85th percentile of all trials after division by the parallel effort.

The results shown in this paper were computed using a commercial Monte Carlo simulation tool, KanbanSim and ScrumSim by Focused Objective [12]. Figure 9 shows a typical result table. The advantage of this tool is that it correctly computes calendar dates from elapsed days, and can compute the cost of delay from missing target dates. The process it employs is no more complex than that described in this paper.

Figure 9: Sample result for baseline Monte Carlo simulation using cycle-time

4.1. Hypothetical cycle-time reduction impact

Calculating the impact of reducing cycle-time by a fixed percentage is easy once a baseline model has been produced. Cycle-time samples are discounted by a fixed percentage when randomly sampled in a trial. For example, multiplying each sample by 0.9 simulates a 10% decrease in cycle-time across the board.
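The bootstrapping and discounting steps described above can be sketched as follows. This is a minimal illustration of the procedure, not the KanbanSim/ScrumSim implementation; the item counts, parallelism value and Weibull parameters are invented for the example:

```python
import random

def forecast_days(cycle_times, remaining_items, parallelism,
                  trials=5_000, confidence=0.85, discount=1.0, seed=11):
    """Bootstrap forecast: sample historical cycle-times with replacement,
    sum one set per trial, divide by the average parallel effort, and read
    the desired confidence level off the sorted trial results.
    discount < 1.0 models an across-the-board cycle-time reduction."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        total = sum(rng.choice(cycle_times) * discount
                    for _ in range(remaining_items))
        results.append(total / parallelism)
    results.sort()
    return results[int(confidence * (trials - 1))]

# Invented inputs: 60 historical samples, 120 items to complete,
# an average of 4 items worked in parallel.
random.seed(3)
history = [random.weibullvariate(9, 1.4) for _ in range(60)]

baseline = forecast_days(history, 120, 4)
improved = forecast_days(history, 120, 4, discount=0.9)  # 10% reduction
print(round(baseline), round(improved))
```

In practice the parallelism value would be tuned until the baseline forecast matches formula (3)'s target, and the discounted run then yields the hypothetical earlier delivery date.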
Forecasts using the discounted cycle-times show an earlier delivery date, and the value of this earlier release can be calculated (lower development cost plus extra revenue). Table 2 shows the cash flow impact when cycle-time samples were reduced by 10% and 20%. An updated forecast release date was used to calculate a cost saving (development expenses) and an additional revenue forecast based on being earlier to market. The total cash-flow benefit shown is based on financial revenue predictions obtained from the financial team. The baseline shows some revenue was made, but the project missed a seasonal window and paid dearly. Even small percentage decreases in cycle-time can make a big cash-flow impact.

Table 2: Example cash flow

Cycle Time    | Forecast Date | Forecast Cost | Cash flow benefit (cost savings + FY revenue) | Improvement
Current (0%)  | 15-Jul        | $1,000K       | $0 + $60K = $60K                              | (baseline)
10% decrease  | 27-May        | $912K         | $87K + $90K = $177K                           | 296% better
20% decrease  | 4-Apr         | $820K         | $120K + $145K = $265K                         | 442% better
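The arithmetic behind each row of Table 2 reduces to days saved multiplied by the daily carrying cost, plus the forecast extra revenue. A small sketch with invented inputs chosen to land roughly on the 10% row:

```python
from datetime import date

def cash_flow_benefit(baseline_date, new_date, daily_cost, extra_revenue):
    """Cost saved by finishing earlier (days saved x daily carrying cost)
    plus the forecast extra revenue from the longer time on market."""
    days_saved = (baseline_date - new_date).days
    return days_saved * daily_cost + extra_revenue

# Hypothetical inputs roughly matching the 10% row of Table 2.
benefit = cash_flow_benefit(date(2014, 7, 15), date(2014, 5, 27),
                            daily_cost=1_800, extra_revenue=90_000)
print(benefit)  # 49 days saved -> 49 * 1800 + 90000 = 178200
```

The revenue term comes from the financial team's forecasts, not from the cycle-time model itself; only the days-saved term is produced by the simulation.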
5. Case study: Process Economic Impact

A large commercial client (approximately 200 staff in the division analyzed) wanted to calculate the impact of a recently introduced process change. Although the IT department felt that the newly introduced process considerably improved completed-work throughput, there was some healthy skepticism among the financial controllers, who glibly stated "show us the money."

The initial question to answer was: did the process change make any impact at all? If so, how much? To answer these questions, baseline models for a prior project (called "prior" in this paper) and a test project (called "test" in this paper) were built using the process described in section 4. These models were tuned as described in section 4 to accurately match the known outcomes.

Having baselines and forecasts for the prior and test projects, the cycle-time samples were swapped between the models. Prior samples were modeled in the test project, and the test project's cycle-times were used in the prior project. New forecasts were produced, giving hypothetical outcomes had the new process been used in the prior project, or the old process in the test project. Figure 10 shows the general process of tuning against actual dates and then swapping cycle-time histories to get the modeled outcomes.

Figure 10 Process for modeling known and modeled (hypothetical) forecasts

This straight swap was appropriate in this context because the teams were stable and the prior and test projects were similar in complexity. This was confirmed by comparing the work-item count and effort for features delivered in both. If the teams or projects were significantly different, this analysis would be erroneous.

Testing the impact of the new process on the prior project produced the results in Table 3. With the new cycle-time history brought about by the process change, there would have been an almost 4 month (110 day) improvement in delivery time, with the reduced staff carrying cost alone saving $4M.

The revenue impact of being on the market earlier was judged too hard to estimate, so it was omitted. This means the outcomes shown understate the losses: they do not account for the cost of delay on the project delivered late, or the cost of delay for the next project that was pushed back. Staff carrying cost was calculated as the per-day cost of the teams implementing the project, computed simply as the yearly IT budget divided by 365 days.

Table 3 Impact if new process WAS used in the prior project

Process         Target   Delivered           Cost
Old (actual)    1-Dec    1-Mar (+90 days)    $18M
New (modeled)   1-Dec    11-Nov (-20 days)   $14M (-$4M)

The second part of the analysis was to estimate the outcome if the new process hadn't been implemented for the test project. The cycle-times for the prior project were used in the model for the test project. The outcome is shown in Table 4. The new process saved at least $1.4M, again accounting only for staff carrying cost.

Table 4 Impact if new process WASN'T used in the test project

Process         Target   Delivered           Cost
Old (modeled)   31-Dec   5-Feb (+36 days)    $8M (+$1.4M)
New (actual)    31-Dec   31-Dec (on-time)    $6.6M

The combined message to the executive teams was that if the new process had been in place for the prior project:
a. It would have delivered on-time rather than 3 months late
b. The development budget would have been $4M lower
And if the new process hadn't been implemented for the test project:
a. The test project would have delivered 1 month late
b. The development budget would have been $1.4M higher

Due to the compelling nature of these forecasts, even if halved, the decision to fund and roll out the new process to the entire organization was happily accepted by all financial and IT staff. The ability to quickly model the financial impact using the cycle-time data on hand was central to gaining executive sponsorship for the process change initiative. Three years after this analysis, the teams in question still model, assess the viability of, and measure the resulting impact of all process changes.
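The swap experiment of Figure 10 amounts to running the same forecast function with the two projects' cycle-time histories exchanged. A sketch under hypothetical inputs (the histories, item counts, parallel effort, and budget below are illustrative, not the confidential case-study data):

```python
import random

def forecast_days(cycle_times, item_count, parallel_effort,
                  trials=10000, certainty=0.85):
    # Bootstrap Monte Carlo forecast at the given certainty percentile
    totals = sorted(
        sum(random.choice(cycle_times) for _ in range(item_count)) / parallel_effort
        for _ in range(trials)
    )
    return totals[int(certainty * trials) - 1]

random.seed(42)
prior_history = [4, 6, 8, 8, 10, 14, 21, 30]   # hypothetical: old process
test_history = [2, 3, 3, 4, 5, 6, 8, 10]       # hypothetical: new process

# Baselines, tuned to reproduce each project's known outcome
actual_prior = forecast_days(prior_history, item_count=150, parallel_effort=5)
actual_test = forecast_days(test_history, item_count=120, parallel_effort=5)

# Swap the histories: new process on the prior project, and vice versa
prior_with_new = forecast_days(test_history, item_count=150, parallel_effort=5)
test_with_old = forecast_days(prior_history, item_count=120, parallel_effort=5)

days_saved = actual_prior - prior_with_new
daily_cost = 14_000_000 / 365          # hypothetical yearly IT budget / 365
carrying_cost_saved = days_saved * daily_cost
```

Converting the modeled schedule difference into money through the per-day staff carrying cost is what produced the $4M and $1.4M figures reported to the executives, subject to the stability and similarity assumptions noted above.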
Confidential data available on request.
6. Conclusion

This paper has outlined a method for calculating the financial impact of software development process change. Practitioners can choose to predict the likely impact before a change is implemented, and/or assess its impact after implementation.

The software industry has a documented history of using the Rayleigh and Weibull distributions for various problems; prior examples include predicting remaining defects in the field and staff capacity planning. This paper continues the Weibull tradition with a theory of why cycle-time for software development follows this distribution within a narrow range of parameter values. It provides new sources of evidence in support of this claim:
1. A theory of why the Weibull distribution is the outcome of the typical process constraints seen in software development.
2. The Weibull shape parameter can be seen to correlate with progressive process changes over forty years.

Knowing that cycle-time follows the Weibull distribution allows informed decisions to be made in risk mitigation and process change. This paper offers the following advice:
1. Calculations on cycle-time data should not assume a Normal distribution. The consequences are evident in commercial tooling that shows erroneous confidence limits on control charts.
2. Agile processes have compressed the shape and scale of the cycle-time distribution, decreasing its variability.
3. Computing the shape and scale parameters of the cycle-time distribution allows informed process recommendations.

Probabilistic modeling allows the financial impact of process change to be estimated. Monte Carlo modeling using captured cycle-time samples and known project outcomes allows simple what-if experiments based on hypothetical cycle-time improvements. This is a rapid way to quantitatively determine economic outcomes.
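Estimating the shape and scale parameters recommended above can be done with a standard fitting routine. A sketch using SciPy; the sample data here are synthetic, generated only to exercise the fit, where real use would pass a team's recorded cycle-times:

```python
from scipy import stats

# Synthetic cycle-times standing in for a recorded history (days)
cycle_times = stats.weibull_min.rvs(1.5, scale=8.0, size=200, random_state=0)

# Fit shape (k) and scale (lambda) by maximum likelihood, pinning the
# location at zero since cycle-times cannot be negative
shape, loc, scale = stats.weibull_min.fit(cycle_times, floc=0)
```

A fitted shape well below the values typical of phase-gated processes, together with a small scale, would indicate the compressed, low-variability cycle-time profile the paper associates with agile processes.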
The commercial case study shown in this paper demonstrates how quickly a financial argument can be made for continually challenging the current process and investing in improvements. The payback is rapid.

7. References

[1] Norden, P. V., "Useful Tools for Project Management," Operations Research in Research and Development, John Wiley and Sons, NY, 1963.
[2] Putnam, L. H., and Myers, W., Measures for Excellence: Reliable Software on Time, Within Budget, Yourdon Press, Englewood Cliffs, NJ, 1992.
[3] Kan, S. H., Metrics and Models in Software Quality Engineering, 2nd Edition, Addison Wesley, Upper Saddle River, NJ, 2002.
[4] McCombs, E. L., Elam, M. E., and Pratt, D. B. (2009), "Estimating Task Duration in PERT using the Weibull Probability Distribution," Journal of Modern Applied Statistical Methods, Vol. 8, Iss. 1, Article 26.
[5] Fente, J., Schexnayder, C., and Knutson, K. (2000), "Defining a probability distribution function for construction simulation," Journal of Construction Engineering and Management, 123(3).
[6] R Core Team (2014), R: A language and environment for statistical computing, R Foundation for Statistical Computing, Vienna, Austria.
[7] Shewhart, W. A. (1931), Economic Control of Quality of Manufactured Product.
[8] Shewhart, W. A. (1939), Statistical Method from the Viewpoint of Quality Control.
[9] Deming, W. E. (1975), "On probability as a basis for action," The American Statistician 29(4).
[10] Deming, W. E. (1982), Out of the Crisis: Quality, Productivity and Competitive Position.
[11] Efron, B., and Tibshirani, R. J. (1994), An Introduction to the Bootstrap (Vol. 57), CRC Press.
[12] KanbanSim and ScrumSim, a software environment for modeling and forecasting agile software development projects, Focused Objective LLC, Seattle, USA.
[13] Magennis, T. (2012), Managing Software Development Risk using Modeling and Monte Carlo Simulation, Lean Software and Systems 2012 Proceedings.
[14] Acemoglu, D., Gancia, G., and Zilibotti, F., "Competing engines of growth: Innovation and standardization," Journal of Economic Theory 147 (2012).