Measuring Flexibility in Software Project Schedules


Muhammad Ali Khan
Department of Mathematics & Statistics, University of Calgary, 2500 Campus Drive NW, Calgary AB, Canada T2N 1N4

Sajjad Mahmood
Information and Computer Science Department, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia

Abstract

The complexity of software projects is growing with the increasing complexity of software systems. The pressure to fit schedules within shorter periods of time leads to initial project schedules with a complex logic. These schedules are often highly susceptible to any subsequent delays in project activities. Thus techniques need to be developed to determine the quality of a software project schedule. Most of the existing measures of schedule quality define the goodness of a schedule in terms of its network complexity. However, these measures fail to estimate the flexibility of a schedule, that is, the extent to which a schedule can withstand delays without requiring extensive changes. The relatively few schedule flexibility measures that exist in the literature suffer from several drawbacks such as lack of a theoretical foundation, not having a definite scale and not being able to distinguish between schedules with similar network topologies. In this paper, we address these issues by defining two flexibility measures for software project schedules, namely path shift and value shift, which respectively predict the impact of changes in activity durations on the critical paths and the critical value of a schedule. Inspired by the notion of betweenness centrality, these measures are theoretically sound, have a well-defined scale and require little computational effort. Furthermore, through several examples and two real-life software project case studies we demonstrate that these measures outperform the existing flexibility measures in clearly discriminating between the flexibility of software project schedules having very similar topologies.

Index Terms: Software project, software project schedule, schedule flexibility, social network analysis, betweenness centrality.

1 INTRODUCTION

Over the last decade, the demand for software products has risen at a phenomenal rate. This has placed new demands and expectations on the software industry, especially for enhancing development productivity and reducing time to market.

Software developers look for new ways, for instance component-based development [1, 2, 3], design re-factoring [4], global software development [5, 6] and the open source initiative [7], to develop quality software within a shorter period of time and reduce overall development costs. As a result, careful management of software project schedules has become crucial in the modern software industry [8, 9, 10]. Schedules play a fundamental role in managing daily operations and are an important project control instrument [11]. Over the years, the number of successful software projects has doubled from 16% in 1994 to 32% in 2009 [10]. The increase can be attributed, at least in part, to the development of software project management processes and tools [10, 12]. Despite this improvement, 24% of the projects reported in the CHAOS Report 2009 [10] had failed and another 44% were late and over-budget. This highlights the need for quality assessment of software project schedules.

Several methods have been suggested for analyzing schedules, but the critical path method (CPM) continues to be used most widely [12, 13]. A CPM schedule takes the form of a directed acyclic graph (DAG), whose nodes represent different project activities while the arcs (directed edges) represent precedence relations between activities. The indegree of a node is the number of arcs directed towards it, whereas its outdegree is the number of arcs directed away from it. A node with zero indegree is called a source and a node with zero outdegree is called a sink. A schedule can have multiple sources and sinks. A path p in a CPM network is a sequence of nodes A_1 A_2 ... A_k, starting from a source A_1 and ending at a sink A_k, arranged according to their precedence order. Each node is assigned a non-negative weight that represents the duration of the corresponding activity. Arcs can also be assigned a duration if there is a time lag between the corresponding activities. The duration of a path is defined as the sum of the durations of its activities and time lags. In the sequel, we make no distinction between a schedule and its network representation, and use the terms activity and node interchangeably. Furthermore, for the sake of simplicity we assume that there are no time lags between activities. However, this does not result in any loss of generality as the technique developed in this paper can be readily applied to a schedule with time lags (see Section 7 for details).

A critical path in a schedule is a path with the longest duration. A schedule can have multiple critical paths. The activities that lie on a critical path are called critical activities. The duration of a critical path is said to be the critical value or the make span of the schedule as it determines the earliest time by which the project can be completed [12]. The float or slack of an activity is the amount of time it can be delayed without changing the schedule critical value. Traditionally, there has been significant emphasis on the better management of critical activities as even a small change of duration in any critical activity changes the schedule make span. In reality, however, a schedule may contain a number of near critical paths with a potential of becoming critical due to delays in the activities that lie on these paths. It is therefore imperative to develop techniques that are able to predict the impact of a delay in any activity, whether critical or not, on the whole schedule [12]. The present work aims to achieve this by defining and testing two new schedule flexibility measures.
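
As a concrete illustration of the CPM terminology above, the following minimal sketch (our own illustrative Python, not tied to any particular scheduling tool) computes the earliest and latest times, the critical value and the critical activities of a schedule given as a dictionary mapping each activity to its duration and predecessor list; the four-activity demo schedule and its durations are hypothetical.

```python
from collections import defaultdict

def cpm(activities):
    """activities: {name: (duration, [predecessor names])}; the network must be a DAG."""
    # Topological order: repeatedly pick activities whose predecessors are all scheduled.
    order, done = [], set()
    while len(order) < len(activities):
        for a, (_, preds) in activities.items():
            if a not in done and all(p in done for p in preds):
                order.append(a)
                done.add(a)
    # Forward pass: earliest start and finish times.
    es, ef = {}, {}
    for a in order:
        dur, preds = activities[a]
        es[a] = max((ef[p] for p in preds), default=0)
        ef[a] = es[a] + dur
    critical_value = max(ef.values())          # make span of the schedule
    # Backward pass: latest start and finish times, then float (slack).
    succs = defaultdict(list)
    for a, (_, preds) in activities.items():
        for p in preds:
            succs[p].append(a)
    lf, ls = {}, {}
    for a in reversed(order):
        dur, _ = activities[a]
        lf[a] = min((ls[s] for s in succs[a]), default=critical_value)
        ls[a] = lf[a] - dur
    slack = {a: ls[a] - es[a] for a in activities}
    # Zero float means the activity lies on a critical path.
    critical = [a for a in order if abs(slack[a]) < 1e-9]
    return critical_value, critical, es, ef, ls, lf, slack

# Hypothetical demo schedule: A1 precedes A2 and A3, which both precede A4.
demo = {"A1": (5, []), "A2": (3, ["A1"]), "A3": (4, ["A1"]), "A4": (2, ["A2", "A3"])}
print(cpm(demo)[:2])   # critical value 11; critical activities A1, A3, A4
```

In this toy schedule, delaying the non-critical activity A2 by up to its one-day float leaves the critical value of 11 unchanged, which is exactly the notion of slack described above.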

What constitutes a good schedule is a matter that generates much debate [11]. One perspective is that a schedule is of good quality if its logic and precedence relations are easy to understand and keep track of. The idea underlying this point of view is that a complex schedule requires greater coordination efforts and so cannot be considered of good quality [14]. Significant work has been done to quantify the quality of schedules in terms of their complexity. Several complexity measures [14, 15, 16, 17, 18, 19] have been proposed to determine the quality of a project schedule. These measures generally interpret a schedule network with more arcs as more complex. However, recent research [19, 20, 21] has pointed out the limitations of existing methods in managing schedule complexity, and this lack of quality assessment results in project failures.

Another viewpoint is to determine schedule quality based on its flexibility. We define flexibility as the ability of a schedule to absorb delays in activities without requiring substantial revisions as a whole. One way to measure the flexibility of a schedule is to predict the impact of activity delays on its critical paths and its critical value. In this paper, we define two measures that achieve this by using only the basic schedule data. We call these measures path shift and value shift and show how they can be used to predict the flexibility of a software project schedule. The proposed flexibility measures are based on the concept of betweenness centrality from social network analysis (see Section 4). Due to this strong theoretical background and the generality of our techniques, the analysis presented in this paper can potentially be applied to a wide variety of network-based schedules across different domains. Here we focus on applications in software engineering as we have already tested our measures on a real-world software development project and have gathered feedback from practitioners and colleagues on the utility of our approach in this field.

In the project scheduling literature it is common practice to apply any newly developed technique to certain examples such as schedules consisting of a single sequence of activities, a few parallel sequences of activities or a few intersecting sequences of activities [14]. These examples help in validating new techniques by comparing them with the existing methods. This is usually followed by a case study applying the new technique on a real-world project schedule and checking if the findings are consistent with what is expected. The expected behavior is often identified using Monte Carlo simulation, which relies on repeated random sampling to quantify how a schedule is likely to behave in reality [12]. In this paper, we follow exactly this standard approach. We present an application of our measures to seven example schedules as well as two real-world case studies, namely, the Obesity Health Clinic System and the Life Cycle Assessment System. The example cases help to highlight that, unlike the existing measures of schedule flexibility, our measures provide more insight to distinguish between good and bad schedules, while the case studies enable us to verify the predictions made by our measures in an industrial setting using Monte Carlo simulation techniques. Moreover, we present a qualitative analysis based on practitioners' feedback to highlight the strengths and limitations of our work.

The centrality-based measures developed in this paper provide software project managers with a way to forecast how a schedule would respond to unforeseen changes in activity durations. An early prediction of this response can lead to potential savings of time and effort for software developers by detecting a lack of flexibility in a schedule before project execution. Furthermore, we demonstrate that the shift measures are able to distinguish between the flexibility of schedules with similar network topologies, while the existing flexibility measures fall short. The measures developed here have a strong theoretical background as they are motivated by the idea of betweenness centrality from social network analysis. They also offer the additional benefits of being easy to calculate and having a well-defined scale that helps in comparing the flexibility of different schedules. We use these measures to predict the behavior of several example cases and a real-life software development schedule under duration changes. The findings indicate that the schedule behavior predicted by path shift and value shift matches the results obtained by running Monte Carlo simulations.

The rest of this paper is organized as follows. Section 2 reviews the related literature, while Section 3 discusses the research design. In Section 4, we present our schedule flexibility measures and their motivation from social network analysis. We also apply our measures to several example schedules to establish their validity and compare them with the existing flexibility measures. Sections 5 and 6 present two case studies based on real-life software projects. We discuss some salient features of our approach in Section 7, while Section 8 summarizes the research and outlines directions for future work.

2 RELATED WORK

2.1 Software Project Scheduling

In the software project management literature, researchers have recognized the direct impact of schedules on the success rates of software projects [8, 9, 22, 23]. In particular, a significant amount of research has been focused on understanding the impact of project schedules on development time and cost. For instance, Abdel-Hamid et al. [24, 25] reported that the software project schedule has a direct impact on the productivity and an indirect impact on the error rate of a project. Aggressive pressure on a software project schedule could lead to higher development effort and cost. Similarly, Ding and Jing [26] reported that around 40% of software projects in China have failed due to poor scheduling. A number of studies have been carried out to better understand how schedule compression affects the software development process [9, 27, 28] and its relationship with effective management strategies. For example, Austin [29] indicates that less flexible schedules make it impossible for developers to meet deadlines. However, he also observes that schedule pressure within a limit could maintain or improve software project performance. Recently, Nan and Harter [9] studied the impact of budget and schedule constraints on software development time and effort, while Chen and Zhang [23] proposed an event-based schedule representation and applied an ant colony optimization technique to schedule critical tasks as early as possible and to assign tasks to suitable human resources.

In summary, these studies indicate that the impact of scheduling constraints on project outcomes is nonlinear and that, depending on the degree of schedule compression, schedule pressure may have either a positive or a negative effect on the development effort. The relationship between software project schedules and project team size has also been investigated. Hericko et al. [8] proposed a mathematical model to estimate the optimal team size that leads to stable schedules. They conclude that the optimal team size depends on software size and project schedule, and provide a table of recommended team sizes for developing software of varying size.

2.2 Evaluation of Schedule Quality - The Complexity Measures

The influence of good scheduling on software project success makes it important to measure the goodness of a schedule before adopting it. In the project management literature, a number of researchers measure schedule quality in terms of the complexity of the schedule network, with an understanding that a less complex network leads to a better quality schedule. The coefficients of network complexity are commonly used as indicators of schedule complexity [15, 16, 17]. Latva-Koivisto [19] presented a number of complexity measures for business process models such as the cyclomatic number, reduction complexity index, restrictiveness estimator and number of trees in a graph. One of the major limitations of these measures is that they also count redundant arcs and give a false impression of complexity [14]. Nassar and Hegab [14] introduced another schedule complexity measure where a schedule with more links is considered more complex. Recently, Vidal et al. [21] proposed a project complexity index based on the analytic hierarchy process to highlight the most complex scheduling alternatives and their sources of complexity.

2.3 The Flexibility Measures

Some attempts have also been made to determine the quality of a schedule based on its flexibility. Cesta et al. [30] defined the robustness RB(G) of a schedule G as its ability to absorb temporal variations in an activity without carrying them forward. More precisely, if we use d(t_1, t_2) to represent the minimum temporal distance between time points t_1 and t_2, then

RB(G) = \frac{\sum_{i \neq j} \left( |d(e_{A_i}, s_{A_j})| + |d(s_{A_j}, e_{A_i})| \right)}{H \cdot n(n-1)} \times 100,

where s_{A_k} and e_{A_k} respectively denote the start time and end time of activity A_k, n is the total number of activities in G and H stands for the horizon of the problem (see [30, 31] for an explanation). The quantity |d(e_{A_i}, s_{A_j})| + |d(s_{A_j}, e_{A_i})| measures the temporal flexibility between a pair of activities A_i and A_j. Therefore, intuitively, a schedule with a higher value of RB(G) is more likely to restrict any variation in the duration of an activity to local changes.

Aloulou and Portman [32] defined the flexibility in job sequencing flex_seq(G) as the number of pairs of activities in G that are not ordered with respect to each other by explicit or implicit precedence links. The idea is that a schedule with a high value of flex_seq has higher flexibility as more pairs of activities are independent of variations in each other.

In addition to the above mentioned robustness measures, some measures have been proposed to estimate the impact of disruptions on a schedule. Policella et al. [31] defined the disruptability, dsrp(G), of a schedule G as the average ratio of the temporal slack in each activity A_i to the total number of delays caused by shifting A_i forward by an amount of time \Delta_{A_i}, that is,

dsrp(G) = \frac{1}{n} \sum_{i=1}^{n} \frac{slack_{A_i}}{num\_changes(A_i, \Delta_{A_i})}.

Furthermore, the authors recommend \Delta_{A_i} = slack_{A_i} as the increment [31]. Recently, Klimek and Lebowski [33] defined another measure of schedule disruption, namely stability, that calculates the sum of delays in all activities caused by a one unit delay in each activity, that is,

stab(G) = \sum_{j=1}^{n} \left( \sum_{i=1}^{n} \left( s^{j}_{A_i} - s_{A_i} \right) \right),

where s^{j}_{A_i} denotes the start time of activity A_i after a one unit increment in the duration of activity A_j. The lower the value of stab(G), the more stable the schedule.

2.4 Limitations of Existing Measures

Despite their usefulness, the current measures have several shortcomings. To begin with, the complexity measures do not consider the inherent variability in the scheduling process. They describe schedule quality only in terms of the number of links between activities and assume that a larger number of links leads to bad quality [14, 19]. However, from a software project manager's standpoint, a good schedule is one that is flexible and exhibits stability despite changes caused by unforeseen factors. The complexity based quality metrics do not consider the activity durations in their analysis and hence cannot guide software project managers on this important aspect of schedule quality. Although the current flexibility measures give an indication of the extent to which a schedule can absorb temporal changes, they fail to distinguish between the flexibility of schedules that have similar network topologies. Additionally, most of these measures do not have a fixed range of values, making it difficult to compare different schedules. In Section 4.4, we demonstrate these problems using some concrete examples. This necessitates the development of measures that present a more complete picture and clearly discriminate between different schedules. The flexibility measures developed in this paper aim to address the above mentioned issues.
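
To make the later comparison in Section 4.4 easier to follow, the sketch below codes the two simplest of these existing measures directly from the definitions as stated above (it is our own illustrative Python, not taken from [32] or [33]); RB and dsrp are not sketched because they require additional temporal data beyond the basic schedule. The activity-dictionary format and the hypothetical demo schedule of the earlier CPM sketch are reused.

```python
from itertools import combinations

def earliest_starts(activities):
    # Forward pass only (same logic as in the CPM sketch).
    order, done, es, ef = [], set(), {}, {}
    while len(order) < len(activities):
        for a, (_, preds) in activities.items():
            if a not in done and all(p in done for p in preds):
                order.append(a)
                done.add(a)
    for a in order:
        dur, preds = activities[a]
        es[a] = max((ef[p] for p in preds), default=0)
        ef[a] = es[a] + dur
    return es

def flex_seq(activities):
    # Number of activity pairs not ordered by any explicit or implied precedence path.
    def descendants(a):
        out, stack = set(), [s for s, (_, p) in activities.items() if a in p]
        while stack:
            s = stack.pop()
            if s not in out:
                out.add(s)
                stack.extend(t for t, (_, p) in activities.items() if s in p)
        return out
    reach = {a: descendants(a) for a in activities}
    return sum(1 for a, b in combinations(activities, 2)
               if b not in reach[a] and a not in reach[b])

def stab(activities):
    # Total shift of all start times caused by a one-unit delay in each activity in turn.
    base = earliest_starts(activities)
    total = 0
    for j, (dur, preds) in activities.items():
        perturbed = dict(activities)
        perturbed[j] = (dur + 1, preds)
        delayed = earliest_starts(perturbed)
        total += sum(delayed[i] - base[i] for i in activities)
    return total

demo = {"A1": (5, []), "A2": (3, ["A1"]), "A3": (4, ["A1"]), "A4": (2, ["A2", "A3"])}
print(flex_seq(demo), stab(demo))   # 1 unordered pair (A2, A3); total shift of 4
```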

3 METHODOLOGY AND RESEARCH DESIGN

The flexibility analysis of schedules presented in this paper has been developed in four stages as follows:

1) Schedule representation
2) Centrality-based flexibility measures
3) Comparative analysis
4) Validation

In the first stage, we adopt the critical path method (CPM) to analyze a schedule and represent it as a directed acyclic graph (DAG). This has been discussed in Section 1. The second stage, which forms the core of our research, deals with defining robust flexibility measures based on a strong theoretical background and an intuitively clear interpretation. In addition, the new metrics should be easy to evaluate, have a well-defined fixed scale and be able to differentiate between the flexibility of schedules having similar network topologies. This stage is carried out in Sections 4.1-4.3. In Section 4.1, we discuss social network analysis and betweenness centrality - the motivation and theory behind our research. Section 4.2 introduces the node centrality of the nodes (activities) in a schedule and uses it to define two new schedule flexibility measures called path shift and value shift. Finally, Section 4.3 demonstrates how the new measures can be successfully applied and interpreted. The comparative analysis stage compares our approach to the existing methodologies of analyzing schedule flexibility. Scheduling examples are used in Section 4.4 to highlight several advantages and desirable properties of the centrality-based flexibility measures. The last stage consists of validating the flexibility analysis using (a) examples (Section 4.3), (b) two real-life case studies accompanied by Monte Carlo simulations (Sections 5 and 6) and (c) qualitative data collected from academia and industry participants (Section 5.4).

In this paper, we follow the design science research framework [34] to present the centrality-based flexibility analysis of schedules. Hevner et al. [34] have presented seven design science research guidelines, namely, design as an artifact, problem relevance, design evaluation, research contributions, research rigor, design as a search process and communication of research. The design as an artifact guideline indicates that research must produce an artifact in the form of a model or method. Problem relevance points out the need to develop technology-based solutions to relevant problems. The design evaluation guideline suggests that the quality of a design artifact must be demonstrated via evaluation methods such as case studies or experiments. By research contribution it is implied that the research should make a clear contribution in the area of the design artifact. The research rigor guideline points out that the model should use rigorous methods in the development and evaluation of a design artifact. By design as a search process it is meant that the search for an effective artifact requires utilizing available means to reach desired ends while satisfying laws in the problem domain. Finally, the communication of research guideline states that the research must be presented to technology-oriented as well as management-oriented audiences [3, 34].

Here we discuss our approach with reference to these guidelines.

Design as an artifact: The main artifact of our research is the centrality-based schedule flexibility analysis. The analysis uses CPM to represent a schedule as a network. Then it applies the centrality-based measures on the schedule network to determine its flexibility.

Problem Relevance: Our research aims to answer the question: What is a good schedule? We quantify the goodness of a schedule in terms of its flexibility and provide an improved mechanism to measure schedule flexibility. We apply our methodology to a real-world software engineering project and gather feedback from researchers as well as practitioners. Thus the problem under consideration is highly relevant to the fields of software engineering, project management and scheduling.

Design Evaluation: The methodology developed in this paper is evaluated using scheduling examples (Sections 4.3 and 4.4), two real-life case studies (Sections 5 and 6) and the feedback gathered from academia and industry (Section 5.4). Furthermore, in Section 7, we investigate some distinctive features of our approach.

Research Contribution: The main research contributions of our work are (1) applying techniques from social network analysis to a scheduling problem for the first time, (2) defining two new centrality-based schedule flexibility measures that offer several advantages over the existing approaches and (3) illustrating how these measures can be used in practice. We show that, compared to the existing measures of schedule flexibility, our measures are unique for their ease of use, clear interpretation, strong theoretical background and ability to distinguish between similar looking schedules.

Research Rigor: To the best of our knowledge, the schedule flexibility measures developed here are the first to have such an elaborate theoretical foundation. The research performed in this paper draws heavily from the literature on CPM, software project management, social network analysis and project scheduling, and meets the prevalent research standards in these areas.

Design as a Search Process: The schedule flexibility analysis presented in this paper has been derived by selecting and utilizing well-established techniques of social network analysis, scheduling and project management. The end product has been reached by a search-based process [34].

Communication of Research: The intended audience of this paper includes industry practitioners as well as researchers in the fields of software engineering, social network analysis, scheduling and project management. Due to the interdisciplinary nature of the research, we have made every effort to make the paper self-contained. Furthermore, through two real-life case studies we show that our research can be implemented in industry with minimal effort (see Sections 5, 6 and 7).

4 CENTRALITY-BASED FLEXIBILITY MEASURES

4.1 Social Network Analysis - Betweenness Centrality

A social network is an undirected or directed graph whose nodes represent actors and whose edges or arcs represent social relationships between actors such as friendliness and unfriendliness. Several types of networks from disciplines other than sociology can be considered as social networks. Thus tools from social network analysis find a wide range of applications across domains. Centrality is one such tool that has been applied to diverse areas such as biosciences and software engineering [35]. A centrality measure determines the influence of a node in a network based on a given criterion and ranks the nodes according to their relative importance. Various measures of centrality have been developed over the years including degree, closeness, betweenness and eigenvector centralities [36].

If a node v in a social network lies on many shortest paths between other nodes, it has a higher influence in the network. This is due to the fact that most of the short connections between other nodes depend on node v. The betweenness centrality of a node determines precisely this influence [36]. Given a graph G(V, E) with node set V and edge set E, the betweenness centrality of a node v ∈ V is defined as

C_B(v) = \sum_{x \neq v \neq y \in V} \frac{\sigma_{xy}(v)}{\sigma_{xy}},    (1)

where σ_{xy} and σ_{xy}(v) respectively denote the number of shortest paths between nodes x and y and the number of such shortest paths that pass through v. Since a schedule is also a network of activities which influence each other through precedence links, the betweenness centrality can be adapted to rank the activities of a schedule. However, an activity is more influential in a schedule if it is critical or near critical, that is, if it lies on a long duration source-sink path. In the following section, we develop these ideas further and propose a scheme for ranking the activities of a schedule.

4.2 Path Shift and Value Shift

We begin by defining the node centrality of a node (activity) in a schedule. As explained in Section 4.1, we want a node that lies on a long duration source-sink path to have a larger node centrality. Let G(V, E) be a schedule network with V being the set of nodes and E the set of arcs. For any node v ∈ V, let us denote by P(v) the set of source-sink paths in G that pass through v. Also let D(v) denote the duration of an activity v and D(p) = Σ_{v ∈ p} D(v) be the duration of a source-sink path p. Then we define the node centrality of v as

NC(v) = \frac{\max_{p \in P(v)} D(p)}{CV},    (2)

where CV denotes the critical value of schedule G and max_{p ∈ P(v)} D(p) represents the duration of the longest duration source-sink path in G that passes through v.
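
The maximum in equation (2) does not require enumerating source-sink paths: the longest such path through v is the longest path ending at v plus the longest path starting at v, minus D(v), so one forward and one backward pass over a topological order suffice. The sketch below is illustrative Python (not the authors' implementation), using the activity-dictionary format and hypothetical demo schedule of the earlier CPM sketch.

```python
from collections import defaultdict

def node_centrality(activities):
    order, done = [], set()
    while len(order) < len(activities):
        for a, (_, preds) in activities.items():
            if a not in done and all(p in done for p in preds):
                order.append(a)
                done.add(a)
    succs = defaultdict(list)
    for a, (_, preds) in activities.items():
        for p in preds:
            succs[p].append(a)
    # into[v]: duration of the longest path from a source ending at v (v included).
    into = {}
    for a in order:
        dur, preds = activities[a]
        into[a] = dur + max((into[p] for p in preds), default=0)
    # out[v]: duration of the longest path from v to a sink (v included).
    out = {}
    for a in reversed(order):
        dur, _ = activities[a]
        out[a] = dur + max((out[s] for s in succs[a]), default=0)
    cv = max(into[a] for a in activities if not succs[a])      # critical value CV
    nc = {a: (into[a] + out[a] - activities[a][0]) / cv for a in activities}
    return nc, cv

demo = {"A1": (5, []), "A2": (3, ["A1"]), "A3": (4, ["A1"]), "A4": (2, ["A2", "A3"])}
print(node_centrality(demo))   # NC = 1 for the critical activities A1, A3, A4; 10/11 for A2
```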

Clearly, 0 < NC(v) ≤ 1 for any node v ∈ V. Moreover, NC(v) = 1 if and only if v represents a critical activity. Basically, the node centrality compares the influences of different activities on the entire schedule in the sense of how delays in these activities would affect the schedule. If NC(u) > NC(v) for some activities u and v of G, then an increase of t units of time in D(u) would affect the schedule more than the same increase in D(v). This is because u is more critical compared to v.

Having defined a node ranking scheme, we now define our schedule flexibility measures. The value shift S_v(G) of the schedule G is defined as the average of all the node centralities, i.e.,

S_v(G) = \frac{\sum_{v \in V} NC(v)}{|V|}.    (3)

It can be seen that 0 < S_v(G) ≤ 1 and S_v(G) = 1 if and only if all activities of G are critical. As the name suggests, the value shift predicts the effect of delays in activities on the critical value of the schedule. A higher value of S_v(G) implies that several activities have high node centralities, i.e., most of the activities are either critical or near critical. Therefore a delay in any of these critical or near critical activities can potentially lead to a change of critical value. Thus a higher value shift S_v(G) results in the critical value of G being more sensitive to changes in activity durations.

The second measure, named path shift and denoted by S_p(G), predicts the influence of delays on the critical paths of G. It is defined as the average of the node centralities of the non-critical activities in G,

S_p(G) = \frac{\sum_{v \in V_{nc}} NC(v)}{|V_{nc}|},    (4)

where V_{nc} is the set of non-critical activities of G. We note that 0 ≤ S_p(G) < 1 and S_p(G) < S_v(G). Obviously, S_p(G) cannot be equal to one as there is at least one critical activity in the schedule. Moreover, S_p(G) = 0 if and only if all activities are critical. A higher value of path shift implies that several non-critical activities are near critical and so any change in the duration of these activities can potentially change the critical paths of the schedule. This is an undesirable effect as generally a significant amount of planning effort is concentrated on the effective management of critical paths. A non-critical path becoming critical can totally disrupt this planning.

Overall, the value and path shifts predict the tendency of a schedule to undergo whole-scale changes when some activities are delayed. Since flexibility is defined as the resistance to whole-scale changes, lower values of S_v(G) and S_p(G) can be interpreted as indicating higher flexibility of G.

4.3 Example Cases, Scale and Interpretation

Having established a theoretical foundation of our measures, it is natural to ask how the values of these measures can be used to distinguish between good and bad schedules. Figure 1 shows seven example schedules. Such examples are often used in the project management literature to test new tools and techniques [14]. The linear schedule A consists of five activities on a single source-sink path.

So the critical path in schedule A cannot change. However, even the slightest of delays in any activity will result in a change in the critical value of schedule A. Therefore, we expect schedule A to have low flexibility. Schedules B, C and D consist of the same set of nodes and arcs with exactly the same source-sink paths and so have the same network complexity. However, the durations of activities in these schedules are quite different and hence we expect these schedules to have varying levels of flexibility. Schedule E is obtained by augmenting schedule D with an additional critical path consisting of two new critical activities. This leads to the expectation that E offers less flexibility compared to D. Finally, schedules F and G have the same set of nodes, with G having an additional arc A5 → A2. Since this additional arc creates an extra near critical path in G, it would be reasonable to assume that F has a greater capacity to absorb temporal changes than G. We now evaluate path shift and value shift for these schedules and show how these measures can be used to determine the flexibility, and hence the quality, of these schedules.

Fig. 1: Example schedules

Table 1 lists the node centralities and the values of S_p and S_v for the seven schedules of Figure 1. We observe that, as expected, S_p(A) = 0, which is the minimum possible, whereas S_v(A) = 1, which is the maximum possible. These values indicate that although the critical path of schedule A is absolutely stable (there is only one path), the critical value is extremely unstable as a delay in any activity will directly change the critical value.

TABLE 1: Path shift and value shift of the example schedules (rounded to the nearest hundredth)

Furthermore, despite their identical network topologies, our measures are able to differentiate between the quality of schedules B, C and D based on their flexibility. Since S_p(D) < S_p(C) < S_p(B) and S_v(D) < S_v(C) < S_v(B), our measures indicate that schedule D is the most flexible, while B is the least. This conclusion is consistent with the scheduling intuition because the non-critical path and the non-critical activities in B are near critical, while this is not the case for schedule D. As a result we expect B to be more sensitive to changes in activity durations as such changes will affect its critical path and critical value more than schedule D. We also observe that the flexibility of schedule C is greater than B but less than D. Moreover, S_p(D) = S_p(E) while S_v(D) < S_v(E), the former due to the fact that both schedules have the same non-critical paths while the latter owing to schedule E having more critical activities.

Our measures indicate D to be a more flexible schedule, which is in line with the scheduling logic. Table 1 also shows that both of the shift measures assign a lower value to schedule F compared to schedule G. This is in line with the intuitive reasoning that the activities of G are relatively more near critical and thus G should be considered less flexible than F.

The above example cases suggest that a schedule with lower path shift and value shift is more flexible and hence, according to our criterion, of better quality. Therefore, among schedules B-G in Figure 1, D is of the best quality. However, in comparing two schedules, it is possible for one schedule to have a lower path shift while the other has a lower value shift (for example, A and D). In this case, we conclude that the former schedule has a more stable critical path while the latter has a more stable critical value, and the overall schedule quality depends on which type of stability is more desirable for a given software project.

4.4 Comparison with the Current Flexibility Measures

The aim of this section is to show that the centrality-based flexibility measures outperform the current metrics of schedule flexibility in discriminating between the flexibility of different schedules. In Section 4.3 we have seen that S_p and S_v clearly differentiate between the flexibility of schedules A-G in Figure 1. Furthermore, the results obtained are consistent with the scheduling intuition. However, we will see that this is not the case for the existing flexibility metrics flex_seq, RB, dsrp and stab.

The measure flex_seq can give a false impression of flexibility. A schedule with many non-intersecting parallel paths will have a high value of flex_seq even if all the paths are critical, whereas in reality such a schedule allows no flexibility at all. Furthermore, flex_seq assigns the same value of 2 for the schedules B, C and D as they have the same precedence relations between activities. The robustness metric RB fails to distinguish between the flexibility of schedules F and G as the additional arc A5 → A2 does not change the start and end times of activities, nor does it change the critical value. However, as discussed in Section 4.3, the scheduling logic suggests that G is less flexible. The two disruption measures dsrp and stab also have certain drawbacks. Firstly, it is not clear why the two measures respectively use the slacks and one unit delays as increments to calculate schedule disruption. Moreover, it is not clear how dsrp handles critical activities, as for these activities both slack and num_changes are equal to zero. On the other hand, stab fails to differentiate between schedules B, C and D, giving a value of 8 for each of these schedules. In addition to the issues discussed above, none of flex_seq, dsrp and stab has a specific range of values. This lack of scale makes it difficult to compare the flexibility of different schedules.
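
To round off the comparison, the sketch below evaluates the two proposed measures, equations (3) and (4), from the node centralities returned by the node_centrality function of the earlier sketch (again illustrative Python; the schedules A-G of Figure 1 are not reproduced here because their durations appear only in the figure, so the hypothetical demo schedule is reused).

```python
def shift_measures(activities, eps=1e-9):
    # Value shift (3): average node centrality over all activities.
    # Path shift (4): average node centrality over the non-critical activities,
    # taken as 0 when every activity is critical (the convention stated in Section 4.2).
    nc, _ = node_centrality(activities)          # from the earlier sketch
    s_v = sum(nc.values()) / len(nc)
    non_critical = [v for v, c in nc.items() if c < 1 - eps]
    s_p = sum(nc[v] for v in non_critical) / len(non_critical) if non_critical else 0.0
    return s_p, s_v

demo = {"A1": (5, []), "A2": (3, ["A1"]), "A3": (4, ["A1"]), "A4": (2, ["A2", "A3"])}
print(shift_measures(demo))   # roughly (0.91, 0.98): a near-critical, hence inflexible, toy schedule
```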

5 CASE STUDY 1: OBESITY HEALTH CLINIC SYSTEM

This section discusses an application of our proposed schedule flexibility analysis to the Obesity Health Clinic System (OHCS) implemented for an obesity clinic at Saudi Aramco (the national oil and gas company of Saudi Arabia) in Dhahran, Saudi Arabia. Within this system, patient and health team member profiles can be created, updated and deleted. The OHCS allows health team members and patients to create obesity reducing goals. The goals are added to the bank of ideas and classified under the appropriate category (for example, physical, dietary, etc.). These goals can also be customized according to individual patient needs by the health team. The OHCS also has a goal suggestion feature which helps the health team to find appropriate goals for a patient according to his health condition. Moreover, the OHCS has a report engine that allows health team members to generate reports about patient performance and popular goals.

TABLE 2: Activities, durations and predecessors for OHCS implementation schedules

Often the activities of a software development project can be planned in different ways. Thus it is quite possible to come up with alternative schedules to plan the activities of a project. In our first case study, the project team proposed two schedules for the implementation of the OHCS. Generally in such cases, the project team has to base their choice of schedule on their experience and mutual consensus. Here we show that the flexibility analysis developed in this paper can greatly facilitate the project team in making informed decisions in this regard.

Fig. 2: The proposed OHCS schedules

Table 2 lists the activities of the OHCS project, their planned durations in days and their predecessors in the two schedules designed by the project team. The OHCS project consists of sixteen activities. The business logic of the system is implemented during activities A1, A2, A3, A6, A7 and A8, while the back-end functionalities are implemented during A4, A9, A10 and A11. Similarly, the OHCS user interface layout is implemented during activities A5, A12 and A13. The activity A14 is performed to carry out the integration of the business logic and the back-end features of the OHCS, whereas A15 is performed to complete the integration of the different subsystems of the OHCS. The final project activity A16 is to deploy the system. Figure 2 shows the CPM networks for the schedules S and S′ proposed for the OHCS project. Note that A0 is a dummy source node added so that the schedule networks have exactly one source and one sink. This is a standard practice in project management.
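
The dummy-node convention mentioned above is easy to automate. The sketch below is illustrative only (the names A0 and AZ and the demo durations are hypothetical placeholders, not the OHCS data of Table 2): it adds a zero-duration dummy source, and a zero-duration dummy sink if one is needed, to an activity dictionary in the format used by the earlier sketches.

```python
def add_dummy_terminals(activities, source="A0", sink="AZ"):
    # Copy the predecessor lists so the input dictionary is left untouched.
    acts = {a: (d, list(p)) for a, (d, p) in activities.items()}
    sources = [a for a, (_, p) in acts.items() if not p]
    have_successors = {p for _, (_, preds) in acts.items() for p in preds}
    sinks = [a for a in acts if a not in have_successors]
    acts[source] = (0, [])                 # zero-duration dummy source
    for a in sources:
        acts[a][1].append(source)
    acts[sink] = (0, sinks)                # zero-duration dummy sink
    return acts

demo = {"A1": (5, []), "A2": (3, ["A1"]), "A3": (4, []), "A4": (2, ["A2", "A3"])}
print(add_dummy_terminals(demo))   # A0 now precedes A1 and A3; AZ follows A4
```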

5.1 Shift Measures Applied to the OHCS

The CPM approach works by calculating the longest duration path passing through each project activity and uses this information to obtain the critical paths and the critical value of a schedule. These calculations can be carried out by using any standard project management tool such as Microsoft Project. In this study, we use a project risk management add-in for Microsoft Excel. Our preference is due to its ability to run Monte Carlo simulations readily (see Section 5.2).

TABLE 3: Flexibility measures for schedule S

For both of the OHCS schedules, the critical value turns out to be 98 days while the critical path is the directed source-sink path A0 → A1 → A2 → A3 → A7 → A8 → A14 → A15 → A16. We now use the flexibility measures defined in Section 4.2 to predict and compare the quality of schedules S and S′. Table 3 and Table 4 list the values of the node centralities and the shift measures for the schedules S and S′, respectively. The path shift and the value shift for the schedule S equal 0.71 and 0.86, respectively. Analyzing S′ shows that S_p(S′) = 0.67 < S_p(S) and S_v(S′) = 0.84 < S_v(S). From these results, we expect that the critical path and the critical value of S′ will remain relatively more stable if some activities are delayed.

TABLE 4: Flexibility measures for schedule S′

5.2 Validation Using Monte Carlo Simulation

The predictive analysis performed in Section 5.1 can be validated by running Monte Carlo simulations of the two proposed schedules. Over the years, Monte Carlo simulation has emerged as an important decision support tool in a variety of domains including finance, project management and scheduling [12, 37, 38]. From a scheduling perspective, Monte Carlo simulations enable a decision maker to forecast the likely behavior of a schedule based on the uncertainty in activity durations [12, 38]. The uncertainty in activity duration is encoded by defining a probability distribution for each such activity. For scheduling applications, the triangular distribution is the most popular [37, 38], in which the shortest, longest and expected duration of each vulnerable activity is specified by a domain expert. This is followed by the selection of a suitable sampling method. Simple Monte Carlo sampling is the traditional technique for using random or pseudo-random numbers to sample from a probability distribution. These techniques are completely random as any given sample value may fall anywhere within the range of the input sample space.

Typically, a large number of samples are required to match the input distributions. On the other hand, Latin hypercube sampling [39] divides the input probability distributions into a finite number of equally probable intervals and then chooses a sample randomly from each interval. This type of stratified sampling ensures that even with a small number of samples the input distribution is matched very closely [39]. After a sampling method is selected, Monte Carlo simulation generates a probability distribution of the outcome values. In scheduling, we are generally interested in the distribution of the critical value [12]. Considering their merits, here we apply a triangular distribution to each input and use Latin hypercube sampling to run the Monte Carlo simulations.

We select seven activities, namely A6, A8, A9, A10, A13, A14 and A15, that the project team considered as more susceptible to potential delays and apply triangular distributions to the durations of these activities in order to simulate these potential delays. We ran the simulations of S and S′ by specifying a minimum, most likely and maximum duration for each of the vulnerable activities. The minimum duration was set one day less than the planned duration, whereas the most likely duration was set as the planned duration. On the other hand, the maximum value for the triangular distribution was chosen as double the planned duration to account for significant delays. While running the simulations, we tracked changes in the critical value and the critical path of the schedules.

Fig. 3: Change of critical value and critical paths during Monte Carlo simulations of S and S′

Figures 3 (a-b) show the distribution of the critical value for the OHCS schedules S and S′ as generated after 100 simulation runs, while Figure 3 (c) describes the stability of the critical paths of S and S′ during these simulation runs. Note that 100 iterations suffice due to our choice of Latin hypercube sampling, which requires fewer samples. The minimum, mean and maximum critical value of S are recorded to be 99.36, and , respectively. It is noteworthy that the planned critical value of 98 days for the schedule was not achieved in any of the simulation runs. In fact, in only about 10% of the iterations was the critical value less than 105. Similarly, the simulation results show that the critical path of S changed in 55% of the instances, which, though not as bad as the variability exhibited by the critical value, is still quite high. On the other hand, the critical path and the critical value of S′ display relatively greater stability when the same triangular distributions are applied to the activities of S′. It was observed that the critical path changed in only 43% of the iterations while the critical value was distributed between and with an average of . All these findings are in line with the flexibility analysis performed in Section 5.1 and indicate that S′ is a more flexible schedule for the OHCS project.
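
For readers who want to reproduce this kind of experiment without the spreadsheet add-in, the sketch below is a hedged re-creation of the set-up just described, not the tool actually used in the study: triangular durations (planned minus one day, planned, double the planned) for a chosen set of vulnerable activities, a simple hand-rolled Latin hypercube sampler, and a count of how often the critical value and the set of critical activities change over 100 runs. It relies on the cpm function from the earlier sketch and on NumPy; the demo schedule and its vulnerable activities are hypothetical, not the OHCS data.

```python
import numpy as np

def triangular_ppf(u, a, m, b):
    # Inverse CDF of the triangular distribution on [a, b] with mode m.
    fc = (m - a) / (b - a)
    return np.where(u < fc,
                    a + np.sqrt(u * (b - a) * (m - a)),
                    b - np.sqrt((1.0 - u) * (b - a) * (b - m)))

def latin_hypercube(n_runs, n_dims, rng):
    # One stratified uniform draw per equally probable interval and dimension,
    # with the interval order shuffled independently in each dimension.
    strata = rng.permuted(np.tile(np.arange(n_runs), (n_dims, 1)), axis=1).T
    return (strata + rng.random((n_runs, n_dims))) / n_runs

def simulate(activities, vulnerable, n_runs=100, seed=0):
    rng = np.random.default_rng(seed)
    _, base_critical, *_ = cpm(activities)        # cpm from the earlier sketch
    u = latin_hypercube(n_runs, len(vulnerable), rng)
    critical_values, path_changes = [], 0
    for run in range(n_runs):
        trial = dict(activities)
        for k, a in enumerate(vulnerable):
            d, preds = activities[a]
            # min = planned - 1, mode = planned, max = 2 * planned, as in the case study.
            trial[a] = (float(triangular_ppf(u[run, k], d - 1, d, 2 * d)), preds)
        cv, critical, *_ = cpm(trial)
        critical_values.append(cv)
        path_changes += set(critical) != set(base_critical)
    return (min(critical_values), float(np.mean(critical_values)),
            max(critical_values), path_changes / n_runs)

demo = {"A1": (5, []), "A2": (3, ["A1"]), "A3": (4, ["A1"]), "A4": (2, ["A2", "A3"])}
# (min, mean, max) simulated critical value and share of runs with a changed critical path.
print(simulate(demo, vulnerable=["A2", "A3"]))
```

With Latin hypercube sampling the 100 runs cover the input distributions evenly, mirroring the reasoning given above for preferring it over simple random sampling.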

5.3 Comparison with the Example Cases

Comparing the results obtained by applying value and path shift to the schedules S and S′, and their validation using Monte Carlo simulations, with the results obtained in Section 4.3 reveals an interesting analogy. Indeed, schedules S and S′ can be compared with the schedules F and G considered among the example cases. Recall that schedules F and G consist of the same activities with the same durations. They only differ in one arc. However, even this minor change results in different degrees of flexibility for these schedules as determined by the shift measures. Since F has lower path and value shifts than G, it was concluded in Section 4.3 that F was more flexible. It was remarked that the relative lack of flexibility of G could be attributed to it having more near-critical activities (more nodes with large node centralities) than F. Analogously, schedules S and S′ consist of the same activities having the same durations, with the only difference being a single arc. The shift measures are once again able to distinguish between the flexibility of these very similar schedules and we conclude that S′ is more flexible than S. Here also we can explain the lower flexibility of S in terms of three of its nodes A9, A10 and A11 having larger node centralities (see Table 3 and Table 4). Thus the values and interpretation of node centrality, path shift and value shift remain consistent. Moreover, the results of the Monte Carlo simulation support our findings.

5.4 Qualitative Analysis of Feedback

In this section, we present a qualitative analysis of the feedback received from software engineers and developers, both from academia and industry, on using our schedule flexibility measures in practice. The participants belonged to one or more of the following categories:

- OHCS project team members
- Researchers from the Information and Computer Science Department, King Fahd University of Petroleum and Minerals
- Researchers from the Department of Computer Science, University of Calgary
- Researchers from the Department of Electrical and Computer Engineering, University of Calgary
- Industrial practitioners from Saudi Aramco
- Software release planning experts from Microsoft Canada

The participants were either first-hand users of the flexibility analysis presented in this paper as part of the OHCS project team or had been provided full project information and technical details needed to implement our approach in projects of their own. A total of 17 professionals participated in the study. The qualitative data was collected by conducting interviews with the participants. Their experiences were documented using mainly two open-ended questions, encompassing the advantages and difficulties associated with applying the proposed flexibility measures. The interviews were kicked off with the question: From your observation and experience, what are the characteristics of our flexibility measures that help you distinguish a set of project schedules?

We used follow-up questions to clarify and gather more details about the strengths and limitations mentioned by the participants.

The participants indicated four key advantages of the proposed flexibility measures. First, the measures were deemed helpful by a large majority of the participants in predicting the impact of activity delays on the critical paths and critical values associated with network-based schedules. Second, there was a consensus that the path shift and value shift measures helped in the early prediction of how schedules would respond to changes in activity durations. Third, most of the interviewees agreed that the proposed flexibility measures lead to effort savings by pointing out a lack of flexibility in a project schedule. Finally, the feedback from the participants also indicated the fixed range between 0 and 1 as a major strength of the new measures as it facilitates comparing different schedules.

The participants in the study did not indicate any major disadvantages in applying the proposed flexibility measures as they do not require additional computational effort and can be computed alongside the standard CPM calculations. However, two of the participants indicated that the proposed flexibility analysis only accounts for task schedules and does not explicitly consider resource allocation constraints. Similarly, feedback from another participant indicated that the flexibility measures should perhaps be modified to provide some indication of how project team members could be best matched with the project tasks. One participant from a construction management background pointed out that our approach could just as well be implemented on construction management and other industrial management projects. We agree with all these participants and have incorporated their suggestions in our plans for future work.

6 CASE STUDY 2: LIFE CYCLE ASSESSMENT SYSTEM

This section discusses an application of our schedule flexibility analysis to the Life Cycle Assessment (LCA) system implemented for an ecologically sustainable development consulting firm in Dhahran, Saudi Arabia. Within this system, building architecture engineers investigate the existing building practices from the energy and environmental perspectives. The LCA system allows architecture engineers to evaluate the environmental burdens associated with a product, system or activity by quantifying the energy impact of different building materials.

TABLE 5: Activities, durations and flexibility analysis of the LCA implementation schedule T

Table 5 lists, among other things, the activities of the LCA development project and their planned durations in days designated by the project team. The LCA project consists of thirty-four activities. The user profiles are implemented during activities A1 and A2, while the business logic of the system is implemented during the activities A3,..., A9. The LCA system user interface layout and database are implemented during activities A10 and A11, respectively.

The environmental impact assessment and improvement analysis is carried out during activities A13,..., A28. Finally, the inventory analysis is implemented during A29,..., A32. Activity A33 is performed to complete the integration of the different subsystems of the LCA system. The final project activity A34 is to deploy the system. Figure 4 shows the CPM network schedule for the LCA system project as designed by the project team.

Fig. 4: The LCA schedule T

We aim to apply the centrality-based flexibility measures on schedule T and compare its flexibility with schedules S and S′ of Case Study 1. As in the first case study, we perform the CPM calculations using the same add-in. For schedule T the critical value turns out to be 141 days while the critical path is the directed source-sink path A0 → A13 → A15 → A17 → A21 → A23 → A24 → A27 → A28 → A33 → A34. The node centralities and the flexibility measures are computed, as in Section 5, to quantify the flexibility of schedule T and compare it with S and S′. Table 5 lists the values of the node centralities and the shift measures for schedule T. The path shift and the value shift of T are 0.65 and 0.76, respectively, both of which are lower than the corresponding values for schedules S and S′. These results forecast that, if some activities are delayed, the critical paths and the critical value of T will demonstrate greater stability compared to S and S′.

We validate this predicted behavior by running a Monte Carlo simulation of schedule T. In order to maintain consistency with Case Study 1, we apply triangular distributions to the durations of thirteen activities, namely A1, A4, A5, A7, A10, A11, A13, A21, A24, A28, A30, A32 and A33, that were considered most vulnerable to potential delays by the project team. Again, as in the OHCS case study, the minimum duration for the triangular distribution of each activity is set one day less than the planned duration, the most likely duration is set as the planned duration, while the maximum value for the triangular distribution is chosen as double the planned duration to account for significant delays. Furthermore, we select Latin hypercube sampling to run the Monte Carlo simulation. During the simulation runs, we tracked changes in the critical value and the critical path of the LCA schedule T.

Fig. 5: Change of critical value and critical paths during Monte Carlo simulation runs of T

Figure 5 (a) shows the distribution of the critical value for schedule T as generated after 100 simulation runs, whereas Figure 5 (b) compares the stability of the critical paths of T with those of S and S′ during these iterations. The minimum, mean and maximum critical value of T turn out to be , and , respectively. We note that the mean critical value of T during the simulation runs lies within ( )/141 = 14.66% of the expected critical value of 141.

On the other hand, from Figure 3 (a-b), the variation from the expected critical value of 98 for schedules S and S′ is respectively 16.56% and 16.4%. Thus the Monte Carlo simulation results back our prediction that, of S, S′ and T, the schedule T has the most stable critical value. The critical path of T shows even greater stability, remaining unchanged in 71% of the iterations. On the other hand, the critical path of S is unaltered in 45% and that of S′ remains the same in 57% of the iterations. All these findings validate our centrality-based flexibility analysis of schedules S, S′ and T.

7 DISCUSSION

We have seen that the measures introduced in this paper perform better than the existing measures in distinguishing between the flexibility of different schedules. In this section, we outline some additional characteristics of our approach that make it more attractive for practical use compared to the current techniques for analyzing schedule robustness.

Ease of Use: Practicability is one of the main criteria for the acceptance of any project analysis technique by the software engineering community. A complex methodology, however effective it may be, is unlikely to attract practitioners as they simply don't have the time to come to terms with it. The project flexibility analysis presented in this paper has a theoretical foundation in social network analysis, yet it is very easy to implement, as suggested by the findings of the qualitative study presented in Section 5.4. The path and value shifts are two numbers ranging between 0 and 1 with a clear interpretation. The simplicity of the scale makes it easy to determine the flexibility of schedules.

Computational complexity: Our measures only depend on the basic temporal data of a software project schedule, which is computed as part of the standard CPM calculations. After running the CPM algorithm, the node centralities and the two shift measures can be determined in O(|V|) time, where |V| is the number of activities in the schedule. Hence, our measures can be evaluated with nominal computational effort. This provides software project managers the ability to measure the flexibility of a schedule alongside the CPM calculations and without running complex simulations.

Scalability: Since our flexibility measures do not require any significant computational effort, they can be easily applied to large-scale software projects. Several centrality-based metrics are routinely applied to complex real-life networks including the World Wide Web, food chains, neuron networks in the brain, Facebook networks and citation/collaboration networks [40]. Often these networks involve thousands or even millions of nodes and edges. Since path shift and value shift are based on betweenness centrality, they retain their effectiveness irrespective of the size and the complexity of the project schedule. We are planning a large-scale industrial study as part of the future work to demonstrate this feature of our schedule flexibility measures.

Portability: In this paper, we applied our flexibility analysis to schedules without any time lags between activities. Such time lags are represented as time durations on arcs. It is noteworthy that even in the presence of time lags, equations (2), (3) and (4) remain the same.

The only change is that the duration of a path p would be defined as D(p) = Σ_{v ∈ p} D(v) + Σ_{e ∈ p} l(e), where l(e) is the time lag on arc e.

Furthermore, all the schedules considered in this paper are represented as Activity on Node (AoN) networks, i.e., the nodes represent activities while the arcs represent precedence relations between activities. Equivalently, we can adopt the Activity on Arc (AoA) formulation by swapping the roles of nodes and arcs. However, path shift and value shift, their scales and interpretations still remain unchanged. To illustrate this fact, let us consider the AoA representations of the OHCS schedules S and S′.

Fig. 6: Activity on Arc representation of S and S′

Here activities are represented by arcs. As before, let V denote the set of nodes and E the set of arcs of a schedule network G. For any arc a ∈ E, denote by P(a) the set of source-sink paths that pass through a. Also let D(a) denote the duration of an arc a and D(p) = Σ_{a ∈ p} D(a) be the duration of a source-sink path p. A critical path is a longest duration source-sink path in G. A critical arc is an arc that lies on a critical path and E_{nc} denotes the set of non-critical arcs in G. To make the shift measures work in the AoA setting, we just have to make minor changes to formulas (2), (3) and (4). We define the arc centrality of an arc a ∈ E as

AC(a) = \frac{\max_{p \in P(a)} D(p)}{CV},    (5)

where CV denotes, as before, the critical value of the schedule and max_{p ∈ P(a)} D(p) represents the duration of the longest duration source-sink path in the schedule that passes through a. The value shift and path shift can now be defined as

S_v(G) = \frac{\sum_{a \in E} AC(a)}{|E|},    (6)

and

S_p(G) = \frac{\sum_{a \in E_{nc}} AC(a)}{|E_{nc}|}.    (7)

It is easy to see that using these definitions does not change the path durations, the critical paths, the centralities and the values of S_v and S_p. Therefore, the schedule flexibility analysis performed in this paper is equally applicable to AoN and AoA schedule networks.

8 CONCLUSION AND FUTURE WORK

The present work provides software project managers with a diagnostic technique to estimate the flexibility of a schedule before its implementation. The technique developed in this paper has a strong theoretical background as it is motivated by the concept of betweenness centrality from social network analysis. Additionally, our technique can be applied with minimal computational effort. The two measures, path shift and value shift, output two numbers on a scale from 0 to 1 and have a clear interpretation. While
