A TOOL FOR SOFTWARE PROJECT MANAGEMENT FOR ESTIMATION, PLANNING & TRACKING AND CALIBRATION
A TOOL FOR SOFTWARE PROJECT MANAGEMENT FOR ESTIMATION, PLANNING & TRACKING AND CALIBRATION

DISSERTATION SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF TECHNOLOGY IN INFORMATION TECHNOLOGY (SOFTWARE ENGINEERING)

Under the Guidance of
Dr. Ratna Sanyal
Coordinator, UDL, IIIT-Allahabad

Submitted By
Nilesh Chandra Shukla
MS, IIIT-Allahabad

INDIAN INSTITUTE OF INFORMATION TECHNOLOGY (DEEMED UNIVERSITY)
DEOGHAT, JHALWA, ALLAHABAD (U.P.)
INDIAN INSTITUTE OF INFORMATION TECHNOLOGY, ALLAHABAD (DEEMED UNIVERSITY)
(A Center of Excellence in Information Technology Established by Govt. of India)

DATE:

I DO HEREBY RECOMMEND THAT THIS THESIS PREPARED UNDER MY SUPERVISION BY NILESH CHANDRA SHUKLA, ENTITLED "A TOOL FOR SOFTWARE PROJECT MANAGEMENT FOR ESTIMATION, PLANNING & TRACKING AND CALIBRATION", BE ACCEPTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE COMPLETION OF THE MASTER OF TECHNOLOGY IN INFORMATION TECHNOLOGY (SOFTWARE ENGINEERING) PROGRAM FOR EXAMINATION.

COUNTERSIGNED

Dr. Ratna Sanyal, THESIS ADVISOR
Dr. U. S. Tiwary, DEAN ACADEMIC
INDIAN INSTITUTE OF INFORMATION TECHNOLOGY, ALLAHABAD (DEEMED UNIVERSITY)
(A Center of Excellence in Information Technology Established by Govt. of India)

CERTIFICATE OF APPROVAL

The foregoing thesis is hereby approved as a creditable study in the area of Information Technology, carried out and presented in a manner satisfactory to warrant its acceptance as a prerequisite to the degree for which it has been submitted. It is understood that by this approval the undersigned do not necessarily endorse or approve any statement made, opinion expressed or conclusion drawn therein, but approve the thesis only for the purpose for which it is submitted.

COMMITTEE ON FINAL EXAMINATION FOR EVALUATION OF THE THESIS
Declaration

This is to certify that this thesis work entitled "A Tool for Software Project Management for Estimation, Planning & Tracking and Calibration", which is submitted by me in partial fulfillment of the requirement for the completion of M.Tech. in Information Technology (specialisation in Software Engineering) at the Indian Institute of Information Technology, Allahabad, comprises only my original work, and due acknowledgement has been made in the text to all other material used.

Nilesh Chandra Shukla
M.Tech. (Information Technology), Specialisation in Software Engineering
MS
Abstract

Software engineering is the discipline that paves the roadmap for developing software within a given schedule and effort and with the desired quality. The process begins with estimating the size, effort and time required for developing the software and ends with the product and the other work products built in the different phases of development. Model-based estimation is one of the most widely used techniques. It relies on a set of parameters, and for the estimates to be accurate these parameters need to be calibrated against the organization's experience from past projects; failing to do so leads to wrong estimates and, consequently, to software crises. Handling the large volume of data involved in these processes is a tiresome task. The tools available for automating some of the activities are a great help in the overall development process; however, these tools treat estimation, planning & tracking, and calibration in isolation from one another. Moreover, software engineering is still a nascent discipline, and the metrics introduced for quantifying software attributes are not yet sufficient to dispense with expert judgment. The goal of this thesis is to develop a web-based tool that integrates the estimation (using the COCOMO II model), planning & tracking, and calibration processes. The emphasis of the calibration process is on combining expert judgment with the organization's experience from past projects.
Acknowledgements

Completing a task is never a one-man effort. It is often the result of the valuable contributions, direct or indirect, of a number of individuals that help in shaping and achieving an objective. This thesis would have been nothing without the help and inputs of my supervisor, Dr. Ratna Sanyal. Her direction, supervision and constructive criticism were indeed a source of inspiration for me. It has been a privilege to study at the Indian Institute of Information Technology, Allahabad. The first and foremost person who comes to my mind, and to whom I express my deep sense of gratitude wholeheartedly, is Dr. Winsor Brown, Assistant Director, USC Center for Software Engineering, and Lecturer, Dept. of Information Technology. He was there to help me through thick and thin during this thesis. I express my indebtedness to my batch mates for the constant encouragement given throughout the thesis. Some of them need special mention. Anand Arun Atre and Imran Khan were always there to help me when I succumbed to the mathematical intricacies of the thesis. Kamal Sawan and Vineet Chauhan are the names important to me when it comes to the logical design and programming of the tool. Pankaj Kandpal and Dinh Ngoc Lan were the masters of databases without whom the database design would have been impossible for me. Abhay Pawane, Sampath Mada, Prabhat Singh Saheja and
Adish Kumar were always there to encourage me whenever I faced any kind of hurdle in the thesis. There is a large and continuing debt owed by me to Dhirendra Pratap Singh, Mahindra Giri Vasi Reddy and Alkesh Patel who, even though not from my branch, found time to teach me Matlab. I also express my deep sense of gratitude to Mr. Balwant Singh for his efforts in giving timely support for hardware and software requirements. Lastly, I would like to express my gratitude to my parents for their unbounded love and priceless support throughout my life.

Nilesh Chandra Shukla
25-Jun-2007
TABLE OF CONTENTS

DECLARATION
ABSTRACT
ACKNOWLEDGEMENTS

1 INTRODUCTION
  The Current Scenario
  Drawbacks in the Current Scenario
  The Proposed Solution Overview
  Benefits of Proposed Solution
  System Overview
  System Development Phases
    Phase I
    Phase II
    Phase III
  System Development Schedule
  Development Environment
    Software Required
    Hardware Required
  List of Functionalities to be Provided by the Proposed Solution
  Non-Functional Requirements
  Chapter Summary

2 LITERATURE REVIEW
  Project Management
  Purpose of Project Management
  Description of Estimation
    Size Estimation
    Effort Estimation
    Schedule Estimation
    Cost Estimation
  Description of Project Planning
  Monitoring the Project (Tracking)
  Purpose of Calibration
  Function Point Analysis
    Overview
    Types of Function Points
      Development Project Function Point Count
      Enhancement Project Function Point Count
      Application Project Function Point Count
    Function Point Counting Process
      Components in Function Point Counting
        External Inputs
        External Outputs
        External Inquiry
        External Interface Files
        Internal Logical Files
        File Type Referenced, Data Element Type and Record Element Type
      Six Step Counting Process
    Benefits of FPA
    When Not to Use FPA
  COCOMO II
    Overview
    COCOMO II over COCOMO 81
    COCOMO II Models
      Application Composition
      Early Design
        Converting Function Points to SLOC
        Cost Drivers
      Post-Architecture
      Adjustment for Reuse
  Description of Calibration
    Multiple Regression Method
    Bayesian Analysis
  Chapter Summary

3 APPLICATION DEVELOPMENT
  Architecture
  Java Server Faces
  Design Patterns Used
    Data Access Object Pattern
    Singleton Pattern
  Module Sequence
    Size Estimation Module
    Effort Estimation Module
    Calibration Module
    Report Generation Module
    Utilities
  Package Organization
    User
    Utility
    MailerPkg
    ClientPkg
    ProjectPkg
    ModulePkg
    TaskPkg
    ScaleFactorsPkg
    ActivityPkg
    Reports
    MasterValues
    Calibration
  Chapter Summary

4 DATABASE DESIGN AND ORGANIZATION
  Functional Tables Design
  Standard Values Tables Design
  Description of Functional Database Tables
    Projects
    Modules
    Tasks
    Size_Details
    Transaction_fp
    Developer
    Cost_drivers
    Scale_Factors
    Client
    Linked_Content
    Activity
    Productivity_of_EM
  Description of Standard Values Database Tables
    Phase_Ratio
    Language
    Master_Effort_Multipliers
    Master_FP_Columns
    Master_FP_Rows
    Master_FP_Map
    Master_FP_Values
  Chapter Summary

5 STUDY OF OTHER TOOLS
  Costar
  Construx Estimate
  COCOMO II
  SLIM-Estimate
  Comparison of the Tools
  Chapter Summary

6 UNRIVALLED FEATURES OF THE TOOL
  Calibration Using Bayesian Theorem
  Tracking
  Report Generation
  Chapter Summary

A ROAD AHEAD
CONCLUSION

APPENDIX A: DELPHI METHOD
APPENDIX B: REGRESSION ANALYSIS
APPENDIX C: PROJECT MANAGEMENT SOFTWARE - GUI

REFERENCES
LIST OF FIGURES

FIGURE 1.1: SOFTWARE ESTIMATION TECHNIQUES
FIGURE 1.2: SYSTEM OVERVIEW
FIGURE 2.1: TRIANGULAR RELATIONSHIP
FIGURE 2.2: DEPENDENCIES OF EI, EO, EQ, EIF AND ILF ON DET, RET AND FTR
FIGURE 2.3: SIX STEP FUNCTION POINT COUNTING PROCESS
FIGURE 2.4: RELATIONSHIP BETWEEN PHASES AND ESTIMATION RANGES
FIGURE 2.5: EFFECT OF REUSE
FIGURE 3.1: MODEL-VIEW-CONTROLLER ARCHITECTURE
FIGURE 3.2: JAVA SERVER FACES ARCHITECTURE
FIGURE 3.3: DATA ACCESS OBJECT PATTERN
FIGURE 3.4: IMPLEMENTATION OF DATA ACCESS OBJECT PATTERN IN THE TOOL
FIGURE 3.5: SEQUENCE DIAGRAM FOR DATA ACCESS OBJECT PATTERN IN THE TOOL
FIGURE 3.6: SINGLETON PATTERN
FIGURE 3.7: ORGANIZATION OF PACKAGES IN THE TOOL
FIGURE 4.1: FUNCTIONAL TABLES DESIGN
FIGURE 4.2: MASTER VALUES TABLES DESIGN
FIGURE 5.1: COSTAR FROM SOFTSTAR SYSTEMS
FIGURE 5.2: OUTPUT OF CONSTRUX ESTIMATE
FIGURE 5.3: COCOMO DEVELOPED BY UNIVERSITY OF SOUTHERN CALIFORNIA
FIGURE 5.4: ESTIMATES GENERATED BY SLIM-ESTIMATE
FIGURE 6.1: THE CALIBRATION PROCESS
FIGURE 6.2: TRACKING PROCESS
FIGURE 6.3: REPORT GENERATION PROCESS
FIGURE A.1: SCHEMATIC REPRESENTATION OF DELPHI ESTIMATION TECHNIQUE
FIGURE C.1: LOGIN SCREEN
FIGURE C.2: CREATING A PROJECT
FIGURE C.3: CREATING A MODULE IN THE PROJECT
FIGURE C.4: CREATING A TASK IN THE PROJECT
FIGURE C.5: ADDING A DOCUMENT IN THE PROJECT
FIGURE C.6: RESULT OF ESTIMATION: THE ESTIMATES
FIGURE C.7: CREATING A NEW USER
FIGURE C.8: CREATING A NEW CLIENT
FIGURE C.9: SEARCHING A PROJECT OR CLIENT
FIGURE C.10: SETTING PREFERENCES FOR A USER
FIGURE C.11: SETTING THE STANDARD VALUES FOR SCALE FACTORS
FIGURE C.12: SETTING THE STANDARD VALUES FOR EFFORT MULTIPLIERS
FIGURE C.13: SETTING THE VALUE OF EQUIVALENT SLOC PER FUNCTION POINT FOR LANGUAGES
FIGURE C.14: INPUT FORM FOR ENTERING THE DAILY WORK DONE BY EACH DEVELOPER
FIGURE C.15: CHANGING THE PHASE RATIO FOR EACH PHASE
FIGURE C.16: TRACKING USING GANTT CHART
FIGURE C.17: TIME TAKEN BY EACH ACTIVITY
FIGURE C.18: SELECTING PROJECTS FOR CALIBRATION
FIGURE C.19: INPUT EXPERTS JUDGMENT FOR CALIBRATION
FIGURE C.20: INPUT EXPERTS JUDGMENT FOR CALIBRATION
FIGURE C.21: LIST OF CLIENTS
FIGURE C.22: CLIENT DETAILS AND THE PROJECT UNDER THE CLIENT
LIST OF TABLES

TABLE 2.1: EQUIVALENT SLOC PER FUNCTION POINT COUNT FOR DIFFERENT LANGUAGES
TABLE 2.2: RELATIONSHIP BETWEEN EARLY DESIGN COST DRIVERS AND POST-ARCHITECTURE COST DRIVERS
TABLE 4.1: PROJECT TABLE
TABLE 4.2: MODULE TABLE
TABLE 4.3: TASK TABLE
TABLE 4.4: SIZE DETAILS TABLE
TABLE 4.5: TRANSACTION_FP TABLE
TABLE 4.6: DEVELOPER TABLE
TABLE 4.7: COST DRIVERS TABLE
TABLE 4.8: SCALE FACTORS TABLE
TABLE 4.9: CLIENT TABLE
TABLE 4.10: LINKED CONTENT TABLE
TABLE 4.11: ACTIVITY TABLE
TABLE 4.12: PRODUCTIVITY OF EM TABLE
TABLE 4.13: PHASE RATIO TABLE
TABLE 4.14: LANGUAGE TABLE
TABLE 4.15: MASTER EFFORT MULTIPLIER TABLE
TABLE 4.16: MASTER_FP_COLUMNS TABLE
TABLE 4.17: MASTER_FP_ROWS TABLE
TABLE 4.18: MASTER_FP_MAP TABLE
TABLE 4.19: MASTER_FP_VALUES TABLE
TABLE 5.1: COMPARISON OF THE TOOLS
TABLE 6.1: LIST OF ACTIVITIES IN EACH TASK
1 Introduction

Software engineering is the discipline that aggregates the application of scientific and technological knowledge, through the medium of sound engineering principles, to the production of computer programs and to the requirements definition, functional specification, design description, program implementation and test methods that lead up to testing the code [1]. Software engineering is about engineering the software development process. It requires a high degree of analysis, hard work and the management of the two. With the increasing size and complexity of software, software development has become a more clamorous process, and hence even the simplest activity in the development process needs attention. The problems faced in software development are cost overruns, schedule overruns and quality degradation. At the core of these problems lies poor estimation. Wrong estimation surely results in disaster in the development process. Effective estimation is essential for proper project planning and control and is one of the most critical and challenging tasks in the development process. Under-estimating a project leads to quality degradation, over-exploitation of employees and short schedules, and hence results in missed deadlines. Over-estimating is even worse: more resources are allocated to the project, increasing its cost without any gain in scope. Proper planning of the project and tracking of its development is the second essential task for assuring the success of the project. Once the estimates are available, the next task is to assign tasks to individuals. Regular feedback from the development process is helpful in determining the status of each task and of the
project. Tracking gives the project manager the opportunity to take care of any unexpected situation during development.

As stated earlier, estimation plays the key role in the management of the development process, so it is essential that the model or method being used is correct and aligned with the most recent data available; if standard parameters are used in the method, then those parameters should be well calibrated with the available data.

This chapter comprises discussions on the current scenario of typical software project management and its drawbacks, an overview of the proposed solution and its benefits, the system overview, the system development phases, the system development schedule, and the development environment.

1.1 The Current Scenario

In this section the current scenario for software estimation, planning and tracking, and calibration is discussed. Figure 1.1 shows the five categories of software estimation techniques in practice.

Figure 1.1: Software Estimation Techniques (model based: SLIM, COCOMO, SEER; learning based: neural, case-based; regression based: OLS, robust; expertise based: Delphi, rule-based; composite: Bayesian COCOMO II)
Various estimation techniques that follow a mathematical model have been developed in the past. SLIM (Software Life Cycle Model), COCOMO (Constructive Cost Model) and SEER (System Evaluation and Estimation of Resources) are some of the model-based techniques for software estimation. Project-related data is used as input to these techniques, and past project data is used for calibrating the models. When past project data is not available, experts' knowledge is used for estimation; Delphi and rule-based techniques come under this category. The Delphi technique is based purely on expert judgment, whereas the rule-based technique is adopted from the artificial intelligence domain, in which a set of rules work together to produce the output, i.e. the estimates. A lot of work has been devoted to the development of learning-based techniques for estimation. A neural network, defined by three entities (neurons, an interconnection structure and a learning algorithm), is one of the popular learning-based techniques. The case-based technique is another kind of learning-based technique, in which a database of completed projects is maintained and a new project's cost is estimated by comparing the new project with similar projects in the database. Standard or ordinary least squares (OLS) and robust regression are the regression methods used for estimation; robust regression resolves the common problem of outliers in software engineering data. Model-based techniques are the most widely used in industry because they do not depend on previous information and because they work on parameters pertinent to the model being used in the estimation. In model-based techniques the values of the standard parameters are chosen according to the project being developed, and the estimates are calculated using the equations defined in the model. Various tools are available in the market for automating the process of estimation; to name a few, Construx Estimate, Costar 7d and QSM [2, 3, 4].
While planning the project development, the estimates and the productivity of the developers are taken as the baseline, and tasks are assigned to developers according to their abilities. Developers have to report the work done by them on a daily basis. This information is used to track the status of the project and answers the question "How much of the task has been completed so far?"

Model-based techniques depend on the parameters of the method being used. These parameters need to be calibrated against the data available from the organization's past projects. Calibration is important because it affects the overall process in the future and hence needs great care. The collected data is first checked for consistency, correctness and completeness, and then the approved data is used for calibrating the parameters of the model, after which new values are assigned to the parameters.

1.2 Drawbacks in the Current Scenario

A study of existing tools [2, 3, 4] has revealed the following drawbacks in the current scenario.
1. The tools available for the above activities are isolated from each other, i.e. they are either estimation tools or planning and tracking tools.
2. The planning tools send the information about tasks assigned to individuals by mail, and the information pertinent to the assigned task is kept in some version control system.
3. Supporting documents and reports, such as the SRS for the project or the design specification, should be available to people in the organization; current tools do not have this feature.
4. During development, management needs to keep track of the status of the project; the available tools do not have such features.
5. Reports are needed at any stage of development, another important feature absent in the available tools.
6. During calibration, past project data has to be fetched manually.
7. The calibration method used by the tools does not incorporate expert judgment in the resulting parameter values.
1.3 The Proposed Solution Overview

The major problem in the current scenario is that estimation, planning & tracking and calibration are isolated activities, so the solution is project management software that combines them. The proposed system will first store the details of projects, clients and developers, which at present are either on paper or, if available in electronic form, are isolated from each other. Information about projects, clients and developers will then be easily available. The system will automate the estimation process, using the COCOMO II model [5] for effort estimation. The system will also help in tracking the status of the project by taking daily input from each developer in the organization and will show the status in the form of a Gantt chart. The system will generate reports for the projects. While calibrating the model, the system will incorporate expert judgment into the final values of the model parameters. The system will also give information about the activities in the organization and the time taken by each activity.

1.4 Benefits of Proposed Solution

1. Clumsy manual calculations for estimation are no longer needed.
2. Planning and tracking become simpler tasks.
3. Information about projects, clients and developers no longer needs to be stored in other forms.
4. Activity details are easily available.
5. Reports can be generated with a single mouse click.
6. Notifications for various conditions can be customized according to the user's choice.
7. Data for calibration is available in the tool itself, and no manual data entry is required for calibration.
8. Calibration becomes more accurate, and hence so does estimation.
9. With all the information available, management has an edge in improving conditions during project development.
10. The solution will be available at low cost.
11. The system can easily be extended to meet future requirements.

1.5 System Overview

The proposed system, called PROJECT MANAGEMENT SOFTWARE, is devised using the COCOMO II method for estimation and the Bayesian method for calibration of the model. Bayesian analysis is used for testing hypotheses on the basis of available sample data. Figure 1.2 shows the system at a glance. The modules in the diagram are:
1. Size and Effort Estimation Module
2. Scheduling Module
3. Tracking Module
4. Calibration Module

Figure 1.2: System Overview (project details feed the Size & Effort Estimation module, which passes size and effort to the Scheduling module; FPA and COCOMO II factors and the daily work done are held in a shared repository used by the Tracking module; the Calibration module produces new values for the factors from old project data; the actors are the developer and the admin)

All the information about a project is fed to the estimation module, which stores the information in the database for estimation and for future use. The module, with the help of the stored standard values of the parameters used in the COCOMO
II model, calculates the size estimate in source lines of code (SLOC), function points or adapted source lines of code (for reusable components) and the effort estimate in person-months for the intended project. The scheduling module uses the estimated size and effort values to estimate the time required for the given project in months. The tracking module takes daily input from the developers about the work done on the tasks assigned to them; this input is stored in the database and is used for comparing the estimated time with the actual time. Once past project data is available, the standard values of the parameters can be calibrated through the calibration module. The calibration module uses expert judgment obtained through the Delphi method [6] and the sample data obtained from the regression method, and then combines the sample data and the expert judgment using Bayesian analysis.

1.6 System Development Phases

The proposed system has three development phases.

1.6.1 Phase I

Phase I was dedicated to the database design, the design of the system, and the development of the part that estimates the size, effort and schedule for the project, along with the programs for inserting data into the backend and manipulating it. Major work was done in this phase due to the intricacies of the estimation model. An interactive and user-friendly interface with an accurate estimation model was the goal of this phase.
1.6.2 Phase II

With the estimation model in place, the next point of focus was the development of the planning and tracking module. Phase II was concerned with developing the system for taking inputs from the developers, comparing them and showing them in useful forms such as Gantt charts and bar charts. Dividing tasks into activities and then showing the details of the status of each task was the purpose of this phase.

1.6.3 Phase III

The last but most important phase was Phase III, with the implementation of the calibration method using regression analysis [7] and Bayesian analysis. An effective combination of expert judgment and the available sample project data was the purpose of this phase.
1.7 System Development Schedule

The schedule followed for the development of the tool is given below. The development ran from July 2006 to May 2007, and the original chart showed a planned-activity bar and an actual-activity bar for each milestone. The milestones were:

Literature Survey, Study of FPA and COCOMO II
Requirement Analysis
Architecture
Detailed Design
Implementation of the Tool and Unit Testing
Integration and Testing
Final Documentation
1.8 Development Environment

1.8.1 Software Required

The software used in the tool is as follows:
1. Java 5 [8]
2. Java Server Faces [9]
3. MySQL [10]
4. MySQL Connector [11]
5. JFreeChart [12]
6. chartcreator rc1 [13]
7. Tomahawk [14]
8. JDOM 1.0 [15]
9. Struts [16]
10. JMatLink130 [17]
11. Matlab

All the software mentioned above, except Matlab, is freely available.

1.8.2 Hardware Required

The minimum hardware requirement for the software is:
1. Intel or AMD motherboard
2. Pentium IV or higher processor
3. 1 GB RAM and a hard disk
1.9 List of Functionalities to be Provided by the Proposed Solution

1. Client Management: management of client details such as the name, URL, contact number and address, the projects from the client, and the details of the projects under each client.
2. Developer Information Management: management of information about each developer and the tasks assigned to the developer. It includes the list of projects, modules and tasks assigned to the developer, the date of task assignment, the status of the task and the client's name.
3. Estimation: estimation of size, effort and time for the given project.
4. Report Generation: report creation for a given project or for a given client.
5. Tool Calibration: calibration of the model used in the tool, using data from past projects and the judgment of experts.
6. Activities Summary: summary of the activities (details of the activities are given in chapter 5) and the time given to each activity in the organization.

1.10 Non-Functional Requirements

1. Security: the tool has three privilege levels, and only an authorized person can access the facilities in the tool through his/her user id and password. The password-changing function is accessible to individuals for their own passwords, and the administrator has the privilege of changing other users' passwords in case a person has forgotten the password.
2. User friendliness: the system has a user-friendly interface for accessing the various functionalities in the tool, and minimal training time is required.
3. Reliability: the system gives accurate results for correct input, and if the given input is not correct the system raises alerts.
4. Maintainability: the system is open to future additions.
5. Performance: since the system is a web application, its performance should be high; performance issues were handled during the development of the tool.

1.11 Chapter Summary

In this chapter the current trends in estimation, planning and tracking, and calibration have been discussed, along with the drawbacks of the current scenario, an overview of the proposed solution and its benefits. An overview of the system developed as a result of this thesis and its functional and non-functional requirements has been presented, together with concise details of the system development phases and the hardware and software requirements for the tool. The next chapter discusses project management, its purpose, estimation, planning, tracking and calibration; it also contains excerpts on function point analysis, the COCOMO II model, and the regression analysis and Bayesian method used for calibrating COCOMO II.
2 Literature Review

This chapter discusses project management, its purpose, estimation, planning, tracking and calibration. It also gives a concise description of function point analysis, the COCOMO II model, and the regression analysis and Bayesian method used for calibrating COCOMO II.

2.1 Project Management

A project is an effort put towards achieving an objective; its mission is to result in a constructive product or service. Project management is the organization and management of resources in such a way that all the work required to complete a project can be done within defined scope, quality, time and cost constraints [18].

2.2 Purpose of Project Management

Resources and activities are the key players in the completion of any project in an organization. The purpose of project management is first to find out the activities needed to take the project to its end, and secondly to allocate resources to these activities in a planned way. Project management is a combination of the following activities [18]:
1. Analysis and design of objectives
2. Organizing the work
3. Estimating resources
4. Planning the work or objectives
5. Allocation of resources
6. Acquiring human and material resources
7. Assigning tasks
8. Directing activities
9. Controlling project execution
10. Tracking and reporting progress
11. Analyzing the results based on the facts achieved
12. Defining the products of the project
13. Forecasting future trends in the project
14. Quality management
15. Issues management
16. Issue solving
17. Defect prevention
18. Project closure meetings
19. Communicating to stakeholders

Project management is a vast area which includes all the activities in the above list; the scope of this thesis is limited to project estimation, planning and tracking. The triangular relationship of project management is shown in figure 2.1.

Figure 2.1: Triangular Relationship (time, effort and quality around project management)

Quality, effort and time are inter-related. If the project demands higher quality then it is going to use more resources, the effort required will be higher, and the effect will percolate to the schedule. The first challenge that project management faces is to ensure that the project is delivered within time and budget and with the desired
quality. The second challenge, optimizing the resource requirements, is the more crucial and grueling one. These challenges make project management a taxing and conspicuous task for any organization.

2.3 Description of Estimation

Management in any project starts with estimation. An effective estimate is the backbone for the development of any project; without effective estimates, proper project planning and tracking is impossible. If the estimates are too low then project management tries to employ more personnel in order to expedite the development process, which eventually results in a poor quality product and employee dissatisfaction [19]. The basic steps in software estimation are as follows:
1. Estimation of the size of the intended project. This results in either source lines of code (SLOC), function point counts (FPC) or new object points (NOP) for the project, though other size measures are also available.
2. Estimation of the effort for the project in man-months or man-hours.
3. Estimation of the schedule in calendar months.
4. Estimation of the cost in local currency.

2.3.1 Size Estimation

A sound size estimate is a good foundation for software estimation. The information source for estimation can be the project proposal, the system specification or the software requirement specification. If the size estimation is being done in later stages, such as design or coding, then design specifications and other work products can be used as the information source [19]. The two ways in which size estimation can be done are as follows:
1. By Analogy: if the organization has experience with similar projects, then the size of the new project can be estimated with the help of past experience. This is performed by dividing the new project into small modules and comparing those modules with past project data. This method can give an almost accurate estimate of the project size if the past projects were similar to the new one [19].
2. By Parametric Measurement: the size can be estimated by counting features of the project and using them as parameters in a parametric measurement approach such as object point analysis or function point analysis. Even if the organization has no experience with the kind of project intended, the features of the project can be used for parametric measurement.

2.3.2 Effort Estimation

Once the size estimates are available, the effort for the project can be estimated. The ways in which effort can be estimated are as follows [19]:
1. By Using Past Project Data: the best way to derive the estimates for the project is to use data from past projects. This approach assumes that the organization maintains the data properly, documenting the relevant information, and that the organization has done similar projects earlier.
2. By Using Parametric Measures: if data from past projects is not available then parametric models can be used for effort estimation. These models consider the features of the new project and use standard values for the corresponding parameters to calculate the effort for the software.

2.3.3 Schedule Estimation

Estimation of the schedule covers the number of people who will work on the project, what work they will do, and their start and end times. Once the effort estimates are available, the schedule can be laid out in calendar months. The total calendar time can be computed using a rule of thumb [19]:
Schedule in months = 3.0 * (effort-months)^(1/3)

For example, an estimated effort of 27 effort-months gives a schedule of 3.0 * 27^(1/3) = 9 calendar months. Alternatively, parametric models like COCOMO can be used for estimating the calendar schedule.

2.3.4 Cost Estimation

Software cost estimation includes many factors to be pondered, such as hardware cost, labor cost and tool cost. How the cost is estimated depends on the organization. The labor cost of developing the software constitutes the major portion of the total cost; once the effort in man-months is available, the cost of the software can be calculated using the salaries of the individual employees employed on the project.

2.4 Description of Project Planning

Projects are expensive in terms of both time and money, and ineffective planning may take decades to complete even a project of mediocre complexity. Careful planning before and during the development of the project helps in avoiding serious mistakes. After the first phase, when requirements collection for the project is over, the next step is to identify the dependencies among the various modules and tasks and to pave a road map for the development process. Assigning the right task to the right person is a major challenge in this phase. The available estimates play a key role in the whole planning process by providing information about the time and effort required for the project and for the various tasks in the project.

2.5 Monitoring the Project (Tracking)

When a project is under development it is necessary to take feedback from the development process and analyze the status of the project. This helps in detecting any problem that occurs during development, or any schedule or cost slippage, and signals the problem to project management so that the necessary actions can be taken to rectify it.
While tracking the status of a project, the estimated values are compared with the actual values collected during development. Gantt charts are the most widely used tool for such analyses.

2.6 Purpose of Calibration

Model-based techniques use various parameters, pertinent to the proposed software, for estimating the size, effort and time required for the development of the project. For example, COCOMO II uses 5 scale factors (precedentedness, development flexibility, architecture/risk resolution, team cohesion and process maturity) and 17 effort multipliers in the estimation of effort for any project [5]. These factors have certain predefined values, and these values are used in the estimation process. The values were given by the developers of the model on the basis of a study of the projects available when the model was being designed. Every organization has its own set of processes, and the appropriate values for these factors vary from company to company. Hence it is necessary to adjust the parameter values of the model according to the data of the past projects of the particular organization. This complete process is known as calibration. The more past data is available for calibration, the more accurate the estimates will be.

2.7 Function Point Analysis

This section describes function point analysis, the types of function points, the function point counting process, the benefits of Function Point Analysis (FPA) and a comparison between the traditional SLOC method and FPA.

2.7.1 Overview

As a system grows in size, it becomes really hard to estimate the size of the software early in its development. Divide and conquer has been the best strategy for tackling bigger problems for decades. Function point analysis, introduced by Allan J. Albrecht
of IBM in the late 1970s, follows the divide and conquer strategy for estimating the size of any software [19]. FPA breaks the system into smaller pieces so that the intricacies of the system become more visible and can be analyzed better. Function point analysis measures the size of the software on the basis of the functionalities to be provided by it; the method quantifies the functionalities from the information provided by the user, based on the logical design. FPA estimates the size of software in terms of function point counts (FPC), which can easily be converted into SLOC if the equivalent SLOC per FPC is available.

2.7.2 Types of Function Points

Size estimation is a critical issue for all kinds of projects, i.e. for development, enhancement and application projects. On the basis of these project categories, function points can also be categorized as follows [19]:

Development Project Function Point Count

When a project is under development, the amount of information about the project varies from phase to phase. The development project function point count is useful for projects under development and can be used in any phase. Using FPA in every phase of development allows size overruns to be tracked.

Enhancement Project Function Point Count

Every piece of software goes through an enhancement stage, either the addition of a new functional requirement or of a non-functional requirement. The enhancement project function point count estimates the size of enhancement projects. It helps in understanding the movement of a project from the development stage to the enhancement stage.
Application Project Function Point Count

Once the application is developed, function points can be calculated to create a baseline for future use, and they can also be used for predicting maintenance size.

2.7.3 Function Point Counting Process

Components in Function Point Counting

This section gives a high-level view of the steps for counting function points. Conceptually, function point analysis defines data at two levels: data in motion and data at rest [19]. Every application has numerous elementary processes which include various transactions for data movement, both transactions bringing data into the application domain and transactions taking data out of it; these are referred to as transaction functions. The data maintained by the application, or by another application, is known as data at rest and is referred to as data functions. The types of data and transaction functions are as follows:

External Inputs

An External Input (EI) is a process in which data comes from outside the application domain. The data may come from an input screen or from another application; both control and business data are counted as EI. The input can manipulate one or more files maintained by the application. If an input performs insertion, updating and deletion, then it is counted as three external inputs.
External Outputs

A process in which derived data crosses the boundary of the application from inside to outside is known as an External Output (EO). Derived data here means processed data, not data obtained through simple retrieval from external interface files or internal logical files; it is usually the result of some calculation or algorithmic operation.

External Inquiry

An External Inquiry (EQ) is a process with both input and output components which retrieves data either from internal logical files or from external interface files. An EQ does not update any internal logical files or external interface files.

External Interface Files

User-identified, logically related data stored outside the application boundary is known as an External Interface File (EIF). A file containing logically related data can be counted as an external interface file or as an internal logical file, but not both. Each EIF should have at least one EI or EO associated with it.

Internal Logical Files

User-identified, logically related data maintained inside the application through external inputs is known as an Internal Logical File (ILF). Each such file should have at least one external input associated with it.

File Type Referenced, Data Element Type and Record Element Type

A File Type Referenced (FTR) is a file referenced by a transaction; it should be either an internal logical file or an external interface file. A Data Element Type (DET) is a unique piece of information in an FTR; a DET can be information that initiates a transaction or additional information about the transaction. A Record Element Type (RET) is a unique sub-group of data in an FTR. DET, RET and FTR counts are used in determining the number and complexity of EI, EO, EQ, EIF and ILF. The dependencies of EI, EO, EQ, EIF and ILF on DET, RET and FTR are shown in figure 2.2.
Figure 2.2: Dependencies of EI, EO, EQ, EIF and ILF on DET, RET and FTR

Six Step Counting Process

The information required for counting is obtained from the software requirement specification. The steps for counting function points are as follows:
1. Identify the data functions (external interface files and internal logical files) and rate them.
2. Identify the transaction functions (external inputs, external outputs and external inquiries) and determine their complexity.
3. Compute the unadjusted function points (UFP). The number of EI, EO, EQ, ILF and EIF at each complexity level (simple, average and complex) is obtained, and each count is multiplied by the corresponding weight for that complexity level to finally get the unadjusted function point count. Details of the function point count are available in the appendices.
4. Determine the ratings of the 14 general system characteristics.
5. Calculate the value adjustment factor (VAF):
VAF = (TDI * 0.01) + 0.65
where TDI is the total degree of influence, obtained by summing the ratings of the general system characteristics.
6. Compute the function point count:
FPC = UFP * VAF

Figure 2.3 shows the steps for counting function points.

Figure 2.3: Six step function point counting process (from the software requirements specification: identify the data functions (internal logical files, external interface files) and determine their complexity; identify the transaction functions (external inputs, external outputs, external inquiries) and determine their complexity; compute the unadjusted function points; rate the general system characteristics; compute the value adjustment factor (VAF); compute the function point count)
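To make the six-step procedure above concrete, here is a minimal Java sketch of steps 3 to 6: it computes the unadjusted function point count from component counts and weights supplied by the caller, the value adjustment factor from the general system characteristic ratings, and the final function point count. The class and method names are illustrative only, and the weight tables are not hard-coded (the standard values are given in the appendices), so this is a skeleton of the calculation rather than code from the tool.

```java
/** Minimal sketch of the six-step FP calculation; names and inputs are illustrative. */
public class FunctionPointSketch {

    /** Step 3: multiply the count of each component at each complexity level by its weight. */
    public static int unadjustedFunctionPoints(int[][] counts, int[][] weights) {
        // counts[i][j] and weights[i][j]: component type i (EI, EO, EQ, ILF, EIF)
        // at complexity level j (simple, average, complex).
        int ufp = 0;
        for (int i = 0; i < counts.length; i++) {
            for (int j = 0; j < counts[i].length; j++) {
                ufp += counts[i][j] * weights[i][j];
            }
        }
        return ufp;
    }

    /** Steps 4-5: VAF = 0.65 + 0.01 * TDI, where TDI is the sum of the 14 GSC ratings (0-5 each). */
    public static double valueAdjustmentFactor(int[] gscRatings) {
        int tdi = 0;
        for (int rating : gscRatings) {
            tdi += rating;
        }
        return 0.65 + 0.01 * tdi;
    }

    /** Step 6: FPC = UFP * VAF. */
    public static double functionPointCount(int ufp, double vaf) {
        return ufp * vaf;
    }
}
```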
2.7.4 Benefits of FPA

1. Technology Independence: function points estimate the size of software on the basis of the functionality provided by the software, irrespective of the tool or technology used for its development. Languages like COBOL and FORTRAN can easily be used for developing software but will produce more SLOC than Java, VB, etc.; the functionality provided by the software, and hence its size in function point counts, would nevertheless be the same [20].
2. Consistency and Repeatability: the rules defined by the International Function Point Users Group (IFPUG) have increased the consistency of FPA. Even if the person counting the FPC changes, the result will be the same, and since the rules are well documented the count can be repeated [20].
3. Data Normalization: the fact that function points depend on functionality rather than on SLOC makes FPA useful for normalizing data such as cost, effort, schedule, staff, defects, etc. For example, using measures like SLOC, the time taken by two applications cannot be compared, because the complexity of the applications may differ and SLOC does not take this into account; with FPA, on the other hand, it can be concluded that application 1 took more time than application 2 to implement one function point [20].
4. Estimation: since function points need only the details of the functionalities of the software, they are useful earlier in development than SLOC [20].
5. Beneficial for Managers: function points help project managers to dig into the project to a greater depth and to define the scope of the system more accurately. They also help project managers to communicate to clients the cost of enhancements and changes proposed by them [20].

2.7.5 When Not to Use FPA

Function points are not a suitable measure for maintenance work. Maintenance work involves less new development and more investigation: understanding the existing product becomes the major part of the work, rather than adding new things to it or making corrective modifications. This depends more on individual skills; a highly skilled person takes less time to understand the existing code and identify the problem in it. FPA, on the other hand, is concerned with functionality rather than with such effort, and hence is not useful under these circumstances [19].

2.8 COCOMO II

Budgeting, planning and tracking, risk analysis and return-on-investment analyses are some of the uses of software cost, schedule and effort estimation. COCOMO II is one of the most widely used parametric models for effort and schedule estimation. This section contains a short description of the COCOMO II model, its comparison with COCOMO 81 [5] and the effort estimation process.
2.8.1 Overview

COCOMO II was first published in the Annals of Software Engineering. The purpose behind the research on COCOMO II was to accommodate the development culture of the new generation, i.e. COTS (commercial off-the-shelf) products, the use of reusable components and rapid development processes. Figure 2.4 shows the range of effort estimates at different stages of development when COCOMO II is used [22]. The estimates obtained in the feasibility analysis phase may vary by a factor of 4; as development progresses the information becomes finer and the accuracy of the estimates increases [5].

Figure 2.4: Relationship between phases and estimation ranges (the estimation range narrows from about 4x and 0.25x at feasibility analysis, through requirement analysis, high level design and detailed design, towards 1x during development and testing)
2.8.2 COCOMO II over COCOMO 81

COCOMO 81 was the model of the 1980s. This section compares the two flavors of COCOMO [5].
1. In the era of COCOMO 81, software was developed with a limited scope and reusability was not a popular concept, so COCOMO 81 had no provision for it. COCOMO II incorporates this feature and adjusts the estimates for reuse.
2. The estimation model needs to be consistent with the information available about the project. COCOMO 81 has only three models (organic, semi-detached and embedded), and these describe the nature of the project; they say nothing about the phase of development. COCOMO II has three models (application composition, early design and post-architecture) to be used according to the phase of development.
3. COCOMO 81 gives its output as a single exact value, which in most cases is not accurate. COCOMO II gives its output as ranges (optimistic, most likely and pessimistic) according to the phase of development, which is a better basis for planning the development process.
4. B is the exponent constant used in both versions of COCOMO. In COCOMO 81, B is a constant value that depends on the type of project (organic, semi-detached or embedded), whereas in COCOMO II the exponent is the result of an equation containing five scale factors.
5. The earlier version has 15 cost drivers for rating various attributes of the intended software, whereas the newer version has 17 effort multipliers.
2.8.3 COCOMO II Models

The future development market can be divided into the following categories [5]:
1. End-User Programming: increased computer literacy has increased the number of end users. New tools available in the market allow users to develop their own software for simple uses or for information processing; examples are spreadsheets, query browsers, planning tools, etc.
2. Application Generators: the sector that generates ready-made solutions which need to be customized for the user.
3. Application Composition: problems which cannot be solved through a single prepackaged solution and whose solutions need to be generated by combining different reusable components come under the category of application composition.
4. System Integration: large-scale software requiring a high degree of system engineering, which cannot be produced by application composition, comes under this category.
5. Infrastructure: the sector concerned with the development of operating systems, database management systems, etc.

The first category (end-user programming) does not need COCOMO II for estimation, because its applications have very low complexity and can be developed within hours. For the other four sectors COCOMO II provides three estimation models.
Application Composition

This model is useful for applications which cannot be generated through application generators but can be created by combining prepackaged solutions [5]; examples are GUI builders, query browsers, database managers, etc. The model uses object points for size estimation: it estimates the size of the application on the basis of the number of screens, reports and 3GL components, and the output of object point analysis is a number of new object points. The effort in person-months for the application can be calculated as:

PM = NOP / PROD

where PM is the effort in person-months, NOP is the number of new object points and PROD is a productivity rate based on the developers' experience and capability. For example, 100 new object points at a productivity of 25 NOP per person-month give an effort of 100 / 25 = 4 person-months.

Early Design

This model can be used for the application generator, system integration and infrastructure development sectors. It is used early in development, when very little is known about the project, and it uses unadjusted function points for size estimation. Size estimation using function points was explained earlier in this chapter.

Converting Function Points to SLOC

The COCOMO II early design and post-architecture models use SLOC in effort estimation. Hence the unadjusted function points need to be converted into equivalent SLOC. This conversion is performed on the basis of a table of values for different languages, such as Table 2.1 below:
Table 2.1: Equivalent SLOC per function point count for different languages

Language          Equivalent SLOC
C                 75
C++               53
COBOL             107
Delphi 5          18
HTML              14
Java 2            46
SQL (Default)     13
Visual Basic      -

Cost Drivers

COCOMO II uses 17 cost drivers for the adjustment of effort. The early design model uses a reduced set of cost drivers in equation 1; these are obtained by combining different cost drivers of the post-architecture model. If the rating of a combined cost driver falls between two levels, the rating nearer to nominal is selected, i.e. if the rating of a driver is between very low and low, then low is selected. Table 2.2 below shows the relationship between the early design cost drivers and their post-architecture counterparts.

Table 2.2: Relationship between early design cost drivers and post-architecture cost drivers

Early Design Cost Driver                       Post-Architecture Counterparts
RCPX (Product Reliability and Complexity)      RELY, DATA, CPLX, DOCU
RUSE (Required Reuse)                          RUSE
PDIF (Platform Difficulty)                     TIME, STOR, PVOL
PREX (Personnel Experience)                    AEXP, PEXP, LTEX
FCIL (Facilities)                              TOOL, SITE
SCED (Schedule)                                SCED
PERS (Personnel Capability)                    ACAP, PCAP, PCON
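As an illustration of the conversion step, the following minimal Java sketch maps a language name to its equivalent SLOC per function point, using a few of the ratios from Table 2.1, and converts an unadjusted function point count into SLOC. The class and method names are illustrative assumptions and are not taken from the tool's source code.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch of converting unadjusted function points to SLOC; names are hypothetical. */
public class FpToSlocConverter {

    // Equivalent SLOC per function point for a few languages, taken from Table 2.1.
    private static final Map<String, Integer> SLOC_PER_FP = new HashMap<String, Integer>();
    static {
        SLOC_PER_FP.put("C", 75);
        SLOC_PER_FP.put("C++", 53);
        SLOC_PER_FP.put("COBOL", 107);
        SLOC_PER_FP.put("JAVA 2", 46);
    }

    /** Returns the estimated SLOC for the given language and unadjusted function point count. */
    public static long toSloc(String language, double unadjustedFunctionPoints) {
        Integer ratio = SLOC_PER_FP.get(language.toUpperCase());
        if (ratio == null) {
            throw new IllegalArgumentException("No SLOC/FP ratio recorded for " + language);
        }
        return Math.round(unadjustedFunctionPoints * ratio);
    }

    public static void main(String[] args) {
        // 120 unadjusted function points implemented in Java 2: 120 * 46 = 5520 SLOC.
        System.out.println(toSloc("Java 2", 120));
    }
}
```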
Post-Architecture

This model is suitable for the application generator, system integration and infrastructure development sectors. It has the same granularity as COCOMO 81 and uses all 17 cost drivers for estimation. The model uses unadjusted function points and source lines of code as size measures. Scale factors are used in the same form in both the early design and the post-architecture model.

Concisely, COCOMO II has three models for the application generator, system integration and infrastructure sectors. For the early phases of the spiral model, where prototyping is one of the major activities, application composition is the most suitable model. The next phase includes exploring design alternatives, and better quality data is available; the early design model fits this criterion. When the project is ready for development and maximum information is available for determining the values of the cost drivers, the post-architecture model is the best option.

Adjustment for Reuse

COCOMO II adjusts the nominal effort for reuse by adding size to the task. Function points or source lines of code are used as the size metric for adjusting for reuse; the early design and post-architecture models both follow the same method. Figure 2.5 shows the effect of reuse on the cost of the software. The dotted line shows the usual (linear) assumption about the relationship between cost and the amount of modification in the reusable component, and the solid line shows the actual relationship between the two. The diagram shows that the line does not start from the origin but from a point above it; this difference is due to the assessment and assimilation effort required for the reusable component [5].
Figure 2.5: Effect of Reuse (relative cost versus amount of the reusable component modified; the dotted line shows the usual linear assumption and the solid line the actual relationship, which starts above the origin owing to the assessment and assimilation effort)

COCOMO II provides ASLOC (Adapted Source Lines of Code) for estimating reuse, along with three modification factors: Design Modified (DM), Code Modified (CM) and Integration Modified (IM). Other factors affecting the estimates for reuse are the following:
1. SU (Software Understanding): describes the structure, application clarity and self-descriptiveness of the component being reused.
2. AA (Assessment and Assimilation): the amount of work required for searching for the appropriate component for reuse, including the test and evaluation effort for the component.
3. UNFM: the programmer's unfamiliarity with the component.

The effort estimate for reuse is calculated with the following formulae, in which DM, CM, IM, AA, AAF and SU are expressed as percentages and UNFM on a 0 to 1 scale:

AAF = 0.4(DM) + 0.3(CM) + 0.3(IM)

ESLOC = ASLOC [AA + AAF (1 + 0.02 (SU)(UNFM))] / 100, for AAF <= 50
ESLOC = ASLOC [AA + AAF + (SU)(UNFM)] / 100, for AAF > 50

where AAF is the adaptation adjustment factor and ESLOC the estimated (equivalent) source lines of code.
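A minimal Java sketch of the reuse adjustment above is given below. It assumes the same conventions as the formulae (DM, CM, IM, AA and SU as percentages, UNFM on a 0 to 1 scale); the class and method names are illustrative and do not come from the tool.

```java
/**
 * Sketch of the reuse adjustment described above.
 * DM, CM, IM: percentage of design/code/integration modified (0-100).
 * AA: assessment and assimilation percentage; SU: software understanding percentage;
 * UNFM: unfamiliarity factor on a 0-1 scale. Names are illustrative.
 */
public class ReuseAdjustmentSketch {

    /** AAF = 0.4*DM + 0.3*CM + 0.3*IM */
    public static double adaptationAdjustmentFactor(double dm, double cm, double im) {
        return 0.4 * dm + 0.3 * cm + 0.3 * im;
    }

    /** Equivalent SLOC for an adapted component of size asloc. */
    public static double equivalentSloc(double asloc, double aa, double aaf,
                                        double su, double unfm) {
        if (aaf <= 50.0) {
            return asloc * (aa + aaf * (1 + 0.02 * su * unfm)) / 100.0;
        } else {
            return asloc * (aa + aaf + su * unfm) / 100.0;
        }
    }
}
```

For a completely unmodified component (DM = CM = IM = 0), the equivalent size reduces to ASLOC * AA / 100, i.e. only the assessment and assimilation effort contributes.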
2.9 Description of Calibration

Since the scope of this thesis is limited to the COCOMO II model, only the method used for the calibration of COCOMO II is discussed in this section.

2.9.1 Multiple Regression Method

Multiple regression [22] is a method used for curve fitting. It expresses the output in terms of n predictor variables, and the coefficients of the variables are then estimated using the least squares method. A regression model can be represented as:

y_t = α_0 + α_1 x_t1 + α_2 x_t2 + ... + α_n x_tn

where x_t1, ..., x_tn are the values of the predictor variables for the t-th observation, α_0, ..., α_n are the coefficients to be estimated, and y_t is the output for the t-th observation.

The COCOMO II model has the following form:

Effort = A * [Size]^(B + 0.01 * (SF_1 + SF_2 + ... + SF_5)) * (EM_1 * EM_2 * ... * EM_17)        (Equation 1)

where A = multiplicative constant, B = baseline exponent constant, Size = size of the software in thousands of source lines of code, SF_i = scale factors and EM_i = effort multipliers.
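The following minimal Java sketch evaluates Equation 1 for a single project. The constants A and B and the rating values passed in are placeholders to be supplied from calibrated tables, and the class and method names are illustrative rather than taken from the thesis tool.

```java
/** Illustrative evaluation of the COCOMO II effort equation (Equation 1); names and inputs are placeholders. */
public class CocomoEffortSketch {

    /**
     * @param a            multiplicative constant A (calibrated)
     * @param b            baseline exponent constant B (calibrated)
     * @param sizeKsloc    size in thousands of source lines of code
     * @param scaleFactors the 5 scale factor ratings SF_1..SF_5
     * @param effortMults  the 17 effort multiplier ratings EM_1..EM_17
     * @return estimated effort in person-months
     */
    public static double effortPersonMonths(double a, double b, double sizeKsloc,
                                            double[] scaleFactors, double[] effortMults) {
        double sfSum = 0.0;
        for (double sf : scaleFactors) {
            sfSum += sf;                      // sum of the five scale factors
        }
        double exponent = b + 0.01 * sfSum;   // exponent = B + 0.01 * (SF_1 + ... + SF_5)

        double emProduct = 1.0;
        for (double em : effortMults) {
            emProduct *= em;                  // product of the seventeen effort multipliers
        }
        return a * Math.pow(sizeKsloc, exponent) * emProduct;
    }
}
```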
For calibration, Equation 1 is turned into a linear equation by taking the logarithm of both sides:

ln(Effort) = α_0 + α_1·ln(Size) + α_2·SF1·ln(Size) + … + α_6·SF5·ln(Size) + α_7·ln(EM1) + … + α_23·ln(EM17)

The regression method is effective for calibration [22] when:

1. The number of observations is large relative to the number of variables in the model. Data collection has always been a challenge for software engineering, mainly because of immature processes and the reluctance of organizations to release cost-related data.
2. The observations are free of outliers. On the contrary, software engineering data is full of unexpected cases and contains a large number of outliers.
3. The independent variables are not highly correlated with each other. Because cost-related data is obtained from past projects rather than from controlled experiments, correlation among the variables is high.

The COCOMO II model does not satisfy all of the above conditions. This results in incorrect calibration of the model and, eventually, in wrong estimates.

2.9.2 Bayesian Analysis

Software engineering is a young and emerging field, and the metrics available for measuring software attributes are incomplete and not as accurate as the metrics of other disciplines. In addition, data collection has often been done in a sloppy way. Under these circumstances, experts' judgment cannot be disregarded and is, if anything, even more important for estimation. Bayesian analysis combines the output of the regression method with the experts' judgment and thus supports the calibration process.

Bayesian analysis is used for testing the truthfulness of a hypothesis: observations are used to decide whether the hypothesis is true or false. The general form of Bayes' theorem is the following [23]:
P(H0 | E) = P(E | H0) · P(H0) / P(E)

Where,
H0 is the hypothesis (the null hypothesis) to be tested against the observations.
P(H0 | E) = posterior probability of the hypothesis.
P(E | H0) = conditional probability of the evidence given H0.
P(E) = marginal probability, i.e. the probability of E under all the mutually exclusive hypotheses.
P(H0) = prior probability of H0.

This method can be used to incorporate the experts' judgment with the values obtained from past data. The final values are a combination of the two sources of information, i.e. the sample data and the experts' judgment [22]:

Posterior = Sample × Prior

The sample information consists of the values estimated from past projects' data, and the prior information is the information collected from the experts (Delphi method).

Obtaining Sample Information

The regression method is used for obtaining the sample data. The productivity ranges of all the factors of COCOMO II are used in this calculation. Regression analysis is performed on the past projects' values, and the probabilities of the sample values are calculated by applying the normal probability density function to the projects' data. These values are nothing but the new productivity ranges for the factors of the COCOMO II model.

Productivity Range = Highest Rating / Lowest Rating
Obtaining Prior Information (Experts' Judgment)

Once the new productivity ranges are available, the next step is to collect the experts' judgment. The Delphi method is used for this. Using the mean of the responses received from the experts and the deviation in the values, new productivity ranges are defined; these values constitute the prior information in the Bayesian analysis.

Combining the Prior and the Sample Information

The values obtained from the regression method and from the experts are used in Bayes' theorem. The output gives the probability that the value given by the experts is the accurate new value for the productivity range of the given factor.

2.10 Chapter Summary

The chapter has briefly discussed project management and its purpose, estimation, planning, tracking and calibration. It has also presented, in a simple way, the intricacies of function point analysis, the COCOMO II model, and the regression and Bayesian methods for calibrating COCOMO II. The next chapter presents the architecture of the tool, the design patterns used in it, and the packages created in the tool and their organization.
3 Application Development

The chapter discusses the architecture of the tool, the design patterns used in it, and the packages and the services they provide.

3.1 Architecture

The proposed tool follows the Model-View-Controller (MVC) architecture, shown in Figure 3.1. MVC is an architectural pattern used in software design. As software grows in scale, its development becomes more complex, and the most volatile part of any application is the user interface, because it is the part directly visible to the user. In a highly coupled design it is difficult to make even the smallest change in the code, so separating the business logic from the user interface is a major concern for any designer. The MVC pattern decouples the Model, the View and the Controller, and hence provides the desired flexibility. It is often appropriate when one or more of the following statements are true [24]:

1. Different representations of the same application data are needed, such as a table representation and a graph representation.
2. Different graphical user interfaces (GUIs) are needed, perhaps for different environments (different operating systems), without affecting the rest of the application.
3. Events generated by the user must immediately update the application data or other components of the application, while changes in the application data must be reflected immediately in the user interface components.
4. One or more GUI components must be reusable independently of the application data.
Figure 3.1: Model-View-Controller Architecture

There are three main players in the MVC architecture:

1. Model: represents the application data and the functional logic in the form of a component.
2. View: the part visible to the user. A model can have more than one view according to the users' choice; for example, data can be represented in the form of tables or as charts.
3. Controller: the view generates events according to the users' actions. The controller is responsible for processing these events and for whatever action is taken in response, which may or may not result in a change to the user interface.

Views rely on the Model to display and render information to users, but they do not change the Model directly. When changes occur in the Model, Views are notified and may then query the Model for additional information; this gives Views the opportunity to synchronize themselves immediately with changes in the Model [24]. Views and Controllers are loosely coupled with the Model via this change-notification mechanism: they register themselves with the Model, which in turn keeps an internal list of registered observers of changes.
When changes to the Model occur, Views and Controllers are notified as necessary.

3.2 Java Server Faces

Java Server Faces (JSF) is a framework introduced by Sun Microsystems [24] which follows the MVC architecture; Figure 3.2 shows its structure. The model consists of the Java classes that implement the business logic. JSP pages are used as the view; they use the model classes for rendering data and for invoking the business logic, but they cannot operate on the model directly. Instead, they generate events, and the controller sits between the model and the view, listening for these events and dispatching each request to the appropriate class. A servlet named Faces Servlet acts as the controller in the framework. The information about the navigation rules of the application and about the beans it uses is kept loosely coupled from the controller, in a configuration file named faces-config.xml.

Figure 3.2: Java Server Faces Architecture
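The sketch below shows how a model class typically plugs into this arrangement: a plain backing bean whose properties are bound from a JSP page and whose action method returns an outcome string that the Faces Servlet resolves through the navigation rules in the configuration file. The bean name, the outcome strings and the login check are illustrative assumptions, not the tool's actual classes.

```java
// Illustrative JSF backing bean: properties bound from a JSP view, an action
// method returning a navigation outcome. Real authentication would be
// delegated to a model/DAO class.
public class LoginBean {

    private String userId;
    private String password;

    public String getUserId()            { return userId; }
    public void setUserId(String id)     { this.userId = id; }
    public String getPassword()          { return password; }
    public void setPassword(String pw)   { this.password = pw; }

    /** Action method bound to the login button on the JSP page. */
    public String login() {
        // The returned outcome string only drives navigation; the mapping of
        // "success"/"failure" to pages would live in the configuration file.
        boolean ok = userId != null && !userId.isEmpty();
        return ok ? "success" : "failure";
    }
}
```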
3.3 Design Patterns Used

Design patterns are outlines of reusable solutions to problems that recur in a particular context. Each pattern describes a problem which occurs over and over again in our environment, and then describes the core of the solution to that problem [25]. This section discusses the design patterns implemented in the tool.

3.3.1 Data Access Object Pattern

Data Access Object (DAO) is a standard J2EE pattern. In large applications, where separating the data from its data-access methods is necessary to keep complexity low and maintainability high, DAO is very useful: it separates low-level data-access methods from high-level business logic [26].

Figure 3.3: Data Access Object Pattern

As shown in Figure 3.3, DAO separates the data-access methods from the business logic and hence improves maintainability, adding the flexibility to change only the DAO without changing the business classes, and vice versa.
The tool uses an XML file named MyQueries.xml, which contains all the queries for accessing and operating on the database. The use of the DAO pattern in the tool is shown in Figure 3.4.

Figure 3.4: Implementation of the Data Access Object Pattern in the tool

The DAO implementation in the tool has three elements working together to fulfil the desired goal:

1. DAO: the data-access interface that declares the methods to be implemented for retrieving data from the database.
2. DAO Implementation: defines the methods declared in the DAO and is responsible for the connection to the database.
3. XmlParser: reads the information necessary for database access from MyQueries.xml and sends the result back to the DAO Implementation.

The client is any class that needs to extract information from the database; it uses the DAO for the retrieval of data. Each client has to have a DAO for each database it uses in the application.
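A minimal sketch of this arrangement is shown below. The interface, class and query names are illustrative assumptions rather than the tool's actual code, and the small XmlQueryCatalog interface stands in for the tool's XmlParser, which maps a query id in MyQueries.xml to its SQL text.

```java
// Illustrative DAO arrangement: the client codes against a small interface,
// the implementation obtains its SQL from the XML query file, and only this
// layer touches JDBC.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

interface ProjectDao {
    List<String> findProjectNames(String clientId) throws SQLException;
}

/** Stand-in for the tool's XmlParser: maps a query id to its SQL text. */
interface XmlQueryCatalog {
    String lookup(String queryId);
}

class ProjectDaoImpl implements ProjectDao {
    private final Connection connection;   // obtained from DBConnection
    private final XmlQueryCatalog queries; // backed by MyQueries.xml

    ProjectDaoImpl(Connection connection, XmlQueryCatalog queries) {
        this.connection = connection;
        this.queries = queries;
    }

    @Override
    public List<String> findProjectNames(String clientId) throws SQLException {
        // e.g. "SELECT Proj_Name FROM Projects WHERE Client_Id = ?"
        String sql = queries.lookup("findProjectsByClient");
        List<String> names = new ArrayList<String>();
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setString(1, clientId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    names.add(rs.getString(1));
                }
            }
        }
        return names;
    }
}
```

Because the SQL text lives outside the compiled classes, switching databases or rewording a query only requires editing the XML file, which is exactly the flexibility argued for in the next paragraph.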
The tool is implemented with the MySQL database to keep the overall cost low. If the database ever needs to be changed, the change can be accommodated without touching the source code, with only a minor change to MyQueries.xml. Even if some of the queries themselves change (but not their parameters), the same mechanism accommodates the change. Figure 3.5 shows the sequence of activities in the DAO implementation.

Figure 3.5: Sequence Diagram for the Data Access Object Pattern in the tool

The benefits of the DAO pattern are as follows:

1. Centralizes control with loosely coupled handlers
2. Enables transparency
3. Enables easier migration
4. Reduces code complexity in clients
5. Organizes all data-access code into a separate layer

3.3.2 Singleton Pattern

In software systems it is very common to find classes of which only one instance is needed; these are known as singleton classes, and the Singleton pattern is the solution for obtaining only a single instance of such a class.

Figure 3.6: Singleton Pattern

Figure 3.6 shows the class diagram for a singleton class. A singleton class has three elements:

1. singleton: a static instance of the class itself.
2. Singleton(): the constructor of the class, with private access.
3. getInstance(): a public static method which creates an instance of the class if none exists; if an instance already exists, it does not instantiate the class again but returns the existing object to the requesting client.

In the tool the following classes are singletons:

1. DBConnection
2. DAOImplementation (one for each package)
3. XmlParser

DBConnection is the class responsible for the connection to the database. Each user, in other words each session, needs a separate connection to the database and hence a separate DBConnection instance per session; a sketch of this singleton form is given below.
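The sketch assumes a lazily created, synchronized getInstance(); the tool's real DBConnection presumably also obtains its connection details through the DAO/XmlParser machinery, which is omitted here, and the parameters shown are placeholders.

```java
// Illustrative singleton in the form of Figure 3.6, applied to a
// database-connection holder.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public final class DBConnection {

    private static DBConnection singleton;   // the single instance
    private final Connection connection;

    private DBConnection(String url, String user, String password)
            throws SQLException {
        this.connection = DriverManager.getConnection(url, user, password);
    }

    /** Returns the existing instance, creating it on first use. */
    public static synchronized DBConnection getInstance(String url, String user,
                                                        String password)
            throws SQLException {
        if (singleton == null) {
            singleton = new DBConnection(url, user, password);
        }
        return singleton;
    }

    public Connection getConnection() {
        return connection;
    }
}
```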
Within a session, however, there should be only one such instance; otherwise the result is an excessive number of connections to the database and, eventually, a breakdown of the system.

Each package is responsible for certain operations, as defined in the next section, and each package has a separate DAO Implementation class for accessing the database. To avoid excessive connections to the database, there should be exactly one connection per package.

XmlParser is responsible for reading the information from MyQueries.xml. A single instance of the class suffices, so it too is designed as a singleton.

3.4 Module Sequence

The whole application can be divided into the following modules.

3.4.1 Size Estimation Module
The module is responsible for the calculations pertinent to size estimation. It includes function point analysis and the estimation of reuse (adapted source lines of code).

3.4.2 Effort Estimation Module
The module contains the implementation of COCOMO II for the estimation of effort and schedule.

3.4.3 Calibration Module
The calibration of the tool rests on this module. It performs regression analysis on the data gathered from past projects and combines the values with the judgments of the experts.

3.4.4 Report Generation Module
Generating reports by project and by client are the tasks of this module.
3.4.5 Utilities
This module carries out validation, parsing of MyQueries.xml, database connectivity, creation of new projects and clients, and similar housekeeping.

3.5 Package Organization

The system is organized into 12 packages, a package being a collection of classes used in the application. The package organization of the tool is shown in Figure 3.7. The figure contains only the packages used for business logic and does not include the classes used for the GUI.

Figure 3.7: Organization of packages in the tool
Concise descriptions of the packages are given below.

3.5.1 User
The User package is the origin of the application: the application starts with user authentication. The package contains the classes responsible for authentication and for retrieving the details of the user, such as the user's privilege.

3.5.2 Utility
The package contains the classes responsible for non-business-logic functions: validation, connection to the database, and parsing of the XML file containing the details for database access.

3.5.3 MailerPkg
The tool can send mails to developers, triggered under conditions selected by the developers. This package is responsible for sending those mails.

3.5.4 ClientPkg
The package contains classes for operations related to clients, such as adding a new client, searching for a client and retrieving a client's information.

3.5.5 ProjectPkg
The package has a collection of classes responsible for the persistence of information pertinent to projects. This includes general project information, such as the description, client name and project leader's name, as well as the information necessary for estimating the project's size, effort and schedule.

3.5.6 ModulePkg
The package contains the classes for keeping information about a module and for its size, effort and schedule estimation.
3.5.7 TaskPkg
The package contains the classes for keeping information about a task and for its size, effort and schedule estimation.

3.5.8 ScaleFactorsPkg
The package keeps track of the scale factor information for each task.

3.5.9 ActivityPkg
In the tool, a task is further subdivided into 24 activities. The classes in this package operate on the activities and draw the bar chart representing the time taken by each activity in the organization.

3.5.10 Reports
The package is responsible for generating the reports by project or by client.

3.5.11 MasterValues
The package is responsible for the retrieval and manipulation of the master values of the factors used in the COCOMO II model and in function point analysis.

3.5.12 Calibration
The classes in this package perform the calibration process of the tool, dealing with both the regression analysis and the Bayesian analysis.

3.6 Chapter Summary

The chapter has concisely discussed the MVC architecture, Java Server Faces, the DAO pattern, the Singleton pattern and the packages developed in the thesis. The next chapter presents the details of the database design and the tables used in the tool.
4 Database Design and Organization

The estimation process for any project is a data-driven activity: it calculates estimates on the basis of stored data and of new data pertinent to individual projects. Database design is therefore one of the most crucial activities in the whole development of the tool. Moreover, some of the information in the system is large enough that it must be stored as Binary Large Objects (BLOBs). The schema design is broadly classified into two categories:

1. Functional tables design
2. Standard values tables design

4.1 Functional Tables Design

The functional database design focuses on the storage of data pertinent to the projects and used in the estimation process. It stores information about projects, modules, tasks, linked contents, cost driver ratings, team details and size details. Tables such as Productivity of EM and Linked Contents store information relevant to a project, module or task. The tables are described below; Figure 4.1 shows the functional tables design.
Figure 4.1: Functional Tables Design

4.2 Standard Values Tables Design

The standard values tables design deals with non-functional data: it stores the standard values used in the tool, such as the ratings of the various cost drivers, the values of the scale factors used in COCOMO II, and the values for the various FPA (function point analysis) ratings. Figure 4.2 shows the master values tables design.
Figure 4.2: Master Values Tables Design

4.3 Description of Functional Database Tables

4.3.1 Projects
The Projects table stores the information about the project.

Table 4.1: Project Table (attribute, data type, remarks)
Proj_id (String): Primary key that uniquely identifies the project.
Proj_Name (String): Cannot be a numerical value.
Description (String): Project description; stores text only.
Development_URL (String): A string value of length 45.
Created_on (Date): The date when the project was first introduced in the tool.
Started_on (Date): The date when work on the project was started.
Client_Id (String): Id of the client; a string of length 25.
Status (Integer): Either 0, 1 or 2, signifying not started, incomplete and complete respectively.

4.3.2 Modules
The Modules table stores the information about the modules of a project.

Table 4.2: Module Table (attribute, data type, remarks)
Proj_id (String): Foreign key referencing the Projects table; uniquely identifies the project.
Module_id (String): Identifies the module within a project.
Module_Name (String): Cannot be a numerical value.
Description (String): Module description; stores text only.
Development_URL (String): A string value of length 45.
Created_on (Date): The date when the module was first introduced in the tool.
Started_on (Date): The date when work on the module was started.
Status (Integer): Either 0, 1 or 2, signifying not started, incomplete and complete respectively.
† The combination of Proj_Id and Module_Id is the primary key of the Modules table.
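Since all data access in the tool goes through the DAO layer described in Chapter 3, rows of these tables are ultimately read with plain JDBC against MySQL. The sketch below is illustrative only: the schema name, credentials and the literal SQL (which the tool would actually load from MyQueries.xml) are placeholder assumptions, while the column names follow Table 4.1.

```java
// Illustrative JDBC read of the Projects table described in Table 4.1.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ListProjects {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/pmtool";  // placeholder schema
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT Proj_id, Proj_Name, Status FROM Projects")) {
            while (rs.next()) {
                System.out.printf("%s  %s  status=%d%n",
                        rs.getString("Proj_id"),
                        rs.getString("Proj_Name"),
                        rs.getInt("Status"));
            }
        }
    }
}
```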
4.3.3 Tasks
The Tasks table stores the information about the task.

Table 4.3: Task Table (attribute, data type, remarks)
Proj_id (String): Foreign key referencing the Projects table; uniquely identifies the project.
Module_id (String): Identifies the module, within the project, to which the task belongs.
Task_Name (String): Cannot be a numerical value.
Description (String): Task description; stores text only.
Development_URL (String): A string value of length 45.
Created_on (Date): The date when the task was first introduced in the tool.
Started_on (Date): The date when work on the task was started.
Status (Integer): Either 0, 1 or 2, signifying not started, incomplete and complete respectively.
Assigned_To (String): Field of length 25 indicating the id of the developer to whom the task is assigned.
Assigned_By (String): The id of the person assigning the task; a string value of length 25.
Assigned_on (String): The date on which the work is assigned to the developer.

4.3.4 Size_Details
The table stores the information pertinent to the size of the project, module or task.

Table 4.4: Size Details Table (attribute, data type, remarks)
Id (String): Uniquely identifies the project, module or task.
Estimated_Size (Double): Stores the size estimate after FPA.
Estimated_Effort (Double): Stores the effort estimate after FPA.
Estimated_Time (Double): Stores the time estimate after FPA.
Actual_Size (Double): Stores the actual size of the project, module or task.
Actual_Effort (Double): Stores the actual effort taken by the project, module or task.
Actual_Time (Double): Stores the actual time taken by the project, module or task.

4.3.5 Transaction_fp
The Transaction_FP table stores the information about the transaction functions in function point analysis.

Table 4.5: Transaction_FP Table (attribute, data type, remarks)
Task_Id (String): Foreign key referencing the Tasks table; of string type.
DET (Double): Stores the number of data element types in the task.
FTR (Double): Stores the number of file types referenced in the task.
UFP (Double): Stores the unadjusted function points after FPA.
Function_ID (String): Stores whether the given function type is EI, EO, EQ, EIF or ILF.
Name (String): Stores the name of the function.
Rating (String): Stores the rating of the function in the task.
† The combination of Task_Id and Function_ID is the primary key of the table.

4.3.6 Developer
The Developer table stores the information about the developer.

Table 4.6: Developer Table (attribute, data type, remarks)
User_Id (String): The primary key; can have string values only.
Password (String): An encrypted field in string format.
Name (String): Cannot have numerical values.
Mail_Id (String): Stores the mail ID of the developer.
Work_Phone (Integer): Stores the developer's work phone number.
Mobile (Integer): Stores the developer's mobile number.
Fax (Integer): Stores the developer's fax number.
Privilege (Integer): Stores the privilege level of the user in the tool.

4.3.7 Cost_drivers
The table stores the ratings of all 17 effort multipliers used in the COCOMO II model for the project.

Table 4.7: Cost Drivers Table (attribute, data type, remarks)
ID (String): The primary key; uniquely identifies the project, module or task.
1 (Double): The rating for RELY.
2 (Double): The rating for DATA.
3 (Double): The rating for CPLX.
4 (Double): The rating for RUSE.
5 (Double): The rating for DOCU.
6 (Double): The rating for TIME.
7 (Double): The rating for STOR.
8 (Double): The rating for PVOL.
9 (Double): The rating for ACAP.
10 (Double): The rating for PCAP.
11 (Double): The rating for AEXP.
12 (Double): The rating for PEXP.
13 (Double): The rating for LTEX.
14 (Double): The rating for PCON.
15 (Double): The rating for TOOL.
16 (Double): The rating for SITE.
17 (Double): The rating for SCED.
67 4.3.8 Scale_Factors The table stores ratings of scale factors for the projects, modules and tasks and there corresponding ratings. Table 4.8: Scale Factors Table Attribute Name Data type Remarks ID String The primary key and can have string value only and uniquely identifies the project, module or task. PREC String Stores rating of precedentedness for projects, modules and tasks. FLEX String Stores rating of development flexibility for projects, modules and tasks. PMAT String Stores rating of process maturity for projects, modules and tasks. RESL String Stores rating of architecture and risk resolution for projects, modules and tasks. TEAM String Stores rating of team cohesion for projects, modules and tasks. PREC_VALUE Double Stores value of rating of precedentedness for projects, modules and tasks. FLEX_VALUE Double Stores value of rating of development flexibility for projects, modules and tasks. PMAT_VALUE Double Stores value of rating of process maturity for projects, modules and tasks. RESL_VALUE Double Stores value of rating of architecture and risk resolution for projects, modules and tasks. TEAM_VALUE Double Stores value of rating of team cohesion for projects, modules and tasks. CONSTANT_IN_B Double Stores the value of constant used in the calculation of B in equation 1. Indian Institute of Information Technology, Allahabad 55
68 4.3.9 Client The table stores information about the client. Table 4.9: Client Table Attribute Name Data type Remarks Name String Stores name of the client and can have string values only. User_Id String Stores Client ID and is the primary key of the table. URL String Stores the available link for contacting user and can have null value. Mail_Id String Stores ID of the client s representative Linked_Content The table stores the content linked to project, module or task. Table 4.10: Linked Content Table Attribute Name Data type Remarks Id String The key for project, module or task id. Content_Id String the id for the content. Content BLOB Stores files pertinent to the id. Added_On Date The date on which the file was added. Added_By String Id for the person who will add the content Ɨ Combination of Id and Content_Id is the primary key for the table Activity The table stores the data about the activity in the task. Activity is the subdivision of the task in the tool. Table 4.11: Activity Table Attribute Name Data type Remarks Id String The key for project, module or task id. Activity_Id String The id for the activity and is a string value. Indian Institute of Information Technology, Allahabad 56
69 Time Double Stores number of hours given to a particular activity. SLOC Double Stores the SLOC coded on particular day on particular project. Date Date stores the current date only. Ɨ Combination of Id and Activity_Id is the primary key for the table Productivity_of_EM The table stores the ratio of the highest rating to the lowest rating of effort multipliers for each project. Table 4.12: Productivity of EM Table Attribute Name Data type Remarks Proj_Id String The primary key and can have string value only and uniquely identifies the project. RELY Double Stores the ratio of very high to very low rating. DATA Double Stores very high to low ratings ratio. CPLX Double Stores extra high to very low ratings ratio. RUSE Double Stores extra high to low ratings ratio. DOCU Double Stores very high to very low ratings ratio. TIME Double Stores extra high to nominal ratings ratio. STOR Double Stores extra high to nominal ratings ratio. PVOL Double Stores very high to low ratings ratio. ACAP Double Stores very high to very low ratings ratio. PCAP Double Stores very high to very low ratings ratio. Indian Institute of Information Technology, Allahabad 57
70 4.4 Description of Standard Values Database tables Phase_Ratio The table stores the ratio of effort for different phases. Table 4.13: Phase Ratio Table Attribute Name Data type Remarks Phase_Name String Name of the Phase and primary key for the table. Ratio Double Stores the ratio of effort for the phase Language Language table stores the equivalent SLOCs for different languages. Table 4.14: Language Table Attribute Name Data type Remarks Language String Name of the Language and primary key for the table. SLOC Integer Equivalent source lines of code for the language Master_Effort_Multipliers The table stores the standard values for various ratings of the effort multipliers in COCOMO II model. Table 4.15: Master Effort Multiplier Table Attribute Name Data type Remarks EM_NAME String The name of effort multiplier and can have only string values and is the primary key for the table. Very_Low Double The value for very low rating of effort multiplier. Low Double The value for low rating of effort multiplier. Indian Institute of Information Technology, Allahabad 58
71 Nominal Double The value for nominal rating of effort multiplier. High Double The value for high rating of effort multiplier. Very_High Double The value for very high rating of effort multiplier. Extra_ High Double The value for extra high rating of effort multiplier Master _FP_Columns The table stores the standard values of columns in the Function point analysis table. Table 4.16: Master_FP_Columns Table Attribute Name Data type Remarks Function_Name String The name of function in the task and can have only string values and is the primary key for the table. C1 Integer The value for ranges in column one for FPA calculation [19]. C2 Integer The value for ranges in column two for FPA calculation [19]. C3 Integer The value for ranges in column three for FPA calculation [19] Master _FP_Rows The table stores the standard values of rows in the Function point analysis table. Table 4.17: Master_FP_Rows Table Attribute Name Data type Remarks Function_Name String The name of function in the task and can have only string values and is the primary key for the table. R1 Integer The value for ranges in row one for FPA calculation [19]. R2 Integer The value for ranges in row two for FPA calculation [19]. R3 Integer The value for ranges in row three for FPA calculation [19]. Indian Institute of Information Technology, Allahabad 59
72 4.4.6 Master _FP_Map The table is used to store the rows and columns values in the Master_FP_Rows and Master_FP_Columns. Table 4.18: Master_FP_Map Table Attribute Name Data type Remarks Row String Represents the row numbers in the table for calculation of data functions and transaction functions [19]. C1 Integer The value in the first column and respective row. C2 Integer The value in the second column and respective row. C3 Integer The value in the third column and respective row Master_FP_Values The table is used to store the values for ratings of data functions and transaction functions in function point analysis. Table 4.19: Master_FP_Values Table Attribute Name Data type Remarks Function_Name String Stores the Function name and is the primary key of the table. Low Double The value for very low rating of data and transaction functions. Nominal Double The value for nominal rating of data and transaction functions High Double The value for high rating of data and transaction functions 4.5 Chapter Summary The chapter has presented the database design and the tables used in the tool along with the details of the attributes in the tables. The next chapter has discussion about other tools studied during the thesis and comparison of the tools on the basis of six attributes has been presented in the last section. Indian Institute of Information Technology, Allahabad 60
5 Study of Other Tools

The chapter surveys some of the tools studied during the thesis work and outlines their drawbacks. The tools studied are CoStar 7.0, developed by SoftStar Systems; Construx Estimate 2.0, developed by Construx Software Builders; COCOMO II, developed by the University of Southern California; and the SLIM-ESTIMATE suite, developed by Quality Software Management. The last section of the chapter compares the tools on six attributes.

5.1 CoStar

CoStar is a software estimation tool based on COCOMO II. It generates estimates for size, effort, time duration and staffing level, and can produce reports for all phases of the development life cycle, for the cost drivers, for the schedule, and so on. CoStar 7.0 runs under Windows 95, Windows 98, Windows NT 4, Windows 2000 and Windows XP.

CoStar is purely an estimation tool and has no management features. It comes with its own calibrator, called Calico, which uses the multiple regression method; alternatively, the USC calibration tool can be used. The report generated by the tool includes estimated information only, such as the estimated size of a component, the estimated time required in each phase and the schedule estimates. CoStar is thus a clear example of the isolation of the estimation process from the management process in currently available tools.
For calibration, CoStar does not store any past-project data; the data has to be fed manually into its calibrator, Calico. The regression method needs a large amount of past-project data to produce accurate estimates, and feeding it manually is tedious. CoStar also provides no facility for project tracking.

Figure 5.1: CoStar from SoftStar Systems

Figure 5.1 shows the wizard for creating estimates in CoStar. As the figure shows, the tool has tabs for entering the project's size details, the estimation model to be used, the scale driver ratings and the cost driver ratings.

5.2 Construx Estimate

Construx Estimate is also a software estimation tool based on COCOMO II. It offers the user 10 project types and subtypes, from which it decides which COCOMO model should be used for estimation.
Some of the project types are business systems, control systems, internet systems, and real-time systems (embedded and avionics). The tool also distinguishes 10 phases of development so that estimates can be calculated accurately per phase, and it provides a feature for adjusting the relative priority of schedule and effort. The outputs (estimates) are displayed both graphically (as shown in Figure 5.2) and in text format.

Figure 5.2: Output of Construx Estimate

Version 2.0, of which a limited edition is available for free, was studied in the thesis. In the limited edition, calibration is done in the same way as in CoStar, i.e. by feeding the values to the calibrator manually. Like CoStar, Construx Estimate is a pure estimation tool without any project management features, and project tracking is also missing. Reports are generated for projects, but they contain only the estimates, not the current status of the projects.
5.3 COCOMO II

This tool, developed at the University of Southern California, comprises estimation and calibration. It lets the user estimate size by three methods: function point analysis, source lines of code, or adapted source lines of code. It also provides a feature for estimating the maintenance phase. The tool's best feature is its flexibility: the user can even change the parameter values used in the estimation equation directly.

Figure 5.3: COCOMO II tool developed by the University of Southern California

Figure 5.3 shows the GUI used for entering the details of a project. It includes the module names, the size of the modules, the labor rate (in $/month), the effort adjustment factor (EAF), the nominal effort required for development, the estimated effort for development, the productivity of the team in the module, and the cost and risk in the modules.
The tool has no management or tracking facilities, but for calibration it can import data from a source file or reuse the data of earlier projects stored in the tool (in which case the actual size and effort still need manual entry). The calibration method used is again multiple regression, with its known drawbacks [22]. The tool does not generate any kind of report.

5.4 SLIM-ESTIMATE

SLIM-ESTIMATE is an estimation tool developed by Quality Software Management (QSM), available together with companion modules for planning, tracking and calibration. The tool is based on the SLIM estimation model and has its own calibration and control module. It provides the user with five solution options: detailed input method, quick estimate, solve for productivity index, solve for size, and create solution from history. If little information about the project is available, the quick estimate is used; otherwise the detailed input method can be used for a detailed estimate. If the user is given the schedule, effort and size, the solve-for-productivity-index option yields the productivity index required to develop the project within the given schedule and with the given effort and size. If the project's size is the only missing piece of information, the solve-for-size option gives the size that can be built with the given effort and productivity index within the given schedule. Report generation is the only feature missing in the tool. Figure 5.4 shows an estimate generated by SLIM-ESTIMATE.
Figure 5.4: Estimates Generated by SLIM-ESTIMATE

5.5 Comparison of the Tools

Table 5.1: Comparison of the tools

Attribute                       CoStar       Construx Estimate   COCOMO II    SLIM-ESTIMATE
Estimation                      Yes          Yes                 Yes          Yes
Estimation Model Used           COCOMO II    COCOMO II           COCOMO II    SLIM
Project Planning and Tracking   No           No                  No           Yes
Report Generation               Yes          Yes                 No           No
Calibration                     Yes          Yes                 Yes          Yes
Calibration Method Used         Regression   Regression          Regression   Regression
The tool developed as the result of this thesis stores the estimates and the actual data of the projects in the Size_Details table (Table 4.4), to be used later by the tracking and calibration modules; this solves the problems of tracking and of manual data feeding for calibration. Report generation, a feature missing in the tools discussed above, is taken care of by the Project table (Table 4.1), the Module table (Table 4.2), the Task table (Table 4.3) and the Client table (Table 4.9). These tables store information about the projects, modules, tasks and clients, and are used to generate the project or client report dynamically when desired.

5.6 Chapter Summary

The chapter has presented concise details of the estimation tools CoStar, Construx Estimate, COCOMO II and SLIM-ESTIMATE. The tools have been compared on six attributes to better understand their missing features. The next chapter discusses the novel features of the tool and briefly depicts their implementation.
6 Unrivalled Features of the Tool

The proposed tool provides solutions to overcome the drawbacks of the current scenario mentioned in Chapter 1. This chapter describes these features and gives an overview of their implementation. The features explained are: calibration of the tool using Bayes' theorem, project status tracking, and report generation.

6.1 Calibration Using Bayesian Theorem

This section depicts the entire functioning of the calibration and its sequence of operations. An operational database is maintained in MySQL by the project management software during the estimation and tracking process; it is the major source of input for calibration. The values used for the parameters of each project in the estimation process are stored in this database. The user of this feature is a sophisticated user with knowledge of FPA and COCOMO II, and only a user with administrator privilege can initiate the calibration. The user supplies the judgments of the experts obtained through the Delphi method.

The calibrator is a program written in Java which retrieves data from the operational database and organizes it into a useful form, i.e. into arrays according to the parameters. The user selects the projects whose data should be used for calibration. The calibrator then generates arrays of 26 variables containing the standard values of the parameters and the size and effort estimates for the selected projects.
The data is then fed to MATLAB, which performs multiple regression and returns an array of size 24 containing the new values for the parameters sent to it. These values are returned to the calibrator, where their probabilities in the selected projects are calculated, assuming a normal probability distribution. The result of the calibrator is used in the Bayesian analysis along with the information obtained from the experts. The operation outputs, for each parameter, the degree of support for the judgment given by the experts. The user can view the data and either choose to run another iteration of the whole process or accept the new output values as the new values of the parameters. Figure 6.1 shows the calibration process.

Figure 6.1: The Calibration Process

1. Input the experts' judgment and activate the calibration process.
2. Retrieve data from the operational database.
3. Organize the data and send it to MATLAB.
4. Perform regression and return the results to the calibrator.
5. Calculate the probabilities of the experts' judgments and of the values returned by MATLAB.
6. Perform the Bayesian analysis and send the values to the user.
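Step 6 can be realized, for each parameter, by precision-weighted averaging of the regression (sample) value and the experts' (prior) value, the standard Bayesian update for normally distributed estimates used in COCOMO II calibration [21][22]. The sketch below is illustrative; the class and method names and the numeric example are assumptions, not the tool's actual code.

```java
// Illustrative Bayesian combination of prior (expert) and sample (regression)
// information for a single parameter: precisions (inverse variances) add, and
// the posterior mean is the precision-weighted mean.
public final class BayesianCombiner {

    /** Posterior mean given prior and sample means/variances. */
    static double posteriorMean(double priorMean, double priorVariance,
                                double sampleMean, double sampleVariance) {
        double priorPrecision = 1.0 / priorVariance;
        double samplePrecision = 1.0 / sampleVariance;
        return (priorPrecision * priorMean + samplePrecision * sampleMean)
                / (priorPrecision + samplePrecision);
    }

    /** Posterior variance: the reciprocal of the summed precisions. */
    static double posteriorVariance(double priorVariance, double sampleVariance) {
        return 1.0 / (1.0 / priorVariance + 1.0 / sampleVariance);
    }

    public static void main(String[] args) {
        // Illustrative numbers for one productivity range: experts say 1.40
        // (fairly confident), regression on a small sample says 1.61 (noisy).
        double mean = posteriorMean(1.40, 0.01, 1.61, 0.09);
        double var  = posteriorVariance(0.01, 0.09);
        System.out.printf("posterior = %.3f (variance %.4f)%n", mean, var);
    }
}
```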
6.2 Tracking

One of the most important activities in the whole management process is to track the status of the project regularly in order to avoid problems; done by hand, it is a tiresome activity. Figure 6.2 shows the overall tracking process.

Figure 6.2: Tracking Process

1. Developers enter into the operational database, through a GUI, the number of hours they worked on each activity daily.
2. Data is retrieved from the operational database for the project selected by the user.
3. A Gantt chart is generated when the project manager wishes to check the project's status.

The tool divides each task into 24 activities, and developers enter the hours they spend on each activity daily. The list of activities is given in Table 6.1 below.
Table 6.1: List of activities in each task (Activity ID: Activity Name)
RSA: Requirement Specification Activities
RORS: Review Of Requirements Specifications
DDA: Detailed Design Activities
DDR: Detailed Design Review
HLDA: High Level Design Activities
HLDR: High Level Design Reviews
DaDA: Database Design Activities
DaAA: Database Administration Activities
RDD: Review Of Database Design
WFR: Waiting For Resources
PSTA: Project Specific Training Activities
RARR: Rework After Requirements Review
RADDR: Rework After Detailed Design Review
RAHLDR: Rework After High Level Design Review
RADaR: Rework After Database Review
RWFSP: Research Work For Specific Project
CWR: Code Walkthrough / Review
RACWT: Rework After Code Walkthrough
UTP: Unit Test Planning (includes documentation, test case planning, test data setup, tools and procedures, review and rework of test plans, logging and closing defects)
IUT: Independent Unit Testing
RAIUT: Rework After Independent Unit Testing
RAAT: Rework After Acceptance Testing
RAIT: Rework After Integration Testing
RAST: Rework After System Testing

A GUI is provided for developers to enter into the database the daily work done on each activity of the tasks assigned to them. The chart generator is a Java class which first retrieves the data for each task of the project from the database and organizes it into an array. JFreeChart is the API used in the tool for generating charts: the arrays are passed to JFreeChart, which generates the Gantt chart and returns it to the project manager.
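A minimal sketch of this chart-generation step is shown below, assuming the JFreeChart Gantt API (TaskSeries, TaskSeriesCollection, ChartFactory.createGanttChart). In the tool the task names and dates come from the operational database and the chart is rendered to the browser; the hard-coded tasks, dates and PNG output here are placeholders only.

```java
// Illustrative Gantt chart generation with JFreeChart.
import java.io.File;
import java.util.GregorianCalendar;

import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.data.gantt.Task;
import org.jfree.data.gantt.TaskSeries;
import org.jfree.data.gantt.TaskSeriesCollection;

public class GanttChartSketch {
    public static void main(String[] args) throws Exception {
        TaskSeries planned = new TaskSeries("Planned");
        planned.add(new Task("Requirement Specification",
                new GregorianCalendar(2007, 0, 1).getTime(),
                new GregorianCalendar(2007, 0, 15).getTime()));
        planned.add(new Task("High Level Design",
                new GregorianCalendar(2007, 0, 16).getTime(),
                new GregorianCalendar(2007, 1, 5).getTime()));

        TaskSeriesCollection dataset = new TaskSeriesCollection();
        dataset.add(planned);

        JFreeChart chart = ChartFactory.createGanttChart(
                "Project Status", "Task", "Date", dataset, true, false, false);
        ChartUtilities.saveChartAsPNG(new File("gantt.png"), chart, 800, 400);
    }
}
```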
6.3 Report Generation

Reports are required in every phase of development and are useful for both the development team and the project management team. The tool facilitates the generation of two kinds of reports: by project or by client. Figure 6.3 shows the overall report generation process. The user in this process is either the project manager or the administrator; the user initiates the operation by selecting the report type (client report or project report) and the respective name.

The report generator is a class responsible for retrieving data according to the options selected by the user. For a project report it retrieves data about the project (project name, ID, description, URL, project leader's name, date of project creation, client name and status of the project), about the modules in the project (name, description, module leader's name, date of module creation, status of the module, estimated size, estimated time) and about the tasks in the project (task name, description, developer name, date of task creation, status of the task, estimated size, estimated time). For a client report it retrieves data about the client (client name, client id, mail id, URL, address, phone number) and about the projects under the client (project name, description, date of project creation, status of the project).

Figure 6.3: Report Generation Process

1. Select the options for report generation, i.e. either a client report or a project report, and the respective name.
2. Retrieve data from the operational database.
3. Organize the data into an array to be sent to the API and generate the report.
4. Send the report back to the user.

The data retrieved through the report generator is sent to the iText API, which produces the report in PDF format. The report is then returned to the browser, where the user can view and save it.
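A minimal sketch of this last step is shown below, assuming the classic com.lowagie iText API; the report fields and the file output are placeholders (the tool streams the PDF back to the browser instead of writing a local file).

```java
// Illustrative PDF report generation with iText.
import java.io.FileOutputStream;

import com.lowagie.text.Document;
import com.lowagie.text.Paragraph;
import com.lowagie.text.pdf.PdfWriter;

public class ProjectReportSketch {
    public static void main(String[] args) throws Exception {
        Document report = new Document();
        PdfWriter.getInstance(report, new FileOutputStream("project-report.pdf"));
        report.open();
        report.add(new Paragraph("Project Report"));
        report.add(new Paragraph("Project: PMT-001   Status: In progress"));
        report.add(new Paragraph("Estimated size: 42 KSLOC   Estimated effort: 120 PM"));
        report.close();
    }
}
```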
6.4 Chapter Summary

The novel features implemented in the tool have been presented in this chapter, along with concise details of their implementation.
7 A Road Ahead

The tool developed in the thesis is the result of a one-year endeavour, and several enhancements are possible. Some of the suggested enhancements are given below:

1. The tool currently cannot estimate the size, effort and schedule of prototypes and real-time applications. This could be addressed by implementing the Application Composition model of COCOMO II and object point analysis.
2. Dependencies among tasks are currently given manually by entering the start dates of the modules. Implementing PERT and critical path analysis would help in deciding the schedule for the projects and would be a major improvement to the tool.
3. The tool stores information about each activity in a task. Combined with a data mining algorithm, this information could be used to infer the problematic activities in the organization and prompt the administrator to modify them to overcome the problems observed.
4. The stored project estimates and the developers' input could be used to determine the productivity of the developers.
5. The tool gives estimates for projects. A feature could be added for finding the best option under given constraints (time, effort and productivity). For example, if the schedule is fixed, the developer productivity needed to complete the project within that schedule is a desired and common output for any organization.
8 Conclusion

Software estimation helps project management to plan the project, and the tools available for project estimation are a great help in this process. However, estimating a project and then planning it without following the status of the project at each instant is a problem worth considering: tracking is an important process that needs to be integrated with the estimation and planning processes.

Everything changes with time: the team, the process and the life cycle. A static estimation model therefore cannot be used, and using one has severe consequences; the core of the software crisis starts with wrong estimation. Thus calibrating the model used for estimation with the organization's past-project experience is an activity of utmost importance, and calibration of the estimation model against the organization, the team and the project should be done regularly.

Despite being one of the more recently introduced branches of engineering, software engineering has grown enough to produce successful software development models, estimation techniques, designs, architectures and testing methods. Even so, the metrics needed to measure software precisely are still not complete; it should always be remembered that computed metric values should be used as guidelines, not rules. Under such circumstances, experts' judgment cannot be ignored, but it requires time, work and money. Hence the available CASE tools should incorporate features to combine the experts' judgments with the available metrics.

The thesis has presented the specification and implementation details of a web-based tool integrating the estimation, planning and tracking, and calibration processes. The tool was developed with the intention of being used by the project management team, which bears the responsibility of completing the project within time and budget and with the specified quality.
APPENDICES
Appendix A
Delphi Method

In the early days of the software industry, when too little was known about software to use model-based techniques for estimation, expert opinion was the best option. Many refer to it as a synonym for conjecture, but it is much more than guessing. A single expert's judgment may be biased, so it makes sense to take the opinions of multiple experts and then combine the estimates. Barry Boehm suggested that combining experts' judgments by simple averaging is not a good idea and could lead to poor results, while an open group discussion may lead to one or two experts over-influencing the estimates. The Delphi method, first introduced by the Rand Corporation, is a modified technique for combining the judgments of different experts. Figure A.1 shows a schematic of the Delphi technique.

The process starts with identifying the estimation team. The system developer explains the intricacies of the system to the estimators, and together they define the components of the system and the assumptions made about it; the result is known as the System Breakdown Structure (SBS). The team and the system developer also jointly define the acceptable level of difference among the estimates. The estimators then perform their estimates independently; this independence is the key to the success of the Delphi technique, and the estimators are not allowed to discuss their estimates among themselves while estimating. The estimates are then sent to the coordinator of the process, who returns a summary of the round to all the estimators. The components whose estimates lie outside the acceptable level of difference are then discussed: the components themselves are discussed, not their estimates.
The whole process then repeats until all the components are within the acceptable level of difference.

Figure A.1: Schematic Representation of the Delphi Estimation Technique (identify the estimation team; discuss the system; identify components, define the SBS and discuss assumptions; agree on the acceptable level of difference; perform individual estimates; consolidate and prepare the estimation summary; if the estimates of all components are within the acceptable level of difference, stop, otherwise discuss the components with large variances and repeat)
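The consolidation step of Figure A.1 can be pictured with a small sketch: the coordinator summarizes one round of independent estimates and checks whether their spread is within the agreed acceptable level of difference. The relative-spread measure and the 25% threshold below are illustrative choices, not part of the method's definition.

```java
// Illustrative check of one Delphi round against the agreed tolerance.
import java.util.Arrays;

public class DelphiRound {

    /** True if (max - min) / mean is within the agreed tolerance. */
    static boolean withinAcceptableDifference(double[] estimates, double tolerance) {
        double min = Arrays.stream(estimates).min().getAsDouble();
        double max = Arrays.stream(estimates).max().getAsDouble();
        double mean = Arrays.stream(estimates).average().getAsDouble();
        return (max - min) / mean <= tolerance;
    }

    public static void main(String[] args) {
        double[] round1 = {12.0, 15.0, 20.0, 14.0};   // effort estimates in person-months
        System.out.println("Converged: " + withinAcceptableDifference(round1, 0.25));
    }
}
```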
Appendix B
Regression Analysis

Scientists and engineers often want to represent data obtained from observations as a model based on mathematical equations. The model is helpful in finding important characteristics of the data, such as the rate of change anywhere on the curve (first derivative), the local minimum and maximum points of the function (zeros of the first derivative), and the area under the curve (integral). Curve fitting is the process of finding the parameter values that most closely match the observed data. To perform the fitting, we define some function of the parameters that measures the closeness between the data and the model, and minimize it with respect to the parameters; the parameter values that minimize this function are the best-fitting parameters. Regression analysis is the method used for finding these values.

In a linear regression model, the dependent variable is assumed to be a linear function of the independent variables. A simple form of linear regression is:

y_t = β_0 + β_1·x_t1 + β_2·x_t2 + … + β_n·x_tn

where y is the dependent variable and x_1, x_2, …, x_n are the independent variables. The goal of regression analysis is to find the unknown values of β_0, β_1, β_2, …, β_n. Ordinary Least Squares (OLS) is the method used for estimating the values of β; the principle behind the method is to minimize the error between the observed output and the output obtained through the model:
e_i = y_i − ŷ_i

where e_i is the error, y_i is the output obtained from observation and ŷ_i is the output obtained through the model.

The regression method is effective [22] when:

1. The number of data points is large relative to the number of model parameters (i.e. there are many degrees of freedom). Unfortunately, collecting data has been, and continues to be, one of the biggest challenges in the software estimation field, caused primarily by immature processes and management reluctance to release cost-related data.
2. There are no outliers. Extreme cases occur frequently in software engineering data because of the lack of precision in the data collection process.
3. The predictor variables (cost drivers and scale factors) are not highly correlated. Unfortunately, because cost data is collected historically rather than experimentally, correlations among the predictor variables are unavoidable.
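For the single-predictor case the OLS solution can be written in closed form, which the short sketch below illustrates; the multi-predictor case used for COCOMO II calibration solves the analogous normal equations, and the tool delegates that computation to MATLAB. The sample data points are purely illustrative.

```java
// Illustrative ordinary least squares fit for y = b0 + b1*x.
public class SimpleLeastSquares {

    /** Returns {b0, b1} fitted to the observations (x[i], y[i]). */
    static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++) {
            sumX += x[i];
            sumY += y[i];
            sumXY += x[i] * y[i];
            sumXX += x[i] * x[i];
        }
        double b1 = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        double b0 = (sumY - b1 * sumX) / n;
        return new double[] {b0, b1};
    }

    public static void main(String[] args) {
        // ln(Effort) against ln(Size) for a handful of illustrative projects.
        double[] lnSize   = {2.3, 3.0, 3.9, 4.6};
        double[] lnEffort = {3.1, 3.9, 4.8, 5.6};
        double[] b = fit(lnSize, lnEffort);
        System.out.printf("b0 = %.3f, b1 = %.3f%n", b[0], b[1]);
    }
}
```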
Appendix C
Project Management Software - GUI

This section contains the GUIs of the Project Management Software.

Figure C.1: Login Screen
Figure C.2: Creating a Project
Figure C.3: Creating a Module in the Project
Figure C.4: Creating a Task in the Project
Figure C.5: Adding a Document in the Project
Figure C.6: Result of Estimation: The Estimates
Figure C.7: Creating a New User
Figure C.8: Creating a New Client
Figure C.9: Searching a Project or Client
Figure C.10: Setting Preferences for a User
Figure C.11: Setting the Standard Values for Scale Factors
Figure C.12: Setting the Standard Values for Effort Multipliers
Figure C.13: Setting the Value of Equivalent SLOC per Function Point for Languages
Figure C.14: Input Form for Entering the Daily Work Done by Each Developer
Figure C.15: Changing the Phase Ratio for Each Phase
Figure C.16: Tracking Using Gantt Chart
Figure C.17: Time Taken by Each Activity
Figure C.18: Selecting Projects for Calibration
Figure C.19: Input of Experts' Judgment for Calibration
Figure C.20: Input of Experts' Judgment for Calibration
Figure C.21: List of Clients
Figure C.22: Client Details and the Projects under the Client
References

[1] McGraw-Hill Dictionary of Scientific and Technical Terms, 6th Edition, The McGraw-Hill Companies, Inc.
[2] Construx Estimate tool.
[3] CoStar tool.
[4] SLIM-ESTIMATE tool.
[5] COCOMO II Model Definition Manual, Version 1.4, University of Southern California.
[6] Swapna Kishore and Rajesh Naik, Software Requirements and Estimation, Tata McGraw-Hill, New Delhi.
[7] Bradford Clark, Sunita Devnani-Chulani and Barry Boehm, "Calibrating the COCOMO II Post-Architecture Model", IEEE, 1998.
[8] Java, Sun Microsystems, java.sun.com/javase/downloads/index.jsp
[9] JavaServer Faces, java.sun.com/javaee/javaserverfaces/download.html
[10] MySQL, dev.mysql.com/downloads/mysql/4.1.html
[11] MySQL JDBC Connector, dev.mysql.com/downloads/connector/j/3.1.html
[12] JFreeChart, Others/JFreeChart.shtml
[13] ChartCreator, sourceforge.net/project/showfiles.php?group_id=
[14] MyFaces, apache.tradebit.com/pub/myfaces/binaries/
[15] JDOM.
[16] Struts, struts.apache.org/download.cgi
[17] JMatLink, ftp2.uk.freebsd.org/sites/download.sourceforge.net/pub/sourceforge/j/jm/jmatlink
[18] Kathleen Peters, Software Project Estimation (White Paper), Software Productivity Centre Inc. (SPC), Vancouver, British Columbia, Canada; Scott Berkun, The Art of Project Management, O'Reilly Media, Cambridge, MA.
[19] David Longstreet, Function Points Analysis Training Course.
[20] Sean Furey, "Why We Should Use Function Points", IEEE, 1997.
[21] Sunita Chulani, Barry Boehm and Bert Steece, "Bayesian Analysis of Empirical Software Engineering Cost Models", IEEE Transactions on Software Engineering, Vol. 25, No. 4, July/August 1999.
[22] Sunita Devnani-Chulani, Bayesian Analysis of Software Cost and Quality Models, dissertation, University of Southern California, May.
[23] J. O. Berger, Statistical Decision Theory and Bayesian Statistics, Second Edition, Springer-Verlag, New York.
[24] Bill Dudney, Jonathan Lehr, Bill Willis and LeRoy Mattingly, Mastering JavaServer Faces, Wiley Publications.
[25] Timothy C. Lethbridge and Robert Laganière, Object-Oriented Software Engineering, Tata McGraw-Hill.
[26] Data Access Object Pattern.