Sizing Application Maintenance and Support activities




October 2014

Anjali Mogre anjali.mogre@atos.net
Penelope Estrada Nava penelope.estrada@atos.net
Atos India www.atos.net Phone: +91 9820202911

Copyright owned by Atos (2014). This content may not be published, distributed, modified or used in any other way, whether in writing or orally, except with the approval of Atos.

Acknowledgment: We thank Penelope Estrada Nava, Goedegebuure, Simon, V Kalpalatha, Swaminathan Nagarajan, Sudhakara Rao and many others in Atos for their contribution to the design and deployment of the sizing model. The views expressed herein are those of the authors and do not necessarily reflect the views of Atos.

A key to success in any software project is correct estimation. A correct estimate of effort at project initiation helps in planning, resourcing and budgeting for the project; proper tracking and monitoring of the project against that plan then makes the project successful. Two methods are typically used for software estimation: expert judgment (the Delphi method), or a scientific sizing method that measures the size of the software in terms of IFPUG function points or COSMIC function points and multiplies it by organization-specific productivity figures derived from historical data. The effort estimate is size * productivity (hours / unit size).

Issues in Estimation of Software Maintenance and Support Activities

Software maintenance is defined in the IEEE Standard for Software Maintenance, IEEE 1219, as the modification of a software product after delivery to correct faults, to improve performance or other attributes, or to adapt the product to a modified environment. The objective is to modify the existing software product while preserving its integrity. IEEE defines three types of maintenance: adaptive, corrective and perfective. Maintenance is needed to ensure that the software continues to satisfy user requirements. Maintenance must be performed in order to:

Correct faults
Improve the design
Implement enhancements / interface with other systems
Adapt programs to different hardware, software and system features
Migrate legacy software
Retire software

Software in production also needs end-user support, which includes maintaining data, resolving user queries and resolving service requests. Both maintenance requests and support requests are typically referred to as tickets and flow through a ticketing tool from the customer to the maintenance team.

IFPUG function points by design cannot be applied to software maintenance and support activities; IFPUG recommends organization-specific estimation guidelines for software maintenance work (IFPUG Counting Practices Manual, Part 3). Currently there is no global standard for estimating the effort associated with software maintenance and support activities, and estimation for maintenance projects is typically done using expert judgment. This paper presents an approach to defining a sizing model for maintenance and support activities.
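As a minimal sketch of the size-times-productivity calculation described above (the function name and figures are illustrative, not taken from the paper):

    def estimate_effort(size_units: float, hours_per_unit: float) -> float:
        """Basic size-based estimate: effort = size * productivity (hours / unit size)."""
        return size_units * hours_per_unit

    # Hypothetical example: a 200 function point package at 8 hours/FP
    print(estimate_effort(200, 8))  # 1600.0 hours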

Challenges

In today's scenario it is essential to maintain and support the existing software in production. Software maintenance and support activities face three different challenges:

1. Measurement of the workload and estimation of the effort: how to measure the size of the contract, and how to forecast and estimate the effort required for the support activities
2. Management issues: how to ensure that the cost of the software maintenance activities is kept within budget with the available resources
3. Technical issues: where to make the change, the effort for regression testing, and how to improve maintainability

The technical issues are manageable and are typically handled by the technical team; training helps to groom new team members to address them effectively. If the estimation is incorrect, however, the management issues can get blown out of proportion, hampering project profitability to the point where the project incurs losses. Moreover, an appropriate estimate of the team size helps in defining a proper service level agreement; the maintenance team will consequently be able to meet the agreed service levels and customers will get the desired service. Estimation is key to keeping the project under control.

The two most popular approaches to estimating resources for software maintenance are the use of parametric models and the use of experience [ISO14764-99:s7.4.1]; the best approach is to combine empirical data and experience. There is, however, no agreed method to size application maintenance or support activities. Since size is not available, it is not possible to measure productivity and use productivity values for estimation.

International Standards: Productivity Measures for Application Maintenance

Global communities have developed a few definitions which can be adopted for productivity measurement. IEEE Standard 1045 describes productivity computed as effort versus lines of code or function points. It mentions that productivity can be used for software maintenance projects, but it does not characterize how it is to be measured. ISO/IEC 15939, which is used as a basis for the CMMI (Capability Maturity Model Integration) model, defines base measures and derived measures; productivity is clearly a derived measure, but the standard does not specifically describe its computation. The CMMI model states that its use will improve quality and productivity, but it does not provide guidance on the methodology to be used to compute productivity.

IFPUG (International Function Point Users Group) does address the method used to determine the size of software in terms of function points. For productivity measurement, the input is measured as effort spent in hours, while the output is measured as the size of the work packages delivered in function points (FP). IFPUG provides a standard methodology to measure the size of software in FP; however, by the inherent definition of the function point, it is not suitable for application maintenance, where the work packages are very small. Function points are suitable for large development packages.

Another prevalent measure of software size is lines of code: simply count the lines of code required to deliver the required functionality. There is no dispute in lines-of-code measurement, and it is easier to measure than function points. However, it is not a good measure: a badly written program with more lines appears more productive than a well-written program that uses an object-oriented approach and delivers the same functionality in fewer lines. Application maintenance work typically changes, adds or deletes only a few lines of code, so for application maintenance projects hours per line of code does not provide a correct measure of productivity.

The International Software Benchmarking Standards Group Limited (ISBSG) provides productivity benchmarks as a maintenance rate in hours/FP or hours/KLOC, where the application maintenance rate is calculated as maintenance effort (in hours) normalized to a one-year period, divided by application size (in FP). However, the size of the entire software estate to be maintained is typically not known, so the ISBSG benchmark has limited use. Moreover, the rate at which change requests flow depends on the maintainability of the code and the business situation, so it may not be possible to estimate the effort solely from the size (function points) of the application being maintained.

Effort per ticket could be another measure of productivity, but it is skewed because it does not consider the type and complexity of the ticket: a team resolving complex tickets will have a higher effort per ticket than a team resolving simple tickets, giving an incorrect productivity picture. Hence it is not a correct measure for comparing productivity or deciding on productivity improvement actions.

Proposed Solution

There is a clear need to measure the size of the work packages delivered as small application change requests and support request tickets. Once the size of the work is determined, it is possible to derive the productivity of the project(s), which can later be used to understand productivity trends, diagnose pain areas and define productivity improvement actions. The results of the actions taken are then reflected as improved productivity.

ITIL (the Information Technology Infrastructure Library) is the most widely adopted approach to IT service management in the world. It provides a practical, no-nonsense framework for identifying, planning, delivering and supporting IT services to the business. The ITIL framework classifies a service request or ticket into categories such as service request, query, incident, standard change, small change and problem. Our proposed solution measures the size of maintenance and support tickets in terms of an Atos-specific size measure, the Work Point (WP), based on the ITIL model. A Work Point is defined as a function of the ticket type and the ticket complexity. The ticket types are aligned with the ITIL ticket definitions, and we have defined four levels of complexity: Simple, Medium, Complex and Very Complex.

The following table shows how WPs are defined.

Work Point (WP) Definition

Complexity   | Call / Complaint | Production / Monitoring | Query / Information | Service Request | Incident | Problem | Standard Change | Small Change | Consultancy
Simple       | 0.5              | 0.5                     | 0.5                 | 0.1             | 0.2      | 1       | 0.3             | 1            | 0.5
Medium       | 1                | 0.75                    | 0.1                 | 0.2             | 0.4      | 3       | 0.5             | 2            | 1
Complex      | NA               | 0.1                     | 0.2                 | 0.3             | 0.6      | 5       | 0.8             | 2.5          | 2
Very Complex | NA               | 0.3                     | 0.3                 | 0.4             | 0.8      | 9       | NA              | NA           | NA

The WPs defined above are absolute numbers and represent the size, or weight, of a ticket in terms of a standard Work Point. For example, a Simple Standard Change is 0.1 WP while a Simple Small Change is 1 WP. The initial WP definitions assume that a team of average skill level will spend the same amount of effort per ticket type and complexity level, independent of the technology used or the type of application being maintained. At this point, since we are counting hours and not yet a sizing metric, we need a conversion from effort to WP; we therefore assume a base productivity of, say, 10 hours/WP. Subject matter expert knowledge along with historical data forms the basis of the WP definitions. For example, historical data shows that a Simple Standard Change takes one hour of effort, hence its WP value is 0.1, while a Simple Small Change takes 10 hours of effort, hence its WP value is 1.

The definition of complexity is a little tricky. The word complexity derives from the Latin complexus, meaning 'twisted together', while the Oxford dictionary defines complexity as 'made of several closely connected parts'. The complexity of work may depend on an individual's perception of it; to measure the size of work objectively, complexity must be defined objectively. In the WP model, complexity is defined as a function of priority, severity, business impact, type of change, application age and client maturity. The support level is also used to define ticket complexity: a service desk will normally not solve tickets with high complexity levels.

Project managers, with the help of subject matter experts, map the project-specific ticket types to the appropriate ITIL ticket types and the complexity guidelines defined in the model. The agreed ticket type and complexity mapping is used for the entire execution of the contract. If no historical data is available, actual ticket resolution effort is used to initially align the project's ticket type definitions to the definitions given in the model. The following table shows how a project-specific ticket nomenclature is aligned to the standard WP model. The mapping is expected to be discussed, reviewed and approved by senior managers and quality managers to ensure that the size of the work is computed correctly in terms of standard Work Points (WP).
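To make the lookup concrete, the WP definition table can be encoded as a simple mapping. The following is an illustrative Python sketch (not part of the Atos tooling), using the sample WP figures from the table above, with NA cells omitted:

    # Sample WP figures from the Work Point definition table above.
    # NA combinations are omitted from the mapping.
    WP_TABLE = {
        "Call / Complaint":        {"Simple": 0.5, "Medium": 1.0},
        "Production / Monitoring": {"Simple": 0.5, "Medium": 0.75, "Complex": 0.1, "Very Complex": 0.3},
        "Query / Information":     {"Simple": 0.5, "Medium": 0.1,  "Complex": 0.2, "Very Complex": 0.3},
        "Service Request":         {"Simple": 0.1, "Medium": 0.2,  "Complex": 0.3, "Very Complex": 0.4},
        "Incident":                {"Simple": 0.2, "Medium": 0.4,  "Complex": 0.6, "Very Complex": 0.8},
        "Problem":                 {"Simple": 1.0, "Medium": 3.0,  "Complex": 5.0, "Very Complex": 9.0},
        "Standard Change":         {"Simple": 0.3, "Medium": 0.5,  "Complex": 0.8},
        "Small Change":            {"Simple": 1.0, "Medium": 2.0,  "Complex": 2.5},
        "Consultancy":             {"Simple": 0.5, "Medium": 1.0,  "Complex": 2.0},
    }

    def work_points(ticket_type: str, complexity: str) -> float:
        """Size of one ticket in WP; raises KeyError for NA combinations."""
        return WP_TABLE[ticket_type][complexity]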

Example of Ticket Type Mapping

Mapping of project-specific ticket types / nomenclature to standard ticket type and complexity (refer to the ticket type definitions and complexity guideline):

No. | Project-Specific Ticket Type        | WP Model Ticket Type | WP Model Complexity | Remarks
1   | Data extracts - Simple              | Small Change         | Simple              | Write SQL query to extract data and send report
2   | Fault - Priority 1                  | Incident             | Simple              | Priority as defined by the customer
3   | Fault - Priority 2                  | Incident             | Medium              |
4   | Incident - Severity 1               | Incident             | Complex             | Application has stopped working
5   | Incident - Severity 3               | Incident             | Medium              | Data error - data needs to be updated
6   | Change request                      | Small Change         | Medium              | A change request to correct the code
7   | Problem Change Request - Priority 1 | Incident             | Complex             |
8   | Password change                     | Service Request      | Simple              | Reset the password
9   | Query - How to                      | Query / Information  | Simple              | Information readily available in FAQ
10  | Query - How to                      | Query / Information  | Medium              | Investigation is required to provide information

Once the exercise of mapping the WP model ticket types to the project-specific ticket types is completed, each ticket is classified with a standard ticket type and standard complexity, and the total size of the tickets resolved during a particular period can be captured. As each ticket is categorized, its size is picked up from the WP definition table according to the predefined rules, and the actual effort spent on ticket resolution is captured. Since this data is difficult to capture manually, it should be extracted as a report from the ticketing tool.

Productivity Catalogue for AM Projects - Ticket Details (example; Contract Name, Contract ID A####.##, Month/Year Aug 2012; Productivity, Activity Time/WP: 8.20; Engineering Effort/WP: 6.76)

No | Ticket - Work Package Name | Ticket Type         | UOW Complexity | No. of Tickets | WP    | Actual Hours | Support Level
1  | Ticket 1                   | Incident            | Simple         | 1              | 0.100 | 0.50         | L1
2  | Ticket 2                   | Query / Information | Medium         | 2              | 0.100 | 1.00         | L1
3  | Ticket 3                   | Service Request     | Complex        | 1              | 0.200 | 2.00         | L2
4  | Ticket 4                   | Standard Change     | Complex        | 1              | 0.800 | 8.00         | L2
5  | Ticket 5                   | Small Change        | Simple         | 1              | 1.000 | 0.50         | L3
6  | Ticket 6                   | Problem             | Medium         | 3              | 6.000 | 5.00         | L3
7  | Ticket 7                   | Incident            | Complex        | 1              | 0.800 | 9.00         | L2
8  | Ticket 8                   | Query / Information | Very Complex   | 2              | 0.400 | 3.00         | L1
9  | Ticket 9                   | Service Request     | Simple         | 1              | 0.050 | 1.00         | L1
10 | Ticket 10                  | Standard Change     | Medium         | 2              | 0.800 | 10.00        | L2
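The classification step can be sketched as a second lookup in front of the WP table: the agreed mapping turns a project-specific nomenclature into a standard (ticket type, complexity) pair, which then yields a size. The mapping entries below are taken from the example table, and work_points refers to the lookup sketched earlier; the contract itself is hypothetical:

    # Project-specific nomenclature -> (WP model ticket type, complexity),
    # per the agreed mapping table for this (hypothetical) contract.
    TICKET_MAPPING = {
        "Data extracts - Simple":  ("Small Change", "Simple"),
        "Fault - Priority 1":      ("Incident", "Simple"),
        "Incident - Severity 1":   ("Incident", "Complex"),
        "Password change":         ("Service Request", "Simple"),
    }

    def size_of(project_ticket_type: str) -> float:
        """WP size of a single ticket, via the agreed mapping."""
        ticket_type, complexity = TICKET_MAPPING[project_ticket_type]
        return work_points(ticket_type, complexity)

    print(size_of("Password change"))  # 0.1 WP per the sample WP table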

The sum of WP gives the total size of the work delivered during the reporting period, and the total effort spent resolving the tickets divided by the total WP gives the engineering productivity of the team, in hours per WP. The project's total productivity can be computed by adding project management and quality management effort to the effort figure. The remaining service support and service delivery functions from the ITIL definitions (configuration management, release management, capacity management) can be added to the WP model, but this requires a highly mature process to capture the activities and effort related to these functions.

Since all projects use the SAME WP definitions, different projects can be added together or compared, and productivity can be derived at the level of a group of projects or a business unit. It is possible to obtain productivity trends and arrive at specific productivity actions; a deep dive into a project's data will reveal the real pain areas the project is facing. Productivity can also be computed per ticket type, making it possible to identify and take focused productivity improvement actions.

Productivity Baselines

Productivity baselines can be derived from data collected across multiple projects. Productivity data (the tickets resolved during the month) is collected from all application maintenance projects every month. All data is collected and any outliers are removed; projects with a very low volume of data, or with very high or very low productivity figures with an identifiable reason for the variation, are treated as outliers. The total number of projects, tickets and effort can then be consolidated quarterly or half-yearly.

                                              | 2014-Q1             | 2013-H2      | 2013-H1
No. of projects submitting productivity data | 126                 | 110          | 105
Period                                        | Jan 2013 - Mar 2014 | Jan-Dec 2013 | Jan-Jun 2013
No. of data points (after removing outliers) | 1,333               | 810          | 540
No. of technologies                           | 37                  | 34           | 34
Skill level                                   | 2.53                | 2.49         | 2.51
Total no. of tickets                          | 444,061             | 250,031      | 158,527
WP                                            | 170,175             | 98,149       | 64,609
Total effort (hours)                          | 1,463,653           | 859,880      | 561,102
Engineering effort (hours)                    | 1,199,861           | 710,693      | 470,393
% project management effort                   | 18%                 | 17%          | 16%
Total productivity                            | 8.60                | 8.76         | 8.68
Engineering productivity                      | 7.05                | 7.24         | 7.28
AET (activity time / ticket)                  | 3.30                | 3.44         | 3.54
AET (engineering time / ticket)               | 2.70                | 2.84         | 2.97

The table above shows that the productivity of the contracts in 2013-H1 was 8.68 hours per WP. This means the teams performed better than the base productivity of 10 hours/WP, spending 1.32 hours less per WP on the same workload.
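As a consistency check, the 2013-H1 figures in the table can be reproduced directly from the consolidated totals:

    # 2013-H1 column of the baseline table above.
    total_effort = 561_102        # hours
    engineering_effort = 470_393  # hours
    total_wp = 64_609
    tickets = 158_527

    print(round(total_effort / total_wp, 2))        # 8.68  total productivity (hours/WP)
    print(round(engineering_effort / total_wp, 2))  # 7.28  engineering productivity (hours/WP)
    print(round(total_effort / tickets, 2))         # 3.54  AET, activity time per ticket
    print(round(engineering_effort / tickets, 2))   # 2.97  AET, engineering time per ticket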

Regarding skill level, in 2013-H1 it is 2.51, from which we derive an average skill level of 2.9; the first productivity baseline of 8.68 hours/WP can therefore be associated with a skill level of 2.9. Further measurements will show how productivity behaves with respect to the ticket type distribution, the skill level of the teams, technologies added or removed, and so on. Once the details of the overall ticket distribution are available, it is also possible to derive productivity baselines per technology, per industry sector, or per relative contract size. Based on the ticket data, technology-specific productivity baselines can be derived as follows:

Primary Technology | Data Points | Total WP | Total Effort (Hrs) | Engineering Effort | Total # of Tickets | Team Skill Level | Productivity (Total Time/WP) | Engineering Productivity (Engineering Time/WP)
SAP                | 241         | 61,497   | 537,602            | 424,809            | 172,943            | 2.64             | 8.74                         | 6.91
Java               | 200         | 30,070   | 215,085            | 180,061            | 37,918             | 2.42             | 7.15                         | 5.99
Telecom BSS        | 12          | 15,049   | 150,182            | 127,894            | 49,720             | 2.57             | 9.98                         | 8.50
Dotnet             | 191         | 13,960   | 116,270            | 96,216             | 28,170             | 2.69             | 8.33                         | 6.89
BSCS               | 32          | 9,552    | 86,828             | 76,660             | 19,655             | 2.97             | 9.09                         | 8.03
Oracle             | 91          | 9,291    | 76,978             | 63,591             | 34,728             | 2.61             | 8.28                         | 6.84
Oracle Apps        | 83          | 6,357    | 58,759             | 46,364             | 7,055              | 2.64             | 9.24                         | 7.29
Mainframe          | 74          | 4,244    | 42,479             | 36,684             | 8,844              | 2.34             | 10.01                        | 8.64

The above productivity baselines are to be used for:

Estimation of new bids
Productivity comparison
Setting productivity targets
Benchmarking a project's performance
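For estimation use, the engineering baselines above can be kept as a simple lookup with a fallback when a technology has no baseline of its own (a sketch; the figures are the sample values from the tables, and the fallback choice is the judgment call discussed in the conclusion):

    # Sample engineering productivity baselines (hours/WP) from the table above.
    ENGINEERING_BASELINES = {
        "SAP": 6.91, "Java": 5.99, "Telecom BSS": 8.50, "Dotnet": 6.89,
        "BSCS": 8.03, "Oracle": 6.84, "Oracle Apps": 7.29, "Mainframe": 8.64,
    }
    ALL_TECH_FALLBACK = 7.05  # e.g. the 2014-Q1 overall engineering productivity

    def baseline_for(technology: str) -> float:
        """Technology-specific baseline if available, else the overall figure."""
        return ENGINEERING_BASELINES.get(technology, ALL_TECH_FALLBACK)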

Estimation

The effort spent on measuring productivity pays off most when the productivity data is used for future estimation. Effort estimation is possible at two levels: estimating the effort for a ticket in an ongoing project, and estimating for new bids.

Estimating Effort for Ongoing Projects

For ongoing projects the same model can be used to estimate the effort per ticket. As a ticket arrives, it is classified by standard ticket type and complexity; this categorization gives the size of the ticket in WP, which multiplied by the project's baseline productivity gives the estimated effort for the ticket.

Estimated Effort per Ticket (example; Contract Name, Contract ID A####.##, Month/Year; Productivity, Total Effort/WP: 9.11; Engineering Productivity, Engineering Effort/WP: 7.55)

No | Ticket - Work Package Name | Ticket Type         | UOW Complexity | No. of Tickets | WP    | Estimated Effort = Engineering Productivity * WP
1  | Ticket 1                   | Incident            | Simple         | 1              | 0.100 | 0.755
2  | Ticket 2                   | Query / Information | Medium         | 1              | 0.050 | 0.4
3  | Ticket 3                   | Service Request     | Complex        | 1              | 0.200 | 1.5
4  | Ticket 4                   | Standard Change     | Complex        | 1              | 0.800 | 6.0
5  | Ticket 5                   | Small Change        | Simple         | 1              | 1.000 | 7.6
6  | Ticket 6                   | Problem             | Medium         | 1              | 2.000 | 15.1
7  | Ticket 7                   | Incident            | Complex        | 1              | 0.800 | 6.0
8  | Ticket 8                   | Query / Information | Very Complex   | 1              | 0.200 | 1.5
9  | Ticket 9                   | Service Request     | Simple         | 1              | 0.050 | 0.4
10 | Ticket 10                  | Standard Change     | Medium         | 1              | 0.400 | 3.0

Using this model, the project manager can set expectations about the estimated time required to resolve each ticket, as in the sketch below.
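A sketch of the per-ticket estimate, reusing the work_points lookup from earlier; the baseline figure is the sample value from the table above:

    ENGINEERING_BASELINE = 7.55  # hours per WP, this project's sample baseline

    def estimate_ticket_hours(ticket_type: str, complexity: str) -> float:
        """Estimated resolution effort = WP of the ticket * baseline hours/WP."""
        return work_points(ticket_type, complexity) * ENGINEERING_BASELINE

    # e.g. a Complex Standard Change: 0.8 WP * 7.55 h/WP = 6.04 hours
    print(round(estimate_ticket_hours("Standard Change", "Complex"), 2))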

Estimating Effort for New Bids

To estimate the effort for a new bid, the estimation team, based on the requirements documents and discussions with the customer, should be able to agree on the volume and nature of the work. The estimation team should estimate:

The total number of tickets to be resolved
The typical distribution of the tickets across the standard ticket types and complexities
The technology
The risks and contingency effort required

Example distribution (complexity split: Simple 25%, Medium 60%, Complex 10%, Very Complex 5%):

Ticket Type             | Default Distribution (%) | Proposed Distribution (%) | Tickets | Simple | Medium | Complex | Very Complex
Call / Complaint        | 2%                       | 2%                        | 2       | 1      | 1      |         |
Production / Monitoring | 3%                       | 3%                        | 3       | 1      | 2      | 0       |
Query / Information     | 5%                       | 5%                        | 5       | 1      | 3      | 1       |
Service Request         | 20%                      | 20%                       | 20      | 5      | 12     | 2       | 1
Incident                | 40%                      | 40%                       | 40      | 10     | 24     | 4       | 2
Problem                 | 5%                       | 5%                        | 5       | 1      | 3      | 1       | 0
Standard Change         | 15%                      | 15%                       | 15      | 4      | 9      | 2       |
Small Change            | 5%                       | 5%                        | 5       | 1      | 3      | 1       |
Consultancy             | 5%                       | 5%                        | 5       | 1      | 3      | 1       |
Total                   | 100%                     | 100%                      | 100     | 25     | 60     | 12      | 3

By entering the ticket and complexity distribution in the WP model, one arrives at the size of the work in Work Points (WP). Once the size of the work is estimated, the estimated effort can be derived from the hours/WP productivity data for the required technology. If the estimation team cannot determine the ticket distribution, it can be estimated from historical data, provided such data is available.

The estimated effort is computed as WP * technology-specific productivity baseline.

The productivity figures used for estimation can be adjusted based on the skills of the available team.
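A sketch of the new-bid calculation: spread the expected ticket volume over the agreed type and complexity distribution, sum the WP, and multiply by the technology baseline. All figures are illustrative, the distribution is abridged, and WP_TABLE is the lookup sketched earlier:

    expected_tickets = 100  # expected monthly volume agreed with the customer

    # (ticket type, complexity) -> share of the total volume (abridged)
    distribution = {
        ("Incident", "Simple"): 0.10,
        ("Incident", "Medium"): 0.24,
        ("Service Request", "Medium"): 0.12,
        ("Standard Change", "Medium"): 0.09,
    }

    total_wp = sum(expected_tickets * share * WP_TABLE[t][c]
                   for (t, c), share in distribution.items())

    baseline = 6.89  # hours/WP, e.g. the sample Dotnet engineering baseline
    engineering_hours = total_wp * baseline  # PM effort, risk etc. are added separately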

If productivity baselines are not available for the project's particular technology, either the baseline for a similar technology can be used, or an adjusted baseline across all projects irrespective of technology can be used. The above estimate covers the engineering effort, i.e. the effort required by the technical team to resolve the tickets; project management effort, risk and any other effort must be added to the engineering effort to arrive at the final costing of the project.

Conclusion

The Atos WP model has proved that it is possible to measure the size of software maintenance and support activities, which typically involve the resolution of small tickets. The model has been rolled out globally in Atos and is used:

to baseline productivity at the organization level
to measure the productivity improvement of a project
to derive actions for continuous improvement
to demonstrate the year-on-year productivity improvement benefit that can be offered to the customer
to set productivity targets
most importantly, to estimate the effort for new as well as ongoing projects

The roll-out of this model has not only helped in productivity measurement but has also improved the overall data quality of the projects. It has brought transparency to the effort spent on a project; projects are able to diagnose their pain areas and decide on productivity improvement actions. To guarantee a correct and standard deployment of the model, Atos uses the same WP figures in all cases; the WP figures shown in this article are sample values.

Atos can offer the process, tools, training and consultancy to roll out the model and help other organizations with estimation and productivity measurement programs for software maintenance and support activities.

Anjali Mogre (Anjali.mogre@atos.net)

References:
1. IEEE Standard for Software Maintenance, IEEE 1219
2. IFPUG Counting Practices Manual
3. ITIL libraries