This White Paper describes how service delivery is managed: obtaining the required service levels and quality within agreed costs, managing performance and managing risks. The customer organization must be able to check that:
- the service does what it is required to do
- the service is being delivered to the agreed quality
- the costs are not higher than expected.

Information in the contract about the service requirements includes a service specification describing:
- the operational items (objects) that the provider will manage
- the tasks that the provider will perform, described as service processes
- the level of quality or performance that the provider will achieve, described as service levels
- the volume of IT assets to manage and tasks to perform.

Managing the operational items

The contract specifies which items are handed over to the provider to manage. These comprise facilities (such as network cabling), hardware (such as PCs) and system software, which are managed via the desktop, WANs, data centers or applications. The table below lists the operational items.

Service: Desktop
  Facilities: space for patch cabinets, file and print servers, printer locations, network cabling, backup energy equipment
  Hardware: PCs, including all related laptops, personal peripheral equipment, network components, file servers, print servers, e-mail servers, printers

Service: WAN
  Facilities: network and fiberglass cables, energy equipment
  Hardware: routers

Service: Data center (1)
  Facilities: space for the hardware machinery, cooling, backup energy supply equipment
  Hardware: mainframes, application servers (OS/400, Unix, Linux, Windows), database servers

Service: DTAP (2) environments
  Facilities: N/A unless the production environment is also included (see data center)
  Hardware: N/A unless that of related DTAP environments is included (see data center)

(1) In practice, the data center is often split between mainframe and mid-range servers (OS/400, Unix, Linux and Windows).
(2) DTAP: Development, Test, Acceptance, Production.
Service: Desktop
  System software: network software, mail server (Exchange, Lotus Notes), client environment (Windows 2000, XP, Vista), remote access, firewalls, antivirus software
  Application software: office environment, mail client, Internet, intranet, remote access, local personal applications (for example, route planners, address books)
  Physical data: local data such as documents, e-mails, templates, addresses, intranet sites
  Users' and administrators' manuals: service desk instructions; explanation of the most important local applications; administrators' manual
  Job and procedure descriptions: N/A (3)

Service: WAN
  System software: protocols, routing software
  Application software: N/A
  Physical data: N/A
  Users' and administrators' manuals: administrators' manual
  Job and procedure descriptions: N/A (3)

Service: Data center
  System software: operating systems, transaction monitoring, application server environments, workflow systems, database environments, application integration (EAI)
  Application software: production and acceptance environment for packages (for example, SAP), custom applications
  Physical data: data in databases, workflow systems, content systems
  Users' and administrators' manuals: administrators' manual
  Job and procedure descriptions: N/A (3)

Service: Development and test environment
  System software: development and test environment, including 'application development' (AD) tools and testing tools
  Application software: development and testing environments for packages and custom applications, including source code, parameter definitions and test scripts
  Physical data: test data
  Users' and administrators' manuals: users' manual
  Job and procedure descriptions: descriptions of administrative organization and procedures

(3) The job and procedure descriptions are not applicable in the first three examples because these descriptions are primarily focused on procedures in the business process; this applies mostly to application-oriented services.
Service: Desktop
  Education and training programs: introduction course
  Design documentation: only for customized local applications

Service: WAN
  Education and training programs: N/A
  Design documentation: N/A

Service: Data center
  Education and training programs: end-users' training
  Design documentation: N/A

Service: Development and test environment
  Education and training programs: N/A
  Design documentation: functional and technical design documentation

Table 1: Operational items

Depending on what has been agreed in the contract and who owns the items, the provider will have varying responsibility for changes and improvements. This may constrain the provider's ability to improve the service through investment in updating the infrastructure, which could be seen as a technology risk (risk is discussed further in the companion White Paper Risks in an outsourced IT service).

Managing service levels

The service activities described in the contract should be consistent with a process framework such as ITIL (4) or ASL (5), if following recommended best practice, as this will ensure that the provider's responsibilities are stated clearly. For example, the management of technical data requires administration of the databases and the database structure: the customer is primarily involved with the logical data; the provider administers the databases. Finally, the required service levels are specified in a service level agreement (SLA) in the contract, which describes the service levels in unambiguous, quantitative and measurable terms.
The most widely used service levels are:

About the activities (specific to the service desk):
- opening hours
- first-time resolution (the percentage of incidents and requests that are closed in the first call to the service desk)
- percentage of service calls taken (accessibility)
- response time (responsiveness)
- customer satisfaction (user surveys and trend analyses)

About the IT objects:
- availability of applications (as a percentage of the desired availability; if applicable, explicitly mention online availability and batch availability)
- performance (start-up time; response time)
- resolution time of incidents and changes (differentiated by the severity of the incident: levels 1, 2, 3 and 4, for example)
- restore/recovery times.

The service provider should have a service catalogue. This is a set of detailed descriptions of the services that the provider offers to all its customers. ITIL version 3 explains that the service catalogue describes the provider's operational capabilities and acts as the acquisition portal for customers, including agreed pricing and service levels. Providers with many major customers or different businesses may have multiple service catalogues.

The table below provides an example of service level requirements.

(4) IT Infrastructure Library.
(5) ASL: Application Services Library.
Service desk
  Opening hours: business days from 7 a.m. to 6 p.m.
  Availability: 60 per cent of incoming calls will be answered within 30 seconds
  Resolution capability: the service desk will resolve 60 per cent of the incoming requests during the initial phone contact
  Response and resolution times for incidents:
    Priority 1: start within 15 minutes, and 80 per cent resolved within 2 hours
    Priority 2: start within 30 minutes, and 70 per cent resolved within 4 hours
    Priority 3: start within half a business day, and 80 per cent resolved within 8 hours
    Priority 4: per specific incident, and 80 per cent resolved within two business weeks

Desktop services
  New desktops: 90 per cent within 5 business days
  Moves: 90 per cent within 2 business days
  Authorizations: 95 per cent within 4 hours

Data centre
  Availability: business-critical applications: 99.8 per cent (6) from 8 a.m. to 10 p.m.; non-business-critical applications: 95 per cent
  Response times: 95 per cent of online transactions within 2 seconds
  Disaster recovery: in case of disasters, restoration within 1 business day
  Required patches: within 1 week of availability
  Changes: changes that require less than 1 day's effort to accomplish will be performed within 5 business days

Table 2: Example of service level requirements

Early in the life of the contract you will probably need to be flexible about achievement of the required service levels. Mutual requirements must be reasonable; both parties must be aware of each other's costs, benefits and risks in meeting (or not meeting) the required service levels; and there must be procedures for dealing with exceptions. SLAs are usually renewed annually and reviewed every quarter or six months, as appropriate to the service.

(6) For availability requirements, it is essential that the parties agree on a clear measurement method. The measurement period must be considered (e.g., one month), as must the timeframes for maintenance and recovery.
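As footnote 6 stresses, a figure such as '99.8 per cent availability' only means something once the calculation behind it is agreed. The sketch below shows one plausible interpretation; the measurement period, service window, downtime figures and function names are illustrative assumptions, not terms from the example contract:

```python
# Minimal sketch of an agreed availability calculation (per footnote 6):
# availability is counted only inside the agreed service window, and
# agreed maintenance is excluded from the measured time. All figures
# here are illustrative assumptions.
SERVICE_HOURS_PER_DAY = 22 - 8   # agreed window: 8 a.m. to 10 p.m.
BUSINESS_DAYS = 21               # example measurement period: one month

def availability(outage_minutes, maintenance_minutes):
    """Availability as a percentage of the agreed service window."""
    window = BUSINESS_DAYS * SERVICE_HOURS_PER_DAY * 60
    measured = window - maintenance_minutes   # maintenance excluded
    return 100.0 * (measured - outage_minutes) / measured

def first_time_resolution(closed_first_contact, total_requests):
    """Share of requests resolved during the initial phone contact."""
    return 100.0 * closed_first_contact / total_requests

# 35 minutes of unplanned downtime against the 99.8 per cent target
print(f"availability: {availability(35, 120):.2f}%")
# 330 of 500 requests closed first time, against the 60 per cent target
print(f"first-time resolution: {first_time_resolution(330, 500):.0f}%")
```

Under a different agreed method (for example, measuring around the clock, or counting maintenance as downtime) the same outage minutes would yield a different percentage, which is exactly why the method must be fixed in the SLA.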
Managing service quality and costs

Service quality is about how well the service is being delivered, which is closely linked with cost. Improving quality often means increased cost; reducing cost may compromise quality. Value for money is the right balance of quality and cost, which will be directly related to the priorities defined in the business case for the outsourced service. Service quality is measured through aspects such as completeness, fitness for purpose, fitness for use, timeliness, responsiveness and customer satisfaction.

You should apply different quality standards to strategic and non-strategic services:
- strategic services are critical to the business and need to be done as well as possible, within agreed cost constraints. The better these services are done, the more business benefits will be achieved
- non-strategic services can be measured very simply: they either meet the requirement or they do not. Such services only need to be 'good enough'; higher quality will not achieve worthwhile benefits.

Numerical measures

You can measure some service aspects numerically: for example, capacity, transaction volumes and accuracy. The service might be handling, for instance, insurance claims; you could define the capacity metric as the number of claims processed in a week and the accuracy metric as the number processed without errors in that week. These metrics give you a snapshot in time. Your metrics define a set numerical value or proportion that is acceptable. If the results show that the provider is providing a service of higher quality, you might be paying for more than you actually need.

Simple numerical metrics give repeating snapshots of a service. To get a better picture of aspects such as reliability, accuracy and timeliness, you will have to bring these measurements together to show changes over time.
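The weekly snapshot for the insurance-claims example might be computed as follows; this is a sketch, and the record fields and sample data are invented for illustration:

```python
# Weekly snapshot metrics for the insurance-claims example:
# capacity = claims processed in the week,
# accuracy = number processed without errors in that week.
# The record fields and the sample data are illustrative assumptions.
week_of_claims = [
    {"claim_id": "C-101", "processed": True,  "errors": 0},
    {"claim_id": "C-102", "processed": True,  "errors": 1},
    {"claim_id": "C-103", "processed": True,  "errors": 0},
    {"claim_id": "C-104", "processed": False, "errors": 0},  # still open
]

capacity = sum(1 for c in week_of_claims if c["processed"])
accuracy = sum(1 for c in week_of_claims if c["processed"] and c["errors"] == 0)

print(f"capacity: {capacity} claims this week")          # 3
print(f"accuracy: {accuracy} processed without errors")  # 2
```

Each week produces one such snapshot; only by keeping the series of snapshots can you see the trends discussed next.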
You can calculate averages, proportions, percentages or ratios: for example, the proportion of time that a service is operational, or the ratio of claims that are handled accurately compared with those that are not. You should have clearly defined values that show when the service falls below an acceptable level (for example, 100 per cent availability in business hours). You might want to specify planned improvements in your metrics: perhaps 250 claims processed each week in the first three months, then a 10 per cent improvement over the next three months.

The baseline

For the purposes of this White Paper, the baseline is assumed to be the service level that was achieved at the point when the service was outsourced, which is usually defined in the business case for the outsource. Your performance measures, and any planned improvements, are tracked against this baseline. The customer and provider must set the baseline accurately and fairly, to reflect the actual service at the time of the outsource in comparison with what is achieved over time with the new service.

Managing performance

You will need performance measures to cover all aspects of the contract, including:
- cost and value obtained
- performance and customer satisfaction
- service improvements and added value
- delivery capability and benefits realised
- strength and responsiveness of the relationship.
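The planned-improvement example above (250 claims per week for three months, then a 10 per cent improvement) amounts to tracking each week's measured throughput against a target schedule derived from the baseline. A minimal sketch, assuming the baseline figure from the text and an illustrative week numbering:

```python
# Sketch of tracking throughput against a planned improvement path:
# a baseline of 250 claims per week for the first three months (taken
# here as weeks 1-13), then a 10 per cent improvement. The week
# numbering and function names are assumptions for illustration.
BASELINE = 250   # claims per week at the point of outsourcing

def weekly_target(week):
    """Target claims per week under the planned improvement."""
    return BASELINE if week <= 13 else round(BASELINE * 1.10)

def on_track(week, claims_processed):
    """Does this week's measured throughput meet the target?"""
    return claims_processed >= weekly_target(week)

print(weekly_target(5), weekly_target(20))   # 250 275
print(on_track(20, 280))                     # True
```

The same pattern generalizes to any baselined measure: the baseline fixes the starting value, the schedule fixes the expected trajectory, and each reporting period is judged against both.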
The set of performance measures should provide clear evidence of the success (or otherwise) of the contract. The measures should also demonstrate measurable improvement, so they must be tracked against an existing baseline (this topic is discussed further in the companion White Paper Managing contract improvements and changes).

For partnerships, there should be three inter-related levels of performance assessment:
- the operational level is concerned with routine service delivery, using measures defined in the SLA
- the programme/project level measures the results of changes, improvements and infrastructure rollout during the life of the contract. Measures at this level will be based on the business cases for individual programmes and projects
- the strategic level measures the overall results or impact of the contract in business terms. Measures at this level will relate to the customer organization's long-term business objectives.

There are many methods and techniques available for tracking and reporting on performance. Information can be gathered via:
- regular data collection, such as statistics from the help desk, summarised from detailed reports
- planned checks to verify that events are occurring as expected
- regular inspection, such as audits of processes against best practice
- customer surveys (for example, user groups or telephone samples) to find out users' perceptions of the day-to-day service operation and to pick up problems that might not be visible from other checks.

The Balanced Scorecard is widely used as a structured approach to performance, viewing performance from four perspectives (customer, financial, internal business processes, and learning and growth) in the context of the organization's vision and strategy. It is often used in combination with operational dashboards, with key performance indicators (KPIs) that are closer to day-to-day performance metrics.

This White Paper is an extract from IT Outsourcing Part 2: Managing the Contract.
References

IT Infrastructure Library (ITIL), www.itil-officialsite.com

Further reading

IT Outsourcing - An Introduction. Van Haren Publishing, ISBN 978 90 8753 492 9
IT Outsourcing Part 1 - Contracting the Partner. Van Haren Publishing, ISBN 978 90 8753 030 3
IT Outsourcing Part 2 - Managing the Contract. Van Haren Publishing, ISBN 978 90 8753 616 9
eSourcing Capability Model for Client Organizations (eSCM-CL). Van Haren Publishing, ISBN 978 90 8753 559 9
eSourcing Capability Model for Service Providers (eSCM-SP). Van Haren Publishing, ISBN 978 90 8753 561 2
Implementing Strategic Sourcing. Van Haren Publishing, ISBN 978 90 8753 579 7