The Cherwell Software Education Series
Making Metrics Matter

Prepared by: Daniel Wood, Head of Research, Service Desk Institute

Introduction

Metrics form a vital component of any service desk. They provide a tangible way for service desks to understand the quality of the service they deliver and the value they provide to the business. In this paper, we look at how to measure metrics, which metrics should be measured to improve service quality, and the different ways of displaying and presenting metrics. The second part of this paper looks at business value metrics: metrics that provide the business with real insight into the value IT provides. Business value metrics differ from more traditional metrics in that traditional metrics are geared towards telling the business how well the service desk is performing, not necessarily the value it provides.

The Common Problems with Metrics

It is surprising how many service desks share the same problems when it comes to measuring and reporting metrics. Some of the issues a service desk encounters are:

- Confusion over how to translate customer satisfaction into a tangible business value
- Difficulty in proving the value of the services provided and demonstrating how and why the service desk operates as it does
- Problems in forecasting costs and expenditure due to too many variables
- Conflict between service desk KPIs and customer expectations
- Difficulty appreciating service improvement opportunities when consumed with the day-to-day running of the service desk
- Problems arising from the lack of a common terminology and the use of the wrong words and meanings
- Analysts reporting metrics in different ways, or not reporting them at all
- No clear owner for reports and for metrics reporting
- Lack of communication with the business and difficulty translating metrics into terminology useful to the business
- Difficulty in understanding which metrics should be measured and what to do with the information once collected

Some of these issues can be remedied through the adoption of business value metrics, discussed later in this paper.

What metrics should I measure?

The Service Desk Standards advocate 30 metrics that should be measured by every service desk. This section contains a brief description of each metric and how it can be measured. The dashboards within this section of the whitepaper come courtesy of Cherwell Software, which developed these draft dashboards and reports to demonstrate how a service desk can fulfil the requirements of the 30 metrics. All data shown is purely fictional and for representative purposes only.

1. Reporting Activities

Accurate information is consistently produced and distributed to relevant stakeholders in order to support business objectives.

2. Business-related Metrics

Business success is clearly incorporated into the service desk's metrics monitoring and reporting.

3. Number of Incidents and Service Requests

This can be broken down by channel (phone, email, live chat, in-person, etc.), and typically the volume of incidents and service requests will be recorded by your ITSSM tool. For this measure you need to have a goal for incidents and service requests; make sure that every incident and service request is logged; trend the data over 3, 6 and 12 month periods; and make sure the data is trending towards the goal. Measuring the volume of calls enables you to create an effective and robust staffing model; lets you see when your busy periods are by highlighting peaks and troughs; ensures you have enough resources; and helps you to understand through which channels your calls are coming.

4. Average Time to Respond

The average time it takes to acknowledge an incident or service request by channel or method (phone, e-mail, user-logged, live chat, SMS, fax, etc.). For this measure you need to record how long it has taken to respond to an incident or service request; this will typically be recorded by your ITSSM tool. Average time to respond is a key indicator of how your service desk is performing: if a customer has to wait a long time for their call to be acknowledged, this will likely lead to dissatisfaction. Working with this metric, and breaking down time to respond by analyst or channel, enables you to make improvements and identify training needs.
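As a rough illustration of measures 3 and 4, the sketch below computes contact volume and average time to respond per channel from a handful of invented ticket records; the field names and timestamp format are assumptions, not taken from any particular ITSSM tool.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical ticket export: channel plus logged and acknowledged timestamps.
tickets = [
    {"channel": "phone", "logged": "2013-01-07 09:02", "responded": "2013-01-07 09:05"},
    {"channel": "email", "logged": "2013-01-07 09:10", "responded": "2013-01-07 10:40"},
    {"channel": "phone", "logged": "2013-01-07 11:15", "responded": "2013-01-07 11:16"},
]

def minutes_between(start, end):
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

by_channel = defaultdict(list)
for t in tickets:
    by_channel[t["channel"]].append(minutes_between(t["logged"], t["responded"]))

# Volume and average response time per channel: the two measures above.
for channel, times in sorted(by_channel.items()):
    avg = sum(times) / len(times)
    print(f"{channel}: {len(times)} contact(s), average time to respond {avg:.1f} min")
```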

5. Abandon Rate

The percentage of user telephone calls that are terminated prior to establishing contact with an analyst. This would be collected either through your telephony system or your ITSSM tool (or a combination of both). The abandon rate is one of the most important metrics you can measure because it informs you as to the availability of your service desk to respond to customers. Understanding the abandon rate will inform staffing and resource management and will allow you to better plan for peaks and troughs. If high abandon rates coincide with a major incident, it is worth considering leaving an automated message to ensure customers know you are aware of the issue and are taking remedial action.

6. Incident Resolution Time

This metric looks at how quickly you resolve incidents and compares these resolution figures to the goals in the SLA. Many service desks will categorise their priorities in order of severity (P1, P2, P3, for example), with P1s having the lowest time to resolve (say four hours) and working back from there. It offers a clear indication of how your service desk is performing against the obligations and agreements you have with your customers.

7. First Contact Resolution Rate (FCR)

The percentage of incidents and service requests resolved to the customer's satisfaction during the initial call or electronic exchange between end users and the service desk. It is therefore fundamentally different from first level (or line) fix rate, which concerns incidents that are resolved at first level (the service desk) without being escalated to a resolver team (2nd and 3rd line). This will be recorded or flagged in the ITSSM software. Customers who contact the service desk expect a swift resolution, and first contact resolution offers exactly that. Knowing the first time fix rate is important as it gives you an understanding of the competency level of your analysts and the type and difficulty of the incidents they grapple with. FCR also allows you to understand the complexity of the incidents your service desk handles: if the FCR is a high percentage, it suggests incidents are usually straightforward (or that your analysts have a high degree of technical knowledge). If it is low, it suggests incidents are complicated and technical (or that training is needed so analysts can resolve more incidents on first contact).

8. First Level Incident Resolution Rate

The percentage of incidents and service requests resolved to the end user's satisfaction at the service desk without escalating to other support groups.
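The distinction between measures 7 and 8 is easy to blur, so here is a minimal sketch that computes both from invented incident records; the flag names are hypothetical.

```python
# Each record notes whether the incident was resolved during the initial
# contact (FCR) and whether it was resolved at first line without being
# escalated to a resolver team (first level fix). Flags are illustrative.
incidents = [
    {"id": 1, "resolved_first_contact": True,  "resolved_first_level": True},
    {"id": 2, "resolved_first_contact": False, "resolved_first_level": True},   # fixed at the desk, but needed a call-back
    {"id": 3, "resolved_first_contact": False, "resolved_first_level": False},  # escalated to 2nd line
]

fcr = sum(i["resolved_first_contact"] for i in incidents) / len(incidents)
flr = sum(i["resolved_first_level"] for i in incidents) / len(incidents)

print(f"First contact resolution rate: {fcr:.0%}")  # resolved during the initial exchange
print(f"First level resolution rate:   {flr:.0%}")  # resolved at the desk, however many contacts
```

By construction FCR can never exceed the first level rate, since a first-contact fix is also a first-level fix.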

9. Comparison of Overall Service Level Goals to Actual Results

This measure will typically be recorded in the ITSSM tool and allows you to understand how you are performing against the agreements you have with your customers. Service levels are often classed by priority (P1, P2, P3, etc.), with P1 being the highest priority with the lowest agreed time to fix. Measuring this metric allows you to ascertain whether the priority levels are correct or whether they are unobtainable. For example, if you are consistently breaching priority levels, it could be the case that the agreed resolution times need to be changed or that you require more resources to make them achievable.

10. Re-opened Incident Rate

This metric looks at incidents that have been closed in a way that is unsatisfactory to the customer, or incidents that have been incorrectly closed. It is commonly recorded in the ITSSM tool, but it is beneficial to review re-opened incidents to gain a better understanding of why each re-open occurred. Understanding why incidents have been re-opened is important because it identifies possible training needed for staff to close incidents in a satisfactory way. Examining re-opened incidents also helps to inform the process for closing incidents. Some service desks will have a fixed time period (say three days) in which an incident can be re-opened by the customer; if the service desk has not heard back from the customer within three days, the incident is automatically closed. Looking at the re-open process allows the desk to understand whether the current incident closure process is effective. If lots of incidents are being re-opened, it suggests incidents are not being closed correctly; if the reverse is true, it suggests the fixes provided are satisfactory, or that incidents are not being re-opened when they should be (the issue is logged as a new incident instead), or that customers do not have a large enough window to offer their opinion on whether the fix was adequate.
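To make the re-open window concrete, the sketch below classifies customer come-backs against a hypothetical three-day auto-closure window; the dates and field names are invented.

```python
from datetime import datetime, timedelta

REOPEN_WINDOW = timedelta(days=3)  # example fixed re-open window from the text

# Hypothetical closure log: when each incident was closed, and when (if ever)
# the customer came back saying the fix did not work.
closures = [
    {"id": 101, "closed": datetime(2013, 2, 1), "customer_returned": None},
    {"id": 102, "closed": datetime(2013, 2, 1), "customer_returned": datetime(2013, 2, 2)},
    {"id": 103, "closed": datetime(2013, 2, 3), "customer_returned": datetime(2013, 2, 9)},
]

reopened = outside_window = 0
for c in closures:
    if c["customer_returned"] is None:
        continue
    if c["customer_returned"] - c["closed"] <= REOPEN_WINDOW:
        reopened += 1         # counts towards the re-opened incident rate
    else:
        outside_window += 1   # likely re-logged as a brand new incident

print(f"Re-opened incident rate: {reopened / len(closures):.0%}")
print(f"Come-backs outside the window: {outside_window}")
```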

11. Backlog Management

The total number of open incidents or service requests compared to their age. This metric looks at the number of open incidents and can either be recorded by the ITSSM tool or tracked manually. It is also worth considering assigning someone to monitor the backlog data to see why calls are still outstanding and how they will be resolved; this is called the triage process. It allows you to identify whether calls are closed correctly; whether calls are escalated in the proper way; what action needs to be taken to resolve the open incidents; and why these incidents have not been resolved thus far. Is more information required, or is more expertise needed? Backlog data can also reveal a lack of resource on the service desk, which may be why so many incidents remain unresolved. (A simple age-profiling sketch follows measure 12 below.)

12. Hierarchic Escalations (Management)

The percentage of incidents or service requests escalated to management in order to avoid a developing SLA breach. This metric measures the number of incidents escalated up to management so their expertise can deliver a fix. An incident might also be escalated because of a customer complaint or because the customer wants to speak to a team leader or manager. Measuring this will help to identify any training issues: are incidents being escalated because analysts lack the knowledge required to solve them themselves? If so, perhaps knowledge can be cascaded down from management to analysts to enable more fixes without the need to escalate. It will also allow you to see how much management resource is taken up fixing incidents and handling customer complaints and feedback. Finally, by checking a selection of hierarchically escalated incidents, you can see whether the escalation process is working and whether the correct type of incidents are being escalated.
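Returning to measure 11, here is a minimal age-profiling sketch for the open backlog; the bucket boundaries are arbitrary examples, not part of any standard.

```python
from datetime import date

# Hypothetical snapshot of the open backlog: id and the date each was logged.
open_incidents = [
    {"id": 201, "logged": date(2013, 1, 2)},
    {"id": 202, "logged": date(2013, 1, 28)},
    {"id": 203, "logged": date(2013, 2, 8)},
]

today = date(2013, 2, 11)
buckets = {"0-7 days": 0, "8-30 days": 0, "over 30 days": 0}

for incident in open_incidents:
    age = (today - incident["logged"]).days
    if age <= 7:
        buckets["0-7 days"] += 1
    elif age <= 30:
        buckets["8-30 days"] += 1
    else:
        buckets["over 30 days"] += 1

for bucket, count in buckets.items():
    print(f"{bucket}: {count} open incident(s)")
```

Anything landing in the oldest bucket is a natural candidate for the triage review described above: why is it still open, and what would move it on?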

13. Functional Escalations (Re-assignment)

The percentage of incidents and service or change requests transferred to a technical team with a higher level of expertise to avoid an SLA breach developing. Functional escalations are distinctly different from hierarchic escalations in that this type of escalation is to another team, not to management: functional escalations are incidents passed to resolver teams (2nd and 3rd line). Much like hierarchic escalation, this metric enables you to understand the number of incidents being passed to the resolver teams and the trend over a period of time. It lets you see whether training courses have been effective and can be useful in identifying future training needs. By manually looking through some of the escalated incidents, you will begin to understand which incidents most commonly require external assistance and whether training for the 1st line team would help drive down the number of escalations.

14. Average Resolution Time by Priority

The average length of time taken to resolve incidents, analysed by their priority. Much like the percentage of incidents resolved within SLA, this metric measures how you are performing against your different priority levels. It enables you to see whether the priority categorisations are correct and whether you are meeting your targets on a regular basis. It is important to look carefully at the exceptions (i.e. the ones that have breached) to understand why they breached and what can be done in the future to prevent them from breaching again.
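A minimal sketch of measure 14, grouping resolution times by priority and flagging breaches; the targets and incident data are illustrative values only.

```python
from collections import defaultdict

# Agreed resolution targets in hours per priority (example values only).
targets = {"P1": 4, "P2": 8, "P3": 24}

# Hypothetical resolved incidents: priority and hours taken to resolve.
resolved = [
    {"priority": "P1", "hours": 3.5},
    {"priority": "P1", "hours": 6.0},   # exception: breaches the 4-hour target
    {"priority": "P2", "hours": 7.0},
    {"priority": "P3", "hours": 30.0},  # exception
]

by_priority = defaultdict(list)
for r in resolved:
    by_priority[r["priority"]].append(r["hours"])

for priority in sorted(by_priority):
    hours = by_priority[priority]
    average = sum(hours) / len(hours)
    breaches = sum(h > targets[priority] for h in hours)
    print(f"{priority}: average {average:.1f}h against a {targets[priority]}h target, {breaches} breach(es)")
```

The breached incidents are the ones worth reviewing individually, as the text above suggests.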

15. Average Resolution Time by Incident Category

This metric looks specifically at incidents resolved within the set categories the service desk has defined. These might typically include password resets, e-mail problems, hardware errors, etc. It is distinctly different from resolution time by priority. Measuring the resolution time by incident category allows you to see the most common incidents and how quickly they are resolved. Like the resolution time by priority, it is important to look at the exceptions. Recording incidents by category also allows you to build a list of the most common incidents your service desk attends to, and again, you can track this over time to see any changes. You will also be able to see which incidents take the most time to resolve and which ones are quick fixes.

16. Remote Control and Self-Help Monitoring Measured Against Goals

This can be a difficult metric to record, as not all ITSSM software has the functionality to automatically record when remote support has been used. Similarly, it can be equally difficult to record when customers have made use of your self-help provisions. There are two ways to tackle this problem. One is to do some development work to include a flag that analysts can tick if they have used remote support. For self-help, you can use Google Analytics or another web analytics tool to ascertain how many times self-help pages were accessed and which links were used. There are other ways to discover self-help usage: including a question on your customer surveys; recording in the ITSSM tool when customers say they have used self-help; or putting a tick box on the self-help page that asks the customer whether the solution was helpful.

Measuring remote control usage is vital as it provides real insight into the abilities of your analysts. Some points you can deduce from remote support monitoring include: who is using remote support? Which incidents is it most successful at fixing? For which customers can remote support be used, and are there some that refuse to allow analysts to connect to their machine in this way? Measuring who uses remote support is a great way to identify any nascent training needs: if it is not being used, why not? Are analysts unsure of how to use it, or are they uncomfortable offering support in this way? These are revealing findings that will help to train and educate your service desk. In the same way, measuring self-help enables you to see which pages and articles are the most effective; how often self-help is being used; and whether there is any potential for educating customers to encourage them to make more use of self-help and make it their first port of call before contacting the service desk.

17. Self-Logging Monitoring Measured Against Goals

The percentage of incidents and service requests reported using self-logging channels; compares the result to its goals.

18. Self-Help Monitoring Measured Against Goals

The percentage of resolved incidents and fulfilled service requests assisted by self-help tools; compares the result to its goals.

19. Knowledge Usage

The number of times knowledge is used.

20. Quality of Knowledge and Its Effectiveness

The quality and effectiveness of knowledge.

21. Monitoring Incidents Caused by Changes Measured Against Goals

The percentage of incidents caused by changes; compares the result to its goals.

22. Total Cost of Service Delivery

The total cost of running the operation, and the cost of delivering service per customer. Quite simply, it is vital to understand how much the service desk costs to run. Only through understanding these figures can you discover whether money is available for increasing resources or increasing spending in other areas. Measuring, tracking and trending the cost of ownership will enable you to ascertain whether your desk has made any cost or efficiency savings.

23. Average Cost Per Incident and Service Request by Channel (Method Received)

The relative cost of service desk operations by channel, i.e. telephone, email, live chat, SMS, fax, walk-ins, etc. This is the essential metric to use if you want to determine the value of your service desk, yet 75 per cent of desks do not measure it (according to the 2011 SDI Benchmarking Survey, available at www.sdi-europe.com).

Things to consider:

- What costs should be included in this calculation? To give an accurate and fair measurement, the cost of second and third line support should be included.
- What measures will be incorporated to give the final figure? For example, it might be decided that some intangible measures should be given a weighting and added to the final total, such as call waiting time or informal peer support.
- You also need to know staff costs to get an accurate handle on call costs.

A method for calculating cost per call/per e-mail

Some companies use the actual budget of the service desk to calculate their cost per call: in essence, they include every cost involved in running their service desk and divide this by the number of calls they receive. This method is a little too simplistic for what we are really looking for, but it does highlight why comparing metrics between desks is so difficult; service desks that use this system will show a much larger cost than desks that measure it in one of the ways detailed below. Others will include every cost involved in taking the call:

- Postage costs if hardware needs to be replaced to rectify the user's problem
- The cost of using technicians or field agents
- The cost associated with the loss of productivity created by the user being on the telephone

This is a much more involved way of measuring the metric, but it may also be more informative. If a value can be placed on productivity loss, it will be clear how vital the service desk is to the operation of the business. If you can report that your service desk saved X amount of productivity, this will place your service desk in a very strong position.

The Formula

As noted, there are lots of different ways in which this metric can be measured, but one of the best all-encompassing ways is outlined here:

- Understand your service desk staff cost, broken down into as small a unit as possible. The HR and facilities departments will be able to tell you all the components needed to measure this: salary, benefits, heating, lighting, equipment and any other measures you think should be included. From this data, you can work out how much an analyst costs to employ per minute.
- Add to this figure the lifetime cost of software, including support and maintenance. You can split the costs over three years to give you some idea of what it actually costs to run your systems.
- You might want to add hardware costs and the cost of using second and third line support (although, of course, you could report analyst cost per call, second line cost per call, etc. separately).

Adding up the above will give you a cost per minute, which then needs to be multiplied by the duration of the call or e-mail.
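A minimal worked sketch of this formula follows; every figure is invented for illustration.

```python
# Cost per contact following the formula above. All numbers are made up.

annual_staff_cost = 150_000.0            # salary, benefits, heating, lighting, equipment...
analysts, days, hours_per_day = 5, 220, 7.5
working_minutes_per_year = analysts * days * hours_per_day * 60

software_lifetime_cost = 27_000.0        # licence plus support and maintenance
software_years = 3                       # spread over three years, as suggested

cost_per_minute = (
    annual_staff_cost + software_lifetime_cost / software_years
) / working_minutes_per_year

average_call_minutes = 6.5               # average handling time for the channel
print(f"Cost per minute: {cost_per_minute:.2f}")
print(f"Cost per call:   {cost_per_minute * average_call_minutes:.2f}")
```

Hardware and second/third line costs could be folded into the numerator in the same way, or reported as separate per-call figures as the text suggests.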

24. Average Cost Per Incident and Service Request (Cost Per Contact)

The cost per incident and service request of the service desk's operations (including people, support infrastructure and overheads). For this metric, you can use the same formula as for cost per incident by channel, except that in this instance you are looking at the total cost of incidents and service requests.

25. People Satisfaction Feedback

There is a feedback procedure to measure overall staff satisfaction through data collection and analysis, and it solicits suggestions to improve the work environment. Data collected is used to develop action plans.

26. Staff Turnover

The service desk maintains adequate staff continuity to ensure service levels are consistently met.

27. Unpaid Absence Days

Unplanned absence days are tracked separately from time lost due to planned absence, short-term disability or long-term disability.

28. Periodic Customer Satisfaction Measurement

Periodic customer satisfaction measurement methods are in place to regularly assess the level of overall satisfaction in relation to the key metrics of the service desk (e.g. quality of support, accessibility, time to resolution, etc.).

29. Event-Based Customer Satisfaction Measurement

An event-based customer satisfaction measurement procedure is in place to regularly assess satisfaction associated with individual incidents and service requests.

30. Complaints, Suggestions and Compliments

All complaints, suggestions and compliments are routinely collected (and measured) from all possible sources and through all possible methods.

Extracting Information

These 30 measures require the actual metrics data in order to understand trends and patterns. In most cases, your ITSSM tool will have the ability to record all of these metrics; however, in some tools it is not easy to extract this information or present it clearly and professionally. If this is the case, you will probably need to look at tools such as Crystal Reports or Excel spreadsheets to produce and present the information required. Both of these methods involve working with the data manually, which will be more time-consuming.

Presenting Metrics Information

[Screenshot: a dashboard from Cherwell Software's suite of reports, which can quickly and easily pull together the 30 SDI standards performance measures.]

Sample Graph

To conform to the Service Desk Certificate (SDC) standard, each graph should contain data for a 12-month period; have a goal or target line; show the trend of the data; and be presented in a clear, concise and consistent standard format. Metrics should be presented in a format that is acceptable and clear to the business and that adheres to corporate guidelines.

[Sample graph: 12 months of data, with goal line and trend.]
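A rough sketch of such a chart using matplotlib, with invented first contact resolution figures standing in for real monthly data:

```python
import numpy as np
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
fcr = [62, 64, 61, 66, 68, 67, 70, 69, 72, 71, 74, 75]  # invented FCR %
goal = 70

x = np.arange(len(months))
slope, intercept = np.polyfit(x, fcr, 1)  # simple linear trend line

plt.figure(figsize=(8, 4))
plt.bar(x, fcr, color="steelblue", label="First contact resolution (%)")
plt.axhline(goal, color="red", linestyle="--", label=f"Goal ({goal}%)")
plt.plot(x, slope * x + intercept, color="black", label="Trend")
plt.xticks(x, months)
plt.ylabel("% resolved on first contact")
plt.title("First Contact Resolution, last 12 months")
plt.legend()
plt.tight_layout()
plt.show()
```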

Business Value Metrics

Presenting metrics in isolation can be misleading and can distort the realities of the service being delivered. Comparing different aspects of service against one another tells a fuller story, enabling the service desk to better review service quality and amend processes and procedures based on statistical data, which contributes to an organisation's drive for continual service improvement.

Metrics have traditionally focused on how well the service desk is performing. Thus, the information service desks provide to the business is often along the lines of "we handled X number of calls this month", "we fixed X% of incidents first time", and so on. There is definitely a place for these metrics, as they offer a clear indicator of service desk performance (especially when trended over time). What these values do not convey is how the service is actually benefitting the business; this is the gap that business value metrics attempt to fill. Business value metrics are all about what the service desk is doing for the business, and they provide a much fuller picture of how the service feels for customers. The metrics below offer some indication of the type of measures that should be considered when exploring business value metrics.

Lost IT Service Hours

This metric is important because it gives the business a clear indication of how long IT services were unavailable. It is an example of a metric that provides real value and insight to the business and gives clear indications of performance. It will also provoke debate and discussion about why hours were lost and what actions can be taken to prevent lost hours in the future.

Lost Business Hours

In addition to lost IT service hours, lost business hours provide a fuller picture of the impact of IT failure. Not all businesses are entirely dependent on IT to function and operate, and thus lost business hours provide an insight into the real impact of IT failure. Again, like lost service hours, this metric gives the business a clear understanding of the importance that IT plays in the organisation. Lost business hours can then be further scrutinised to ascertain exactly how much revenue was lost due to IT failures.

Business Impact

Measuring business impact goes beyond the traditional metrics of saying how good the service provided actually is. By understanding that different areas of the business have different levels of importance and can be affected to a greater or lesser degree by IT failure, the service desk starts to create a mature and business-focused view of the value it provides. The example table below offers one way to calculate the relative importance of each area of the business and, in turn, the impact of lost IT availability. Each business area is weighted according to its importance/value, and the impact rating is the weighting multiplied by the lost minutes.

Business Area   Weighting   Lost Minutes   Impact Rating
Website         20%         300            6000
Sales           50%         10             500
Consultancy     15%         200            3000
Engineering     10%         30             300
Research        5%          350            1750

Looking at the business impact of the failure of each service allows you to understand that not all business areas are created equal: some have a much more marked and noticeable business impact than others. As the table shows, the biggest business impact (6000) came from the website being down for five hours. However, even though the Sales team was only down for 10 minutes across the month, by virtue of having the highest weighting it was still accountable for an impact of 500.
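A minimal sketch of this calculation, reproducing the table's example figures:

```python
# Impact rating = business weighting (as a percentage figure) x lost minutes.
# Weightings and lost minutes are the example values from the table above.
services = [
    ("Website",     20, 300),
    ("Sales",       50, 10),
    ("Consultancy", 15, 200),
    ("Engineering", 10, 30),
    ("Research",     5, 350),
]

print(f"{'Business Area':<13} {'Weighting':>9} {'Lost min':>9} {'Impact':>7}")
for area, weighting, lost_minutes in services:
    print(f"{area:<13} {weighting:>8}% {lost_minutes:>9} {weighting * lost_minutes:>7}")

worst = max(services, key=lambda s: s[1] * s[2])
print(f"Biggest business impact: {worst[0]}")
```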

Risk of Missing SLA Targets

One of the traditional metrics included in the SDC's 30 measures is the percentage of incidents fixed within SLA. This is a reactive metric, as it looks at past events, and while this hindsight can be useful in planning future improvements, being proactive provides visibility before the event. The risk of missing SLA targets allows the business to prepare for the potential of missed targets and plan accordingly. If SLAs are going to be missed because of change, this can be explained to the business; some of these changes will be unavoidable, but to strengthen the relationship between the business and the service desk, it is important to show that IT is keeping the business informed. Some targets might be missed because of lack of resource. In this instance, the metric becomes not just about sharing information with the business: it provides a critical opportunity to ask the business for extra resource to try to prevent targets from being missed.

Conclusion

Traditional metrics have focused on telling the business how good the service desk is by detailing the number of calls answered, first time fix rate, incidents resolved within SLA, and a diverse array of other measures. These metrics have their place and ideally will be used to supplement some of the business metrics covered here. It is important to note that the business metrics listed in this paper are by no means exhaustive; business metrics should comprise any measures that are beneficial to the business and provide a clear idea of performance and value.

Introducing business metrics can be a huge leap of faith, as they move beyond measures of how well the IT department is doing and place IT's business performance front and centre. Statistics such as how many business hours have been lost due to IT faults can be disconcerting and are perhaps something most service desks would not be comfortable sharing. Looking forward, however, it is expected that more and more businesses will want access and visibility into these types of metrics, as they provide a crucial way to ascertain the value of the service desk and its place within the organisation.

Business value metrics also provide real benefit for the service desk, as they offer tangible data to support and augment business decisions. Justifications for extra budget or resource will be much more robust if supported by business value metrics. The idea is not to use the data to hide or disguise, but to use these measures as a platform to introduce future improvements and advancements. It is clear there is a need to move beyond "us and them": IT acts as a partner to and enabler for the business, not as a barrier. Communicating value and metrics in a way that the business understands is a critical step in improving this relationship. Understand what the business needs in terms of information; ask what information would be useful to it. Establishing answers to these questions is a crucial step in building a bond between the business and the service desk.

Here are three key points to consider:

- Create metrics that work together to provide direction on how the Infrastructure and Operations (I&O) group should change and improve (i.e. supporting CSI initiatives).
- Demonstrate through reporting a well-balanced story about the value of services being provided, giving a clear view of effectiveness versus productivity.
- Formalise reporting metrics using a business value dashboard as a means to highlight I&O's positive contribution to the business.

Finally, business metrics allow service desks to become proactive. Anticipated future problems that will have a negative impact on the business should be shared; in doing so, the business might be able to offer a preventative solution, and if not, then at least it has been made aware. Service desks often complain that the business does not understand them and does not share in their trials and tribulations. Business metrics presented in a way that is easily accessible and relevant to the business offer a clear opportunity to help bridge this gap and increase the level of understanding.

About Cherwell Software

Cherwell Software is the developer of Cherwell Service Management, a fully integrated solution for IT and support professionals. Designed using Microsoft's .NET platform and Web 2.0 technology, Cherwell delivers, out of the box, 11 fully integrated, ITIL v3 PinkVERIFY-accredited management processes, including Incident, Problem, Change, Release, Configuration, SLA, Service Catalogue, Event and Knowledge. With a holistic approach to service management, Cherwell empowers IT and support departments to fully align themselves with the organisations they support. Quick to deploy and easy to use, and delivered either as a traditional on-premise solution or via an on-demand SaaS subscription, Cherwell delivers true enterprise power for a mid-market price. Headquartered in Colorado Springs, USA, and with European offices in the UK, Cherwell Software was founded, and is managed, by a team of industry experts.

Cherwell Service Management delivers a highly scalable and extensible development platform, enabling customers to add new custom-built applications through the use of customisable business process templates. Its unique wizard-driven Codeless Business Application Technology (CBAT) platform has enabled customers to easily develop and build integrated business applications such as Project Management, HR, Purchase Orders and Facilities Management systems.

Cherwell is committed to changing the rules of the game in this industry by offering more choices to its customers: choice in financing (subscribe or purchase); choice in deployment (hosted by the customer or by Cherwell); and choice in user interface (rich client, browser, mobile device or Outlook integration). All of these choices are offered in the context of a compelling value proposition: enterprise power without enterprise cost and complexity.