ITSM Reporting Services. Enterprise Service Management. Monthly Metric Report




ITSM Reporting Services Monthly Metric Report October 2011

Contents

Introduction
    Background
    Purpose
    Scope

AI6 Manage Change
    Number of Changes Logged
    Number of Emergency Changes Logged
    Percentage of Failed or Partially Successful Changes
    Percentage of Rejected Changes
    Number of Expired Changes
    Number of Changes Logged Without a Configuration Item

DS8 Incident Management & Service Desk
    Number of Incidents Logged
    Number of Open Incidents
    Average Age of Open Incidents
    Average Cycle Time of Closed Incidents
    Number of Incidents Closed
    Percentile of Incidents Closed within Time Distributions
    Number of Service Requests Logged
    Number of Open Service Requests
    Average Age of Open Service Requests
    Average Cycle Time of Closed Service Requests
    Number of Service Requests Closed
    Percentile of Service Requests Closed Within Time Distribution

DS10 Manage Problems
    Ratio of Problems per Incident
    Number of Logged Incidents v Closed Problems
    Number of Problems Logged
    Number of Open Problems
    Average Age of Open Problems
    Average Cycle Time of Closed Problems

Introduction

Background
ITSM Reporting Services operates within an ITIL framework and measures process maturity via the COBIT 4.1 objectives and key measures.

Purpose
Standardizing the Enterprise Service Management (ESM) process within ITSM Reporting Services ensures a consistent operating framework for all ESM processes. To support alignment, this report defines and reports the baseline measures, supported by COBIT 4.1, against which the maturity and effectiveness of the ESM process will be measured.

Scope
All ESM processes utilized within ITSM Reporting Services. All data for this report has been obtained exclusively from the HP OpenView Service Desk database.

AI6 Manage Change

Number of Changes Logged
This metric is established by counting the total number of changes logged during the last calendar month. The number of changes logged, and its historical trend, shows the rate of change in the environment. The rate of change can be correlated with additional activities such as project deployment and customer-base growth.

Number of Emergency Changes Logged
This metric is established by counting the total number of emergency changes logged during the last calendar month. Emergency changes are changes that bypass the normal CAB cycle. They are typically logged due to an urgent change in business requirements and are not related to a critical incident or problem.
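The "last calendar month" window behind these counts can be sketched in Python. This is an illustrative sketch only, not the report's actual extraction logic, and the `logged` field is a hypothetical stand-in for the corresponding HP OpenView Service Desk column:

```python
from datetime import date, timedelta

def last_calendar_month_bounds(today: date) -> tuple[date, date]:
    """Return (first_day, last_day) of the calendar month before `today`."""
    first_of_this_month = today.replace(day=1)
    last_day = first_of_this_month - timedelta(days=1)  # last day of prior month
    first_day = last_day.replace(day=1)
    return first_day, last_day

def count_changes_logged(changes: list[dict], today: date) -> int:
    """Count change records whose (hypothetical) `logged` date falls
    within the last calendar month."""
    start, end = last_calendar_month_bounds(today)
    return sum(1 for c in changes if start <= c["logged"] <= end)
```

The same window function would serve the emergency-change count, filtered on whatever flag the ticketing system uses to mark a change as bypassing the CAB cycle.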

Percentage of Failed or Partially Successful Changes
This metric is established by comparing the total number of partially successful or failed changes to the total number of changes during the month. Failed or partially successful changes are an indicator of the quality of testing in our environment; a large number of them indicates a potential gap in our testing processes.

Percentage of Rejected Changes
This metric is established by comparing the total number of implementation-rejected changes (not build-rejected) to the total number of changes logged during the last calendar month. A rise in the percentage of rejected changes may indicate that changes are being subjected to closer scrutiny and, as a result, are being rejected when they do not meet quality criteria.
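The arithmetic behind both percentages can be sketched as follows; the `outcome` and `rejection_stage` fields are hypothetical names, not the actual Service Desk schema:

```python
def percentage(numerator: int, denominator: int) -> float:
    """Percentage to one decimal place, guarding against a month with no changes."""
    return round(100.0 * numerator / denominator, 1) if denominator else 0.0

def failed_or_partial_pct(changes: list[dict]) -> float:
    # A change counts against testing quality if it failed or only partially succeeded.
    bad = sum(1 for c in changes if c["outcome"] in ("failed", "partial"))
    return percentage(bad, len(changes))

def rejected_pct(changes: list[dict]) -> float:
    # Only implementation-stage rejections count, not build rejections.
    rejected = sum(1 for c in changes if c.get("rejection_stage") == "implementation")
    return percentage(rejected, len(changes))
```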

Number of Expired Changes
This metric is established by counting all approved changes that are past their implementation date but have not been completed. It helps explain how operational teams are managing their queues, and it identifies potential gaps in our understanding of the current state of a Configuration Item (CI).

Number of Changes Logged Without a Configuration Item
This metric shows the total number of changes logged without referencing a configuration item. A rise in this number indicates a gap in understanding the current state of our environment.
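A sketch of the expired-change count, again with hypothetical field names (`status`, `implementation_date`, `completed`) rather than the real Service Desk schema:

```python
from datetime import date

def count_expired_changes(changes: list[dict], as_of: date) -> int:
    """Approved changes whose planned implementation date has passed
    without the change having been completed."""
    return sum(
        1 for c in changes
        if c["status"] == "approved"
        and c["implementation_date"] < as_of
        and not c.get("completed", False)
    )
```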

DS8 Incident Management & Service Desk

Number of Incidents Logged
This metric is established by counting the total number of incidents logged during the last calendar month. An increased volume of incidents indicates instability and a lack of adherence to ESM processes.

Number of Open Incidents
This metric is established by counting the total number of incidents open at the end of each month. The volume of open incidents is a component of the Work in Progress (WIP) measure. Measuring the volume of open incidents gauges the stability of the IS&T environment and the resilience of its services.

Average Age of Open Incidents
This metric is established by determining the average age of incidents open at the end of each month.

Average Cycle Time of Closed Incidents
This metric is established by calculating the average cycle time for closed incidents, measured from the point the incident is created to the point it is closed. It is inclusive of the incident moving across multiple ticketing systems and does not take into consideration SLAs, OLAs, or UCs; it is an end-to-end measure from the customer's perspective. The cycle time of an incident reflects the maturity of the incident response process and the stability of the IS&T environment.
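Both calculations can be sketched as below; the `created` and `closed` timestamps are hypothetical field names standing in for the Service Desk's own:

```python
from datetime import datetime

def average_age_days(open_tickets: list[dict], as_of: datetime) -> float:
    """Mean age in days of tickets still open at the reporting date."""
    ages = [(as_of - t["created"]).total_seconds() / 86400.0 for t in open_tickets]
    return sum(ages) / len(ages) if ages else 0.0

def average_cycle_time_hours(tickets: list[dict]) -> float:
    """Mean created-to-closed duration in hours across closed tickets --
    end to end, ignoring SLA/OLA/UC clocks, as the report describes."""
    durations = [
        (t["closed"] - t["created"]).total_seconds() / 3600.0
        for t in tickets
        if t.get("closed") is not None  # skip tickets still open
    ]
    return sum(durations) / len(durations) if durations else 0.0
```

The same two functions apply unchanged to service requests and problems in the later sections, since all three cycle-time metrics are defined identically.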

Number of Incidents Closed
This metric is established by counting the total number of incidents closed during each calendar month. Read alongside the Work in Progress measure, the volume of closed incidents highlights where there may be inefficient resolution procedures or resource distribution concerns.

Percentile of Incidents Closed within Time Distributions
This metric is established by counting the total number of incidents closed during the last calendar month within each given time period. Measuring the speed with which incidents are resolved reflects first-call resolution rates as well as automated system resolution rates.
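The time-distribution bucketing can be sketched as a cumulative breakdown; the bucket edges shown are illustrative, not the report's actual time periods:

```python
def closure_time_distribution(closed_hours: list[float],
                              bucket_edges: list[float]) -> dict[str, float]:
    """Cumulative percentage of closed tickets within each time bucket,
    e.g. edges [1, 8, 24] -> <=1h, <=8h, <=24h, plus a >24h remainder."""
    total = len(closed_hours)
    if total == 0:
        return {}
    dist = {}
    for edge in bucket_edges:
        within = sum(1 for h in closed_hours if h <= edge)
        dist[f"<={edge}h"] = round(100.0 * within / total, 1)
    over = sum(1 for h in closed_hours if h > bucket_edges[-1])
    dist[f">{bucket_edges[-1]}h"] = round(100.0 * over / total, 1)
    return dist
```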


Number of Service Requests Logged
This metric is established by counting the total number of service requests logged during the last calendar month. An increased volume of service requests highlights an increase in customer demand; this measure supports the overall planning and recovery processes. It can also be used to target request-fulfillment automation initiatives as well as user self-fulfillment initiatives.

Number of Open Service Requests
This metric is established by counting the total number of service requests open at the end of each month. The volume of open service requests is a component of the Work in Progress (WIP) measure. Measuring the volume of open service requests gauges the utilization of services as well as the effectiveness of the request fulfillment processes. It highlights areas of backlog, or under- or over-utilization of resources, allowing management to redeploy resources to improve throughput or improve fulfillment procedures.

Average Age of Open Service Requests
This metric is established by determining the average age of service requests open at the end of each month.

Average Cycle Time of Closed Service Requests
This metric is established by calculating the average cycle time for closed service requests, measured from the point the service request is created to the point it is closed. It is inclusive of the service request moving across multiple ticketing systems and does not take into consideration SLAs, OLAs, or UCs; it is an end-to-end measure from the customer's perspective. The cycle time of a service request reflects the maturity of the service request and fulfillment processes.

Number of Service Requests Closed
This metric is established by counting the total number of service requests closed during the last calendar month. Measuring the volume of closed requests during the period provides visibility of the underlying resource utilization and process effectiveness, and ensures any breaches of target baselines can be managed in support of targeted service improvement initiatives.

Percentile of Service Requests Closed Within Time Distribution
This metric is established by counting the total number of service requests closed during the last calendar month within each given time period. Measuring the speed with which service requests are fulfilled reflects first-call resolution rates as well as automated system resolution rates. Incident and service request resolution rates must be measured as distinct values so as not to confuse incident and request management maturity.


DS10 Manage Problems

Ratio of Problems per Incident
This metric measures the number of problem records created compared to the number of incident tickets raised. It reflects the effectiveness of the problem classification and incident-matching procedures.

Number of Logged Incidents v Closed Problems
This metric measures the number of problem records closed compared to the number of incident tickets opened within the time period. It measures the extent to which problem management is acting proactively, and how effective it is in resolving root causes and preventing recurring incidents. This is a measure of activity as well as of the effectiveness of the process.
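Both ratios reduce to simple divisions, sketched here with a zero-denominator guard for periods with no incidents:

```python
def problems_per_incident(problems_created: int, incidents_raised: int) -> float:
    """Problem records created per incident raised in the period."""
    return round(problems_created / incidents_raised, 3) if incidents_raised else 0.0

def closed_problems_vs_logged_incidents(problems_closed: int,
                                        incidents_logged: int) -> float:
    """Closed problems per logged incident -- an indicator of how
    proactively problem management is removing root causes."""
    return round(problems_closed / incidents_logged, 3) if incidents_logged else 0.0
```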


Number of Problems Logged
This metric measures the number of problems recorded: the volume of problems being raised, and the effectiveness of the process and its utilization. It is used to track the uptake of the process as well as to measure the stability of IS&T services.

Number of Open Problems
This metric measures the number of problems open within the selected time frame, by severity and service. It measures Work in Progress and the potential impact of open problems on service performance.

Average Age of Open Problems
This metric measures the average age of open problem records. Problem records can vary greatly in age depending on their complexity and problem type. This measure looks to ensure problems are being resolved and not stagnating or being abandoned.

Average Cycle Time of Closed Problems
This metric is established by calculating the average cycle time for closed problems, measured from the point the problem is created to the point it is closed. It is inclusive of the problem moving across multiple ticketing systems and does not take into consideration SLAs, OLAs, or UCs; it is an end-to-end measure from the customer's perspective.