
Application Maintenance and Development
Attachment C for RFP#
Service Level Agreements and Operating Level Agreements
Issue: 1.0
Issue Date: November 17, 2015

Copyright 2015 Independent Electricity System Operator. All rights reserved.

Table of Contents
1. Introduction
  1.1 Purpose
  1.2 Severity Weights
2. Service Levels
3. Key Performance Indicators

1. Introduction

1.1 Purpose

The purpose of this document is to specify the Service Levels and Operating Levels that are required for Application Maintenance and Development services. This document will be used in conjunction with the Request for Proposal (RFP) to identify and procure Application Maintenance and Development services.

1.2 Severity Weights

Severity weights will be applied to the different Service Levels and will be determined during SOW negotiations. If a Service Level is not achieved in any month and the failure is not excused, the service credit that applies will be calculated by multiplying the weighting factor that applies at the relevant time to such Service Level by the total charges payable in respect of the relevant month; provided that in no event will the total service credits applicable in respect of any month exceed 50% of the total charges for that month.
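The severity weights themselves are to be determined during SOW negotiations, so the following minimal Python sketch only illustrates the arithmetic of the clause above; the weighting factors and charge figure are hypothetical placeholders, not values from this RFP.

```python
# Illustrative sketch only; weights and charges below are hypothetical.

def monthly_service_credit(missed_sl_weights, monthly_charges):
    """Sum of per-Service-Level credits (weighting factor x monthly charges),
    capped so total credits never exceed 50% of the month's total charges."""
    raw_total = sum(w * monthly_charges for w in missed_sl_weights)
    return min(raw_total, 0.5 * monthly_charges)

# Example: two unexcused misses weighted at 30% each against $100,000 of
# monthly charges would yield $60,000 of raw credits, capped at $50,000.
print(monthly_service_credit([0.30, 0.30], 100_000))   # 50000.0
```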

2. Service Levels

AD01: Quality Assurance Effectiveness Service Level

Objective: To provide for the introduction of IESO-approved application and system development request resolutions into a production environment without abnormal application terminations, incorrect program results, inappropriate error messages or abnormal/unexpected application performance.

Definition: For the Respondent to thoroughly test and deliver reliable application and system development request resolutions (i.e. application Work Packages) into a production environment with no abnormal application terminations, incorrect program results, inappropriate error messages or abnormal/unexpected application performance resulting from deficiencies related to the QA testing procedures.

Data Capture:
1. Any application Work Package with more than 1 subsequent introduction into production resulting from deficiencies related to the QA testing procedures for the period; and
2. The percentage of application Work Packages with more than 1 subsequent introduction into production resulting from deficiencies related to the QA testing procedures = (total number of application Work Packages with more than 1 introduction into production resulting from deficiencies related to the QA testing procedures for the period / total number of application Work Packages introduced into a production environment per period) * 100

Measurement Notes: Abnormal/unexpected application performance will be measured utilizing existing performance monitoring tools and baseline results.

Calculation: As per Data Capture, above.

Expected Service Level: The percentage of application Work Packages with more than 1 subsequent introduction into production resulting from deficiencies related to the QA testing procedures, as verified through the Respondent's root cause analysis, is not more than 10% of the total application Work Packages introduced into production for the period.

Minimum Service Level: The percentage of application Work Packages with more than 1 subsequent introduction into production resulting from deficiencies related to the QA testing procedures, as verified through the Respondent's root cause analysis, is more than 20% of the total application Work Packages introduced into production for the period.

AMS01: Application Availability and Reliability Service Level

Objective: To measure the amount of time during a reporting period that applications are available to the IESO.

Definition: Unscheduled downtime is measured from the time an application becomes unavailable until the time it is available again to the IESO. Outages do not include the minutes in any month during which a given application is scheduled to be unavailable for use by the IESO due to such things as preventive maintenance or upgrades. Scheduled outages will be mutually agreed by the IESO and the Respondent. Available minutes in a month is defined to mean total wall clock minutes in a month minus scheduled downtime minutes. Excludes server or network outages, SaaS application(s), any business system component not managed by the Respondent, or other applications as identified by the IESO.

For each application for which there will be an availability calculation:
- The Respondent will identify the start time and end time of the change activity within the broader change window (e.g. 2am to 6am);
- The Respondent will then identify and use the individual start times and end times as the planned/scheduled downtime for each application;
- Any application change that commences after the change window has closed will be considered unplanned downtime (and the entire change is flagged as failed as it overruns the change window);
- For any application change that commences within the change window but completes after the change window has ended, the time within the window counts as planned downtime and the time after the window applies to unplanned downtime (and the entire change is flagged as failed as it overruns the change window).

Reliability is defined as the number of downtime events for a given application.
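The change-window rule above is effectively a small classification algorithm. The Python sketch below shows one possible reading of it, assuming hypothetical window bounds and change times that are not taken from this RFP.

```python
# Minimal sketch of the change-window rule above; the window and change
# times are hypothetical examples, not values from the RFP.
from datetime import datetime

def classify_change(change_start: datetime, change_end: datetime,
                    window_end: datetime) -> dict:
    """Split a change's duration into planned vs. unplanned downtime minutes
    and flag the change as failed if it overruns the change window."""
    total = (change_end - change_start).total_seconds() / 60
    if change_start >= window_end:
        # Commenced after the window closed: all downtime is unplanned.
        return {"planned_min": 0.0, "unplanned_min": total, "failed": True}
    if change_end <= window_end:
        # Completed within the window: all downtime is planned.
        return {"planned_min": total, "unplanned_min": 0.0, "failed": False}
    # Commenced within the window but completed after it: split at window end.
    planned = (window_end - change_start).total_seconds() / 60
    return {"planned_min": planned, "unplanned_min": total - planned, "failed": True}

# Example: a 2am-6am change window and a change running 5:30am-6:45am.
print(classify_change(datetime(2015, 11, 17, 5, 30),
                      datetime(2015, 11, 17, 6, 45),
                      datetime(2015, 11, 17, 6, 0)))
# {'planned_min': 30.0, 'unplanned_min': 45.0, 'failed': True}
```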

Measurement Window: 24x7x365 for Critical business systems; business hours for Non-Critical business systems.

Applications: All applications identified in Attachment A Support Systems Overview.

Data Source: IESO and the Respondent system logs and any other designated tracking systems.

Unit of Measure: Scheduled available hours.

Calculation: Availability will be calculated using the following formula (calculated to 2 decimal places, rounded):

Application Availability = ((TPM1 - POM1) / TPM1) x 100%

Where:
POM1 = number of minutes of outage per application
TPM1 = total number of available minutes in a month

Expected Service Level:
- Availability: Critical applications availability is equal to or greater than 99.8% (as specified in hours of availability), measured individually for each Critical application; and
- Reliability: Critical applications experience no more than 1 outage, measured individually for each Critical application; and
- Availability: Non-Critical applications (vital applications) availability is equal to or greater than 99.6% (as specified in hours of availability), measured individually for each Non-Critical application; and
- Reliability: Non-Critical applications experience no more than 2 outages, measured individually for each Non-Critical application.

Minimum Service Level:
- Availability: Critical applications (infrastructure applications) availability is less than 98.8% (as specified in hours of availability), measured individually for each Critical application; or
- Reliability: Critical applications experience more than 3 outages, measured individually for each Tier 0 Application; or
- Availability: Non-Critical applications (vital applications) availability is less than 98.5% (as specified in hours of availability), measured individually for each Non-Critical application; or
- Reliability: Non-Critical applications experience more than 3 outages, measured individually for each Non-Critical application.
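The following minimal Python sketch implements the AMS01 availability formula above; the outage and month-length figures are hypothetical examples, not data from this RFP.

```python
# Minimal sketch of the AMS01 availability formula; figures are hypothetical.

def application_availability(outage_minutes: float, available_minutes: float) -> float:
    """Application Availability = ((TPM1 - POM1) / TPM1) x 100%,
    rounded to 2 decimal places."""
    tpm1 = available_minutes   # total available minutes in the month
    pom1 = outage_minutes      # minutes of outage for the application
    return round((tpm1 - pom1) / tpm1 * 100, 2)

# Example: a 30-day month (43,200 wall-clock minutes) with no scheduled
# downtime and 90 minutes of unscheduled outage.
print(application_availability(90, 43_200))   # 99.79 -> below the 99.8% Critical target
```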

AMS02: Incident and Service Requests Service Level

Objective: To measure the resolution time for different incident/service request priority levels for each application tier.

Definition: An incident is any event that is not part of the standard operation of a service and that causes, or may cause, an interruption to, or a reduction in, the quality of that service. A service request is a request from a user for information or advice, for a standard change, for access to an IT service, or any request that may consume resources. The request fulfillment process manages non-incident-related service requests through a lifecycle and series of process steps similar to those of incident management. Service Requests must be based upon established and approved workflows which, when followed precisely, reduce the risks associated with the request.

Measurement Window: 24x7x365 for Critical business systems; business hours for Non-Critical business systems.

Applications: All applications identified in Attachment A Support Systems Overview.

Data Source: IESO and Respondent system logs and any other designated tracking systems.

Unit of Measure: Ticket closure elapsed time.

Expected Service Level

Incidents:
Item     Priority Level   Metric                      Target
Accept   Priority 1       1 hour (24x7)               90%
Accept   Priority 2       4 business hours            85%
Accept   Priority 3       8 business hours            80%
Accept   Priority 4       24 business hours           80%
Resolve  Priority 1       4 hours (24x7)              90%
Resolve  Priority 2       Within same business day    85%
Resolve  Priority 3       5 business days             80%
Resolve  Priority 4       8 business days             80%

Service Requests:
Item     Priority Level   Metric                      Target
Accept   Priority 1       2 business hours            90%
Accept   Priority 2       2 business hours            85%
Accept   Priority 3       8 business hours            80%
Accept   Priority 4       24 business hours           80%
Resolve  All priorities   Using the pre-defined resolution plans, resolve within the plan's defined completion time (elapsed)   90%

Minimum Service Level

Incidents:
Item     Priority Level   Metric                      Target
Accept   Priority 1       1 hour (24x7)               80%
Accept   Priority 2       4 business hours            75%
Accept   Priority 3       8 business hours            70%
Accept   Priority 4       24 business hours           70%
Resolve  Priority 1       4 hours (24x7)              80%
Resolve  Priority 2       Within same business day    75%
Resolve  Priority 3       5 business days             70%
Resolve  Priority 4       8 business days             70%

Service Requests:
Item     Priority Level   Metric                      Target
Accept   Priority 1       2 business hours            80%
Accept   Priority 2       2 business hours            75%
Accept   Priority 3       8 business hours            70%
Accept   Priority 4       24 business hours           70%
Resolve  All priorities   Using the pre-defined resolution plans, resolve within the plan's defined completion time (elapsed)   80%

DB01: Database Management Service Level

Objective: To measure the time databases are accessible from the network and running as intended by the hardware/software manufacturer.

Metric Notes: End-Users must be able to access database resources 7x24 except for scheduled downtime.

General Notes:
1. Approved Outages represent the time in any month during which each supported asset group is planned to be unavailable due to such things as preventive maintenance or upgrades. Approved Outages will be at specified times based on the IESO's business requirements and will be provided to the Respondent by the IESO.
2. In the event an outage lasts longer than the time allocated for an Approved Outage, any additional time beyond the Approved Outage time will be counted towards the service being unavailable. In the event an outage is shorter than the time allocated for an Approved Outage, the Approved Outage time will be reduced to reflect the actual outage time.

Measurement Window: 24x7x365.

Applications: All applications identified in the Application Inventory.

Data Source: IESO and Respondent system logs and any other designated tracking systems.

Unit of Measure: Scheduled available hours.

Calculation: Availability will be calculated using the following formula:

Database Availability = {[(System Availability Minutes - Total Outage Minutes) / (System Availability Minutes - Approved Outage Minutes)] x 100%}

Where:
Approved Outage Minutes = number of minutes in a month during which there are Approved Outages
System Availability Minutes = number of minutes in the applicable measurement window
Total Outage Minutes = number of minutes in the measurement window the applicable database is not available or cannot be contacted

Expected Service Level: Database availability is equal to or greater than 99.95%.

Minimum Service Level: Database availability is less than 99.8%.
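The minimal Python sketch below implements the DB01 availability formula above; the minute counts are hypothetical examples, not data from this RFP.

```python
# Minimal sketch of the DB01 availability formula; figures are hypothetical.

def database_availability(system_minutes: float, total_outage_minutes: float,
                          approved_outage_minutes: float) -> float:
    """{[(System Availability Minutes - Total Outage Minutes) /
         (System Availability Minutes - Approved Outage Minutes)] x 100%}"""
    return (system_minutes - total_outage_minutes) / \
           (system_minutes - approved_outage_minutes) * 100

# Example: a 30-day month (43,200 minutes), a 120-minute Approved Outage,
# and 140 minutes of total outage (20 minutes ran past the approved window).
print(round(database_availability(43_200, 140, 120), 3))   # 99.954 -> meets the 99.95% target
```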

3. Key Performance Indicators

ADK01: Modification Quality Key Performance Indicator

Objective: To ensure that the Respondent completes Application Development activities accurately and within the IESO's specification for each Application Development project.

Definition: This parameter is used to assess the accuracy of Work Packages for acceptance and the effectiveness of completing change requests and quality improvements. The metric tracks the percentage of Work Packages that are accepted and not returned to the Respondent for additional re-work, correction or completion. This measurement is in relation to application deliverables and excludes agreed-to outstanding issues within a given Work Package.

Data Source: Respondent system logs and any other tracking tools defined.

Calculation: [Total number of Work Packages that have been submitted and accepted by the Respondent without any rework required] divided by [Total number of Work Packages that have been submitted], multiplied by 100 = [percentage of Acceptable Work Packages during such month].

Expected Level: At least 95% of Work Packages introduced into the production environment contain no errors within the first 90 days.

Minimum Level: Less than 90% of Work Packages introduced into the production environment contain no errors within the first 90 days.
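The following minimal Python sketch illustrates the ADK01 calculation above; the Work Package counts are hypothetical examples, not data from this RFP.

```python
# Minimal sketch of the ADK01 calculation; counts are hypothetical.

def acceptable_work_package_pct(accepted_without_rework: int, submitted: int) -> float:
    """Percentage of submitted Work Packages accepted without any rework."""
    if submitted == 0:
        return 0.0
    return accepted_without_rework / submitted * 100

# Example: 19 of 20 Work Packages accepted without rework in a month.
print(acceptable_work_package_pct(19, 20))   # 95.0 -> meets the 95% expected level
```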

ADK02: Productivity Improvement Key Performance Indicator

Objective: To ensure that the Respondent continually improves its performance in the allocation of discretionary hours for development of applications for the IESO.

Definition: Quarterly increase in the availability of discretionary hours, as measured by the percentage increase in discretionary hours over the contracted resource hours.

Calculation: Quarterly productivity improvement will be measured using the number of hours spent on discretionary (non-maintenance) activities divided by the total number of contracted resource hours.

Expected Level: Quarterly productivity improvement of no less than 2% per year.

Minimum Level: Productivity improvement is less than 1% per year.

AMSK01: Modification Quality Key Performance Indicator

Objective: To ensure that IESO requests for software modifications in the application maintenance environments are implemented into production successfully.

Definition: A modification is successful when the software modification, meeting predefined requirements, is put into production, with all supporting user documentation received by the IESO, and no resulting defect occurs within 30 days of first promoting the code.

Calculation: Modification success rate = (1 - (number of modifications requiring fixes installed per month / total number of modifications applied per month)) x 100.

Data Source: Respondent system logs and any other tracking tools defined.

Expected Level: At least 95% of modifications introduced into the production environment contain no errors within the first 30 days.

Minimum Level: Less than 85% of modifications introduced into the production environment contain no errors within the first 30 days.
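The minimal Python sketch below illustrates one possible reading of the AMSK01 success rate and the ADK02 discretionary-hours measure above; the modification counts, hour totals, and the interpretation of "improvement" as a quarter-over-quarter change in the discretionary share are hypothetical assumptions, not values or definitions from this RFP.

```python
# Minimal sketch of the AMSK01 and ADK02 calculations; all figures are
# hypothetical, and the improvement comparison is an assumed interpretation.

def modification_success_rate(modifications_requiring_fixes: int,
                              modifications_applied: int) -> float:
    """AMSK01: (1 - fixes_required / total_applied) x 100."""
    if modifications_applied == 0:
        return 100.0
    return (1 - modifications_requiring_fixes / modifications_applied) * 100

def discretionary_ratio(discretionary_hours: float, contracted_hours: float) -> float:
    """ADK02 measure: hours spent on discretionary (non-maintenance)
    activities divided by total contracted resource hours, as a percentage."""
    return discretionary_hours / contracted_hours * 100

# Example: 2 of 25 monthly modifications needed a follow-up fix, and the
# discretionary share of 4,000 contracted hours rose from 320 to 332 hours
# between quarters.
print(modification_success_rate(2, 25))              # 92.0 -> below the 95% expected level
print(round(discretionary_ratio(332, 4_000)
            - discretionary_ratio(320, 4_000), 2))   # 0.3 percentage-point improvement
```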