Realizing True Data Integrity Through Automated Discrepancy Management


Telcordia is now part of Ericsson (since January 2012)

White Paper

Abstract

When service providers rely on multiple, overlapping databases, each with its own local view of the network, discrepancies are inevitable and operations suffer. Consequences can include stranded assets, revenue leakage, fraud, high provisioning fallout, poor troubleshooting, and customer frustration. This paper discusses a data integrity management strategy that benefits the entire operational continuum. Its essential starting point is a centralized inventory, or database of record; its innovative end point is an automated approach to discrepancy identification, resolution, and prevention, leading to true data integrity.

The Data Integrity Challenge

It is typical to find many systems in a service provider's Operations Support System (OSS) environment, each with its own local database of network information. After all, the most important source of information in a service provider's operation is the network itself: the network represents reality. It is therefore understandable that operations systems are equipped to maintain a network view, which explains why so many service providers end up running operational systems with overlapping data. Over the years, most service providers have implemented many OSS silos that are barely linked to each other.

Unfortunately, discrepancies are virtually inevitable when operations rely on multiple network views. In this context, a data discrepancy means any difference between two views, for example:

- A node or a port exists in the network but not in the inventory system
- A service exists in the inventory but not in the network
- A circuit or Virtual Private Network (VPN) exists both in the network and in the inventory, but their configurations do not match, e.g. they have different bandwidth parameters
- A service exists in one inventory database but not in another.

Discrepancies are the rule, not the exception, because local databases struggle to keep up with all the network changes made to support new services and customers. Keeping up requires constant monitoring to ensure an inventory picture that matches, in real time, the topology of the network, available capacity, element configurations, new circuits, and new logical routes added to support redundancy, extra capacity, or special services.
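The discrepancy types above amount to comparing two keyed views of the same network. A minimal sketch of such a comparison follows; the data shapes and function name are illustrative assumptions, not part of any real inventory product.

```python
# Illustrative sketch: flag the discrepancy types described above by
# comparing a discovered network view against an inventory view.
# Both views are modeled as {object_id: attributes} dicts; this is an
# assumed data shape, not a real product API.

def find_discrepancies(network: dict, inventory: dict) -> list:
    """Compare two {object_id: attributes} views and classify differences."""
    issues = []
    for obj_id, net_attrs in network.items():
        inv_attrs = inventory.get(obj_id)
        if inv_attrs is None:
            issues.append((obj_id, "in network, missing from inventory"))
        elif net_attrs != inv_attrs:
            diffs = {k for k in net_attrs if net_attrs[k] != inv_attrs.get(k)}
            issues.append((obj_id, f"attribute mismatch: {sorted(diffs)}"))
    for obj_id in inventory:
        if obj_id not in network:
            issues.append((obj_id, "in inventory, missing from network"))
    return issues

# A VPN with mismatched bandwidth and a circuit that exists only in inventory
network = {"node-1": {"bandwidth": "10G"}, "vpn-7": {"bandwidth": "1G"}}
inventory = {"node-1": {"bandwidth": "10G"}, "vpn-7": {"bandwidth": "2G"},
             "circuit-3": {"bandwidth": "1G"}}
for obj, problem in find_discrepancies(network, inventory):
    print(obj, "->", problem)
```

In practice each view would come from network discovery and the database of record respectively, but the classification logic remains this three-way comparison.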
The consequences of discrepancies can be enormously burdensome, including:

- Stranded assets and bandwidth
- Fraud
- Revenue leakage due to unbilled or under-billed services
- Service order fallout, requiring manual cleanup
- Expensive manual audits
- Excessive mean-time-to-repair and service restoration intervals caused by inventory problems.

No wonder typical operators today strand a staggering 20% to 30% of their network assets: too many assets are simply not properly reflected in local OSS databases.

Getting the House in Order

Resolving these issues requires multiple steps. The first critical step is to define a strategy that replaces the various OSS silos with a next-generation OSS architecture covering all respective domains. Such an architecture also consolidates all local network views into a single database of record.

Given the complexities of virtually every network environment and the business stakes involved, every service provider needs an authoritative repository of network inventory data. Fortunately, throughout our industry, most operators have implemented or are planning to implement such systems. However, a centralized database of record, while necessary, is by no means sufficient for true data integrity. Even with a centralized provisioning system, inventory inaccuracies can persist because:

- Network updates are performed manually with no checks and balances
- The as-designed inventory is not properly updated with the as-built network
- It is difficult to link network services with customer services
- Physical and logical assets are lost or unaccounted for.

Thus, data integrity management requires an automated, four-phase strategy, spanning discovery through reconciliation and based on best practices, to keep information up to date.

The Data Integrity Life Cycle

While discovering and reconciling network data helps to feed an inventory in an initial phase, its main value is in keeping network and inventory in sync on an ongoing basis. As such, four steps are critical to an ongoing data integrity life cycle: Discovery of the network, Deduction, Discrepancy Analysis against the inventory, and Reconciliation, with updates to external systems as needed.

[Figure 1: Data Integrity Life Cycle — Network → Discovery → Deduction → Discrepancy Analysis → Reconciliation → Inventory, with updates to external systems]
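The life cycle can be sketched as a repeatable loop with policy-driven reconciliation. Everything below is an illustrative assumption — the function names, the discrepancy record shape, and the policy table are invented for this sketch and do not describe any Telcordia product.

```python
# Minimal sketch of the data integrity life cycle: one pass of
# discover -> analyze -> reconcile. Discrepancies with a defined policy
# are fixed automatically; unknown types are escalated to a human queue.

def run_integrity_cycle(discover, analyze, policies, manual_queue):
    """One pass of the cycle; returns the automated resolutions applied."""
    network_view = discover()                      # Discovery & Deduction
    resolutions = []
    for discrepancy in analyze(network_view):      # Discrepancy Analysis
        policy = policies.get(discrepancy["type"])
        if policy:                                 # automated Reconciliation
            resolutions.append(policy(discrepancy))
        else:                                      # new type: expert analysis
            manual_queue.append(discrepancy)
    return resolutions

# Example policies, assumed for illustration
policies = {
    "missing_in_inventory": lambda d: f"add {d['id']} to inventory",
    "missing_in_network":   lambda d: f"delete {d['id']} from inventory",
}

queue = []
done = run_integrity_cycle(
    discover=lambda: ["node-9"],   # stand-in for real network discovery
    analyze=lambda view: [{"type": "missing_in_inventory", "id": "node-9"},
                          {"type": "qos_mismatch", "id": "vpn-7"}],
    policies=policies,
    manual_queue=queue,
)
print(done)    # automated fixes applied this cycle
print(queue)   # discrepancies escalated for manual analysis
```

In production such a pass would be scheduled periodically, so that the inventory tracks the network on an ongoing basis rather than only at initial load.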

Discovery and Deduction involves discovering (or importing) the current state of a carrier network, across technologies and architectures, vendor equipment types, and configured services, and then using that data to extrapolate (deduce) the supported services.

Discrepancy Analysis is performed by comparing discovered or imported network data against the database of record. This process should be flexible with regard to the depth and periodicity of the analysis. In addition, a good discrepancy analysis provides the user with suggestions for resolving the discrepancies through appropriate automated or manual tasks.

Reconciliation finally resolves the identified discrepancies, either automatically, by applying defined reconciliation policies, or manually, by selecting one of the suggested operations. Human experts will always play a role in this phase, as new types of discrepancies can always occur that require special analysis and manual action. However, for the many discrepancies that occur periodically, a system that can automate fixes by invoking policies based on best practices is a boon to operational efficiency. What's more, systems should give administrators the flexibility to incorporate best practices as needed, by making it easy to enter policies for reconciling new types of discrepancies automatically. Such automated resolutions can include, but are not limited to:

- Ignoring a discrepancy because it is insignificant.
- Adding network data to the inventory system if an object is found in the network but not in inventory. For example, a node that was found in the network but is missing from inventory might be added automatically to the inventory system.
- Updating the inventory system if an object in the network and the inventory is mismatched, e.g. with Quality of Service (QoS) parameter mismatches.
- Updating an external system with the details of a discrepancy.
  Examples could be a trouble ticketing system that the service provider uses to track or manage discrepancies, or an activation system that updates the network, rather than the inventory system, in order to resolve a discrepancy. A good example of the latter scenario is a bandwidth discrepancy resulting from a manual update in the network; in this context, this approach would help prevent fraud and reduce lost revenue.
- Deleting objects from the inventory system, if they are found in the inventory but not in the network, to free up the corresponding stranded assets.

The Need for Automation

Automation plays a key role across all the phases that lead to data integrity. The synchronization process between network and inventory should be scheduled periodically and run automatically to maintain a consistent level of accuracy within the database of record. Also, not only should discrepancies be identified automatically, but their analysis and resolution should require minimal effort as well. In other words, a solution must allow users to define the reconciliation policies to be used to resolve specific discrepancies. Manual interaction, while needed to deal with unusual discrepancies, should be kept to a minimum.

Reconciliation Is Key

We have found that, for the most part, operators and vendors have implemented or defined approaches to automate the first two steps in the data integrity life cycle, but many assume that reconciliation must still be an onerous, manual task. This assumption is in part the result of

having many discovery products in the market, but very few that actually include automated reconciliation. This lack of procedural coherence disrupts the overall data integrity life cycle and makes it more costly than it should be.

A Vertical Approach

Most service providers have specific pain points within their networks; in many instances, these pain points lie in the core domains of their networks. Best practices suggest that service providers implement data integrity with a vertical, domain-based approach: critical domains are both discovered and reconciled, which contributes immediately to the Return on Investment (ROI). This process can then be repeated for additional domains. The vertical approach differs from a horizontal approach, which would discover the entire network but deal with the discovered network data in a second phase or separate engagement. The horizontal approach obscures the fact that reconciliation, not discovery, is the value proposition for service providers. What's more, delaying reconciliation hides its costs and can negatively impact the ROI.

Conclusion

Accurate network views, wherever they reside, are essential to every operations workflow. The only way to achieve that accuracy is to centralize network data in a system that can automate the data integrity management process and inform any other systems with local network views. Data management consolidation and automation, all the way through reconciliation, avoids provisioning fallout, reclaims stranded assets, reduces overbuilding and rework, prevents revenue leakage, significantly improves troubleshooting, and speeds mean-time-to-repair. In other words, the benefits of data integrity management span the operational continuum. However, resolving discrepancies is not the end goal. In fact, the suggested approach also helps to identify the true reasons for the gaps between the as-planned and as-built views.
Such reasons may not only be technical in nature but may also involve organizational structures and processes. Thus, while a data integrity management solution must provide tools to assess data integrity trends, it also requires the service provider's willingness to adjust processes, if needed, to reduce the volume of discrepancies over time.

For more information about Telcordia, contact your local account executive, or reach us at:

+1 800.521.2673 (U.S. and Canada)
+44 (0)1276 515515 (Europe)
+1 732.699.5800 (all other countries)
info@telcordia.com
www.telcordia.com

Copyright 2009 Telcordia Technologies, Inc. All rights reserved. MC-COR-WP-019