Hype Cycle for Business Continuity Management and IT Disaster Recovery Management, 2014


Published: 22 July 2014

Analyst(s): John P Morency, Roberta J. Witty

Increasing deployments of cloud-based solutions and the adoption of the ISO 22301:2012 BCM standard are redefining the technology and process foundations of continuity and recovery management. Use this research to determine what can best support the specific requirements of your organization.

Table of Contents

Strategic Planning Assumption
Analysis
    What You Need to Know
    The Hype Cycle
    The Priority Matrix
    Off the Hype Cycle
    On the Rise
        Copy Data Management
        Data Dependency Mapping Technology
        Recovery Assurance
    At the Peak
        Cloud-Based Backup Services
        Disaster Recovery Service-Level Management
        IT DRM Exercising
    Sliding Into the Trough
        Mobile Satellite Services
        Cloud-Based Disaster Recovery Services
        Cloud Storage Gateway

        IT Service Failover Automation for DR
        IT Vendor Risk Management
        Virtual Machine Backup and Recovery
        IT Service Dependency Mapping
        Public Cloud Storage
        Hazard Risk Analysis and Communication Services
        Humanitarian Disaster Relief
        Workforce Resilience
        Risk Assessment for BCM
    Climbing the Slope
        BCM Planning Software
        Crisis/Incident Management Software
        Emergency/Mass Notification Services
        Ka Band Satellite Communications
        Appliance-Based Replication
        BCM Methodologies, Standards and Frameworks
        Hosted Virtual Desktops
        Data Deduplication
        Continuous Data Protection
        Load Forecasting
        Lights-Out Recovery Operations Management
        Server Repurposing
        IT DRM Insourcing
        WAN Optimization Services
        Bare-Metal Restore
        Outage Management Systems
    Entering the Plateau
        Server-Based Replication
        Continuity
        WAN Optimization Controllers
        Work Area Recovery
Appendixes
    Hype Cycle Phases, Benefit Ratings and Maturity Levels
Gartner Recommended Reading

List of Tables
    Table 1. BCM Standards Adoption
    Table 2. Hype Cycle Phases
    Table 3. Benefit Ratings
    Table 4. Maturity Levels

List of Figures
    Figure 1. Hype Cycle for Business Continuity Management and IT Disaster Recovery Management, 2014
    Figure 2. Priority Matrix for Business Continuity Management and IT Disaster Recovery Management, 2014
    Figure 3. Hype Cycle for Business Continuity Management and IT Disaster Recovery Management
    Figure 4. Plans for BCM Certification by Standard

Strategic Planning Assumption

By 2019, BCM will be widely used to support strategic and operational business activities.

Analysis

What You Need to Know

Business continuity management (BCM) is the risk management practice of coordinating, facilitating and executing activities that ensure an enterprise's effectiveness in:

- Identifying operational risks that can lead to business disruptions before they occur
- Implementing mitigation controls, disaster recovery strategies and recovery plans according to the organization's recovery requirements
- Responding to disruptive events (natural and man-made; accidental and intentional) in a manner that demonstrates command and control of crisis event responses by your organization
- Recovering and restoring mission-critical business operations after a disruptive event turns into a disaster
- Conducting a postmortem to improve future recovery operations

IT disaster recovery management (IT DRM) supports BCM through its focus on the recovery of IT services. Due to the increase in disasters around the world, the importance of having an effective BCM program is growing. 1 Formalization of BCM programs is gathering momentum, especially following the introduction of the ISO 22301:2012 standard from the International Organization for Standardization (ISO). 2, 3

For many enterprises, no single regulation, standard or framework defines the complete set of BCM requirements they need to meet, because they operate in multiple countries, each of which has its own approach to BCM. This is especially true for financial services, where each central bank has its own standard (see the Business Continuity Institute [BCI] document "BCM Legislations, Regulations, Standards and Good Practice" for a list).

Finally, there will be wide diversity in the strategic nature of BCM across enterprises and governments. Those that see BCM as an operational risk management component, or that must prove to their customers that they have an effective program meeting the customers' recovery needs, will view BCM as strategic; those without these drivers will continue to treat BCM as a compliance/checklist activity and may therefore fail in the face of a disaster. Gartner believes that it will take at least another five years to reach a state in which BCM is widely used to support strategic and operational activities across the organization (that is, by 2019, BCM will be widely used to support strategic and operational business activities).

The Hype Cycle

This Hype Cycle will aid BCM and IT DRM leaders in identifying and implementing the processes and technologies that can contribute most to improving BCM and IT DRM program maturity.
Today, BCM is a mature management discipline: many standards and processes, as well as many of the technologies used to respond to, recover from and restore operations after disasters, are well-defined. Some BCM disciplines, such as crisis/incident management, IT DRM and some aspects of business recovery, are more mature than others, usually because the organizations practicing them have experienced a disaster and felt the pain of a failed recovery. For example, Hurricane Sandy presented many challenges to workforce resilience (a component of business recovery), and organizations have since strengthened those aspects of their overall BCM programs.

However, implementation still lags, and varies, across all industries, even those required to support effective BCM programs by one or more regulatory requirements. The main reason for the implementation lag continues to be the lack of executive interest in, or focus on, BCM. This lack of interest results in no executive sponsorship or program governance to provide people and financial resources adequate to the recovery requirements of the organization. This is especially important in a changing business and IT environment: every change requires a review of BCM strategy and recovery plans to ensure that recovery practices match current production practices. Without this constant review, your recovery may fail due to out-of-date recovery plans, procedures and supporting technologies. Hence the need for strong, ongoing executive sponsorship that supports an enterprisewide BCM governance structure and program management office.

At the beginning of 2014, the average client BCM and IT DRM maturity score (based on the Gartner maturity self-assessment tool ITScore for Business Continuity Management; see "ITScore for Business Continuity Management") was just below 2.5, on a scale of 1 to 5, where 1 is the least mature and 5 is the most mature. Maturity improvement barriers include:

- Business unit and business operation leaders' resistance to taking BCM ownership, which in many cases has resulted in little to no focus on business process recovery itself. More often than not, the emphasis is placed on IT service recovery alone.
- "Out of sight/out of mind" behavior. Organizations tend to rush to fix recovery gaps after a large-scale disaster, but after nine to 12 months, memories fade and other events take precedence, moving continuity and recovery planning and implementation to the back burner. Also, if the disaster isn't near one of your operating locations, it is atypical for management to ask, "How would we recover from a similar event?"
- Change volume. Each year typically brings at least one or two new business products, processes, locations, third parties, customer demands for recovery and technologies that require revisiting recovery strategies and procedures. This can be a daunting task in multiproduct or multilocation organizations.

One trend that we thought could help improve maturity for some organizations is the BCM program formalization being facilitated through the U.S. PS-Prep program for organization-level certification. However, certification under this program has not taken hold (only eight certifications have been issued since its inception), and the delay in FEMA endorsing ISO 22301:2012 as one of the standards that an organization can leverage to achieve certification will only ensure low adoption for certification under PS-Prep.
Therefore, we have retired the PS-Prep technology profile and folded the program into the profile "BCM Methodologies, Standards and Frameworks."

For its part, IT DRM is converging with IT service availability management, primarily because of increased business requirements to reduce the business operations impact of unplanned downtime, regardless of whether the root cause is a major disaster or an internal IT service disruption that does not result in a data center shutdown. Effectively integrating IT DRM and IT service availability management into a more holistic discipline, IT service continuity management (IT SCM), will require significant technology and process management changes, and these changes will take time to implement fully. As a result, many organizations are finding that a full transition to IT SCM takes from 18 to as much as 48 months, given the technology, process and operations maturity improvements that need to take place. There are several reasons why this is the case:

- IT DRM remains labor-intensive for many organizations, especially in the area of recovery plan exercising. This labor intensity will become a significant barrier to scaling IT DRM program coverage as more in-scope business processes, applications and data are added. Improved recovery services and management automation are critical to overcoming this barrier.
- The deployment of several technologies (including virtual machine mobility and recovery, IT service dependency mapping, data dependency mapping, disaster recovery assurance and automated lights-out recovery operations management) is increasing as related technology maturity and user adoption rates improve. However, with the exception of virtual machine mobility and recovery, their implementation rates, as a percentage of the total number of production data centers, remain relatively small (less than 10%).
- In general, the investment required to fully implement an IT infrastructure capable of supporting and sustaining IT service continuity for all production applications and data (not just mission-critical) is significant. It will not be implemented as a single project, but as a sequence of phased projects, each with a bounded completion time frame within that 18- to 48-month period. Gartner recommends that measurable implementation costs, project deliverables, benefits and a discrete set of reportable success metrics be defined to justify the initiation of each phase. For many clients, this phased approach has become preferable to justifying a one-shot, high-priced disaster-recovery-only solution.

At the same time, new generations of cloud-based recovery service offerings, often referred to as recovery as a service (RaaS), as well as managed backup cloud services, have the potential to significantly improve IT DRM efficiency, effectiveness and economics. While market uptake of RaaS has continued to increase during the past year and the number of related providers has grown to more than 150, Gartner believes RaaS is still at a fairly early stage of delivery maturity.
Therefore, clients should not assume that the use of cloud-based recovery services for failing over some or all of the corporate data center infrastructure will largely subsume traditional IT DRM approaches, at least for the next five years. Hybrid pilots and initial production implementations that combine cloud- and non-cloud-based recovery and failover will become increasingly common for different application recovery tiers, especially for small and midsize enterprises, over the next three to five years. Despite the potential of public cloud services to transform the data center into a logical entity, a brick-and-mortar production data center will remain the norm for most organizations for at least the next five years. As long as the production data center is a physical entity, some form of in-house disaster recovery infrastructure and management will be required.

Because the transition from IT DRM to the more encompassing IT SCM has already begun for many organizations, the IT DRM portion of the BCM and IT DRM Hype Cycle report will be completely replaced by the IT SCM Hype Cycle report, beginning in 2015. In 2014, both reports are being published in order to facilitate a smooth content transition between key subject area themes. For 2014, the BCM and IT DRM Hype Cycle changes were made to ensure that the major technology, service and methodology changes that occurred during the past year are properly documented and positioned correctly on the Hype Cycle curve.

Forward-Moving Technologies of Particular Note

Server virtualization is continuing to transform the manner in which disaster recovery is managed, especially for Microsoft Windows and Linux-based computing platforms. The replication, testing and restart of virtual machines at remote recovery facilities are increasingly being adopted by Gartner clients as less costly and more flexible alternatives to traditional subscription recovery services. As a reflection of this trend, Gartner notes three technologies, all specific to virtual machine (VM) recovery and restart, as forward-moving technologies in 2014:

- IT Service Failover Automation for DR advanced from postpeak 10% in 2013 to postpeak 35% in 2014, because of the increased deployment of tools that support both VM image replication and the restart of collections of VMs supporting one or more end-to-end application services at a secondary recovery site.
- Recovery Assurance also advanced rapidly, from post-trigger 30% in 2013 to the trigger-peak midpoint, primarily because it:
    - Supports improved recovery time predictability through the setup and activation of production application test beds that can be used for more frequent recovery testing of virtual-machine-based applications at a secondary site, on a monthly, weekly or even overnight basis.
    - Brings one of the key benefits of cloud-based recovery services (that is, the means to exercise application recoverability more frequently) to the enterprise network without requiring any form of private or public cloud service as a technical prerequisite.
- Cloud-Based Disaster Recovery Services progressed rapidly, from postpeak 15% in 2013 to postpeak 35%, due to the following:
    - A significantly increased number of production implementations (Gartner estimates that there are approximately 14,000 today), as well as the fact that nearly every major colocation, managed hosting and disaster recovery service provider currently offers one or more cloud-based recovery services.
    - Less complex and more compelling service pricing (that is, more clearly defined service tiers and more standardized pricing policy for service bursting).
    - Improvement in the quality and maturity of provider operations controls, especially for production data privacy management.

Added Technologies

- Cloud Backup: Cloud-based backup services aim to replace or augment traditional on-premises backup. In 2013, cloud-based VM recovery and cloud-based backup were discussed in the same technology profile (Cloud-Based Recovery Services). Because of its very different service focus, Cloud Backup now has its own separate profile.
- Copy Data Management: Copy data management, a rapidly evolving technology, facilitates the use of one copy of data to support backup, archive, replication and test/development, thereby dramatically reducing the need for multiple unmanaged copies of data.
- Hazard Risk Analysis and Communication Services: These services evaluate worldwide incidents that threaten the health and safety of citizens and the workforce, cause damage to critical physical and technology infrastructure, or cause a disruption to normal business operations.
- IT Vendor Risk Management (VRM): VRM products and processes are emerging to enable the assessment and management of risks from third-party service providers and IT suppliers. VRM is an important element of enterprise and IT risk management, and is mandated by many privacy and data breach notification regulations, such as the Gramm-Leach-Bliley Act in the U.S. and the Federal Data Protection Act (Bundesdatenschutzgesetz, BDSG) in Germany.
- Load Forecasting: Load forecasting is a utility application category that minimizes risk by predicting future consumption of commodities transmitted or delivered by a utility.

Backward-Moving Profiles of Particular Note

None

Obsolete-Before-Plateau Technologies

- Data Dependency Mapping: As storage vendors' data protection and integrity problem detection capabilities improve, this product category will likely become obsolete well before it reaches the Plateau of Productivity.
- Outage Management Systems (OMSs): This technology is predicted to become obsolete before maturity, because the new breed of distribution management systems (DMSs) will eventually incorporate OMS functionality as we know it. DMSs will include OMSs within real-time advanced distribution supervisory control and data acquisition (SCADA), which will also include automated restoration and self-healing smart grid functionality.

Figure 1. Hype Cycle for Business Continuity Management and IT Disaster Recovery Management, 2014

[Chart: the report's 38 profiles plotted along the Hype Cycle phases (Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, Plateau of Productivity), with expectations on the vertical axis and time on the horizontal axis. The legend indicates when the plateau will be reached: less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, or obsolete before plateau. As of July 2014.]

Source: Gartner (July 2014)

The Priority Matrix

The BCM Priority Matrix shows the business benefit ratings of the 38 continuity and recovery technologies on the 2014 Hype Cycle. The Priority Matrix maps the benefit rating of a process or technology against the length of time that Gartner expects it will take to reach the Plateau of Productivity. This mapping is displayed in an easy-to-read grid format that answers these questions:

- How much value will an enterprise get from a process or technology in its BCM program?
- When will the process or technology be mature enough to provide this value?
- In the case of a process, when will most enterprises surpass the obstacles that inhibit their ability to achieve mature BCM programs?

This alternative perspective helps users determine how to prioritize their BCM investments. In general, companies should begin in the upper-left quadrant of the chart, where the processes and technologies have the most dramatic impact on ensuring a strong ability to recover and restore business and IT operations after a business disruption (these processes and technologies are available now or will be in the near term). Organizations should continue to evaluate alternatives that are high impact but further out on the time scale, as well as those that have less impact but are closer in time.

Many of the technologies designated high on the Priority Matrix are process-oriented. Therefore, the most important piece of advice that Gartner can provide is to look at the BCM methodology being used in the BCM program, so that consistency of program implementation is achieved across all lines of business. Data deduplication is designated transformational because it can reduce disk storage costs by a factor of 15 to 25 over nondeduplication recovery solutions.
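To illustrate why deduplication earns that rating for recovery storage: repeated backups of largely unchanged data collapse to a single physical copy of each unique chunk. The following is a minimal sketch of the idea (fixed-size chunking with SHA-256 content addressing), not any vendor's implementation:

```python
import hashlib
import os

class DedupStore:
    """Toy content-addressable store: each unique chunk is stored once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}          # SHA-256 digest -> chunk bytes
        self.logical_bytes = 0    # total bytes clients have written

    def write(self, data):
        """Store data; return the list of chunk digests (the 'recipe')."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)   # duplicate chunks stored once
            recipe.append(digest)
        self.logical_bytes += len(data)
        return recipe

    def read(self, recipe):
        return b"".join(self.chunks[d] for d in recipe)

    def dedup_ratio(self):
        physical = sum(len(c) for c in self.chunks.values())
        return self.logical_bytes / physical

# Ten nightly "backups" of a mostly unchanged 40KB dataset:
base = os.urandom(40960)                          # 10 chunks of stable data
store = DedupStore()
for night in range(10):
    recipe = store.write(base + bytes([night]))   # one changed byte per night
assert store.read(recipe) == base + bytes([9])
print(round(store.dedup_ratio(), 1))              # close to 10x for 10 backups
```

With longer retention (say, weeks of nightlies plus weeklies), the logical-to-physical ratio climbs into the range the report cites.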

Figure 2. Priority Matrix for Business Continuity Management and IT Disaster Recovery Management, 2014

[Grid: benefit rating (transformational, high, moderate, low) versus years to mainstream adoption (less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years). Data Deduplication is the sole transformational entry; the remaining profiles are distributed across the high and moderate rows. As of July 2014.]

Source: Gartner (July 2014)

Off the Hype Cycle

The following technologies were removed from this year's Hype Cycle because they completed the transition to the Plateau of Productivity, became technically obsolete, stalled in their progress, or fit better with a different Hype Cycle:

- Business Impact Analysis (Plateau of Productivity transition completed)
- Continuous Availability Architectures (moved to the IT Service Continuity Hype Cycle)
- Data Restoration Services (Plateau of Productivity transition completed)
- Distributed Virtual Tape (Plateau of Productivity transition completed)
- Long-Distance Live VM Migration (moved to the IT Service Continuity Hype Cycle)
- Mobile Service Level Management Software (Plateau of Productivity transition completed)
- PS-Prep (U.S. PL , Title IX; significantly stalled progress on broad-based implementation)
- Test Lab Provisioning (Plateau of Productivity transition completed)

On the Rise

Copy Data Management

Analysis By: Pushan Rinnen

Definition: Copy data management refers to products that use a live clone to consolidate, reduce and centrally manage multiple physical copies of production data that are usually generated by different software tools and reside in separate storage locations. Those copies could be snapshots, clones or replicas in primary storage arrays, and backup and remote replicas in various secondary storage (disk or tape).

Position and Adoption Speed Justification: Many organizations have become acutely aware of the increasing cost of managing copy data, whose capacity is often significantly higher than that of production storage, due to multiple copies for different use cases and loosely managed retention periods. IT organizations have historically used different storage and software products to deliver backup, archive, replication, test/development and other data-intensive services, with very little control or management across these services. The result is overinvestment in storage capacity, software licenses and the operational expenditure associated with managing excessive storage and software.
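The "live clone" mechanism such products rely on can be pictured as pointer-based copy-on-write: a clone shares the source's blocks and stores only its own changes. A simplified sketch of that idea (illustrative only, and assuming the source volume stays frozen while clones are active):

```python
class CowVolume:
    """Toy pointer-based clone: blocks are shared until overwritten."""

    def __init__(self, blocks=None, parent=None):
        self.blocks = blocks or {}     # block number -> data (base volume only)
        self.parent = parent           # clone reads fall through to the source
        self.overrides = {}            # blocks this clone has rewritten

    def read(self, n):
        if self.parent is not None:
            return self.overrides.get(n, self.parent.read(n))
        return self.blocks[n]

    def write(self, n, data):
        if self.parent is not None:
            self.overrides[n] = data   # copy-on-write: store only the delta
        else:
            self.blocks[n] = data

    def clone(self):
        return CowVolume(parent=self)

# A test/dev copy of a 1,000-block production volume stores only one changed block.
prod = CowVolume(blocks={n: b"prod" for n in range(1000)})
dev = prod.clone()
dev.write(7, b"test")
assert dev.read(7) == b"test" and prod.read(7) == b"prod"
assert len(dev.overrides) == 1    # near-zero extra storage for the full copy
```

This is why one managed golden copy can stand in for separate full copies held by backup, archive and test/development tools.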
Copy data management facilitates the use of one copy of data for all of these functions, thereby dramatically reducing the need for multiple unmanaged copies and enabling organizations to cut the costs associated with multiple disparate software licenses and storage islands. In the past two years, the concept of copy data management has gathered some momentum as a few vendors have started to use the same term to describe their existing products' capabilities, although very few vendors have truly centralized copy data management products. From a technology perspective, the techniques used by copy data management tools to reduce storage are not new; they include pointer-based virtual snapshots and clones, deduplication and compression, as well as thin provisioning. What is new is that these products effectively separate copy data from production data, so that production data suffers minimal performance impact when copy data activities such as backup, replication or testing/development are performed. Moreover, they help organizations manage traditionally disparate copies more efficiently. The main challenge faced by copy data management products is that they have to resonate with higher-level executives, because adopting such products is usually a strategic move that will disrupt the existing IT silos.

User Advice: Copy data management is still an emerging concept, with very few qualifying products in the market that can consolidate many types of copies. IT should look at copy data management as part of a backup modernization effort, or when managing multiple copies of testing/development databases has become costly, overwhelming or a bottleneck. Copy data management is also useful for organizations looking for active access to secondary data sources for reporting or analytics, due to its separation from the production environment.

Business Impact: The business impact of copy data management is threefold:

- It enables organizations to rethink and redesign their strategy for managing secondary copies of data to achieve operational efficiency.
- It reduces the storage and management costs associated with the various copies.
- It enables organizations to better leverage their secondary data for reporting, analytics and other non-mission-critical activities going forward.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Actifio; Delphix

Data Dependency Mapping Technology

Analysis By: John P Morency

Definition: Data dependency mapping products are software products that determine and report on the likelihood of achieving specified recovery targets, based on analyzing and correlating data from applications, databases, clusters, OSs, virtual systems, and networking and storage replication mechanisms. These products operate on direct-attached storage (DAS), storage area network (SAN)-connected storage and network-attached storage (NAS) at both primary production and secondary recovery data centers.
Position and Adoption Speed Justification: Before these solutions became available, there were only two ways to determine whether a particular recovery time objective (RTO) could be achieved: through data restoration testing, or through operations failovers conducted during a live recovery test exercise. A frequent outcome of the live test was the discovery of missing, unsynchronized or corrupt data that had not been detected during normal backup, asynchronous replication or synchronous mirroring, resulting in unplanned data losses that could disrupt the operation of one or more business processes if a production operations recovery occurred.
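In essence, these products automate checks that would otherwise surface only during a live exercise. A hedged sketch of the kind of rule they evaluate, comparing each dataset's last verified replica against its recovery point objective (all names are illustrative, not any vendor's API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Dataset:
    name: str
    rpo: timedelta                  # maximum tolerable data loss window
    last_consistent_replica: datetime
    replica_integrity_ok: bool      # did the replica pass consistency checks?

def recovery_gaps(datasets, now):
    """Flag datasets whose replicas could not satisfy their recovery targets."""
    gaps = []
    for ds in datasets:
        if not ds.replica_integrity_ok:
            gaps.append((ds.name, "replica failed integrity validation"))
        elif now - ds.last_consistent_replica > ds.rpo:
            gaps.append((ds.name, "replica lag exceeds RPO"))
    return gaps

now = datetime(2014, 7, 22, 12, 0)
report = recovery_gaps([
    Dataset("orders-db", timedelta(minutes=15), now - timedelta(minutes=5), True),
    Dataset("crm-db", timedelta(minutes=15), now - timedelta(hours=2), True),
    Dataset("hr-db", timedelta(hours=4), now - timedelta(hours=1), False),
], now)
assert [name for name, _ in report] == ["crm-db", "hr-db"]
```

Real products correlate far richer signals (application, cluster and replication state), but the value proposition is the same: surface these gaps continuously rather than during a failover.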

Because of the high risk and cost incurred in discovering and remediating potential data loss, a new generation of data assurance technologies was developed. These newer products support more granular knowledge of application-specific data dependencies, as well as the identification of content inconsistencies that result from application software bugs or misapplied changes; such changes may be attributable to human error or to the complex and dynamic nature of IT operations. One approach taken by various vendors uses well-defined storage management problem signatures, supported by industry-standard storage and data management software, in combination with passive monitoring of local and remote storage traffic (through software-resident agents). This traffic monitoring detects control and user data anomalies and inconsistencies in a more timely way, notifies storage operations staff when an issue occurs, and projects the negative RTO effect through onboard analytics. The automation of the verification process, and an attempt to quantify the impact on the business, are the key deliverables of these solutions.

Based on the number of direct Gartner client inquiries, this market does not appear to be evolving as quickly as we originally expected. Possible reasons for the slower-than-expected adoption include lack of knowledge about data dependency products, lack of the resources needed to deploy the tools, and difficulty justifying investment in what may be perceived as a luxury technology. Another contributing factor is that data dependency mapping products are still offered primarily on a stand-alone basis, rather than bundled into larger storage management or backup solutions.
For these reasons, Gartner positions data dependency mapping at the trigger-peak midpoint in 2014. In addition, Gartner believes that, as the self-management and self-healing capabilities of storage vendors improve, this product category will likely become obsolete well before it reaches the Plateau of Productivity.

User Advice: Preproduction pilots are a viable option that may be worth pursuing. In some cases, vendors offer very low-cost pilot projects in which the vendor software autodiscovers and reports potential problems that may have been unknown to storage and server operations staff. Validating the solution in the actual environment reduces the risk that it will not meet the organization's needs, and a positive pilot outcome can be used to justify the purchase. Some vendor products go one step further through the use of proprietary, multivendor data checkpoint and replication technology, supporting a much richer degree of data integrity and consistency assurance across primary production and secondary recovery data centers. In addition, data checkpoint, replication and validation support for application platform-specific software, such as Microsoft Exchange, BlackBerry Enterprise Server and Microsoft SharePoint, constitutes an additional product alternative for users whose immediate recovery needs are more product-specific.

Business Impact: The primary benefits of this technology are improved predictability and efficiency in achieving and sustaining required recovery times, especially for mission-critical Tier 1 and Tier 2 applications. This is especially important for high-volume transaction applications, and for low- to medium-volume transaction applications for which the average revenue per transaction is high. In recent years, the frequency of live recovery exercises has become much more limited in many organizations, making such tools even more valuable for both the early detection and the long-term avoidance of data loss.

Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: 21st Century Software; Continuity Software; Egenera; EMC; FalconStor Software; InMage; NetApp; Sanovi; Symantec; Unitrends; Zerto

Recovery Assurance

Analysis By: John P Morency; Robert Naegle

Definition: Recovery assurance products reduce the cost and complexity of recovery exercising, increase exercise execution flexibility, and help ensure that application failover is both successful and sustainable. These products configure and manage sets of virtual servers for the purpose of orchestrating disaster recovery plan exercising.

Position and Adoption Speed Justification: Effective recovery plan testing is a multidisciplinary and complex challenge that spans multiple types of systems, applications, databases and even organizations. The most important challenge is to ensure that postrecovery IT operations are as stable as predisaster IT operations, to the extent possible. IT disaster recovery management (DRM) annual exercise budget allocations can range from $20,000 to more than $150,000; IT DRM costs include hardware, software, personnel, travel expenses, data center usage, client desktops and peripherals, help desk, and voice and data networks. Most organizations want to reduce recovery exercise time and cost. The real question is how best to do so: by reducing the frequency of test exercises, by increasing the use of exercise automation technology, or by some combination of the two. Recovery assurance is one technology-based alternative. The primary objective of recovery assurance products is to reduce the cost, and improve the predictability, of meeting critical recovery targets.
Through the use of device driver and network address remapping functionality, recovery assurance products create isolated testing environments, also known as test "sandboxes." Test sandboxes are similar in structure to public cloud-based customer recovery configurations. These virtual test configurations simulate the production environment in which the business service runs and include all the virtual machines and related production data that support the service. After the virtual test configuration is activated at the recovery site, IT administrators can initiate automated failover of the in-scope applications from the primary site as often as needed. They can carry out recovery tests of individual IT services on a weekly or daily basis, rather than only yearly or quarterly. Reporting capabilities allow IT administrators to be alerted to potential problems based on either standard or user-defined thresholds. Recovery assurance is a relatively new recovery management category, with only a handful of supporting products and vendors today.
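The threshold-based reporting just described amounts to comparing measured failover results against RTO/RPO targets. The following is an illustrative sketch only, not any vendor's actual API; the VM names, figures and function names are invented for the example.

```python
# Illustrative sketch of threshold-based exercise reporting: flag any VM in a
# sandbox failover test whose measured results miss the RTO/RPO thresholds.
from dataclasses import dataclass

@dataclass
class FailoverResult:
    vm_name: str
    boot_seconds: float      # measured time to activate the replica VM
    data_lag_seconds: float  # replication lag observed at failover time

def evaluate_exercise(results, rto_seconds, rpo_seconds):
    """Return alert strings for every RTO or RPO threshold violation."""
    alerts = []
    for r in results:
        if r.boot_seconds > rto_seconds:
            alerts.append(f"{r.vm_name}: RTO miss ({r.boot_seconds:.0f}s > {rto_seconds}s)")
        if r.data_lag_seconds > rpo_seconds:
            alerts.append(f"{r.vm_name}: RPO miss ({r.data_lag_seconds:.0f}s > {rpo_seconds}s)")
    return alerts

results = [
    FailoverResult("erp-db", boot_seconds=540, data_lag_seconds=30),
    FailoverResult("erp-app", boot_seconds=120, data_lag_seconds=10),
]
print(evaluate_exercise(results, rto_seconds=300, rpo_seconds=60))
# → ['erp-db: RTO miss (540s > 300s)']
```

In practice, vendor products collect these measurements automatically during each sandbox exercise and surface the violations through their own reporting consoles.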

However, as the deployment scope of virtual machine recovery increases, so too will the need for recovery assurance functionality to more effectively manage "guaranteed" recovery times, as well as to reduce recovery plan testing time and cost. Because of the broader use of these products in both private and public clouds over the past year, as well as their demonstrated effectiveness in several customer testimonials, Gartner has raised the Hype Cycle position of recovery assurance to the trigger-peak midpoint.

User Advice: Recovery assurance products have the potential to improve recoverability as well as reduce the cost and complexity of recovery plan exercising. However, these products are still somewhat unproven in large enterprise environments. Today, organizations should consider initiating limited pilots intended to evaluate the use of recovery assurance products for smaller, non-mission-critical business services. Over time, as these products broaden their support to virtual machine environments beyond VMware and market experience with their usage grows, applying this technology to the recovery of a broader set of mission-critical applications will become more viable.

Business Impact: Companies will experience increased staffing and logistics costs to support recovery testing, and will struggle to maintain consistency between primary and secondary configurations. Recovery assurance products are targeted squarely at reducing testing costs and improving recoverability. Although still somewhat unproven, there is a clear potential benefit in augmenting more traditional disaster recovery exercising with recovery assurance technology. The related business impact is moderate.
Benefit Rating: Moderate

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Actifio; CloudVelocity; Continuity Software; Egenera; NetIQ; Sanovi; Sios Technology; Unitrends; VMware; Zerto

Recommended Reading: "Cool Vendors in Business Continuity Management and IT Disaster Recovery Management, 2014"

At the Peak

Cloud-Based Backup Services

Analysis By: Pushan Rinnen

Definition: Also known as backup as a service (BaaS), cloud-based backup services aim to replace or augment traditional on-premises backup with three main deployment models: (1) using local host agents to send backup data directly to the cloud data centers; (2) backing up first to a local device, which in turn sends backup data to the cloud either as another replica or as a lower tier; and (3) backing up data that is generated in the cloud.

Position and Adoption Speed Justification: Network links for the Internet and WANs have become faster and cheaper in the past few years, enabling more data to be transmitted to the cloud within the same time period, or the same amount of data to be transferred faster. Improved network throughput is a key factor for cloud backup adoption, as we are starting to see 1 Gbps or even 10 Gbps links become available for some community clouds or in areas located near public cloud data centers.

The first deployment model is traditional online backup, increasingly used for endpoint backup as the enterprise workforce becomes more mobile. Small businesses and small branch offices with limited amounts of data also leverage online backup to eliminate the hassle of managing local backup. The downsides of online backup are a constrained backup window and slow online recovery speeds.

The second deployment model is also called hybrid cloud backup; the local device offers much faster backup and restore because it uses local networks instead of the Internet or WAN access. It therefore can scale to a much larger server environment than the first model. All successful cloud server backup providers offer a local device. The more innovative solutions also offer integrated cloud backup and cloud disaster recovery, where the backup copies stored in the cloud can be used to boot up standby virtual machines in the cloud for fast failover.

The third deployment model is still nascent; cloud-native applications such as Google Apps and salesforce.com are just starting to be used by enterprises, and some enterprises do not realize that accidental or malicious user deletion of data stored in the cloud either cannot be recovered by the cloud application provider or can be restored only at a high cost.
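To make the bandwidth constraint concrete, a back-of-envelope calculation shows how long a given backup set takes to push over a WAN link. This is a rough sketch; the 70% effective-utilization factor and the link speeds are illustrative assumptions, not measured figures.

```python
# Back-of-envelope estimate: hours needed to transfer a backup set over a
# WAN link, assuming a fractional effective utilization of the raw link rate.
def transfer_hours(data_gb, link_mbps, efficiency=0.7):
    """Hours to push data_gb gigabytes over a link_mbps link."""
    bits = data_gb * 8 * 1e9                      # payload in bits
    effective_bps = link_mbps * 1e6 * efficiency  # usable throughput
    return bits / effective_bps / 3600

# A 500 GB incremental over a 100 Mbps link at 70% effective throughput:
print(round(transfer_hours(500, 100), 1))   # → 15.9 (hours)
# The same incremental over a 1 Gbps link:
print(round(transfer_hours(500, 1000), 1))  # → 1.6 (hours)
```

Numbers like these illustrate why direct-to-cloud backup works for endpoints and small server sets, while larger environments typically need the local-device (hybrid) model.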
Overall, adoption of cloud-based backup services among midsize and large enterprises is very low for server backup because of the large amounts of data to be protected, limited network bandwidth and security concerns. Among organizations that want to eliminate tape for off-site backup and don't have a desirable secondary site, some have shown interest in replication services for deduplicating backup target appliances deployed on customer premises. For endpoint and small branch office server backup, Gartner is witnessing increased interest in the use of Web-scale public cloud providers, especially among organizations that have many global offices and employees.

User Advice: Cloud backup is inherently more complex than local backup at the adoption stage because of the additional considerations of networks and security. However, once implemented successfully, cloud backup can eliminate much of the daily management overhead of on-premises backup. Although technological limitations are mostly overcome for laptop backup and backup of a small number of servers, cloud backup remains largely impractical for environments with 20TB or more of production data. For most environments today, Gartner recommends deploying a local backup/restore device when daily incremental backup or restore workloads reach 500GB.

Business Impact: Cloud server backup is often used to replace traditional tape off-site backup, eliminating the daily operational complexities associated with tape backup and the management of removable media. Solutions offering integrated cloud backup and cloud DR provide small organizations with business continuity benefits they couldn't afford before. Although the business impact for small businesses is high, the impact for large enterprises is low today.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Asigra; Axcient; Backupify; Barracuda Networks; Ctera Networks; EVault; Hosting; HP Autonomy; Microsoft; nscaled; NaviSite; Spanning; SunGard; Verizon Terremark; Zetta

Recommended Reading: "How to Determine If Cloud Backup Is Right for Your Servers"

"Exploring Common Cloud Backup Options"

"Pricing Differences Complicate the Adoption of Backup and Disaster Recovery as a Service"

Disaster Recovery Service-Level Management

Analysis By: John P Morency

Definition: Disaster recovery service-level management refers to the support procedures and technology needed to ensure that committed recovery time objective (RTO) and recovery point objective (RPO) service levels are met during recovery plan exercising or following an actual disaster declaration.

Position and Adoption Speed Justification: Disaster recovery service-level management processes support IT disaster recovery management (DRM) service levels that are defined for business process and production application RTOs, RPOs or a combination of the two. The objectives themselves are typically defined in units of minutes, hours or days. Both types of service levels are measured manually (typically by business unit end users), although service-level tracking automation (especially for RTO service levels) is now supported in recovery assurance products and cloud-based recovery.

External service providers offer two types of disaster recovery service levels. The first is application-specific, RTO- or RPO-based (or both), which software as a service (SaaS) providers sometimes offer. One example is the service-level commitment from salesforce.com: for some customers, salesforce.com supports disaster recovery service levels that include a 12-hour RTO and a four-hour RPO.
The second type of disaster recovery service level is application-independent and is supported by a combination of server virtualization and virtual machine failover to a provider's managed facility or cloud. A few examples of application-independent service levels managed by recovery as a service (RaaS) providers are:

EVault's cloud disaster recovery offering supports three separate RTO service tiers with associated guarantees of four, 24 and 48 hours, respectively.

HP's Enterprise Cloud Services Continuity supports service guarantees of four hours for RTO and 15 minutes for RPO.

SunGard Availability Services' Recover2Cloud for Server Replication provides a contractually guaranteed SLA of four hours or less for RTO and 15 minutes or less for RPO.
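Tiered guarantees like those above imply a simple selection exercise: choose the least expensive tier whose guaranteed RTO still meets the application's requirement. The sketch below mirrors the four/24/48-hour tier structure from the EVault example; the relative cost figures are invented purely for illustration.

```python
# Illustrative tier table: (guaranteed RTO in hours, relative monthly cost).
# The RTO tiers echo the 4/24/48-hour example above; costs are hypothetical.
TIERS = [
    (4, 3.0),   # 4-hour guarantee, highest relative cost
    (24, 1.5),
    (48, 1.0),
]

def cheapest_tier(required_rto_hours):
    """Return the least expensive tier whose guarantee meets the requirement."""
    eligible = [t for t in TIERS if t[0] <= required_rto_hours]
    return min(eligible, key=lambda t: t[1]) if eligible else None

print(cheapest_tier(12))  # → (4, 3.0): only the 4-hour tier meets a 12-hour RTO
print(cheapest_tier(30))  # → (24, 1.5): the 24-hour tier suffices and costs less
print(cheapest_tier(2))   # → None: no tier can guarantee a 2-hour RTO
```

The same comparison, done per application tier, is a reasonable first pass when mapping a recovery tiering model onto a provider's published service levels.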

Regardless of the type of service levels offered by a given provider, the financial penalties that result from missed service levels are fairly small and, in general, consist only of a monthly service fee credit. In addition, provider master service agreement wording will typically limit the provider's financial liability.

In addition to external providers, IT recovery teams also support formal IT DRM and RTO- or RPO-based (or both) service-level targets. Formal service-level definition and management is typically found in organizations that have high IT DRM maturity. While lower-maturity organizations also need to support either formal or informal recovery time targets, the definition and management of formalized service levels to support those targets is far less common.

Not only is the market need for more-predictable operations recovery increasing, but the required recovery times for the most important mission-critical applications continue to be measured in minutes or hours rather than days. This improvement will not happen at the same pace in all enterprises. Over time, disaster recovery service-level targets will be defined at a relatively early stage in the application design and implementation life cycle. Because of its dependence on technologies such as virtual machine recovery, application-specific failover mechanisms and continuously available infrastructure (such as stretch clusters), disaster recovery service-level management cannot be positioned on the Hype Cycle at a point later than its technological prerequisites. For this reason, disaster recovery service-level management has been moved up to the Peak of Inflated Expectations in 2014.

User Advice: Over time, recovery, hosting, application and storage cloud providers may offer more robust service availability and data protection alternatives than in-house IT. This is a nascent, albeit fast-growing, provider service differentiator.
Therefore, it is important to continually re-evaluate the recovery sourcing strategy to ensure that IT operations recovery continues to be predictable, sustainable and cost-effective, regardless of who is responsible for delivering service-level protection. Because service-level excellence is so critical to long-term provider viability, it's important for customers to understand the type of service-level management that individual service providers offer and to hold the providers accountable for supporting the RTO and RPO objectives of the business.

Business Impact: The ability to manage recovery service levels in an automated, repeatable and timely manner is becoming increasingly critical for many organizations. As Web-based applications support more business-critical processes, managed recovery service levels will become an important basis for improving business resiliency.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: EVault; HP; IBM; SunGard Availability Services

Recommended Reading: "Critical Capabilities for Recovery as a Service"

"IT DRM Modernization Effect on RTO, RPO, and Budget Allocation"

"Do Your Homework Before Committing to Cloud-Based Recovery Services"

IT DRM Exercising

Analysis By: John P Morency

Definition: Exercising an IT disaster recovery (DR) plan (also known as DR testing) involves a set of sequenced testing tasks typically performed at a recovery data center. These tasks focus on ensuring that the availability of, and access to, a production application (or group of production applications) can be restarted within a specified time (the recovery time objective [RTO]) with the required level of data consistency and an acceptable level of data loss (the recovery point objective [RPO]).

Position and Adoption Speed Justification: As the recovery scope of mission-critical business processes, applications and data increases, sustaining the quality and consistency of recovery exercises can be a daunting technical and logistical challenge, especially as both recovery exercises and production changes become more frequent. Regardless of how often recovery exercises are held, consistency between the current state of the production data center infrastructure, applications and data, and their state at the time of the last recovery test, erodes daily. This is a direct side effect of the changes applied to the production configuration to support new business requirements. For many organizations, recovery exercising is still either a partially or totally manual exercise, making exercise scalability more difficult as new in-scope applications and data are brought into production. An additional risk is that labor-intensive manual testing, regardless of how thorough it is, cannot fully guarantee 100% correct operation of production applications should the initiation of recovery operations become necessary.
To reduce test time, especially for mission-critical applications, some organizations are rearchitecting their most critical applications for active-active operations, meaning that the application runs live in two or more data centers and that exercising is done every day by means of the production implementation across two sites.

A new generation of recovery-in-the-cloud offerings also has the potential to improve the frequency with which customers can conduct live exercising and to eliminate the capital cost of recovery configuration servers and storage. The key attributes of cloud computing (service-based, scalable and elastic, shared, metered by use, and Internet-based access) offer a strong alternative to more inflexible and expensive traditional DR services, and may even be a superior alternative to an in-house, self-managed approach. As a result, Gartner expects recovery as a service to continue to grow as a logical extension of cloud infrastructure services. To remain competitive, cloud-based recovery service providers are continuously improving remote access to their data centers, which enables in-house recovery management teams to orchestrate live exercises remotely, without having to travel. Supporting this, however, means that the provider must ensure that proper authentication, access and (if needed) data encryption controls are in place.

Despite the steady improvement in recovery exercise management products over the past few years, most organizations will still run labor-intensive recovery exercises, especially for lower recovery tier (that is, recovery Tier 3 and Tier 4) applications. Because of the increased maturity of recovery-in-the-cloud services and recovery assurance products, however, as well as improvements in related exercising process maturity, recovery exercising management technology has been moved up to the Peak of Inflated Expectations on the 2014 Hype Cycle, a positioning consistent with disaster recovery service-level management.

User Advice: As a near-term IT disaster recovery management (DRM) alternative, consider consolidating previously separate preproduction quality assurance testing and DR test teams. The key drivers are the multipurposing of server and storage assets to support day-to-day development and testing activities as well as recovery exercising (and application failover operations, if needed), combined with the need for tighter change synchronization between configurations and for automation of more-frequent test exercises. However, organizational and tools consolidation, by itself, may not be sufficient. The ideal scenario is to establish a separate test environment that can be configured for exclusive use by the merged organization. Due to budget constraints, this may not always be immediately feasible. An important success factor is the availability of automated application test management software, combined with customized run book automation scripts, to manage the required infrastructure and application failover sequences. Some early adopters of this approach have achieved increasingly reliable and more effective test exercises, combined with more thorough testing of representative production inquiries and transactions against the recovery configuration.
The latter benefit improves the likelihood that recovery operations can be initiated within required RTO and RPO targets, as well as ensuring more stable recovery operations. If you are not already evaluating this approach, Gartner recommends considering it as an alternative to maintaining the status quo.

Business Impact: The ability to automate recovery exercise tasks in a repeatable and timely manner is becoming increasingly important for many organizations. As Web applications support an increasing number of business-critical processes, effective recovery exercise management will become an important foundation for the successful realization of improved business resiliency.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Appnomic; BMC Software; Egenera; FalconStor Software; HP; IBM; PHD Virtual; Sanovi; SunGard; Symantec; VMware; Zerto

Recommended Reading: "Cool Vendors in Business Continuity Management and IT Disaster Recovery Management, 2014"

"ITScore for Business Continuity Management, 2012"
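The run book automation scripts discussed in this section can be pictured as a sequence of dependent recovery tasks that stops and escalates at the first failure. The following is a minimal sketch only; real run book tools add scheduling, rollback and reporting, and the task names here are hypothetical.

```python
# Minimal run-book sketch: execute sequenced recovery tasks in order and stop
# at the first failure, as a real exercise would escalate rather than continue
# against a broken dependency. Each task is a (name, callable) pair where the
# callable returns True on success.
def run_book(tasks):
    completed = []
    for name, action in tasks:
        if not action():
            print(f"FAILED at step: {name}; completed so far: {completed}")
            return False
        completed.append(name)
    print(f"Exercise complete: {completed}")
    return True

# Hypothetical failover sequence for a single business service; the lambdas
# stand in for real infrastructure calls.
tasks = [
    ("activate storage replica", lambda: True),
    ("boot database VM",         lambda: True),
    ("boot application VM",      lambda: True),
    ("run smoke transactions",   lambda: True),
]
run_book(tasks)
```

The ordering matters: storage must be consistent before the database boots, and the database must be up before the application tier, which is why recovery exercises are sequenced rather than run in parallel.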

Sliding Into the Trough

Mobile Satellite Services

Analysis By: Bill Menezes; Jay E. Pultz

Definition: Mobile satellite services (MSSs) provide two-way voice and data communication to global users who are on the go or in remote locations where there is no terrestrial network connectivity. Terminals include handheld phones, laptop-size units, modems and mobile hot spots. Users also can mount terminals in a vehicle, with communication maintained while the vehicle is moving. MSSs primarily operate at L band, but next-generation systems also will utilize Ka band downlinks.

Position and Adoption Speed Justification: Overall MSS use is prevalent mostly in emergency/disaster communications and for voice calling from remote sites such as ships, offshore oil rigs and forestry or mining operations where ground-based cellular does not exist. Although satellite communications (satcom) services intended largely for fixed-location use can be accessed with equipment designed to be "portable" or for mobile use cases (such as airborne Internet access or remote broadcast trucks), MSSs encompass a range of solutions that include specialized maritime and transportation data communication systems, as well as worldwide "cellular in the sky" voice-centric services offered largely by Iridium and Globalstar since the late 1990s.

Voice-focused MSSs avoid the latency problems of geosynchronous earth orbit (GEO) satellites by deploying large constellations of low earth orbit (LEO) satellites (Iridium has 66 active satellites). Due to competitive pressures from the rapid deployment of terrestrial cellular systems, along with the fact that satellite handhelds are larger and more expensive than cellular phones (with usage charges in the $1-per-minute range), Iridium and Globalstar each underwent Chapter 11 bankruptcy reorganization more than a decade ago and revised their business models to focus largely on enterprise and government clients.
Globalstar in August 2013 began full commercial availability of its second-generation system, comprising 32 LEO satellites, to provide low-bandwidth data coverage (less than 56 Kbps) over much of the globe, and voice coverage in most of North America, Europe and Australia and parts of South America, Africa, Russia and Japan. Globalstar also took initial steps in 1H14 to deploy its planned "Sat-Fi" system, enabling Wi-Fi-equipped devices to connect with its satellite network.

The current state of the art for MSS data is represented by Inmarsat's Broadband Global Area Network (BGAN). With laptop-size terminals that can be set up in minutes, BGAN sends/receives at nearly 500 Kbps anywhere in the world (except in extreme polar regions). BGAN comprises a fleet of three GEO satellites, but high latency constrains its use for voice and other real-time applications. Retail prices are typically $2,000 and higher per terminal, $40 to $50 per month for service and a usage charge of $5 or more per megabyte.

MSS data capabilities will expand further with the availability of Iridium NEXT, a LEO MSS to replace the current Iridium satcoms. The company has upgraded its ground systems and plans to deploy the 66 new orbiting satellites beginning in 2015, with full deployment to follow. (Iridium also will launch six in-orbit spares.) NEXT will offer data speeds up to 1.5 Mbps at L band and up to 8 Mbps at Ka band, enabling use cases beyond the low-bandwidth capabilities of current LEO MSS systems. Iridium has secured $1.8 billion in funding, and the hosted-payload approach, in which Iridium will sell launch capacity to other satcom providers, is expected to defray significant project costs.

Thuraya provides MSS voice, plus roaming onto Global System for Mobile Communications (GSM) partner networks, from two GEO satellites covering primarily the Eastern Hemisphere. Its products include a "sleeve" enabling Android and iOS smartphones to function as satellite phones.

Given service improvements and several generations of terminals, Gartner continues to classify MSSs as adolescent in maturity. With systems like NEXT and possibly Sat-Fi, MSSs using more advanced technologies will likely move to early mainstream maturity in the next three to five years.

User Advice: Use MSSs to provide remote mobile users who are out of reach of cellular networks with voice connectivity and with bandwidth sufficient for business processes requiring low-speed or bursty data connections. Control high MSS costs by deploying the services only where needed and by closely monitoring usage charges. Examine how to use future-generation MSS systems such as Iridium NEXT, which should provide greater bandwidth for applications such as fleet management, global asset tracking, and Internet access for remote employees such as sailors or offshore oil rig workers. Innovative use cases also might include remote backhaul for cellular devices via GSM small cells located on oceangoing vessels, to enable constant tracking of shipping containers out of the range of terrestrial cellular machine-to-machine connections. Consider telco-integrated systems as an option; some communications service providers (CSPs), such as AT&T, offer handsets combining MSSs with their cellular or wireline networks, or as part of an on-site or vehicle-mounted mobile emergency response communications configuration.
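To make the advice on monitoring usage charges concrete, a first-year cost model using the illustrative BGAN figures quoted earlier ($2,000 per terminal, roughly $45 per month for service, $5 per megabyte) might look like the following sketch; all rates are assumptions drawn from the ranges above, not actual provider pricing.

```python
# Rough first-year MSS budgeting sketch using illustrative BGAN-style rates:
# one-time terminal hardware, a flat monthly service fee and per-megabyte usage.
def first_year_cost(terminals, mb_per_month, terminal_price=2000,
                    monthly_fee=45, per_mb=5):
    hardware = terminals * terminal_price        # one-time purchase
    service = terminals * monthly_fee * 12       # 12 months of service fees
    usage = terminals * mb_per_month * per_mb * 12  # metered data charges
    return hardware + service + usage

# Two terminals, each moving 100 MB of data per month:
print(first_year_cost(2, 100))  # → 17080 (dollars)
```

Note how quickly usage dominates: at these rates, the metered data charges ($12,000) exceed the combined hardware and service cost within the first year, which is why the advice above stresses close monitoring of usage.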
Business Impact: MSSs can extend enterprise operations to nearly anywhere on the planet at data speeds acceptable for business applications with low bandwidth requirements.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Emerging

Sample Vendors: EchoStar; Globalstar; Inmarsat; Iridium; Thuraya

Recommended Reading: "Next-Generation Satellite Services Set to Provide an Expanded Role for the North American Enterprise Market"

"Satellite Communications: The Last, Best Communications Alternative for Remote Sites"

Cloud-Based Disaster Recovery Services

Analysis By: John P Morency

Definition: There are three kinds of cloud-based recovery services. In the first, the provider is responsible for managing VM replication, VM activation and exercise management. In the second, the provider role is limited to VM activation and shutdown, and the service customer is responsible for replication and exercise management. The focus of the third type is the replacement or augmentation of local backup and recovery through data backup to the cloud.

Position and Adoption Speed Justification: Cloud-based recovery services have evolved from traditional, managed disaster recovery (DR) services adopted by enterprises and online backup services adopted by small or midsize businesses (SMBs). Over the past year, Gartner has seen a significant increase in the number of recovery-as-a-service providers (more than 150 at this point), as well as a significant increase in the number of both production implementations (more than 12,000) and early pilot evaluations. The initial market availability of recovery as a service occurred in 1Q09. Its value proposition was twofold:

Because server and data restoration on demand do not require the preallocation of specific computing equipment or floor space, provider customers have the opportunity to exercise their recovery plans frequently. This contrasts with the six to eight weeks of advance notice that often must be given to providers of subscription-based recovery services before starting a test exercise.

A server image (either a VM image, such as VMware VMDK, Microsoft Hyper-V or Citrix XenServer, or a backed-up physical server image) is restored to the provider's server hardware only when needed. Therefore, the need for dedicated server and storage equipment at the recovery data center can be significantly reduced, if not totally eliminated.

Typically, the server hardware maintained inside the provider's cloud supports Windows- or Linux-based applications. However, a growing number of recovery-as-a-service providers are beginning to support hybrid configurations composed of activated VMs and one or more physical servers, which may be IBM AIX, HP-UX or Oracle SPARC servers, among others.
Physical servers are supported through either colocation or managed hosting services.

Recovery-as-a-service delivery is different from customer-managed recovery services. In the latter, the customer is directly responsible for managing VM and production data updates to the cloud, as well as full orchestration of recovery exercising. The provider's responsibilities are mainly limited to VM activation and teardown in response to the service customer's request. Because the customer is responsible for most of the recovery management, and because VM instances are activated only on demand, the monthly service pricing tends to be much lower than fixed monthly recovery-as-a-service pricing.

Because of the growing number of production deployments and early pilot evaluations initiated over the past year, Gartner has moved the Hype Cycle position of recovery-in-the-cloud services to post-peak 35%. Another key reason is that more than half of our current DR inquiries, and a large number of related backup inquiries, now include one or more questions specific to the viability or planned deployment of cloud-based recovery.

User Advice: Given the relative newness of recovery-in-the-cloud services, do not assume that the use of cloud-based recovery services will subsume the use of traditional DR providers or self-managed DR any time in the near future. DR and business continuity require the management of technology and operations management disciplines, not all of which can be readily addressed by cloud-based services. Examples include work area recovery, incident management and crisis communications. Therefore, it is important to look at cloud-based services as just one possible alternative for addressing in-house recovery and continuity requirements. Consider cloud infrastructure when you need DR capabilities for either Windows- or Linux-centric cloud-based applications, or when the alternative to a cloud-based recovery approach is the acquisition of additional servers and storage equipment to build out a dedicated recovery site. Additionally, because cloud services for enterprises are relatively nascent, carefully weigh the cost benefits against the service management risks as an integral part of your decision-making process for DR services sourcing.

Business Impact: The business impact is moderate today. The actual benefits will vary, depending on the diversity of computing platforms that require recovery support and the extent to which the customer can orchestrate (and ideally automate) the recurring recovery testing tasks that need to be performed. An additional consideration is the extent to which the customer can transparently and efficiently use same-provider cloud storage for ongoing data backup, replication and archiving, in addition to production application recovery. The key challenge is ensuring that these services can be used securely, reliably and economically to complement or supplant more traditional equipment-subscription-based services or dedicated facilities.
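The customer-managed model described in this section, in which the provider handles only VM activation and teardown, can be sketched as a simple orchestration loop. ProviderClient and its methods below are invented stand-ins for illustration, not any real provider's API.

```python
# Hypothetical sketch of a customer-orchestrated DR test against a provider
# that only activates and tears down replica VMs on request.
import time

class ProviderClient:
    """Invented stand-in for a recovery provider's activation interface."""
    def activate(self, vm_ids):
        print(f"activating replicas: {vm_ids}")
        return {vm: "running" for vm in vm_ids}

    def teardown(self, vm_ids):
        print(f"tearing down: {vm_ids}")

def run_dr_test(client, vm_ids, checks):
    """Activate replicas, run the customer's validation checks, always tear
    down afterward (on-demand capacity is what keeps the pricing low)."""
    start = time.monotonic()
    states = client.activate(vm_ids)
    try:
        ok = all(check(states) for check in checks)
    finally:
        client.teardown(vm_ids)
    return ok, time.monotonic() - start

ok, elapsed = run_dr_test(
    ProviderClient(),
    ["web-01", "db-01"],
    [lambda s: all(v == "running" for v in s.values())],
)
print("test passed:", ok)
```

The try/finally teardown reflects the pricing model described above: because VM instances are billed only while activated, the customer's orchestration must release them even when a validation check fails.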
Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Allstream; Amazon; AT&T; Axcient; Bluelock; Carpathia; CenturyLink; Dimension Data; EMC; EVault; Hosting; HP; iland; IBM; IPR International; nscaled; NTT Communications; Rackspace; StorageCraft; SunGard Availability Services; Verizon Terremark; Zetta

Recommended Reading: "Pricing Differences Complicate the Adoption of Backup and Disaster Recovery as a Service" "Critical Capabilities for Recovery as a Service" "Research Roundup: Business Continuity Management and IT Disaster Recovery Management, 2Q13" "ITScore for Business Continuity Management" "Do Your Homework Before Committing to Cloud-Based Recovery Services" "Market Trends: Assessment of the Rapidly Growing Market for Cloud-Based Recovery Services"

Cloud Storage Gateway

Analysis By: Gene Ruth

Definition: Cloud storage gateways connect on-premises storage to external cloud storage, and are supplied as hardware, software or part of a storage array. Gateways connect to the cloud through Internet protocols and integrate with on-premises IT infrastructures, using protocols or connectivity that may include Internet Small Computer System Interface (iSCSI), Fibre Channel (FC) and file protocols, such as Network File System or Common Internet File System. Gateways provide performance optimization, encryption, WAN and data footprint reduction, and a choice of service providers.

Position and Adoption Speed Justification: Gateways are an important element and key enabler for connecting on-premises storage to off-site cloud storage services. The market is predominantly composed of startup vendors that partner closely with the cloud storage providers to offer a comprehensive cloud storage solution. In some cases, the cloud providers, for example, Microsoft Azure and Amazon Web Services, offer their own gateway. The market has been stable in the last year, except for the entry of Avere Systems with its enterprise-focused file service virtualization appliance. The vendor landscape is notable for the absence of the large storage hardware vendors. Gateway vendors continue to strengthen their products and gain the attention of enterprise IT organizations. Current gateway implementations target small or midsize businesses (SMBs), branch-office support and non-mission-critical storage for large enterprises. User adoption is increasing, with most occurring in the SMB space, but with growing interest from enterprise customers for branch-office support and unstructured data applications. Early adoption of public cloud storage gateway appliances focused on backup, archiving of stale data and disaster recovery (DR).
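The data path such a gateway implements, locally staging writes, reducing the data footprint by deduplication, then pushing only unique data across the WAN to the provider's object store, can be sketched as follows. This is a minimal illustration, not any vendor's implementation; the upload callback stands in for the provider's REST API, and encryption and cache eviction are omitted for brevity:

```python
# Minimal sketch of a cloud storage gateway write path: data written over
# a local file or block protocol is chunked, deduplicated by content hash,
# and only unseen chunks are pushed to cloud object storage over the WAN.
# The upload callback is a hypothetical stand-in for a provider REST PUT.

import hashlib

class GatewaySketch:
    def __init__(self, upload_fn, chunk_size=4 * 1024 * 1024):
        self.upload_fn = upload_fn      # e.g., wraps an HTTP PUT to a provider
        self.chunk_size = chunk_size
        self.local_cache = {}           # hash -> chunk (on-premises cache tier)

    def write(self, data: bytes):
        """Chunk, dedupe and stage data; upload only chunks not seen before."""
        manifest = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            manifest.append(digest)
            if digest not in self.local_cache:   # dedupe: skip known chunks
                self.local_cache[digest] = chunk
                self.upload_fn(digest, chunk)    # WAN transfer happens here
        return manifest                          # enough to rebuild the object

uploaded = []
gw = GatewaySketch(lambda key, blob: uploaded.append(key), chunk_size=4)
gw.write(b"abcdabcd")   # second 4-byte chunk duplicates the first
print(len(uploaded))    # only the unique chunk crosses the WAN
```

The content-hash manifest is what lets the local cache answer reads without a round trip to the cloud, which is the "performance optimization" role the definition above ascribes to gateways.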
Gartner is seeing increasing interest in technologies that support unstructured data delivery and enhance collaboration among distributed sites, such as support of file sync and share, or global file systems.

User Advice: Until cloud storage gateways are offered in a certified and turnkey manner from either the public cloud storage service providers or major storage hardware vendors with service partners, users should focus on proof-of-concept and branch-office support use cases, or apply gateways to non-mission-critical data. As the industry develops, cloud storage and gateways should be included in an overall long-term storage infrastructure strategy. Businesses of all sizes should investigate the total cost of ownership against traditional on-premises storage, the impacts of cloud storage on IT operations and the risks of moving data into the cloud. Organizations should negotiate with gateway appliance vendors on guarantees to ensure that availability, data integrity, performance and financial objectives are met, and should expect partnership relationships between gateway vendors and public cloud storage service providers.

Business Impact: Gateways enable off-premises cloud storage to compete against on-premises primary storage arrays for workloads that are modestly transactional, or that provide unstructured file data. Additionally, the gateway may replace existing secondary storage as a target for backup and/or archive data that is redirected into cloud storage. Gateways indirectly compete against asynchronous remote replication tools used for collaboration or DR, as well as some backup software, due to the data protection offered natively with cloud storage infrastructures. Gartner believes these gateway appliances can enable compelling cloud-based alternatives for customers that do not want to manage their backup/DR processes, archives and unstructured data in-house.

Gateways have the potential to enable bursting in cloud computing environments that intend to move virtual machines between private and public computing clouds. Currently, gateways only partially address issues such as the unpredictability of monthly cloud storage costs, large workloads, storage SLAs and quality of service. As most cloud storage gateway vendors are emerging startups, they face the issue of vendor viability and the competitive threat of the major storage vendors entering the space.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Emerging

Sample Vendors: Avere Systems; Ctera Networks; Nasuni; Panzura; TwinStrata

Recommended Reading: "Cloud Storage Gateways: Enabling the Hybrid Cloud" "Hybrid Cloud Storage Can Be an Antidote to Rapid Data Growth" "How to Calculate the Total Cost of Cloud Storage"

IT Service Failover Automation for DR

Analysis By: Donna Scott; John P Morency

Definition: IT service failover automation provides end-to-end IT service startup, shutdown and failover operations for disaster recovery (DR) and continuous availability. It establishes ordering and dependency rules as well as IT service failover policies.
Position and Adoption Speed Justification: Four types of technologies are available for orchestrating IT service failover across sites:

Infrastructure or platform DR tools that orchestrate IT service failover only for components of the platform (such as IBM Geoplex, VMware vCenter Site Recovery Manager [SRM] or Microsoft's Azure Hyper-V Recovery Manager)

IT process automation (ITPA) and workload automation tools

Clustering software tools that offer automated detection and failover (such as Symantec Veritas Cluster Server [VCS])

Cloud management platforms that enable definition of an end-to-end service and enable deployment across multiple data centers

DR failover automation is immature, often implementing complex scripts for end-to-end IT service automation across heterogeneous physical and virtual computing environments. These scripts are increasingly complex to maintain, especially with the more-granular DR architectures that are now the norm. Increasingly, virtual-machine-specific restart software supported in both private enterprise

networks and public cloud providers is providing a more uniform method for the restart of applications that are either partially or totally made up of individual virtual machines (VMs). Virtual server platform tools such as VMware SRM and Microsoft Azure Hyper-V Recovery Manager continue to gain interest, especially within IT organizations that host almost all their IT services on VM platforms. These tools provide DR failover and orchestration for IT services that run on specific VM (but not physical machine) platforms. They also enable simpler failover testing in nonproduction network environments. ITPA tools (sometimes called run book automation [RBA] tools) have started to emerge with capabilities for use in this area, with clients using the tools to aid in developing visualization around recovery automation (startup, ordering and dependencies) and as a framework with which to organize scripts. However, unlike clustering tools, ITPA tools do not have either an inherent understanding of the underlying physical infrastructure or the ability to detect whether a failure has occurred (although they could be notified by clustering tools). As a result, they tend to be used to enable simpler maintenance around startup and shutdown routines, but not generally for granular failover policy automation (as clustering-based tools would be). Clustering tools that gained popularity in the physical world before server virtualization have lost some of their momentum. However, many large enterprises use them to coordinate service recovery across platforms. Moreover, clustering tools can assess the health of the software inside a VM, something that virtual platforms generally do not provide. New offerings enable the coordination of service health monitoring and virtual platform recovery mechanisms. Cloud management platforms (CMPs) are also emerging and are promising from the perspective of defining a service model that can be instantiated in multiple data centers and cloud environments.
However, they do not inherently have functionality to manage data, nor do they have any methodology for detection and failover. In addition to these CMPs, some applications come with inherent cross-site failover capabilities, such as Microsoft SQL Server 2012. All solutions in this space offer varying degrees of ability to manage an entire end-to-end IT service and to implement different failover policies for different types of failures. Because of the complexities of different types of solutions for various services, we consider this space to be fairly immature. In fact, most organizations utilize multiple mechanisms because no single one meets all their IT service requirements. However, because of the increasing demand for failover automation, due to the impact of virtualization and cloud computing, as well as the desire for active/active architectures and fast active/passive failover, Gartner sees accelerated investment, and we have moved the positioning of this technology to near the peak/trough midpoint. The main challenges with these tools are complexity, integration and maintenance.

User Advice: Enterprises should assess the emerging IT service failover automation technologies to replace fragile, script-based recovery routines, as well as to enable more granularity, consistency and efficiency. Because emerging tools tend to be more loosely coupled and flexible than traditional clustering tools, enterprises can reduce the spare (available) infrastructure that is required for high availability (HA) and thus reduce the overall cost of providing it.
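The ordering and dependency rules that these tools externalize (in place of fragile scripts) amount to a topological sort of the service's components. A minimal sketch, with illustrative component names, of how declared dependencies yield a startup order and its inverse shutdown order:

```python
# Sketch: deriving IT service startup and shutdown ordering from declared
# dependency rules, as failover-automation tools do internally.
# Component names are illustrative; real tools also attach per-tier
# failover policies and health checks to each node.

from graphlib import TopologicalSorter  # standard library, Python 3.9+

# "X depends on Y" means Y must be started before X
dependencies = {
    "web":     {"app"},
    "app":     {"db", "cache"},
    "db":      {"storage"},
    "cache":   {"storage"},
    "storage": set(),
}

startup_order = list(TopologicalSorter(dependencies).static_order())
print(startup_order)                             # storage first, web last
shutdown_order = list(reversed(startup_order))   # teardown inverts startup
```

Keeping the dependency declarations as data, rather than embedding them in scripts, is exactly what makes these policies "easier to visualize, maintain and test," as the Business Impact discussion below notes.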

It is vital that enterprises test all their failover and failback architectures individually and holistically. Such testing should be done in a quality assurance (QA) test environment or, lacking equivalent environments, directly in production during planned downtime periods. Monitoring configuration consistency across the infrastructure will help identify misconfigurations before a catastrophic failure occurs. Holistic testing of end-to-end IT service failover (as opposed to just the components) is typically accomplished in QA tests and regularly through DR testing.

Business Impact: The potential business impact of this emerging technology is high: reducing the amount of spare infrastructure that is needed to ensure DR and continuous availability, as well as helping ensure that recovery policies work when failures occur, thus improving business process uptime. Efficiencies are increased because the maintenance of fragile recovery scripts is replaced with external policies that are easier to visualize, maintain and test. With IT service startup and shutdown, ordering, and dependencies that are used operationally and for recovery, external policies are more likely to be kept up to date, thus increasing consistency, service quality and recoverability.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: HP; IBM; Microsoft; NetIQ; Neverfail; Sanovi; SIOS Technology; Symantec; Vision Solutions; VMware

IT Vendor Risk Management

Analysis By: Gayla Sullivan; French Caldwell; Christopher Ambrose

Definition: Vendor risk management (VRM) is the process of ensuring that the use of service providers and IT suppliers does not create an unacceptable potential for business disruption or a negative impact on business performance.
VRM technology supports enterprises that must assess, monitor and manage their risk exposure from third-party suppliers that provide IT products and services, or that have access to enterprise information.

Position and Adoption Speed Justification: The growing reliance of enterprises on third-party service providers, the large number of major corporate data breaches and the increasing regulatory activity on the privacy practices of those service providers all provide a steady stream of hype for VRM. Many businesses, as well as government agencies and other organizations, increasingly rely on IT vendors and service providers to support their core business processes and provide the hardware, software and licenses needed for IT operations. This reliance exposes them to greater risk of delivery disruption or failure, damage to their reputation and impacts on business performance. Essentially, it extends the enterprise risk boundaries to include the many business and IT risks facing their IT suppliers, and as enterprise risk management (ERM) adoption grows, so does VRM. Challenging economic conditions compound these risks (see "Vendor Risk Management: Criteria You Can Use to See Whether Your Vendor Is in Trouble").

Furthermore, compliance mandates that require monitoring the risks of third-party suppliers are proliferating, as are third parties themselves. The rapid progress of cloud adoption is increasing the demand for VRM solutions. Standards for VRM, such as those of the Cloud Security Alliance, SOC 1 and SOC 2 (see "SAS 70 Is Gone, So What Are the Alternatives?"), are beginning to stabilize, which also makes VRM solutions easier to deploy. Two issues could impede adoption and extend the slide of VRM through the Trough of Disillusionment: the high level of resources needed to provide ongoing monitoring and tracking of vendors, and the lack of mature vendor management functions in many enterprises.

User Advice: VRM solutions are emerging to enable the assessment and management of risks from third-party service providers and IT suppliers. VRM is an important element of enterprise and IT risk management, and is mandated by many privacy and data breach notification regulations (such as the Gramm-Leach-Bliley Act in the U.S. and the Federal Data Protection Act, or Bundesdatenschutzgesetz, in Germany). Business performance can be improved through the VRM process. As part of a vendor management program, VRM can be a catalyst for improved vendor performance by identifying risks early and mitigating them through effective controls and process improvements:

Utilize VRM technology solutions that can provide a common system of record for all the parties involved in that program.

Ensure that the processes and methodologies used in the enterprise's approach to VRM are supported by the functionality and services offered by the vendor. One element commonly ignored is ongoing monitoring of strategic and high-risk vendors, which may require external vendor tracking and alert services that are not inherent in the VRM software.

Develop a road map for improving the maturity of vendor management and VRM. Without that, a technology solution will not deliver the expected value.
Ensure that all relevant parties are involved, including strategic vendors, even though the evaluation of a VRM solution may be led by the enterprise risk management, procurement, vendor management or IT organization.

Business Impact: VRM enables a shared understanding of the full risk exposure both within the enterprise and between the enterprise and its service provider/IT supplier partners. Some industries, including banking, healthcare and telecom, have industry-specific regulations that mandate monitoring third-party supplier risk. Most other enterprises also face compliance pressures to improve VRM, because of Payment Card Industry (PCI) requirements, state-level and national data breach notification regulations, and other privacy regulations. For enterprise risk management purposes, it is important to have a thorough understanding of the risk to business performance from vendor performance failures and disruptions. Furthermore, business performance can be improved, because VRM can be a catalyst for improved vendor performance by identifying risks early and mitigating them through effective controls and process improvements. At a strategic level, a vendor can facilitate VRM when approached as a business partner. To get the most value from VRM:

Treat vendor risks as drivers affecting the quality of service and quality of products defined in vendor engagement contracts.

View vendor risks in tandem with overall business unit and/or enterprise risk. Do not view individual vendor risks in isolation. For example, integrate results from the VRM solution with the larger set of risk assessment results from enterprisewide risk assessment initiatives.

Engage in regular communication with vendors. Planned and surprise assessments, and being aware of a vendor's overall performance in the market, are vehicles for identifying vendor risks.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Agiliance; Avior Computing; BWise; EMC (RSA); Evantix; Fusion Risk Management; Global AlertLink; Hiperos; IBM; LockPath; Markit; MetricStream; Modulo

Recommended Reading: "SAS 70 Is Gone, So What Are the Alternatives?" "Toolkit: How to Scope a Supply Risk Program and Solution" "2014 Strategic Road Map for Supply Risk Solution Deployment" "Magic Quadrant for Enterprise Governance, Risk and Compliance Platforms" "Gartner's Simple Vendor Risk Management Framework" "Toolkit: Getting Started at Vendor Risk Management"

Virtual Machine Backup and Recovery

Analysis By: Pushan Rinnen; Dave Russell

Definition: VM recovery focuses on protecting and recovering data from VMs, as opposed to the physical servers or nonvirtualized systems they run on. Early on, VM backup/recovery was achieved by installing guest OS agents, which presented significant challenges in terms of I/O contention and agent sprawl. The modern design is an image-based backup that offers VM-level or more granular recovery, leveraging hypervisor-native APIs. Other advanced features have also emerged to leverage VMs' mobility, such as instant VM recovery or more frequent recovery testing.
Position and Adoption Speed Justification: Server virtualization for the x86 platform from hypervisor vendors such as VMware, Citrix, Microsoft and Red Hat is gaining considerable attention. The rate of server virtualization technology adoption is increasing, and the workloads being virtualized are becoming more mission-critical. As more production data is housed in or generated by VM environments, the need to protect data in these environments is increasing. Recoverability of the virtual infrastructure is a significant component of an organization's overall data availability, backup/recovery and disaster recovery plan. Protection of VMs needs to be taken

into account during the planning stage of a server virtualization deployment, as virtualization presents new challenges and new options for data protection. VM-specific backup/recovery tools showed increased adoption during the past few years because they were early-to-market, cost-effective solutions that didn't require guest OS agents, and are simpler to manage than traditional comprehensive backup tools. Because these VM-specific tools were designed especially for virtual server environments, they tend to have better support for the dynamic VM environment in terms of discovery of new VMs and VM migration. The more successful products have further innovated by introducing more advanced capabilities, such as standing up a new production VM from a backup copy or replicated copy for business continuity, or more frequent recovery testing. However, VM-specific recovery tools are beginning to face stiff competition on two fronts: hypervisor-native tools and traditional physical server backup tools that are catching up on their VM protection capabilities. VMware has introduced native backup tools such as the no-charge vSphere Data Protection and the chargeable vSphere Data Protection Advanced, while other hypervisor vendors haven't done much. We believe hypervisor-native backup/recovery functions will continue to have limited scalability because backup is a resource-intensive task, which may require too many host resources. Meanwhile, many traditional physical server backup vendors are starting to catch up in their capabilities to support VMs. Over time, the few remaining VM-specific backup/recovery solutions could continue to do well in highly virtualized environments with little need for tape, or could give way to more-general solutions that offer wider platform support and greater scalability.
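The block-level incremental ("changed block tracking") approach on which image-based VM backup relies can be sketched as follows. Real hypervisor APIs (for example, VMware's changed block tracking) return the changed-block list directly; here it is simulated by hashing each block, and block sizes and names are illustrative only:

```python
# Sketch of block-level incremental VM backup: after an initial full
# backup, only blocks that differ from the last backup are captured.
# Hypervisor APIs expose the changed-block list directly; this sketch
# simulates that by comparing per-block content hashes.

import hashlib

BLOCK = 4  # deliberately tiny block size, for illustration only

def block_hashes(image: bytes):
    return [hashlib.sha256(image[i:i + BLOCK]).digest()
            for i in range(0, len(image), BLOCK)]

def incremental_backup(image: bytes, prev_hashes):
    """Return (changed_blocks, new_hashes). A full backup is simply the
    case where prev_hashes is empty, so every block is 'changed'."""
    current = block_hashes(image)
    changed = {i: image[i * BLOCK:(i + 1) * BLOCK]
               for i, h in enumerate(current)
               if i >= len(prev_hashes) or h != prev_hashes[i]}
    return changed, current

full, state = incremental_backup(b"AAAABBBBCCCC", [])      # all blocks move
delta, state = incremental_backup(b"AAAAXXXXCCCC", state)  # one block moves
print(len(full), len(delta))
```

Moving only the changed blocks is what makes frequent backups of large VM images practical over constrained backup windows and WAN links.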
Gartner believes that in the near term VM-specific solutions will continue to see strong market adoption, as many organizations are willing to switch backup vendors to ease backup burdens. If VM-specific backup/recovery providers can continue to enhance supported platforms, scalability and reporting, they will rise up to challenge the larger, general-purpose backup vendors to a greater degree.

User Advice: Many products and methods are available for VM data recovery. Most traditional backup applications can install their agents in VMs, which may be acceptable in small deployments. As the number and mobility of VMs increase, more advanced backup methods (such as VMware's vSphere APIs or block-level incremental capabilities for tracking and moving only changed and/or used blocks) should be considered. Additionally, snapshot, replication and data reduction (including data deduplication) techniques, and deeper integration with the hypervisor provider should also be viewed as important capabilities. With hundreds to thousands of VMs deployed in the enterprise, and with 10 or more mission-critical VMs on a physical server, improved data capture, bandwidth utilization, and monitoring and reporting capabilities will be required to provide improved protection without complex scripting and administrative overhead. This is a fast-changing market. Continually re-evaluate your options, especially if you decide to invest in a point solution that is different from the rest of the recovery tools deployed for the physical environment, and ensure that any point solutions can scale to meet your current and near-term requirements.

Business Impact: Like physical server recovery, VM recovery solutions help recover from the impact of disruptive events, including user or administrator errors (such as the accidental deletion or

overwrite of a file), application errors (such as corrupted files), external attacks (such as phishing, virus and denial-of-service attacks), equipment malfunction (such as disk failures), and the aftermath of disaster events (such as a partial or complete loss of the production data center). The ability to protect and recover VMs in an automated, repeatable and timely manner is important for many organizations. As server-virtualized environments become pervasive and are used for more business-critical activities, VM recovery will become necessary to ensure timely access to data and the continuation of business operations. Gartner believes that the server-virtualized environment will soon become, if it is not already, the dominant deployment model.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Acronis; Asigra; Catalogic Software; CA Technologies; CommVault; Dell; EMC; EVault; FalconStor Software; HP; IBM; InMage; Microsoft; NetApp; Neverfail; Novell; Quantum; StorageCraft Technology; Symantec; UltraBac Software; Unitrends; Veeam Software; Vision Solutions; VMware

Recommended Reading: "Essential Practices for Optimizing VMware Backup" "Magic Quadrant for Enterprise Backup Software and Integrated Appliances" "Best Practices for Repairing the Broken State of Backup"

IT Service Dependency Mapping

Analysis By: Ronni J. Colville; Patricia Adams

Definition: IT service dependency mapping (SDM) tools discover, document and track relationships by leveraging blueprints or templates to map dependencies among infrastructure components (like servers, networks and storage) and applications in physical, virtual and cloud environments to form an IT service view. The tools provide various methods to create blueprints for internally developed or custom applications. Key differentiators are breadth of blueprints, mainframe discovery and depth of discovery across physical, virtual and cloud infrastructures.
Position and Adoption Speed Justification: Enterprises continue to struggle to maintain an accurate and up-to-date view of the system and application dependencies across the IT infrastructure components that make up IT services, usually relying on data manually entered into diagrams and spreadsheets that may not reflect a timely view of the environment. Existing inventory and discovery tools do not provide the necessary hierarchical relationship information regarding how an IT service is configured. SDM tools continue to fall short in the area of homegrown or custom applications. Without the stakeholders (e.g., application development or support, system administrators, and business liaisons) working together to identify and blueprint custom applications, it is not possible to

identify the complete application or IT service. Although SDM tools provide functionality to create the application or IT service blueprints, the task of validating the relationships remains labor-intensive, which will slow enterprisewide adoption of the tools beyond their primary use of discovery, especially with the increase in adoption of Web-based, agile and Web-scale applications. These tools are still most often adopted as a jump-start or companion to IT service view configuration management database (CMDB) projects, but there are new use cases where they are acquired stand-alone: data center moves, consolidations or transitions; disaster recovery planning; and as a complement to audit compliance for change control. While the users of IT SDM tools are often the teams that are part of IT infrastructure and application support, these newer use cases drive new buyers. We moved the position of these tools along the Hype Cycle curve due to the challenges in adoption and broader use, but we are still watchful for the extended use case of discovering IT services and applications that may be partially or totally provisioned in a public cloud. IT organizations will need to understand where application components are being hosted. Cloud services providers (CSPs) will be driven to move workloads based on capacity and elasticity requirements, which will lead IT organizations to be concerned with maintaining visibility and compliance. While these tools are scan-based and not real-time, the view of IT services that can be discovered will become more compelling with hybrid cloud deployments that include production applications (versus early adoption for development/test); however, because cloud deployments are taking longer than expected, mainstream adoption and use of SDM for this use case will not happen for another two to four years.
This kind of understanding and focus will require a Level 3 or Level 4 maturity, where change and configuration management are a prerequisite for achieving value, as well as service orientation. The vendor landscape is composed of a narrow set of suppliers that acquired point solution vendors, predominantly to complement their IT service view CMDB solutions, almost a decade ago. Some IT SDM solutions also have an embedded IT service view CMDB, or have some IT service view CMDB functionality (e.g., reconciliation). We have seen very little new vendor activity. There is only one stand-alone vendor that comes with a different approach to discovery and the added delivery model of SaaS. Over the past year, this stand-alone vendor has added more discovery depth and breadth, and takes a different approach to SDM. Additionally, only one other SDM vendor offers a SaaS delivery model. To meet the demand of today's dynamic data center, IT SDM tools require expanded functionality for breadth and depth of discovery, such as a broad range of storage devices, virtual machines, mainframes and applications that cross into the public cloud. Although the tools are expensive, the price has been declining over the past two years due to pressure from other discovery tools. Without this information, the IT organization cannot track relationship impacts for planned and unplanned changes, nor can it understand the changing dynamics of relationships across applications. The adoption of these tools continues to increase because of the number of new stakeholders and business drivers with growing use cases. New requirements for hybrid cloud will likely take two to three years to mature, because most activity for hybrid clouds is just now moving to production applications.

User Advice: With modest innovation over the past several years, companies considering SDM should evaluate the vendor's road map to ensure there is focus on emerging requirements. Evaluate

IT SDM tools to address requirements for configuration and relationship discovery of IT infrastructure components and software. The tools should be considered as precursors to IT service view CMDB initiatives. If the primary focus is to build out IT services or applications, or if the IT SDM tool you select is different from the CMDB, then ensure that the IT SDM vendor has an adapter to integrate and federate to the desired or purchased CMDB. Users should consider SDM tools for projects such as disaster recovery and data center consolidation initiatives, and other tasks that benefit from a near-real-time view of the relationships across a data center infrastructure (e.g., asset management). Although most of these tools aren't capable of action-oriented configuration functions (e.g., patch management), the discovery of relationships, especially among virtual machines, can be used for a variety of high-profile projects in which a near-real-time view of the relationships in a data center is required. IT SDM tools can document what is installed and where, and can provide an audit trail of configuration changes. Network port-level and storage device discovery are becoming differentiators among the various SDM vendors. If this is a priority in your service model, then ensure the tool is tested for this capability in a proof of concept. If the use case for these tools is to gain visibility into your virtual or cloud infrastructure, ensure that the tool can discover and map virtual-to-virtual relationships (where IT services are within a single host or span hosts and data centers), as well as virtual-to-physical relationships (e.g., where the application might be virtualized, but the database might still be physical). If the virtual infrastructure includes public cloud resources, ensure that the IT SDM tool supports CSPs' APIs (e.g., Amazon's).
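The change-impact analysis that discovered relationships enable amounts to a graph traversal over the dependency map: given the component being changed, everything reachable through "depends-on" edges is potentially affected. A minimal sketch, with illustrative component names and edges:

```python
# Sketch of the change-impact analysis an SDM-fed dependency map enables:
# given discovered relationships, find every component and IT service
# potentially affected by a change to one item. Names are illustrative.

from collections import defaultdict, deque

# discovered edges: component -> components that depend on it
dependents = defaultdict(set)
for comp, dep in [("san-01", "db-vm-03"), ("db-vm-03", "erp-app"),
                  ("esx-host-2", "db-vm-03"), ("erp-app", "order-portal")]:
    dependents[comp].add(dep)

def impact_of_change(component):
    """Breadth-first walk: everything reachable is potentially impacted."""
    impacted, queue = set(), deque([component])
    while queue:
        for dep in dependents[queue.popleft()]:
            if dep not in impacted:
                impacted.add(dep)
                queue.append(dep)
    return impacted

print(sorted(impact_of_change("san-01")))  # a storage change ripples upward
```

The same traversal run in reverse (over "runs-on" edges) answers the disaster recovery question of which infrastructure a given IT service needs restored first.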
Business Impact: IT SDM tools will have an effect on high-profile initiatives, such as IT service view CMDBs and data center transformation projects. These tools will also have a less glamorous, but significant, effect on the day-to-day requirements to improve configuration change control by enabling near-real-time change impact analysis, and by providing missing relationship data critical to disaster recovery initiatives. These tools provide a mechanism that enables a near-real-time view of relationships that previously would have been maintained manually, with extensive time delays for updates. The value is in the real-time view of the infrastructure and applications, so that the effect of a change can be easily understood prior to release. This level of proactive change impact analysis can create a more stable IT environment, thereby reducing unplanned downtime for critical IT services, which will save money and ensure that support staff are allocated efficiently, rather than fixing preventable problems. By using dependency mapping tools in conjunction with tools that can make configuration-level changes, companies have experienced labor efficiencies that have enabled them to manage their environments more effectively and have created improved stability of IT services.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Mature mainstream

Sample Vendors: BMC Software; CA Technologies; HP; IBM; Neebula; VMware

Recommended Reading:
"Selection Criteria for IT Service Dependency Mapping Vendors"
"Toolkit: RFP/RFI for IT Service Dependency Mapping Tools"
"IT Service View CMDB Vendor Landscape, 2012"
"IT Service Dependency Mapping Vendor Landscape, 2012"
"Seven Steps to Select Configuration Item Data and Develop a CMDB Project That Delivers Business Value"

Public Cloud Storage

Analysis By: Gene Ruth

Definition: Public cloud storage is infrastructure as a service (IaaS) that provides object, block or file storage services through a REST API using Internet protocols. The service is stand-alone, with no requirement for additional managed services, and is priced based on capacity, data transfer and/or services. It provides on-demand storage capacity elasticity and self-provisioning. Stored data exists in a multitenant environment, and users access that data through either the Internet or dedicated network connectivity.

Position and Adoption Speed Justification: Cloud storage is available on a global basis, with providers offering a wide breadth of storage services and service-level agreements (SLAs). Services target a wide variety of customers and workloads and are defined by their SLAs, global scale and pricing regimes. The evolution of cloud storage is driven by market demand pressure to reduce the total cost of ownership for storage, which is necessitated by the rapid growth of data and the operational costs of maintaining high-growth internal storage infrastructures. Other drivers include requirements to improve organizational responsiveness, to deal with unpredictable workloads and to offer collaborative opportunities. Based on client interactions, U.S.-based data centers are willing to consider public cloud storage, but are still inhibited by a variety of privacy, regulatory, economic, vendor and service provider credibility issues. Regulatory concerns have contributed to differences in expectations between users in the U.S. and other geographic areas.
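The object storage model defined above (keyed objects in buckets, priced on capacity and transfer) can be illustrated with a toy in-memory stand-in. Real services expose the same verbs over a REST API (PUT/GET on /bucket/key); the class, method names and rates below are assumptions for illustration, not any provider's API or pricing:

```python
class ObjectStore:
    """Toy stand-in for a public object storage service."""

    def __init__(self, gb_month_rate=0.03, egress_gb_rate=0.09):
        self.buckets = {}                 # bucket -> {key: bytes}
        self.gb_month_rate = gb_month_rate
        self.egress_gb_rate = egress_gb_rate
        self.bytes_out = 0                # metered like egress transfer

    def put(self, bucket, key, data: bytes):
        """Store an object; capacity is elastic, so no pre-provisioning."""
        self.buckets.setdefault(bucket, {})[key] = data

    def get(self, bucket, key) -> bytes:
        """Retrieve an object and meter the transfer."""
        data = self.buckets[bucket][key]
        self.bytes_out += len(data)
        return data

    def stored_bytes(self):
        return sum(len(v) for b in self.buckets.values() for v in b.values())

    def monthly_bill(self):
        """Capacity plus transfer charges, mirroring usage-based pricing."""
        gib = 1024 ** 3
        return (self.stored_bytes() / gib) * self.gb_month_rate \
             + (self.bytes_out / gib) * self.egress_gb_rate

store = ObjectStore()
store.put("backups", "db.bak", b"x" * 1024)
restored = store.get("backups", "db.bak")
```

The point of the sketch is the cost model: spend tracks what is actually stored and moved, which is why the text frames cloud storage as a total-cost-of-ownership play against fixed internal infrastructure.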
In the U.S., a handful of larger service providers are solidifying their positions in the market and are continuing to establish their credibility. Other regional markets have been slower to adopt the storage services model, but are expected to quickly move to adoption once more traction is evident for cloud services in the U.S. Enterprise customers are focusing on hybrid solutions that bridge on-premises storage with public services as they test the capability and prove viability of public cloud storage for their use cases. The hybrid architecture allows a stepwise approach to public cloud adoption and is highly dependent on common APIs and the still-evolving cloud storage gateway market. Gartner expects regions outside the U.S. to follow the same path as the U.S. market, albeit at a much quicker pace. U.S. success stories are undoubtedly reaching other regions, encouraging end users to demand regional support. Gartner expects adoption expansion to continue as costs, legal concerns, security and infrastructure integration issues are sufficiently addressed to reduce the risk of entry by large enterprises. The failure of large storage hardware vendors to offer enabling

products that support hybrid infrastructures and the reluctance of IT organizations to use evolving storage service providers limit the growth of the market.

User Advice: Evaluate cloud storage as an alternative for non-mission-critical storage services such as archiving, file sharing and backup. These noncritical use cases provide allowance for shortfalls in SLA compliance by service providers and facilitate expectation setting by clients. Cloud storage can be useful for prestaging data for transient cloud computing IaaS environments that depend on manipulating large amounts of data. Also, consider public cloud storage as an effective solution for providing data and data protection for branch offices and mobile users. For hybrid environments, include on-premises cloud storage gateway appliances that provide cache, data deduplication, thin provisioning, encryption and further security capabilities that will assist with security and latency concerns. When selecting a service provider, due diligence should include an evaluation of the client organization's sensitivities to SLA compliance, data sovereignty, bandwidth costs, usage rates and disaster recovery requirements. Initially, considerable investments in time and money will be required to integrate cloud storage options into current applications and environments.

Business Impact: The cost and agility expectations set by the public cloud storage vendors are enabling in-house IT operations to change their storage infrastructure management procedures and storage infrastructure strategies. User demands for lower costs, more agility and operations that are more autonomic are also influencing vendor R&D investments and cloud services offerings. Services that have already been influenced by user demands include backup, versioning, encryption and secure erasure.
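The data deduplication step a cloud storage gateway performs can be sketched briefly. Under the usual content-addressed approach, files are split into chunks and a chunk already uploaded (identified by its SHA-256 digest) is referenced rather than re-sent, cutting both bandwidth and stored capacity. The chunk size, class name and in-memory "cloud" are illustrative assumptions, not any appliance's design:

```python
import hashlib

class DedupGateway:
    """Minimal sketch of gateway-side deduplication before cloud upload."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.cloud_chunks = {}     # digest -> chunk; stands in for the provider
        self.uploaded_bytes = 0    # bandwidth actually consumed

    def write(self, data: bytes):
        """Chunk the data, upload only unseen chunks, return a manifest."""
        manifest = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.cloud_chunks:
                self.cloud_chunks[digest] = chunk   # only new chunks leave the site
                self.uploaded_bytes += len(chunk)
            manifest.append(digest)
        return manifest            # enough to rebuild the file later

    def read(self, manifest):
        """Reassemble a file from its chunk digests."""
        return b"".join(self.cloud_chunks[d] for d in manifest)
```

With highly repetitive data (backups of mostly unchanged systems are the common case), the bytes actually uploaded can be a small fraction of the logical data written, which is why gateways pair deduplication with caching to address both cost and latency concerns.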
To attract new customers, vendors continue to lower costs and offer a variety of pricing models that allow end users to align their storage costs with their usage rates, with the goal of lowering costs in both the short and long term. Customers must be mindful that, although operational costs may move into the cloud, management issues such as chargeback, asset management, billing, security and performance responsibilities remain with the customer.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Emerging

Sample Vendors: Amazon; AT&T; Google; HP; IBM; Microsoft; Rackspace

Recommended Reading:
"Cloud Storage Gateways: Enabling the Hybrid Cloud"
"Hybrid Cloud Storage Can Be an Antidote to Rapid Data Growth"
"How to Calculate the Total Cost of Cloud Storage"

Hazard Risk Analysis and Communication Services

Analysis By: Roberta J. Witty

Definition: Hazard risk analysis and communication services evaluate worldwide incidents that threaten the health and safety of citizens and the workforce, cause damage to critical physical and

38 technology infrastructure, or cause a disruption to normal business operations. Using in-situ and remote sources, such as academic, political and business personnel, institutions and technology resources, predictive hazard risk intelligence is delivered to customers in real time for early warning and in-progress events. Position and Adoption Speed Justification: Hazard risk analysis and communication services have been in place for more than 10 years, leveraging thousands of information sources to deliver hazard risk intelligence reports at the facility, neighborhood, regional, national and international level to clients via , phone and print. Some hazard risk analysis and communication services provide additional services, including: A centralized communication hub for worldwide personnel, including a 24-hour hotline for access by personnel who may be in harm's way Travel risk management services Medical, security, kidnapping or evacuation assistance to personnel impacted by an incident With the introduction of the Web for customer ease of use, information access and reporting, GIS for geolocation data visualization and mobile devices for SMS, and voice alerting, the use of these services has greatly improved the situational analysis awareness level of an incident. Therefore, more organizations are starting to adopt these tools, especially those with a multilocation footprint. The GIS capability, in particular, is a large benefit to operations, security and business continuity management (BCM) professionals, because the service can be tailored just to their operations. By categorizing the severity of incidents and geocoding their assets in the system, they can receive alerts in real time that need investigation and possible response actions. Therefore, we position this new Hype Cycle technology at the post-trough 5% point in the Hype Cycle only because of adoption closer to 20% than 50%. 
User Advice: Organizations should consider the use of hazard risk analysis and communication services when:

- They are maturing their BCM program to include crisis/incident management and travel risk management.
- Their operational risk management program expands to include hazard risk.
- Their operations are in locations that have a medium to high level of risk of environmental, geopolitical, weather-related and health-related incidents.
- A medium to high percentage of their workforce travels on a regular basis (daily or weekly, for example).
- They have personnel regularly traveling to medium- to high-risk locations.

Look for a solution that includes:

- Coverage of your operating locations
- GIS capability that can be customized to your operating footprint

- Risk intelligence communication via phone, email, SMS and customer-customizable reporting
- Mobile device app support for communications and interactive situational awareness activities
- Daily news alerts for events occurring in your operating locations

Business Impact: Hazard risk analysis and communication services are used by government agency and private enterprise professionals in corporate security, BCM, supply chain management, travel risk management, watch command, fusion centers and more, for a number of purposes:

- Location risk assessments as part of the due diligence done for business decision making, such as data center or recovery site development, and personnel or operations relocation
- Incident situational awareness analysis for production operations disruption impact, traveler and expatriate duty of care, and other corporate interest impact
- Preparedness and response management for large-scale scheduled events, such as the Olympics, G8 meetings, political conventions, the World Cup and other sporting events

Incidents such as medical outbreaks, biohazards, weather or natural events, accidents, economic crises, and cultural, geopolitical, crime and civil unrest incidents occur around the clock, every day of the year. Knowing which one will turn into a crisis requiring a directed response for your organization can be a daunting task without accurate and timely information. Monitoring the multitude of news sources from the news networks, the Internet, social media outlets and more can result in information paralysis and under- or overreacting to an incident. For this reason, we assign a benefit rating of High to this technology service.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: A3M Mobile Personal Protection; CountryWatch; ijet; NC4

Recommended Reading:
"How Gartner Defines Crisis/Incident Management"
"Magic Quadrant for U.S. Emergency/Mass Notification Services"

Humanitarian Disaster Relief

Analysis By: Roberta J. Witty

Definition: Humanitarian disaster relief is a complex set of tools and processes that coordinating bodies use to manage disparate resources and address the many needs of disaster victims. Complexity is inherent in the process, because a mix of players converges on the disaster site. Utilities, charities, other NGOs, private enterprises with expert disaster relief skills, emergency responders and volunteers are often included in the mix. Many have no official interface to the authorities or specific deployment instructions, making their responses chaotic.

Position and Adoption Speed Justification: Free and open-source software, such as that from the Sahana Software Foundation and Ushahidi; social media services, such as Facebook and Twitter; emergency or mass notification services (EMNS) implementations, such as Notify NYC; and other for-profit software tools for crisis/incident management have been key to coordinating the delivery of disaster relief for major disasters, such as the 2010 earthquake in Haiti, the 2011 floods and cyclone in Australia, the 2011 earthquake and tsunami in Japan, Superstorm Sandy in 2012, and the Boston Marathon bombing in 2013. For example, for Superstorm Sandy, New York City used its citizen-alerting service, Notify NYC, extensively to inform residents in the metro New York City area of storm actions and advice. New Jersey used Twitter in the same way. For the 2011 Japanese earthquake and tsunami, Google set up its Crisis Response center, which has a person finder, transit status, flight status, shelter information, and many more alerts and notices for people involved and interested in the event. Google previously set up a similar system for the 2010 Haiti earthquake. Universities are also heavily involved in the development of some of the tools, such as the Humanitarian Free and Open Source Software (FOSS) Project started by Trinity College, Wesleyan University and Connecticut College, and the University of Colorado's work on the "Tweak the Tweet" project during the Haiti earthquake. Commercial players are also offering solutions for humanitarian relief, predominantly through the U.S. Unified Incident Command and Decision Support (UICDS) program for crisis/incident management for government agencies. Private enterprises, such as telecommunications firms (see AT&T's Network Disaster Recovery service) and electric power utilities, set up emergency utility access.
To further benefit the process of humanitarian disaster relief, solution providers need to add more integration among their tools so that each mashup implementation is easier and faster. The government emergency management organization in the country experiencing the disaster is typically the point of coordination, and it is at this point in each country where the tools are implemented and the process is optimized. In the 2014 Hype Cycle, we moved the rating to post-trough 5% from its pre-trough position in 2012 and 2013, due to the increased use of these tools in regional disasters in recent years. However, every event is unique, and Gartner continues to stress that humanitarian disaster relief remains a complex implementation among all participating suppliers each time there is a disaster.

User Advice: National emergency and disaster management personnel should investigate the use of open-source and for-profit software for managing humanitarian relief aid and collaboration during a disaster. Local emergency management offices that run normal operations (for example, for high-altitude, missing-person and shelter management) should integrate humanitarian disaster relief into their operations before an event occurs. Doing so will reduce the chaos of unmanaged search-and-rescue efforts. Humanitarian aid organizations will find these solutions useful for managing the intake and distribution of their own resources (for example, relief camp management and food distribution management).

Business continuity management (BCM) professionals will find these solutions useful as a way to integrate all BCM tools and platforms for private-enterprise disaster management, as well as to coordinate disaster management activities across multiple parties (private and public) during a large-scale (for example, regional) event. Academic institutions seeking to participate in open-source projects will find the open-source humanitarian disaster relief software market an opportunity for rich, open-source interactions.

Business Impact: Managing humanitarian disaster relief is a challenge and can benefit from the use of automation. These solutions help manage emergency resources in a manner that brings support to victims faster and mitigates further damage due to the recovery effort. However, the process of managing a disaster still needs tremendous coordination across multiple government, private and NGO players.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: American Red Cross; CrisisCommons; Department of Homeland Security; DigitalGlobe; Google Crisis Response; IFRC; InSTEDD; International Network of Crisis Mappers; Notify NYC; Open Solutions Group; Sahana Software Foundation; Star-Tides; Ushahidi; VirtualAgility

Recommended Reading:
"Sahana: Humanitarian Disaster Management and Collaboration System"

Workforce Resilience

Analysis By: Roberta J. Witty

Definition: Workforce resilience, a subset of the business recovery discipline, addresses the protection and productivity of the critical workforce during a disaster. A workforce resilience program delivers solutions for these critical recovery areas: life and safety; immediate availability of cash, food and shelter; workforce personal preparedness; workforce event preparedness; recovery staffing, housing and travel; and workspace/work area recovery.
Position and Adoption Speed Justification: Organizations that have any type of recovery plan have, to some extent, addressed their workforce resilience needs. For example, the IT disaster recovery plan must, by definition, identify the IT recovery teams to recover the data center, and telework solutions put in place for work-life balance have become a key part of many recovery strategies. However, unless the organization has made the transition from IT disaster recovery to business continuity management (BCM), it typically has not addressed all aspects of its workforce resilience needs. For example, workforce collaboration tools, mobile devices, social media, bring your own device and tablet-enabled applications are being implemented with greater frequency, and therefore, the workforce can respond to a crisis from any location, thereby enhancing response

and recovery needs. Gartner predicts that, as organizations make this transition, workforce resilience needs and solutions will mature. Areas that are growing in adoption for workforce resilience are:

- The use of services for ensuring that housing is available for the workforce during a disaster. Electric service providers have been using housing continuity services for some time, but the vast majority of other organizations have not. Savings can be had in this area because of the expertise of these new market providers. Therefore, every organization should take advantage of them.
- Consideration of implementing generators at the homes of the critical workforce.
- Multisourced Internet access at the homes of the critical workforce.
- Personal-level training for disaster recovery, oftentimes leveraging the American Red Cross program for personal preparedness.

Our 2014 rating was based on the maturity of the overall process of ensuring that workforce resilience preparedness needs are addressed, including the overall adoption of technical solutions and good practices. As a result, we moved the rating just slightly from the Trough of Disillusionment to post-trough 5% to acknowledge that some progress has been made in improving workforce resilience in a number of ways, especially after Hurricane Sandy's effect on the East Coast of the U.S. in 2012. However, we continue to stress that much work remains to be done in ensuring that workforce needs are adequately addressed in recovery planning, especially in personal preparedness.
User Advice: Consider these areas when developing and evaluating a workforce resilience plan:

- Emergency response activities, such as building evacuation, traveling and work-at-home worker management, immediate medical care, and trauma counseling
- Personal preparedness at home, so that workers can leave loved ones to come to the aid of the organization
- Recovery team training through recovery plan exercises in many forms
- HR benefits and compensation considerations for recovery activities, as well as job description requirements for recovery team members
- Organizational recovery, including collaboration tools for work activities and crisis management activities; immediate access to disaster resources, such as cash, housing/hotel arrangements for impacted workers and transportation; work-at-home programs for both production and recovery needs; and the information and applications needed to conduct day-to-day business operations
- Management succession planning, and formally including the successor in recovery plan exercises
- Staff scheduling, cross-training and external contracts for special/limited skills

- Work area/workspace in other facilities that the organization owns/leases, as well as telework centers and mutual aid agreements between partners, customers or other organizations willing to offer such good-will arrangements

Business Impact: There are three goals of a workforce resilience program:

- Enhance the organization's ability to recover from a disaster by ensuring that the workforce is as practiced as possible for an eventual disaster, at home as well as in the workplace, and has the resources it needs to perform recovery activities.
- Preserve the reputation of the organization by ensuring that crisis communications are well-crafted and timely, and that the right recovery team responds at the right time.
- Maintain the social networks normally found at work to ensure that the workforce's emotional state is protected.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: American Red Cross; Apple; BlackBerry; CAPS; Citrix; Continuity Housing; Extreme Behavioral Risk Management (XBRM); HP; IBM; Preparis; Regus; Rentsys Recovery Services; SunGard Availability Services

Recommended Reading:
"Remote/Teleworking Practices Offer Opportunities to Help Japanese Businesses Recover From Disaster"
"Telework in Government Moves From 'Good Idea' to 'Must Have'"
"Out of the Ashes: Business Continuity Management Lessons From Iceland's Volcanic Eruption"
"Toolkit: Update Your Human Resources Policies for BCM Considerations"

Risk Assessment for BCM

Analysis By: Tom Scholtz

Definition: Risk assessment in the business continuity management (BCM) context is the process of identifying risks to business process availability and the continuity of operations.
It is an essential first step (along with the business impact analysis [BIA]) in the overall BCM process, and is necessary to reduce the frequency and effect of business interruptions, addressing risks related to technology, location, geopolitics, regulations and industry, as well as to the business and IT supply chains.

Position and Adoption Speed Justification: Although it is well-understood that risk assessments are a necessary component of BCM planning, business managers sometimes consider them to be time-consuming and too resource-intensive. This opinion has been justified by the general lack of

effective risk assessment methods and tools, and often exacerbated by the inappropriate use of such tools and methods. Furthermore, given that BCM planning is often focused on low-likelihood, high-impact events, the emphasis of the risk assessment is typically on planning for the possibility of a catastrophic event, rather than on the probability of the event happening. However, expectations of better levels of practice are increasing, encouraged to some extent by standards such as ITIL, COBIT and International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) 22301. Risk assessment is increasingly seen as having a valuable role to play in identifying, assessing and preventing events that could result in the unnecessary triggering of recovery plans. That is, risk assessment focuses not only on events over which the enterprise has little control (such as natural disasters and terrorism), but also on those over which it has more control (for example, facility failures, supply chain complexity, poor change management, security control weaknesses and human error). Risk assessments are recommended in all BCM frameworks, and risk assessment tools are being included as integrated or stand-alone modules in BCM planning (BCMP) toolsets. Governance, risk and compliance tools increasingly support assessing and reporting on BCM risk. Using these tools requires specific BCM skills and time, both of which are often unavailable, but this situation is improving. Increasing emphasis on the importance and value of risk assessment in all spheres of business management is driving increased adoption of the discipline as a key component of BCM.
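The lightweight, intuitive assessment style this section favors over heavily algorithmic models often reduces to a simple likelihood-times-impact ranking of a risk register. A minimal sketch; the 1-to-5 scales, risk names and scores are illustrative assumptions, not a Gartner methodology:

```python
def prioritize(risks):
    """Rank risks by likelihood x impact, both scored on a 1-5 scale."""
    scored = [(r["name"], r["likelihood"] * r["impact"]) for r in risks]
    return sorted(scored, key=lambda item: -item[1])

# Hypothetical register mixing low-control and high-control risks.
register = [
    {"name": "regional power failure", "likelihood": 3, "impact": 4},
    {"name": "pandemic",               "likelihood": 1, "impact": 5},
    {"name": "failed change rollout",  "likelihood": 4, "impact": 3},
]
print(prioritize(register))
# prints [('regional power failure', 12), ('failed change rollout', 12), ('pandemic', 5)]
```

Note how the high-control "failed change rollout" scores as high as the natural-hazard entry, which is exactly the point made above: risk assessment should also cover events the enterprise can actually prevent.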
However, unrealistic expectations about risk assessment being a panacea for ensuring business involvement in the BCM process, coupled with the inappropriate use of risk assessment tools (such as using very algorithmic mathematical models with a business audience that manages risk in a more intuitive manner), will continue to result in some disillusionment and a lack of business unit buy-in.

User Advice: Make formal risk and business impact assessments mandatory components of your BCM program. These assessments must identify key control weaknesses and single points of failure. Define the extent to which risk assessments will be performed based on BCM project scope, resources and time availability. If existing processes are not effective, then change them. Consider replacing complex mathematical tools with more-intuitive assessment methods (for example, scenario planning and Delphic brainstorming), if they will better suit the cultural approach to risk management. Such methods are typically more suited to assessments of multisourced environments, including software as a service (SaaS) and cloud-based services. Improve efficiency and reduce the time demands on business managers by leveraging risk assessments performed by operational or IT risk teams. Work with those teams to ensure that their data is sufficiently granular to meet BCM needs. As you become more mature at BCM risk assessment, make the transition to a continuous improvement process that accommodates BCM, IT and security risks. This will ensure that BCM team members (business and IT) are included and kept apprised of new or changing threats. Use standard terminology and processes to ensure consistency in assessment and risk prioritization. Investigate the use of software tools. They will not eliminate the need for an experienced risk assessor, but they can simplify the risk assessment process.
Additionally, they provide an important repository for risk information, tracking assessments and treatment activities, as well as documentation for auditors and aid to program improvement. BCMP tools, which often

provide integrated risk assessment functionality, are increasingly used as hosted or SaaS solutions. This potentially allows the business continuity manager to realize value at a lower entry price point.

Business Impact: Implementing BCM plans can be expensive and disruptive. Risk assessments are essential for pre-emptive action to reduce threat occurrences and constrain the effect of any disaster. Risk assessments ("What are the chances of a disaster happening?") also provide critical information for effective BIAs ("What will the impact be if a disaster becomes reality?"). Increasing adoption of SaaS and cloud-based services adds an additional level of complexity to BCM planning and the ability to perform effective risk assessment.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Business Protector; COOP Systems; Cura Technologies; ebrp Solutions; Fusion Risk Management; Linus Information Security Solutions; MetricStream; Rentsys Recovery Services; Risk Watch International; RSA, The Security Division of EMC; Strategic BCP; SunGard Availability Services

Recommended Reading:
"Survey Analysis: Assessment Practices for Cloud, SaaS and Partner Risks, 2013"
"15 Common IT Risk Management Pitfalls"
"Magic Quadrant for Business Continuity Management Planning Software"
"Toolkit Sample Template: Checklist for Data Center BCM/DR Risk Assessment"
"Toolkit: Applying the Gartner Risk Assessment Methodology to Critical Enterprise Assets"

Climbing the Slope

BCM Planning Software

Analysis By: Roberta J. Witty

Definition: Business continuity management planning (BCMP) software is the key tool used to manage BCM programs. It provides risk assessment; business impact analysis; business process, supplier or vendor, and IT dependency mapping; plan management functionality; and program management metrics and analysis.
Some tools also offer plan exercising, resource modeling, and "lite" support for crisis/incident management and emergency notification.

Position and Adoption Speed Justification: Organizations increasingly need usable recovery plans of all types, from response to restoration, and a consistent and repeatable plan development process. Also, the growing focus on BCM program metrics has resulted in increased sophistication

in BCMP tools. In addition, they integrate with other BCM tools, such as emergency or mass notification service (EMNS), crisis/incident management (C/IM), GIS, and geospatial tools, creating an ecosystem for real-time situational awareness during an actual disaster. Mature BCM programs use these tools for business and program management analysis, with a goal of building more resilience into day-to-day business operations. The biggest competitors to using a BCMP tool are Microsoft Office tools and SharePoint for document management. With more than 30 vendors, the BCMP market has a 2013 revenue estimate of $162 million, a 24.6% increase over our 2012 estimate of $130 million. The three-year average annual growth rate is 16%. Pricing for this market remains competitive for simpler implementations, while pricing for large, multinational implementations can be in the high six figures or more. Large or regulated enterprises, as well as government agencies, typically use the tools, while small and midsize firms are increasingly looking to do so. The financial services market and organizations with complex business operations lead the pack in implementations. Coordinating, analyzing and managing large amounts of availability information are almost impossible to do without a tool. Therefore, the significant growth in adoption of BCMP tools (24% from 2010 to 2011, 38% from 2011 to 2012, and 42% from 2012 to 2013, as measured in our annual security and risk management survey) indicates that organizations are realizing these tools can help standardize and manage recovery plan development, as well as manage the BCM program itself. Having current, effective and exercised recovery plans is the key to success during a disaster, and these tools are essential for effective crisis and business recovery. We anticipate adoption to continue to grow in the next five years, given the increased focus from government agencies, regulators and private-sector preparedness initiatives.
Some of the future growth will come through the governance, risk and compliance (GRC) market, as more GRC vendors provide BCMP capability as part of the broadening operational risk management toolkit. In this year's Hype Cycle, we moved the BCMP market position up by one spot to post-trough 20%, from the 2013 Hype Cycle position of post-trough 15%.

User Advice: Consider a BCMP tool when:

- You are starting a new BCM program and want to follow standard practices throughout the organization.
- You are updating your current BCM program and processes.
- You are maturing your BCM program and need more analytics than traditional office management tools can provide.
- You need to integrate plans and partial plans from a number of departments and business units into one consistent, accessible and easily updated plan.
- A merger or acquisition has presented you with the need to create a BCM program reflecting all the elements of your organization.
- You want to conduct the research and planning process in-house, with minimal assistance from outside consultants.

Do not overbuy. Focus on:

- Ease of use in the hands of business users, not only IT or BCM program office users
- Ease of customization (by you, not the vendor) to your organization's continuity delivery framework and so forth
- Ease of reporting, including modifying the report formats provided by the vendor, as well as creating new report formats
- Ease of integration with other important business applications, such as enterprise directories, HR tools, business process management tools (whether internally developed or purchased), change and configuration management databases (CCMDBs), IT asset management tools, BCM software that your organization may already have purchased (such as EMNS or C/IM software), and news feeds to a BCM program dashboard
- Mobile device (smartphone or tablet) support for recovery plan access and execution at the time of a business disruption

Along with the financial statement and strategic plan, the recovery plan is among the organizational documents most likely to result in lost revenue, damaged reputation or worse if it is not current or is unavailable (or nonexistent) at the time of a business disruption. Moreover, like all organizational policies and procedures, the best recovery plan can rapidly become obsolete. Therefore, organizations must consider the recovery plan a living document that needs a continuous review and update process, with regular plan reviews (annually, at a minimum, or when there are major business or infrastructure changes) and event-triggered plan reviews (such as changes in operational risk profiles, business or IT processes, and applicable regulations, as well as exercise results showing a gap between plan actions and current recovery needs).
Business Impact: BCMP tools will benefit any organization that needs to perform a comprehensive analysis of its preparedness to cope with business or IT interruptions, and that needs an up-to-date, accessible plan in place to facilitate response, recovery and restoration actions. Used to its fullest potential, a BCMP tool can also enhance business operations and resilience outside of recovery, in areas such as HR management, business and IT re-engineering, and mergers and acquisitions.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Avalution Consulting; Bis-Web; BOLDplanning; Continuity Logic; Coop Systems; ebrp Solutions; EMC (RSA); Factonomy; Fusion Risk Management; Inoni; KingsBridge; Linus Information Security Solutions; Metric One; MetricStream; Modulo; Paradigm Solutions International; Phoenix IT Group; Quantivate; RecoveryPlanner; Rentsys Recovery Services; Strategic BCP; Sungard Availability Services; Tamp Systems; Virtual Corp.

Recommended Reading: "Research Roundup: Business Continuity Management and IT Disaster Recovery Management, 2Q13"; "The Continuity Delivery Framework Is Essential for Ensuring Measurable and Sustainable BCM Planning Tool and Program Benefits"; "Business Impact Analysis: Enabling Effective Business Continuity Management"

Crisis/Incident Management Software

Analysis By: Roberta J. Witty; Leif Eriksen

Definition: Crisis/incident management (C/IM) software is used to manage the actions of the workforce and other key stakeholders in response to an incident in a consistent manner, so as to return to normal operations as soon as possible. C/IM functionality includes crisis communications and collaboration, a recovery plan repository, plan training/exercising, task and expense management, workforce scheduling, a geographic information system, social media analysis, data visualization, support for multichannel/application viewing, and government agency reporting.

Position and Adoption Speed Justification: The goal of C/IM is to contain and minimize the impact of a crisis or incident (such as earthquakes, power outages, transportation delays, product failures, market shifts, adverse management activity, workplace violence, fires, floods, collapsing bridges, severe weather conditions, terrorist attacks, chemical spills and accidental discharges) on individuals, localities, businesses and public agencies. Damage can be done to an organization's reputation, operations and revenue streams, as well as to a government's ability to reduce any adverse impact on public safety. Information security incidents are usually handled by a specialized team under the chief information security officer, but may turn into a larger event that impacts business operations to the point that a disaster is declared. In those cases, event management would transition to the broader C/IM team.
In recent years, specialized C/IM software tools originally designed for government agencies and utilities have been commercialized for the private enterprise. These tools are used for the following purposes:

- Managing relationships with all organization stakeholders (internal and external)
- Managing response, recovery and restoration actions for the crisis, incident or situation through task and workforce management
- Managing media communications, including national services and social media traffic
- Communicating information internally and externally, typically via emergency/mass notification services
- Providing postmortem reviews of the crisis or incident for regulatory training, reporting and business continuity management (BCM) process improvement efforts

Solutions may be:

- Specialized to the operations of one industry (for example, government, electric utilities, transportation, or oil and gas)
- Generalized for the management of any type of crisis or incident, such as the functionality found in a business continuity management planning (BCMP) tool
- Part of an environmental, health and safety (EH&S) application
- Part of a case management tool

Many of these products are evolving into centralized "systems of record" and general risk management tools. Regional and national-scope disasters require enterprise-based C/IM for the critical infrastructure sectors to interact, at least at the level of status reporting and communicating with one another and with government agencies. As a result, the Federal Emergency Management Agency (FEMA), through the Unified Incident Command and Decision Support (UICDS) Project (a middleware framework to tie together many disparate technologies used for C/IM), will help remove some process barriers in place today, as well as provide meaningful situational-awareness information to public and private organizations. In addition, government and regulatory agency requirements, such as those of the U.S. Occupational Safety and Health Administration (OSHA), the National Incident Management System/Incident Command System (NIMS/ICS) and FEMA, are driving more organizations to move to automation.

In the 2014 Hype Cycle, C/IM software remains at the post-trough 20% position because we aren't seeing enough private-sector usage to change the adoption rate. Private enterprises, other than large, multilocation and often multinational organizations, find these tools complicated to use or fit for purpose for only one standard (for example, the NIMS/ICS). Less-complex tools, which can be found in the BCMP tools market, are required for market adoption to rise.
User Advice: Match the type of C/IM software solution deployed to the most likely and critical types of crises or incidents that pose the greatest operational risk to the company, based on a formal, board-approved risk assessment. A financial services company might opt for a solution that provides functionality aligned with an IT outage, a natural disaster or a pandemic, while a heavy-industry manufacturing entity might choose one with functionality tailored for response to EH&S-related crises or incidents. Buyers need to be realistic about the initial benefits and the level of effort required to reach them, and they should expect years of slow but steady improvement in the value they extract from this category of product.

Ensure that the chosen software solution adheres to the public-sector crisis/incident protocols relevant to the geographic regions in which the solution is deployed. For example, in the U.S., any solution targeted at responding to physical crises or incidents, such as environmental mishaps, safety issues, or natural disasters affecting health and safety, should adhere to the NIMS/ICS process, as mandated by the U.S. Department of Homeland Security. This will ensure interoperability with public-sector response agencies.

Manufacturers with exposure to EH&S issues as a result of disruptions caused by natural disasters should adopt solutions that are interoperable with regional public-service protocols to ensure timely and efficient responses that minimize brand damage, and should consult with their corporate counsel on jurisdictional issues relating to privacy and rules of evidence.

Business Impact: C/IM processes and software solutions help organizations manage the following actions taken in response to a critical event or disaster that interrupts the delivery of goods and services:

- Improve the organization's ability to protect public safety and to restore business services as quickly as possible.
- Improve the efficiency of crisis/incident command and related emergency responses through continual communication and progress assessment when responding to a disaster.
- Ensure the recovery of expenses incurred during the disaster from business interruption insurance policies.
- Protect the reputation of the organization in the eyes of all stakeholders: employees, customers, citizens, partners and suppliers, auditors, and regulators.

Using a system that imposes a standardized best-practice or leading-practice model extends uniform managerial controls across the organization. It also cuts staff training time and ensures better integration with the broader internal and external community involved in recovering from a disaster.
Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Crisis Commander USA; Enablon; Enviance; ERMS; Global AlertLink; IHS; Intelex; Intergraph; Intermedix; IntraPoint; Ixtrom Group; MissionMode; NC4; Previstar; ReadyPoint Systems; Reality Mobile; RMSS; SAI Global; Send Word Now; SIS EmerGeo Solutions; Sungard Availability Services; Swan Island Networks; VirtualAgility; Witt O'Brien's

Recommended Reading: "How Gartner Defines Crisis/Incident Management"; "Toolkit: Requirements for Crisis Command and Emergency Operations Centers"

Emergency/Mass Notification Services

Analysis By: Roberta J. Witty

Definition: EMNS automates the distribution and management of notification messages to an organization's interested parties (e.g., workforces, customers, students and citizens) through multiple endpoints (e.g., voice, email, SMS, digital signage, safety systems, public alerting systems and so on). Message distribution can be done via a Web portal, a mobile device app or browser,

interactive voice response, or the vendor's call center. Use cases include emergency events, business operations notifications, IT service alerting and public safety.

Position and Adoption Speed Justification: Organizations are increasingly implementing EMNS to build stronger crisis management programs. Communication is critical during incidents ranging from localized events, such as a fire or power outage, to regional and catastrophic disasters, such as earthquakes (as in Chile, Haiti and Japan); hurricanes/tsunamis (as in Hurricane Sandy in the U.S., as well as the storms in Indonesia and Japan); terrorist attacks (as in Mumbai, India; in London; and in the U.S. on September 11); and other business disruptions (as in the 2014 Bangkok Shutdown, the 2010 Iceland volcanic ash cloud disrupting air travel, and the 2009 to 2010 H1N1 pandemic).

The EMNS market is price-competitive at the basic capabilities level. As customer needs and use cases change and expand, so, too, will this market. The majority of implementations are hosted by the vendor and priced using a per-contact model. EMNS products have attracted many specialty audiences, resulting in a large field of many small vendors and a few large multiproduct vendors. Gartner's current list contains more than 60 vendors. Consolidation is expected and needed over the next five years. Potential EMNS mergers and acquisitions include vendors in the following markets: facilities management; physical security; fire safety; crisis management; environmental, health and safety; disaster event information analytics/situational awareness; and business continuity management planning. No vendor has an offering that supports all use cases. There is some vendor overlap between the EMNS and communications-enabled business process markets (see "Hype Cycle for Unified Communications and Collaboration, 2013") through an EMNS product application programming interface for integration with a triggering business application.
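Because per-contact pricing is the dominant model, it makes a convenient common denominator when shortlisted vendors quote different models. The sketch below is illustrative only: the vendor names, pricing models and figures are hypothetical, not real quotes, and a real comparison would use each vendor's converted quote as the advice in this section suggests.

```python
# Normalize hypothetical EMNS quotes to an annual cost per contact so
# vendors quoting different models can be compared on one basis.
def annual_cost_per_contact(quote: dict, contacts: int) -> float:
    model = quote["model"]
    if model == "per_contact":      # already the target model
        return quote["price_per_contact_year"]
    if model == "flat_annual":      # flat fee spread across all contacts
        return quote["annual_fee"] / contacts
    if model == "per_message":      # estimate from expected message volume
        msgs = quote["est_messages_per_contact_year"]
        return quote["price_per_message"] * msgs
    raise ValueError(f"unknown pricing model: {model}")

quotes = {
    "VendorA": {"model": "per_contact", "price_per_contact_year": 2.50},
    "VendorB": {"model": "flat_annual", "annual_fee": 18_000},
    "VendorC": {"model": "per_message", "price_per_message": 0.05,
                "est_messages_per_contact_year": 40},
}
for name, q in sorted(quotes.items()):
    print(name, round(annual_cost_per_contact(q, contacts=10_000), 2))
```

Note that the per-message conversion depends heavily on the assumed message volume, which is exactly why converting every quote to one model before comparing matters.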
We are also seeing purpose-built offerings, such as customer communications management (see "Hype Cycle for P&C Insurance, 2013") and multichannel marketing communications (see "Magic Quadrant for Multichannel Campaign Management"). We expect that organizations will continue to need multiple tools to cover all use cases. Innovation in EMNS will come from expanded support for mobile devices, as well as from crisis/incident management situational awareness.

The position for EMNS in 2014 remains the same as in 2013 for the following reasons:

- The number of vendors is still expanding.
- The direction of new features is still open to interpretation; a few vendors are moving into the situational awareness market, but adoption of these tools for that purpose has barely been embraced by the customer base.
- The use cases within the organization are expanding, and EMNS product capabilities are expanding in support of them.

User Advice:

1. Understand all the notification use cases needed by your organization to ensure that you are making the best use of your investment.

2. Use the same pricing model across all vendors on your shortlist to do a valid pricing comparison; this may require a vendor to convert its pricing model to yours.

3. Choose a vendor that has experience in your vertical industry to better align its offering to your business operations.

4. Choose an EMNS vendor that has customer support services located in the same or adjacent time zones as yours, and that also has language support for your operating locations.

5. Choose an EMNS vendor that has data center operations located in different geographic locations from yours; this is not only to prevent the same event from impacting both you and the EMNS vendor, but also for privacy protection considerations.

6. Select an EMNS vendor that supports your organization's mobile technology and social media integration strategy, and that has device-specific applications that align with that strategy.

7. Carefully plan your workforce enrollment procedure to ensure that all people who need to be contacted are included in the service, and that their contact information is current and complete.

8. Carefully plan the type, number and content of notification messages, because:

- Recipients of notification messages may ignore them if too many are sent about the same event.
- Carrier-based character restrictions on text messaging make the formation of a meaningful message challenging.
- During a regional disaster, you shouldn't overload the telecommunications infrastructure with needless messages.

Business Impact: The interest in and need for EMNS (critical for managing and improving an organization's crisis communications capability) continues to grow among governments, public and private enterprises (regulated or not), educational institutions, and operators of critical infrastructure, because crisis communications are becoming a best practice and a requirement for some industries (for example, higher education, and as part of U.S. fire code NFPA 72).
The business benefits of using an EMNS tool include:

- Key personnel can be notified in minutes, and large numbers of nonkey, but affected, personnel can receive critical information about the event.
- Management can focus on critical decision making and exception handling, instead of message delivery.
- Human error, misinformation, rumors, emotions and distractions, which are so often present during a crisis, can be better managed and corrected.
- A documented notification audit log can be provided for real-time and postevent management.
- The reputation of the organization can be preserved or enhanced.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Amcom Software; AtHoc; Blackboard; Cassidian Communications; Eaton; Emergency Communications Network; Everbridge; Facebook; Federal Signal; FirstCall; Global AlertLink; MessageNet; MIR3; Omnilert; Rave Mobile Safety; salesforce.com (Chatter); Send Word Now; Sungard Availability Services; TapShield; Twenty First Century Communications; Twitter; Volo; xmatters; Yammer

Recommended Reading: "Magic Quadrant for U.S. Emergency/Mass Notification Services"; "The Emergency or Mass Notification Service Market: Now and for the Next Five Years"; "Spam Filters Could Cripple Your Emergency Notification System"

Ka Band Satellite Communications

Analysis By: Bill Menezes; Jay E. Pultz

Definition: Ka band satellite communication (satcom) services are designed to augment and replace Ku band very small aperture terminal (VSAT) services, the enterprise fixed-location satcom mainstay since the 1980s. The higher-frequency Ka band offers much higher data rates than Ku band, at a 30% to 40% lower cost per bit, but is not yet available globally.

Position and Adoption Speed Justification: Ka band services are available from satellite service operators, resellers and system integrators. These services primarily provide broadband connectivity to remote geographic regions and locations that terrestrial systems, such as wireline or cellular networks, do not currently serve or are unlikely to serve. These regions include rural, low-population-density areas; islands; large swaths of Africa and Asia (such as Siberia); shipping; and offshore oil and gas platforms. Service providers target Ka band satcom at a wide variety of users, including enterprises, small and midsize businesses (SMBs) and consumers. As aging Ku band systems reach their end of life, Ka band is an attractive, next-generation replacement.
With multimegabit data rates and higher power density via spot beams, Ka band expands the applications suitable for satcom, including cloud computing, collaboration, HD video and very large file transfers. Ka band services typically use remote earth terminals with antennas of 1 meter or less in diameter, more than 40% smaller in area than Ku band terminals. Services are offered at up to the 10 Mbps range on remote uplinks (Ku band has typically been limited to 2 Mbps uplinks or lower). On a per-bit basis, Ka band is typically 30% to 40% less expensive than Ku band. (Note: Ka band transmissions also are more prone to attenuation issues, such as "rain fade" during periods of heavy precipitation, although IT network managers can use multiple technologies, such as advanced forward error correction, adaptive power or adaptive modulation, to mitigate the issue.) Nearly all fixed-location Ka band satcoms are in geosynchronous orbit about 22,000 miles (36,000 kilometers) above the Earth. The latency over that distance constrains their use for voice and videoconferencing applications to situations when Ka band is the only service available. Hence, the

primary Ka band services are data and video broadcasts. However, one emerging service is designed to provide Ka band data throughput at lower latency: The O3b Networks Ka band satcom system uses so-called "medium" orbits (about 8,000 kilometers) to reduce latency to below 150 milliseconds and is targeting throughput of up to 1.2 Gbps per beam. This data rate is sufficient for cellular telecom backhaul from remote areas. O3b Networks plans full commercial service from an eight-satcom constellation by YE14.

Ka band enables a common networking solution for all sites and is therefore useful for operations such as big-box retail chains. Satcoms also are redundant to terrestrial networks, giving them wide applicability in disaster recovery and business continuity. For example, numerous satcom service and infrastructure vendors provided satellite connectivity and terrestrial communications equipment to local and international emergency and relief agencies in the Philippines following November 2013's devastating Typhoon Haiyan. Advanced Long Term Evolution (LTE) cellular networks eventually may emerge as a key competitor of Ka band in certain underserved markets and disaster recovery applications. However, these cellular networks rely on terrestrial connections, such as fiber backhaul, that in some situations might be cut or compromised.

Ka band satcom service availability is increasing in key markets, such as North America (for example, through Hughes or ViaSat), Europe (for example, through SES and Eutelsat) and the Asia/Pacific region (for example, through Ipstar and Inmarsat). With ViaSat-1 having begun operations in January 2012, Ka band capacity has increased dramatically in North America. This single satellite offers a total throughput of 140 Gbps, which equals the capacity of about 100 Ku band satellites.
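The latency contrast between geosynchronous and medium orbits noted above follows from simple propagation arithmetic. A rough sketch, using the report's approximate altitudes and ignoring ground-segment and processing delay:

```python
# One-way propagation delay (ground -> satellite -> ground) for the two
# orbital altitudes discussed in the text, at the speed of light.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_delay_ms(altitude_km: float) -> float:
    """Up-and-down propagation time in milliseconds, straight overhead."""
    return 2 * altitude_km / C_KM_PER_S * 1000

geo = one_way_delay_ms(36_000)  # geosynchronous, ~36,000 km: ~240 ms
meo = one_way_delay_ms(8_000)   # O3b-style medium orbit, ~8,000 km: ~53 ms
print(f"GEO one-way: {geo:.0f} ms, round trip: {2 * geo:.0f} ms")
print(f"MEO one-way: {meo:.0f} ms, round trip: {2 * meo:.0f} ms")
```

At roughly 240 ms one way for GEO, a conversational round trip approaches half a second, which is why the text confines GEO Ka band voice and videoconferencing to cases where no alternative exists, while the medium-orbit round trip stays within the sub-150-millisecond figure cited for O3b.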
Hughes' EchoStar XVII, launched in July 2012, added more than 100 Gbps of capacity covering North America, thus enabling the company to offer consumer downlink speeds in the 10 Mbps to 15 Mbps range. Inmarsat, in December 2013, launched the first of a three-satellite constellation intended to provide global Ka band coverage under the Global Xpress brand. Hughes, SES, Inmarsat, Eutelsat, Arabsat and Intelsat all plan to launch additional Ka band satellite capacity by YE16. In this time frame, we also see Ka band moving from adolescent to early mainstream in maturity.

User Advice: First, identify whether terrestrial-based systems such as xDSL, 3G or 4G/LTE are available to meet the organization's need. View Ka band satcom as complementary to terrestrial-based systems, and use it for specific applications that satcom can uniquely serve. Consider Ka band as a replacement for older-generation C band and Ku band VSATs, or for uses requiring multimegabit data speeds. Look for suppliers to offer multiband-capable hybrid networking so that Ka band can be readily added for new sites on existing enterprise satcom networks. Also watch for future enhancements, such as lower-frequency S band or L band uplinks, which would enable Ka band satcoms to address some mobile segments. Ka band satcoms have improved tie-ins to terrestrial wireless systems. Use wide-area optimization technologies to limit the effects of latency in satcom, and look for integration of these optimization technologies, as well as routing and switching, with earth terminal electronics.

Business Impact: Ka band satcom offers substantially greater data rates, with a smaller, more readily portable earth terminal, at an attractive price point for specific enterprise communication needs, mainly in geographic regions and locations that aren't likely to be served by terrestrial systems (for example, rural North America, remote islands, Africa and deep-sea oil rigs).
Energy, utilities, retail, transportation, maritime and government are key vertical industries that will benefit from Ka band deployments. This technology also can improve disaster recovery capabilities.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Eutelsat; Hughes; Inmarsat; Ipstar; O3b Networks; SES; ViaSat

Recommended Reading: "Ka Technology Compels Enterprises to Revisit Satellite Communications"; "Satellite Communications: The Last, Best Communications Alternative for Remote Sites"

Appliance-Based Replication

Analysis By: Stanley Zaffos

Definition: Replication appliances provide network-based, storage-vendor-neutral block-level and/or network-attached storage (NAS) replication services, which can include local and remote replication of protected volumes and NAS file systems. Network-based replication solutions can span servers and storage systems. Integration into the storage infrastructure can be via software agents, storage area network (SAN) director or switch APIs, or direct storage system support.

Position and Adoption Speed Justification: Offloading replication services from storage systems onto a network-based replication appliance provides operational and financial advantages compared with software- and controller-based solutions. Operational benefits include preserving native storage system performance and the use of common replication services across multiple heterogeneous storage systems, which can simplify disaster recovery by creating a constant timeline, or consistency group, across multiple storage systems. Other potential operational advantages include the ability to add direct-attached storage (DAS) support via software agents, and insulating disaster recovery integrity from imperfect software control procedures. Using replication appliances reduces the strength of storage vendor lock-in, which can translate into lower storage ownership costs by keeping storage system acquisitions competitive.
Despite these advantages, market acceptance has been hampered by the following:

- A reluctance by end users to add another device to the input/output path
- Competition from storage virtualization appliances
- Storage array vendor lock-ins
- The channel constraints created by storage vendors and indirect channels protecting the vendor lock-ins that controller-based replication technologies create
- The increasing number of storage systems that use all-inclusive software-pricing models

User Advice: Users should consider the use of replication appliances when there is a:

- Need to create a constant timeline across multiple homogeneous or heterogeneous storage systems
- Problem with the usability or performance of the existing replication solution
- Savings to be had from using a replication-appliance-based solution rather than an existing or virtualization-appliance-based solution
- Need to preserve investments in existing storage systems
- Desire to pursue a dual-vendor strategy

Business Impact: Appliance-based replication services can:

- Provide the benefits of storage-based replication solutions without the lock-ins that storage-system-based replication solutions create
- Delay storage system upgrades by offloading replication overhead from a storage system that lacks the compute power and bandwidth needed to limit the impact of replication services on native system performance
- Work with DAS, SANs and NAS
- Provide heterogeneous replication targets to allow lower-cost solutions

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Cisco; DataCore Software; EMC; FalconStor Software; Hitachi Data Systems; Huawei; IBM; InMage; NetApp

BCM Methodologies, Standards and Frameworks

Analysis By: Roberta J. Witty

Definition: Business continuity management (BCM) methodologies, standards and frameworks offer a standard approach to implementing a BCM program, for activities ranging from program governance to risk assessment/business impact analysis, recovery strategy development, and recovery plan development through exercising and program management. There are more than 100 such approaches, but the first International Organization for Standardization (ISO) BCM standard, ISO 22301:2012, will likely help reduce the count.

Position and Adoption Speed Justification: The growing visibility of BCM in boardrooms around the world is putting considerable attention on the development of a best-practice model for BCM methodologies, best practices, terminology and so forth.
There are many initiatives:

- Industry-based, such as the Federal Financial Institutions Examination Council (FFIEC), North American Electric Reliability Corporation (NERC), Securities Industry and Financial Markets Association (SIFMA), the Health Insurance Portability and Accountability Act (HIPAA), Australian Prudential Regulation Authority (APRA), Ontario Securities Commission (OSC), the Center for Financial Industry Information Systems (FISC Japan), Monetary Authority of Singapore (MAS), Financial Services Commission (South Korea), and the Basel Committee on Banking Supervision
- Industry-neutral, such as the ISO, Information Technology Infrastructure Library (ITIL) and the Supply Chain Risk Leadership Council
- Country-specific, such as the National Fire Protection Association (NFPA) in the U.S., the British Standards Institution (BSI) in the U.K., Standards Australia, Standards New Zealand, Associacao Brasileira de Normas Tecnicas (ABNT) and Korean Industrial Standards
- BCM professional good/best-practice guidelines from Disaster Recovery Institute International (DRII) and the Business Continuity Institute (BCI)

A proliferation of regulations, standards and frameworks has occurred during the past 10 years. Because of the U.S. PS-Prep program, the three standards selected for that program will likely be considered as representing a set of BCM best practices for nonregulated industries. The overall uptake of PS-Prep will take time; to date, only nine certifications have been issued to four organizations (see the PS-Prep Directory). In addition, the May 2012 publication of ISO's first BCM standard, ISO 22301:2012, and its companion implementation guidance standard, ISO 22313:2012, is on its way to becoming a worldwide acknowledged standard. However, adoption is still nascent, and the U.S. Department of Homeland Security has yet to add ISO 22301:2012 to the list of approved PS-Prep program standards.
Most ISO 22301:2012 certifications are for organizations that had already obtained BS certification (the predecessor BS standard has been retired by BSI in favor of ISO 22301:2012). Audit standards have been developed for the BS, ASIS SPC and NFPA standards, and accreditation bodies will be developing audit standards for ISO 22301:2012. ISACA Doc G32 provides guidance to IT auditors for assessing BCM plans.

As organizations recognize their need to work more closely with multiple service providers or business partners, there is pressure to use common frameworks and standards to make it easier to integrate processes across organizational boundaries. As a result, there is increasing focus on compatibility across the supply chain, such as enforcing the use of ITIL as a common framework for IT service management in a multisourced environment. In the same way, the use of a common BCM model would facilitate BCM planning, testing and auditing of business activities across the organization's product/service delivery ecosystem. Enterprises are putting more pressure on their service providers to meet the enterprise's recovery requirements, especially data center hosting providers and service providers that handle time-sensitive information. The latter are often asked to produce Service Organization Control (SOC) 2 reports with attestation based on security and availability principles. Therefore, organizational certification will be the best avenue for a service provider to demonstrate to multiple customers that it can meet recovery requirements.

The position on the 2014 Hype Cycle remains the same as in 2013 (as it was in 2012) for two reasons: (1) although adoption of ISO 22301:2012 has grown since its May 2012 publication, implementation and compliance are very low; and (2) there is very low adoption of PS-Prep.

User Advice: For most enterprises, no single regulation, standard or framework exists that defines the BCM requirements they should meet, although enterprises may be subject to SLAs and contractual requirements that contain availability requirements. There will be pockets of BCM regulation, standard or framework adoption by organizations, depending on:

- Industry (for example, financial services, healthcare, the nuclear industry and electric utilities)
- Geographic location (such as Singapore, where BCM standardization and implementation is high)
- Business operations priorities (customer-facing, revenue-generating and regulatory compliance processes, which tend to have a higher mission-critical rating)
- BCM program maturity (more mature programs tend to follow a BCM approach, even if it is rationalized across multiple standards or frameworks)

Organizations worldwide should review a number of existing standards, including the new ISO 22301:2012 standard, and select or develop their own BCM model based on appropriate industry and country regulations and standards. This review could become complex if you have to rationalize multiple models because of your organization's breadth in location and depth in industry. Organizations worldwide should also identify the models their service providers, trading partners, customers and external auditors are using for their audit work. Nonregulated U.S.-based organizations should follow the work being done in relation to PS-Prep to understand how it might influence their models and organizational certification initiatives.
The only means by which organizations can assess the effectiveness of recovery and continuity controls is through live testing, or through experiencing a real disaster and executing a recovery plan in real time. Even organizational certification does not guarantee that the organization will effectively recover from an actual disaster; it only confirms the maturity of the BCM program process and the management thereof. Know your organization's culture and management style for identifying gaps, single points of failure, or mistakes in current business and IT processes, as well as its maturity for embedding BCM-style thinking and acting into business operations management (process maturity).

Business Impact: Because of the proliferation of regulations, standards and frameworks, there is no reason that an organization should not have a good-quality BCM program delivery model. Using a BCM standard/framework can provide more consistency and completeness (by using a third-party-vetted process) across the enterprise in the execution of the BCM program. Also, responding to BCM program validation requests from auditors, customers and trading partners will be easier due to that consistency. However, it is one thing to have a solid BCM model, and it's another thing

59 to be "process mature" and perform well in each of the categories listed in that model. Finally, having a BCM model is a goal that will benefit organizations in many ways, but it cannot guarantee that organizations will be able to recover from a disaster. Benefit Rating: High Market Penetration: 20% to 50% of target audience Maturity: Early mainstream Hosted Virtual Desktops Analysis By: Mark A. Margevicius; Nathan Hill Definition: A hosted virtual desktop (HVD) is a full, thick-client user environment run as a virtual machine (VM) on a server and accessed remotely. HVD implementations comprise server virtualization software to host desktop software (as a server workload), brokering/session management software to connect users to their desktop environments, and tools for managing the provisioning and maintenance (for example, updates and patches) of the virtual desktop software stack. Position and Adoption Speed Justification: An HVD involves the use of server virtualization to support the disaggregation of a thick-client desktop stack that can be accessed remotely by its user. By combining server virtualization software with a brokering/session manager that connects users to their desktop instances (that is, the OS, applications and data), enterprises can centralize and secure user data and applications, and manage personalized desktop instances centrally. Because only the presentation layer is sent to the accessing device, a thin-client terminal can be used. For most early adopters, the appeal of HVDs has been the ability to thin the accessing device without significant re-engineering at the application level (sometimes required for server-based computing). 
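The broker-mediated arrangement described above can be illustrated with a minimal sketch. The class and field names here are hypothetical, not any vendor's API; it only shows the broker's job of mapping an authenticated user to an assigned desktop VM and returning a connection endpoint for a thin client.

```python
# Hypothetical sketch of an HVD connection broker: it maps users to their
# assigned desktop VMs and hands back a connection endpoint. Because only
# the presentation layer travels to the client, the endpoint is all a
# thin-client terminal needs.

class DesktopVM:
    def __init__(self, vm_id, host, state="stopped"):
        self.vm_id = vm_id
        self.host = host          # server running the hypervisor
        self.state = state        # "stopped" or "running"

class ConnectionBroker:
    def __init__(self):
        self.assignments = {}     # user -> DesktopVM

    def assign(self, user, vm):
        self.assignments[user] = vm

    def connect(self, user):
        vm = self.assignments.get(user)
        if vm is None:
            raise KeyError(f"no desktop assigned to {user}")
        if vm.state != "running":
            vm.state = "running"  # power on the desktop instance on demand
        return {"host": vm.host, "vm": vm.vm_id, "protocol": "display-remoting"}

broker = ConnectionBroker()
broker.assign("alice", DesktopVM("win7-042", "esx-host-3"))
endpoint = broker.connect("alice")
```

In a real deployment, the broker would also handle authentication, session reconnection and pool management; this sketch covers only the user-to-desktop mapping that the definition describes.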
While customers implementing HVDs cite many reasons for deployments, several important factors have contributed to the increased focus on HVDs: the desire to implement new client computing capabilities in conjunction with Windows 7 migrations; the desire for bring your own device (BYOD) and device choice (particularly iPads); the need to deliver on business continuity requirements; and the uptick in customers focused on security and compliance issues. During the past few years, the adoption of virtual infrastructures in enterprise data centers has increased, making HVDs easier to deploy. With this increase comes a level of maturity and an understanding of how to better utilize the technology. This awareness aids HVD implementations where desktop engineers and data center administrators work together. Early adoption of this technology was hindered by confusion around licensing, including compliance issues for the Windows client OS. This has since been resolved through Microsoft Windows Virtual Desktop Access (VDA) licensing offerings; however, the cost still inhibits adoption for many customers. Licensing costs are only one aspect of the higher costs associated with implementing HVDs on a broad scale; sizable costs exist for the necessary infrastructure build-outs. While many IT organizations made significant progress in virtualizing their data center server infrastructures, HVD implementations required additional virtual capacity for servers and storage (above and beyond what was in place for physical-to-virtual migrations). Even with Microsoft's reduced license costs for the Windows OS, which enable an HVD image to be accessed from a primary and a secondary device with one license, there are still other technical issues that hinder mainstream adoption. Since late 2007, HVD deployments have grown steadily, reaching 25 million to 30 million users by the end of 1Q14. The broad applicability of HVDs has been limited to specific scenarios, primarily structured-task workers in call centers and kiosks, trading floors, and secure remote access. We expect that by 2018, there will be 50 million devices used to access HVDs. Throughout the second half of 2014 and into 2015, we expect general deployments to continue. Inhibitors to general adoption involve the cost of the data center infrastructure required to host the desktop images (servers and storage in particular) and network constraints. Even with the increased adoption of virtual infrastructures, cost-justifying HVD implementations remains a challenge because of HVD and PC cost comparisons. Some advancements in management tools (for example, application virtualization and image layering) make HVDs less cumbersome by introducing the ability to more easily deploy applications. This makes managing the image and maintaining the HVD easier. Availability of the skills necessary to manage virtual desktops remains a challenge, as does deploying HVDs to mobile/offline users, despite the promises of offline VMs and advanced synchronization technologies. Support for graphics processing units (GPUs; introduced in 2012) will eventually allow a broader audience, but will not have much impact until the end of 2014 and into 2015. Likewise, advances in storage technologies (that is, VSANs and SSDs) will help improve performance at lower costs.
User Advice: Through 2014 and 2015, all organizations should carefully assess the user types for which this technology is best suited. Clients that make strategic HVD investments will gradually build institutional knowledge. These investments will allow them to refine technical architecture and organizational processes, and to grow internal IT staff expertise before IT is expected to support the technology on a larger scale. Balance the benefits of centralized management against the additional overhead of infrastructure and resource costs. Customers should recognize that HVDs may resolve some management issues, but will not become panaceas for unmanaged desktops. In most cases, the promised total cost of ownership (TCO) reductions will not be significant, and will require initial capital expenditures to achieve. The best-case scenario for HVDs remains securing and centralizing data management and structured-task users. Organizations must optimize desktop processes, IT staff responsibilities and best practices to fit HVDs, just as organizations did with traditional PCs. Leverage desktop management processes for the lessons learned. The range of users and applications that can be viably addressed through HVDs will grow steadily. Although the user population is narrow, it will eventually include mobile/offline users. Organizations that deploy HVDs should plan for growing viability across their user populations, but should be wary of rolling out deployments too quickly. Employ diligence in testing to ensure a good fit of HVD capabilities with management infrastructure and processes, and integration with newer management techniques (such as application virtualization and software streaming). Visibility into future product road maps from suppliers is essential.

Business Impact: HVDs provide mechanisms for centralizing a thick-client desktop PC without re-engineering each application for centralized execution. This appeals to enterprises on the basis of manageability and data security.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: Citrix; Dell; Desktone; Microsoft; Red Hat; Virtual Bridges; VMware

Data Deduplication

Analysis By: Dave Russell

Definition: Data deduplication is a form of compression that eliminates redundant data on a subfile level, improving storage utilization. In this process, only one copy of the data is stored; all other redundant data is eliminated, leaving only a pointer to the extraneous copies of the data. Deduplication can significantly reduce the amount of disk space required and the amount of bandwidth needed to replicate remotely, since only unique data is stored and transmitted.

Position and Adoption Speed Justification: This technology reduces the amount of physical storage required, significantly improving the economics of disk-based solutions for backup, archiving and primary storage. Gartner clients using deduplication for backup typically report seven to 25 times the reduction (a 7-to-1 to 25-to-1 ratio) in the size of data. While deduplication is most commonly used in backup activities (due to the repetitive nature of capturing largely unchanged data), it can be applied to long-term archiving and primary storage, with file storage of unstructured data most frequently considered. Backup has been an attractive use case for deduplication, as the most common backup methodology routinely captures full copies of data (often once a week, and sometimes nightly for critical applications), whether the data has been modified or not, thus producing many redundant data copies that can be deduplicated to factor out the extra copies.
To achieve the highest levels of reduction, backup workloads over a period of three to four months are typically needed, and a traditional model of nightly incremental backups and weekly full backups may be needed. However, even short-term backups can achieve two to six times the reduction or better, in part because deduplication almost always also employs compression, which can reduce the size of data even if no other duplicates are found. Archiving deduplication ratios are often in the 3-to-1 to 10-to-1 range, and primary file data commonly yields 3-to-1 to 5-to-1 ratios. Deduplication has taken on a vital role in all-flash array (AFA) and hybrid flash array storage appliances in an effort to contain the cost of the flash solution while maximizing capacity. As such, nearly all flash storage and hybrid flash array devices possess some form of deduplication.
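The store-once, point-elsewhere mechanics from the definition above can be sketched as a minimal in-line, fixed-size-chunk deduplicator. The 4KB chunk size and SHA-256 fingerprinting are illustrative choices for the sketch, not a specific product's design, and real systems add compression, content-aware chunking and persistence.

```python
# Sketch of in-line, fixed-size deduplication: data is split into chunks,
# each chunk is fingerprinted, and only unseen chunks are stored; repeats
# become pointers to the already-stored copy.
import hashlib

CHUNK = 4096  # 4KB, at the small end of the 4KB-to-128KB range cited above

class DedupStore:
    def __init__(self):
        self.chunks = {}   # fingerprint -> bytes, stored exactly once

    def write(self, data):
        """Deduplicate a stream; return its list of fingerprints (pointers)."""
        pointers = []
        for i in range(0, len(data), CHUNK):
            piece = data[i:i + CHUNK]
            fp = hashlib.sha256(piece).hexdigest()
            if fp not in self.chunks:       # store only unique chunks
                self.chunks[fp] = piece
            pointers.append(fp)
        return pointers

    def read(self, pointers):
        """Reassemble a stream from its pointers."""
        return b"".join(self.chunks[fp] for fp in pointers)

store = DedupStore()
backup1 = b"A" * 8192 + b"B" * 4096       # three chunks, only two unique
ptrs1 = store.write(backup1)
ptrs2 = store.write(backup1)              # a repeated weekly full backup
# Six logical chunks written, two physical chunks stored: a 3-to-1 ratio.
ratio = (len(ptrs1) + len(ptrs2)) / len(store.chunks)
```

The ratio calculation at the end mirrors how repeated full backups inflate deduplication ratios: the second full backup adds logical data but no new physical chunks.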

Solutions vary in terms of where and when deduplication takes place, which can significantly affect performance and ease of installation. When used with backup, deduplication that occurs on a protected machine is referred to as "client-side" or "source" deduplication, whereas deduplication that takes place after the protected machine, once the data is sent to the backup application, is considered "target-side" deduplication. A distinction is made between solutions that deduplicate the data as it is processed (in-line deduplication) and products that write the data directly to disk, as it would be written without deduplication, and deduplicate it later, which is post-processing or deferred deduplication. Deduplication solutions also vary in granularity, but 4KB to 128KB chunks, or segments, of data are typical. Some deduplication algorithms are content-aware, meaning that they apply special logic for further processing depending on the type of application and data being stored, and/or can factor out metadata from an application such as a backup program.

User Advice: For backup workloads, carefully consider the architectures and design points mentioned above. Client-side deduplication assumes that the backup software is switched to the deduplication software (so a choice of backup application and deduplication is being made simultaneously) if deduplication is not already an available feature of the currently installed product. Most backup applications have some form of deduplication built in, but cost (if any), degree of integration and ease of use can vary. Gartner often recommends client-side deduplication for files and virtual machines, and target-side appliances or backup software implementations for large databases. If deduplication is used for primary storage, ensure that the workload matches the performance characteristics of the deduplication approach, because performance can be negatively affected.
However, not all data types require the highest levels of performance. Given the costs associated with flash storage, deduplication is an essential capability for improving the economics and wear endurance of flash.

Business Impact: The effects of deduplication primarily involve the improved cost structure of disk-based solutions, as fewer disks need to be purchased, deployed, powered and cooled. As a result, businesses may be able to use disks for more of their storage requirements and may retain data on disks for longer periods of time, thus enabling recovery or read access from disks versus retrieval from slower media (such as magnetic tape). Backup to and restore from disks can improve performance, compared with tape-based approaches. The effective cost of remote replication can be reduced if the data has previously been deduplicated, because less bandwidth is potentially required to move the same amount of nondeduplicated data. Deduplication can improve the cost structure for disk-based archives and primary storage, because fewer resources are utilized. The additional benefits of deduplication include its positive impact on disaster recovery (DR), because less network connectivity is required, since each input/output (I/O) operation carries a larger data payload. Additionally, data may be copied over the network more frequently, as the traffic impact may be minimized due to the reduction in data. For a backup application, deduplication, along with replication, may mean that physical tapes do not need to be made at the primary location and manually taken off-site. Instead, deduplication and replication can be used to electronically vault the data.

For primary storage, deduplication can have a positive impact on overall performance, reducing the amount of cache consumed and reducing the overall input/output operations per second (IOPS). When deployed with hard-disk storage arrays, latency-sensitive applications such as transactional databases are often not a good fit; for most organizations, server-virtualized images (virtual machines [VMs]) and networked file shares have proved to be solid use cases. Deduplication for flash storage systems should be considered a "must have" feature.

Benefit Rating: Transformational

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Acronis; Actifio; Asigra; CA Technologies; CommVault; Datacastle; Dell; EMC; EVault; Exablox; ExaGrid Systems; FalconStor Software; GreenBytes; Hitachi Data Systems; HP; IBM; NetApp; NEC; Nimbus Data; Nutanix; Oracle; Permabit; Pure Storage; Quantum; Sepaton; SimpliVity; Skyera; SolidFire; Symantec; Tegile; Tintri; Unitrends; Veeam Software; Violin Memory

Recommended Reading: "Magic Quadrant for Enterprise Disk-Based Backup/Recovery"

"Best Practices for Repairing the Broken State of Backup"

"Storage Appliances May Transcend IT Silos and Incur High Cost at Scale"

"The Future of Backup May Not Be Backup"

Continuous Data Protection

Analysis By: Dave Russell

Definition: Continuous data protection (CDP) is an approach to recovery that continuously, or nearly continuously, captures and transmits changes to files or blocks of data while journaling these changes. This capability provides the option to recover to many more-granular points in time to minimize data loss, and enables arbitrary recovery points. Some CDP solutions can be configured to capture data either continuously (true CDP) or at scheduled times (near CDP).
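The journaling idea behind true CDP can be sketched in a few lines, under the assumption of a time-ordered write journal: every write is captured with a timestamp, so any earlier point in time can be reconstructed by replaying the journal up to that moment. This is purely illustrative; real products journal at the volume or file-system layer, not over a Python dictionary.

```python
# Sketch of a true-CDP write journal: capture every write with a timestamp,
# then recover the state of the data as it stood at any arbitrary point.
class CDPJournal:
    def __init__(self):
        self.entries = []                  # (timestamp, block, data), in order

    def capture(self, ts, block, data):
        """Journal one write as it happens."""
        self.entries.append((ts, block, data))

    def recover_to(self, ts):
        """Rebuild the block map as it stood at time ts."""
        state = {}
        for t, block, data in self.entries:
            if t > ts:
                break                      # journal is time-ordered
            state[block] = data
        return state

journal = CDPJournal()
journal.capture(10, "blk0", b"v1")
journal.capture(20, "blk0", b"v2")         # overwrite of the same block
journal.capture(30, "blk1", b"new")
at_15 = journal.recover_to(15)             # arbitrary recovery point
at_30 = journal.recover_to(30)
```

A near-CDP product would, in effect, collapse this journal to scheduled checkpoints (every few minutes or hours), trading recovery granularity for less journal space.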
Position and Adoption Speed Justification: The difference between near CDP and regular backup is that backup is typically performed once a day, whereas near CDP is often done every few minutes or hours, providing many more recovery options and minimizing any potential data loss. Several products also provide the ability to heterogeneously replicate and migrate data between two different types of disk devices, allowing for potential cost savings for disaster recovery solutions. Checkpoints of consistent states are used to enable rapid recovery to known good states (such as before a patch was applied to an OS, or the last time a database was reorganized) to ensure application consistency of the data and to minimize the number of log transactions that must be applied.

CDP and near-CDP capabilities can be packaged as server-based software, as network-based appliances that sit between servers and storage, and as part of a storage controller. To date, storage controllers offer near CDP only by way of the frequent use of snapshots, and do not allow for the capture, journaling and transmission of every write activity. The delineation between frequent snapshots (one to four per hour or less granularity) and near CDP is not crisp, and administrators often implement snapshot and CDP solutions in a near-CDP manner to strike a balance between improved recovery capabilities and not requiring significantly more disk space. Note that frequent snapshots alone do not equate to near CDP; the data (snapshot) must be copied to another location to qualify as a near-CDP solution. Most large vendors have acquired their current offerings, and only a few startup vendors remain. Many backup applications, such as those from Asigra, CA Technologies, CommVault Systems, Dell and IBM, include CDP technology as an option in their backup portfolios. The market has since mostly adopted near-CDP solutions via more frequent, array-based snapshots, or as part of the backup application. The disk requirements and potential production application performance impact were among the main reasons that true CDP initially faced challenges. Later, as near CDP became more readily available, it satisfied most of the market's needs. Some backup vendors, such as Symantec, which was early to market with CDP in Backup Exec in 2005, have discontinued CDP features. EMC, with its RecoverPoint product, has been successful in selling a hybrid replication and true-CDP solution. Today, FalconStor Software and InMage are among the "pure play" CDP providers in the market.

User Advice: Consider CDP for critical data that is not meeting recovery point objectives (RPOs). Gartner has observed that true CDP implementations are most commonly deployed for files and laptop data.
True CDP for databases and other applications is less common and has a lower market penetration. Near CDP for applications might be more appropriate to ensure application consistency and to minimize the amount of disk and potential processor cycles required for the solution.

Business Impact: CDP can dramatically change the way data is protected, decreasing backup and recovery times, as well as the amount of data lost, and can provide additional recovery points.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Actifio; Apple; Asigra; CA Technologies; CommVault; Datacastle; DataCore Software; Dell; EMC; EVault; FalconStor Software; IBM; InMage; Microsoft; Veeam Software; Vision Solutions

Recommended Reading: "Magic Quadrant for Enterprise Disk-Based Backup/Recovery"

"Best Practices for Repairing the Broken State of Backup"

Load Forecasting

Analysis By: Zarko Sumic; Chet Geschickter

Definition: Load forecasting is a utility application category that minimizes risk by predicting future consumption of the commodities transmitted or delivered by the utility. Techniques include price elasticity, weather and demand-response/load analysis. Forecasts must use regional customer load data with time series customer load profiles. Accurate forecasts require adjustments for seasonality. Distribution load forecasting must be reconciled with the distribution network configuration as part of the distribution circuit load measurements.

Position and Adoption Speed Justification: Utilities require many categories of forecasts that vary by purpose, time horizon, techniques and end-user requirements. Examples for electric utilities include 10-year demand forecasts, five-year area or feeder studies, regional or substation transmission loading analyses, and spatial load forecasts. Quality load forecasting is critical to ensuring reliable and scalable delivery of commodity services, and is an important contributor to the financial health of an energy and utility company. Forecasts that are too low lead to a short position, forcing the utility to go to an expensive spot market or, worse, leading to outages. Forecasts that are too high lead to money wasted on excess unused capacity. Too many utilities focus on the engineering analysis function of planning without applying the same level of rigor to forecasts of load (or demand), leading to poor results. Modeling of price elasticity, weather changes, demand response and growth in renewable generation sources is required. Forecasts often depend on load data from multiple sources, such as consumption data from meter data management for the distribution level, or data from a historian database for the transmission level. At the distribution level, forecasts must reflect the actual distribution network configuration at peak load periods.
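As a toy illustration of the seasonality and weather adjustments noted above, a seasonal-naive baseline (the same hour last week) with a crude temperature sensitivity might look like the sketch below. The coefficient and numbers are invented for illustration; production models use far richer regressions over price elasticity, demand response and renewable output.

```python
# Sketch of a seasonality-adjusted load forecast: a seasonal-naive baseline
# (same hour one week earlier) plus a simple linear weather adjustment.
HOURS_PER_WEEK = 168

def forecast_load(history, hour, temp_delta, temp_coeff=0.5):
    """Forecast load (MW) for `hour` from last week's same hour, adjusted
    for the expected temperature deviation (degrees) from seasonal normal.
    `temp_coeff` (MW per degree) is an invented illustrative sensitivity."""
    baseline = history[hour - HOURS_PER_WEEK]   # seasonal-naive term
    return baseline + temp_coeff * temp_delta   # crude weather adjustment

# One flat week of 100 MW hourly history, then a forecast for an hour
# expected to run 4 degrees hotter than normal (cooling load rises):
history = [100.0] * HOURS_PER_WEEK
predicted = forecast_load(history, HOURS_PER_WEEK, temp_delta=4.0)
```

Even this toy version shows why granular AMI data matters: the baseline term is only as good as the per-hour history behind it.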
Higher adoption of distributed generation by consumers in many markets, including growth in renewable generation (wind and solar power) and electric vehicles (EVs), will require new forecasting models that are capable of integrating generation contributions from intermittent renewable energy sources that are not dispatchable. The potentially significant impact of demand-response initiatives, as well as the price sensitivity of load resources, also needs to be modeled and included as a component of the overall load forecasting discipline. Improvements in the performance of general forecasting and analytical tools are filtering into load forecasting solutions. This enables the use of wider simulations and scenarios involving variables associated with future load or demand. Utility advanced metering infrastructure (AMI) systems are also generating highly granular energy usage data useful for forecasting distribution loads, planning demand-response actions, and generally improving the accuracy and detail of load forecasts.

User Advice: Utility CIOs can support engineering and operations by understanding changing load forecasting requirements. Identify current forecasting initiatives across the enterprise (for example, asset and capital forecasting, financial forecasting, and long-range economic forecasting) that are influenced by load forecasting to identify sensitive analytic needs and to rationalize forecasting requirements. Also, configure your load forecasting tools to leverage AMI. Be prepared for increased support requirements as analysts use a variety of data to support the discovery of new forecasting techniques. Assess emerging in-memory big data analytics tools for possible just-in-time forecast capabilities, as well as geospatial representations that help planners quickly understand complex distribution forecasts to identify patterns and trends. Also, investigate emerging cloud solutions for improved short-term load forecasts for wind and solar sources.

Business Impact: Affected areas include retail, supply and energy commodity management.

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Early mainstream

Sample Vendors: IBM; Itron; Oracle; SAP; Ventyx

Recommended Reading: "Top 10 Business Trends Impacting the Utility Industry in 2014"

"Magic Quadrant for Meter Data Management Products"

Lights-Out Recovery Operations Management

Analysis By: John P Morency

Definition: Lights-out recovery operations management refers to the use of remote management software to administer a remote (and largely unmanned) recovery data center. Active management of the recovery data center may be designed to support a recovery plan or to orchestrate post-disaster event operations recovery. A key requirement for lights-out recovery operations is that remote management be operationally consistent with supported capabilities in the primary production data center.

Position and Adoption Speed Justification: There are two lights-out recovery operations scenarios:

When enterprise IT is responsible for operations management on both the production and remote data center sides

When a third-party provider offers one or more managed services to the enterprise customer from a remote data center

In the second scenario, an external provider supports managed services, such as systems colocation, managed hosting, software as a service (SaaS) or recovery as a service. Given that the supporting management software for these services continues to evolve at a fairly rapid rate, Gartner has moved lights-out recovery operations management beyond the Trough of Disillusionment and toward the Plateau of Productivity.
For these reasons, the gap between user expectations and what vendors can support has closed considerably during the past few years. Nonetheless, there may still be a need for a remote staffing presence to manage equipment installation, configuration and basic problem triage for some organizations. This may or may not affect the exercising of recovery. However, it could affect recovery operations support, especially if that support lasts several days or more. In the case of recovery as a service, the extent to which IT lights-out recovery operations can continue to support day-to-day production management of virtual (as well as physical) machines executing within the cloud varies by provider. Due to the rapidly increasing maturity of related management processes and support tools, as well as increasingly broad deployment by both enterprise IT organizations and external service providers, Gartner has advanced the positioning of lights-out recovery operations management in 2014 to the trough-plateau midpoint.

User Advice: Equipment malfunctions will always occur, whether the equipment is locally or remotely situated. Therefore, you need at least some on-site support personnel (that is, remote-hands support) authorized to enter the data center; turn the equipment off and on; support requested moves, adds and changes; and escort equipment vendors that come to the remote data center to add, remove or repair equipment. It is generally in the IT organization's best interest to have a technician with Level 1 (or even Level 2) support skills to provide basic incident management and to coordinate problem triage with the central IT operations team. Lights-out recovery operations in remote centers are increasingly being used to reduce the costs associated with flying staff to a recovery service provider location in another state, another country or even a remote area of the same country. Lights-out recovery operations can reduce the time required to commence live recovery operations, because key support staff can perform recovery management from their homes or from a recovery location that is closer to the primary production data center (assuming, of course, that electrical power is available at the recovery location). Because of the potential time and cost benefits, Gartner recommends detailed due diligence for the application of remote management technologies and processes to ensure that they fully support the recovery requirements of the business.
If your recovery service contract is coming up for renewal, and you are considering the use of other provider services, such as colocation or recovery as a service, ensure that your provider due-diligence process includes the assessment of provider data center access to support not only remote recovery exercising but also production operations management. The latter consideration is important if, in the aftermath of a major disaster (such as the combined earthquake and tsunami events in Japan in 2011, Hurricane Sandy in the greater New York and New Jersey area in 2012, the tornadoes in Oklahoma City in May 2013 or the rash of tornadoes across the U.S. Midwest in the spring of 2014), IT operations need to be supported in a provider data center or cloud for an extended period of time (such as weeks or months). Finally, ensure that you have sufficiently tested the supporting tools and technologies for lights-out recovery operations before relying on them.

Business Impact: The business impact is medium. The actual benefits will vary, depending on the frequency with which live recovery exercising is performed each year and the likelihood of an extended recovery operations scenario.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Dell; Emerson Network Power; HP; IBM; Minicom; Oracle; Raritan; SunGard Availability Services

Server Repurposing

Analysis By: John P Morency

Definition: Server repurposing enables servers to be quickly reconfigured to run different software stacks or images. It is used to share servers and achieve higher utilization. Examples of server repurposing include switching servers from daytime to nighttime use, reallocating development/test servers to disaster recovery (DR) and reconfiguring a spare server when the main server fails. Server repurposing can generally manage physical and virtual servers and enable switching between them. It can be initiated manually or through a service governor.

Position and Adoption Speed Justification: Server repurposing, which is part of the server-provisioning and configuration management life cycle, focuses on server-provisioning functionality. In addition to provisioning software and/or images to a server, server repurposing can manage the connectivity to storage (typically by maintaining address names) and virtual LANs (VLANs). Server repurposing is commonly used in IT DR management (DRM) architectures, in which the servers in the secondary site are shared (for example, with development, testing and training). Server repurposing is common in virtual environments, but less common in physical environments, where it is more challenging to implement. In virtual environments, server repurposing involves bringing down one virtual server and bringing up another, typically in the same resource pool. In physical environments, server repurposing can be achieved by leveraging imaging and configuration management technologies, or by booting from shared storage to connect the server to a different image on a disk.
Alternatively, it can be achieved through fabric-based infrastructure or via a blade server environment, where server profiles provide a degree of server mobility, as well as the flexibility to reconfigure servers and/or capacity. Server repurposing sometimes requires a conversion from one environment type to another: for example, from physical to virtual (often used for DR), from virtual to physical (such as to obtain vendor support when the vendor does not support virtual environments), from physical or virtual to cloud (such as when moving a workload or service to a public cloud or a virtual private cloud), or from virtual to virtual (such as when development/test uses one type of hypervisor and production uses another). Server repurposing is advancing along the Trough of Disillusionment because its implementation requires much thought and policy development, which are difficult to execute. For example, if an IT organization decides to implement shared DR services, in which the DR site needs to be reconfigured to look like production, it must decide how to implement the repurposing (for example, via virtual machines, images or unattended installations of the software), as well as how to manage the failover and startup orchestration, while also considering the implications for software licensing.

User Advice: Enterprises that are consolidating data centers, implementing private clouds and/or real-time infrastructure (RTI), or implementing DR sites should consider server repurposing to increase asset utilization and cut server capital costs. For high-availability architectures, server repurposing can reduce spare server inventory and speed recovery. Enterprises with specific repurposing needs, such as configuring servers for daytime versus nighttime use or provisioning servers quickly based on a standard image (for example, to support IT DRM), should consider this technology, but they should also realize that they must orchestrate the timing and implementation of these changes.

Business Impact: The business impacts of server repurposing are higher asset utilization and lower costs. For example, in a high-availability use case, an organization may go from having one spare server for every server to one spare server for every eight or 10 servers. For a DR use case, in which a development/test environment can be repurposed for production, a significant amount of hardware capacity can be saved, but at the expense of increased risk and reduced flexibility in testing (because testing and DR require that a shared environment get reduced capacity or be turned off). Server repurposing can also benefit DR through greater speed and lower costs, because it can enable faster server provisioning. IT organizations with significant standardization will find dramatically lower operational costs and will realize that automation can drive repurposing. IT organizations with little standardization will find that maintaining a diverse environment is difficult and that it increases costs, thereby potentially outstripping any repurposing-specific savings.
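The core switch that repurposing performs, pointing the same server at a different software stack and role, can be sketched as follows. Image names and roles here are hypothetical, and the operational steps (quiescing workloads, re-pointing storage, rebooting) are elided to comments.

```python
# Sketch of server repurposing: the same physical or virtual server is
# re-pointed at a different boot image and role (for example, from
# development/test to DR when a disaster is declared), with the previous
# state returned so an orchestrator can later reverse the switch.
class Server:
    def __init__(self, name, image, role):
        self.name = name
        self.image = image    # boot image / software stack
        self.role = role      # e.g., "dev-test", "dr", "production"

    def repurpose(self, image, role):
        # In practice this step would quiesce workloads, detach/reattach
        # storage and VLANs, and boot the new image; here it only records
        # the change and returns the prior (image, role) for rollback.
        previous = (self.image, self.role)
        self.image, self.role = image, role
        return previous

srv = Server("blade-07", image="devtest-stack-v3", role="dev-test")
previous = srv.repurpose(image="dr-stack-v1", role="dr")  # disaster declared
```

An orchestrator or service governor would call `repurpose` across a whole pool in a defined failover sequence, which is exactly the startup-orchestration decision the text says organizations must plan for.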
Benefit Rating: Moderate
Market Penetration: More than 50% of target audience
Maturity: Mature mainstream
Sample Vendors: CA Technologies; Dell; Egenera; HP; NetIQ; Racemi; RiverMeadow Software; SAS; Unisys; VMware
Recommended Reading:
"Cool Vendors in Business Continuity Management and IT Disaster Recovery Management, 2014"
"Cool Vendors in Cloud Management, 2014"
"Balancing Software Licensing With High-Availability/Disaster-Recovery Requirements"
"Fabric-Based Infrastructure Enablers and Inhibitors Through the Lens of User Experiences"

IT DRM Insourcing
Analysis By: John P Morency
Definition: Disaster recovery (DR) insourcing is the IT management of:
Recovery data center facilities management
Recovery infrastructure management
Plan exercising
IT operations following the occurrence of a disruptive event

Recovery data center facilities may be owned and managed by IT itself (in conjunction with corporate facilities management) or by an external colocation or managed hosting services provider.

Position and Adoption Speed Justification: Due to the large number of technology, risk and cost management trade-offs, which change fairly frequently, IT disaster recovery management (DRM) must be managed more as a continuous life cycle process, rather than as a once- or twice-yearly stand-alone test. The management of key life cycle deliverables (such as business impact analyses, IT DRM plans, recovery infrastructures, and application and data recovery processes) and their associated interdependencies (including technology enablement, execution management and process definitions) has resulted in the creation of a more-complex management challenge. There are three options for managing the creation of the key deliverables, as well as the resolution of their related technology, management and process interdependencies: (1) The IT organization can manage deliverables and interdependency resolution on its own if that's the most cost-effective option. (2) All deliverables creation and interdependency resolutions can be entirely outsourced to a third party, including cloud services providers. (3) Multisourcing (that is, supplementing the efforts of the in-house recovery management team with services from one or more external service providers [ESPs]) can be used to partition management responsibilities. This approach is increasingly being used to manage the recovery of mainframe and open systems. Because of cost or logistical reasons, some clients are electing to continue their relationships with ESPs for mainframe recovery while insourcing the recovery management of open systems (Windows as well as Unix or Linux platforms), leveraging colocation and hosting services, and (in some cases) building out an internal facility.
Options 1 and 3 are the most common recovery insourcing approaches. When it comes to recovery facilities, the technologies that can most affect insourcing adoption are the availability and recurring costs of managed data center facilities, high-speed network connectivity and cloud-based recovery. Gartner has seen client adoption of all three of these increase during the past three years. Because of its broad deployment, especially in larger enterprises, as well as its advancing management maturity, Gartner has advanced the Hype Cycle positioning of IT DRM insourcing to pre-plateau (45%).

User Advice: A decision to insource IT DRM is typically driven by one or more of the following factors: dissatisfaction with the quality and pricing of shared recovery services provided by an incumbent IT DRM vendor; relatively inflexible long-term recovery service contracts; and the need for increased recovery exercising flexibility and frequency, rather than simply executing a once- or twice-a-year event. Two additional reasons include the increased use of data replication, which drives the need for a dedicated storage system presence at the secondary site, as well as the need to improve change control consistency between the primary production and secondary recovery sites. Insourcing any of the disciplines referenced in the Definition section does not necessarily mean that the same approach should be taken for all of them. Successful insourcing decisions are based on a combination of cost, risk avoidance and operations management considerations.

Business Impact: Given its support for IT operations risk mitigation, the management of business reputation risk and the associated IT investment, the business impact is high. Regardless of whether IT DRM is insourced or outsourced, IT still owns the responsibility and accountability for its effective implementation and evolution. This has become increasingly visible during the past five years, resulting in many IT organizations taking on more-direct roles for its day-to-day management.

Benefit Rating: High
Market Penetration: More than 50% of target audience
Maturity: Mature mainstream
Recommended Reading:
"Start Here: IT DRM Owners Should Focus on the IT Recovery Plan First"
"Managing IT Resilience Is Much More Than Simply Failing Over Applications"

WAN Optimization Services
Analysis By: Bjarne Munch; Joe Skorupa
Definition: WAN optimization services provide enterprises with improved application performance across the enterprise WAN through the use of on-site managed WAN optimization controller (WOC) services.

Position and Adoption Speed Justification: We now see a broad range of providers, from system integrators to network service providers, offering managed WAN optimization services. Typically, providers offer these services based on one to three different vendor platforms. Many providers offer more basic remote managed WOC services targeted at solving specific issues or just general WAN optimization. However, we have seen the higher end of the market evolve into several distinct service offerings: network audit and discovery services aimed at designing and dimensioning the network to optimize the performance of application traffic in the network; ongoing network traffic monitoring services aimed at ensuring ongoing optimum network performance; and specific application optimization services, where network service providers offer a bundled managed WOC and WAN service with custom application performance guarantees.
We are now seeing enterprises of all sizes adopt these managed WOC services as they seek to reduce their internal operational overheads, and during the next 12 to 24 months we expect increased enterprise adoption of managed WAN optimization services. In response to this increased enterprise demand, we expect to see more providers expand their service offerings, as well as continue to productize related professional services. We are already seeing the large network service providers evaluate how they can introduce virtualized WOC services, such as router-integrated services and WOC services deployed in the WAN for operational efficiencies (such as end-to-end service orchestration and various levels of customer self-service deployments).

User Advice: While most network service providers and several system integrators offer managed WOC services, enterprises must carefully choose which type of services they source from these providers, as their skills and experience are still key differentiators, especially if the enterprise is seeking a wider array of professional services beyond basic remote device monitoring for ongoing maintenance. For example, dedicated application optimization professional services are not consistently delivered by all providers, and are only done in combination with a managed WAN service.

Business Impact: WAN optimization services remove the need for an enterprise's skilled resources to deploy, configure and manage a WAN optimization solution. Similar to an internally managed solution, these services can deliver significant gains in application performance while reducing the cost of WAN bandwidth. However, when bundled with a WAN service, these managed WOC services can deliver more, including improved WAN performance and visibility, as well as application performance guarantees.

Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: AT&T; BT; CSC; Dimension Data; IBM Global Business Services; NTT Communications; Orange Business Services; Verizon Business
Recommended Reading:
"Magic Quadrant for WAN Optimization"
"How to Pick the Right WAN Optimization Solution for Your Organization"
"RFP Template for WAN Optimization Controllers"

Bare-Metal Restore
Analysis By: Dave Russell
Definition: Bare-metal restore (BMR) products provide a way to recover (or redeploy) the OS, middleware, applications and data to PCs or servers that are bare metal (that is, that have no previously installed software and/or OS), or where the installed OS and/or applications and data are corrupt.

Position and Adoption Speed Justification: BMR products have long been used to redeploy or repair PCs, and most organizations have a product for that purpose. BMR served a particular need in organizations with large deployments of Windows servers, where security or other patch management issues required quick deployment, and fast recovery of a system or files if problems arose. BMR products use a sector-by-sector, disk-imaging approach to copy the contents of a hard disk.
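The sector-by-sector imaging idea can be made concrete with a short sketch. This is purely illustrative (ordinary files stand in for block devices, and all file names are invented); real BMR products operate on raw disks from bootable rescue media:

```python
# Toy illustration of the sector-by-sector imaging approach behind BMR.
# Ordinary files stand in for disks so the idea is runnable anywhere.

SECTOR = 512  # classic sector size; modern disks often use 4,096 bytes

def capture_image(disk_path, image_path):
    """Copy every sector of the source 'disk' into an image file."""
    with open(disk_path, "rb") as disk, open(image_path, "wb") as image:
        while sector := disk.read(SECTOR):
            image.write(sector)

def restore_image(image_path, disk_path):
    """Write the image back to a (possibly different) target 'disk'."""
    with open(image_path, "rb") as image, open(disk_path, "wb") as disk:
        while sector := image.read(SECTOR):
            disk.write(sector)

# Capture, then restore to a fresh target and verify bit-for-bit equality.
with open("disk.raw", "wb") as f:
    f.write(b"MBR + OS + applications + data " * 1000)
capture_image("disk.raw", "disk.img")
restore_image("disk.img", "restored.raw")
assert open("disk.raw", "rb").read() == open("restored.raw", "rb").read()
```

Because the copy is below the file system, the restored "disk" is bit-for-bit identical, which is why dissimilar hardware restoration (discussed next) requires extra driver-injection logic in real products.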
Most solutions let users boot a repaired computer from a CD-ROM, external USB drive or network, and restore previously backed-up image data from a CD-ROM, a DVD, a second disk drive connected to the computer, a disk on the network or, in some cases, physical tape. BMR products provide backup and restoration of servers, networked workstations or PCs at the level of discrete files, or the entire disk volume. The ability to perform dissimilar hardware restoration (that is, to restore to new hardware that is not the same as the original system) is often a requirement because the new server, even if from the same vendor, may contain different components. In addition, some vendors enable users to restore different OSs to the same hardware (for example, a Linux system image to Wintel hardware).

Server virtualization has significantly diminished the adoption of BMR solutions because server OSs, applications and user files can all be encapsulated in a VM and more easily deployed, redeployed or migrated to another physical server.

User Advice: Although backup vendors have continued to improve the system recovery features of their traditional backup solutions and many service solutions have added system recovery capabilities, stand-alone solutions should be considered if ease of use and advanced features are of interest. BMR-specific solutions are often less costly than more complete backup solutions. Choosing solutions that offer dissimilar hardware restoration is important, because PC and server hardware configurations change frequently, and the ability to restore to different equipment can be valuable in a disaster recovery scenario. For smaller single servers or individual desktops and laptops, these tools can sometimes serve as backup products for file and application data, and can provide protection for the OS, often at a very low price point and in an easy-to-use application. In virtual environments, VMs can typically enable faster and easier recovery than BMR does, and some backup products offer physical-to-virtual conversions in their solutions, potentially making VM-backup-capable solutions more compelling than pure-play BMR tools.

Business Impact: The need for rapid system recovery is more important than ever because an entire business model can hinge on a company's servers functioning properly. In the event of a hardware failure, traditional recovery can take hours or days, especially if the new systems are not identical to those that went down. BMR dramatically reduces recovery times for servers, and can get PCs up and running rapidly. Many solutions provide a means of converting to or from a VM, making server migration easier.
Because VM image management and recovery are on the rise, the industry is shifting focus to the data, instead of the need to update OS images.

Benefit Rating: Moderate
Market Penetration: More than 50% of target audience
Maturity: Mature mainstream
Sample Vendors: Acronis; Asigra; CommVault; Cristie Software; Dell; EMC; Horizon DataSys; IBM; NetIQ; Novell; StorageCraft; Storix; Symantec; UltraBac Software; Unitrends
Recommended Reading:
"Best Practices for Repairing the Broken State of Backup"
"Essential Practices for Optimizing VMware Backup"

Outage Management Systems
Analysis By: Zarko Sumic; Randy Rhodes
Definition: An outage management system (OMS) is a utility software application that models network topology for safe, efficient field operations related to outage restoration. OMSs tightly integrate with call centers to provide customer-specific outage information, as well as with supervisory control and data acquisition (SCADA) systems for switching and breaker operations. These systems track, group and display outages to safely and efficiently manage service restoration activities.

Position and Adoption Speed Justification: Extreme weather events, combined with aging infrastructure resulting from protracted low investment levels in the utility delivery infrastructure, are straining utility companies' efforts to keep mandated customer service levels, which maintains an industry focus on improved outage restoration. OMS solutions are intended to improve outage restoration processes and are used for historical outage reporting and the calculation of reliability indexes. Consequently, regulators are likely to decide favorably on cost recovery for investments related to storm-related outage management. OMSs require an accurate network connectivity model that reflects distribution network topology from the substation to each customer connection. Outage determination procedures rely on network-tracing schemas to associate customer outage calls with common protective devices. Geographic information systems (GISs) are often used to provide this common customer connectivity model, enabling the OMS to be synchronized with planning and estimating models for other utility applications. Some GIS vendors have provided OMS functionality as an extension of their GIS solutions. Other vendors tightly integrate OMSs with SCADA and distribution management systems. Outage management business processes require substantial integration with many other systems, including enterprise asset management, to better manage work and the associated financials. Integration with advanced metering infrastructure (AMI) enables outage reporting and callback functions by integrating directly with customer meters. Enterprise service bus architecture with standards-based, model-driven integration enables effective integration with related business systems.
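The network-tracing schema described above can be sketched in a few lines. The feeder topology, device names and customer IDs below are invented for illustration (not any vendor's data model); the sketch walks each calling customer's connectivity path upstream and infers the most specific protective device common to all calls:

```python
# Hypothetical radial feeder: child -> parent map up to the substation breaker.
PARENT = {
    "xfmr_7": "fuse_F12", "xfmr_8": "fuse_F12", "xfmr_9": "fuse_F13",
    "fuse_F12": "recloser_R2", "fuse_F13": "recloser_R2",
    "recloser_R2": "breaker_B1",
}
PROTECTIVE = {"fuse_F12", "fuse_F13", "recloser_R2", "breaker_B1"}
SERVICE_POINT = {"cust_101": "xfmr_7", "cust_102": "xfmr_8", "cust_103": "xfmr_9"}

def upstream_devices(customer):
    """Ordered protective devices from the customer's service point to the source."""
    path, node = [], SERVICE_POINT[customer]
    while node is not None:
        if node in PROTECTIVE:
            path.append(node)
        node = PARENT.get(node)
    return path

def probable_open_device(calls):
    """Most specific protective device shared by every calling customer."""
    paths = [upstream_devices(c) for c in calls]
    common = set(paths[0]).intersection(*map(set, paths[1:]))
    return next(d for d in paths[0] if d in common)  # nearest shared device

print(probable_open_device(["cust_101", "cust_102"]))  # fuse_F12
print(probable_open_device(["cust_101", "cust_103"]))  # recloser_R2
```

Two calls behind the same fuse implicate that fuse; calls spanning two fuses roll the inference up to the shared recloser, which is why an accurate connectivity model is a prerequisite for any OMS.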
The utility industry's focus on the development of smart grid platforms has fueled buyer interest in OMS solutions beyond just emergency-response decision support systems. Subsequently, the narrowly defined "conventional" OMS is disappearing as a distinct software product category. While the need for software that automates emergency restoration functions still exists, it is increasingly being integrated with advanced distribution management system (ADMS) products, which provide more comprehensive distribution operation functionality. By incorporating smart grid technologies, ADMS products address distribution network operation needs under normal conditions, as well as storm-restoration-related activities. Although most OMS vendors have focused on the energy utility sector, some have started to address the specific needs of water utilities, such as event management and compliance reporting.

User Advice: Utilities that are considering buying an OMS should re-evaluate their network operations, customer service strategies and overall smart grid strategies. Accordingly, they should decide whether to install a new OMS, upgrade legacy solutions or implement an ADMS solution that includes outage management functions. Some legacy OMSs provide only trouble-call functionality, and lack the ability to manage the storm restoration process. Some advanced requirements, such as AMI integration, may not be met by some vendors or have not yet been fully implemented and proven in production. A nexus of IT forces in the utility sector provides the ability to leverage crowdsourcing to identify outage locations, communicate network restoration status and check customer sentiment. Consequently, OMS strategy has to take the nexus into consideration.

Business Impact: Distribution network operation centers use outage management systems to analyze outages and dispatch crews, and will be affected by OMS improvements. Customer-facing applications in customer service centers will be impacted significantly, in particular when OMS updates are delivered through social media channels.

Benefit Rating: Moderate
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: CGI; GE Energy; Intergraph; Milsoft Utility Solutions; Oracle; Schneider Electric; Trimble; Ventyx
Recommended Reading:
"MarketScope for Outage Management Systems"
"Market Definition: Advanced Distribution Management System Products"
"Magic Quadrant for Advanced Distribution Management Systems"
"Use Multichannel Power Outage Communications to Improve Utility Customer Satisfaction"

Entering the Plateau

Server-Based Replication
Analysis By: Valdis Filks
Definition: Server-based replication software copies data from one server to another using methodologies such as file- or block-level replication. In general, server-based replication requires compatible software on each server. Because this technology is storage-appliance-neutral and array-neutral, data can be replicated among servers that are attached to homogeneous or heterogeneous disk targets.

Position and Adoption Speed Justification: The desire for heterogeneous storage replication, higher levels of availability, server virtualization and lower solution costs is driving the increased exploitation of non-array-based replication.
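As a rough illustration of the block-level replication named in the definition, the following toy sketch (not any product's algorithm; all names are invented) hashes fixed-size blocks, ships only the blocks whose digests differ, and compresses them before they cross the network, which is how server-based replication can use less bandwidth than a full copy:

```python
# Minimal sketch of block-level server-based replication: hash fixed-size
# blocks, compare digests with the target copy, ship only changed blocks.
import hashlib
import zlib

BLOCK = 4096

def block_digests(data):
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def replicate(source, target):
    """Return the updated target plus the number of bytes actually sent."""
    src_sums, dst_sums = block_digests(source), block_digests(target)
    out, sent = bytearray(target[:len(source)]), 0
    for i, digest in enumerate(src_sums):
        if i >= len(dst_sums) or digest != dst_sums[i]:
            chunk = source[i * BLOCK:(i + 1) * BLOCK]
            sent += len(zlib.compress(chunk))        # compress before the WAN
            out[i * BLOCK:(i + 1) * BLOCK] = chunk
    return bytes(out), sent

primary = bytearray(b"A" * BLOCK * 8)        # 8-block volume on the source server
replica, _ = replicate(bytes(primary), b"")  # initial full synchronization
primary[BLOCK * 3:BLOCK * 4] = b"B" * BLOCK  # one block changes in production
replica, sent = replicate(bytes(primary), replica)
assert replica == bytes(primary)
print(f"resync shipped {sent} compressed bytes for one {BLOCK}-byte change")
```

After the initial synchronization, a single changed block costs far less than its 4,096 bytes on the wire, because only the delta is sent and it is compressed inside the server.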
Users are incorporating replication into their recovery hierarchies by exploiting replication features within server OSs and hypervisor file systems, which in some cases are augmenting or even replacing traditional backup solutions, at least for recovery purposes. This is becoming a standard feature in software-defined storage and virtual storage appliances.

User Advice: Consider server-based replication as a lower-priced solution with traditional, chargeable OSs and as a zero-priced, inclusive solution incorporated with open-source solutions and OSs. Server-based replication in the OS/hypervisor is a competitive file and application continuity solution for recovery, compared with array-based solutions, especially solid-state disk arrays, where synchronous replication is too slow due to networking latency that reduces performance. These server-based replication solutions are flexible in that they are independent of the storage attached to the server and can support a variety of disk systems and storage interconnection protocols. Server-based replication solutions can be used to protect most applications, and may use less network bandwidth than network- or storage-system-based solutions, because they perform data analysis and compression within the server. Database replication solutions are not included in this technology because they are mature and have reached their plateau. The lower cost of server-based replication relates to the initial acquisition expense; however, ongoing management costs can be high because of the complex issues involved in broad-scale deployments, where some products must be managed on a per-server basis. Ensure that the server-based replication solution under consideration supports the deployed applications. Although many popular applications are supported, few vendors offer a complete support matrix.

Business Impact: Server-based replication improves recovery times, and can be used to support or even replace backup with faster data and system recovery, while reducing the impact of backup on application performance. Due to increased functionality within the server hypervisor or OS, customers may have already purchased and installed replication functionality; therefore, customers need to validate and verify their investment in replication products within servers and storage arrays.
Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Mature mainstream
Sample Vendors: Canonical (Ubuntu); CA Technologies; CommVault; DataCore Software; EMC; FalconStor Software; IBM; Microsoft; Novell (SUSE); Oracle; Red Hat; Sios Technology; StorMagic; Symantec; Vision Software; VMware; Zerto
Recommended Reading:
"When Migrating Storage, Use the Tools in Server Virtualization Products"
"How to Deploy SSDs to Infrastructures Without Decreasing Availability"
"How to Implement High-Availability Storage for Server Virtualized Environments"
"Storage Infrastructure Issues to Consider in a Virtual Server Environment"
"Hype Cycle for Storage Technologies, 2013"

E-Mail Continuity
Analysis By: Matthew W. Cain
Definition: E-mail merits substantial redundancy in case of a catastrophic failure. E-mail continuity refers to off-site services that, in the event of a failure at the primary location, enable e-mail services to continue operating at an alternative site, generally within one to four hours and with minimal data loss. This implies that the organization has redundant e-mail infrastructure in the alternative location to be used for failover purposes, coupled with continuous data protection and/or data or message replication.

Position and Adoption Speed Justification: E-mail continuity software products, infrastructure and services have been in use for many years, and continue to mature in terms of ease of configuration, failover initiation and failback. External service providers (ESPs) have also prospered and proven to be reliable. Native software replication functionality that protects against any data loss has been present in IBM Domino for years, and is also included in Microsoft Exchange 2010 and later releases. The use of lagged mailbox replicas may be considered in discussions on recovery point objectives (RPOs) as an option to augment the backup strategy or provide a source of clean data in the event of detected corruption at the primary site. Cloud providers typically offer recovery SLAs between 10 minutes and two hours. Overall, we estimate that 50% of organizations have adopted these technologies, with adoption higher among large organizations and lower among small and midsize ones. Due to the maturity of the technologies and the growth of the market, e-mail continuity is at the Plateau of Productivity.

User Advice: Organizations need to determine, through conversations with business users and other stakeholders, how much data they can afford to lose (the RPO) and for how long they can afford to be offline (the recovery time objective [RTO]) if a disaster occurs.
Many organizations confuse availability (the percentage of time an e-mail system is functioning properly at the primary site) with disaster recovery or e-mail continuity requirements. High-availability architectures (such as local clustering) will typically not provide e-mail services in the event of a major data center failure or disaster. The RPO and RTO will determine what type of investment is needed in e-mail continuity software, infrastructure and/or services.

Business Impact: For many organizations, e-mail is a vital channel of communication with employees and customers, one that must be operational at all times. A range of options needs to be examined before making an e-mail continuity investment, including storage-area-network-based replication, appliances, and software-based and ESP/cloud-based services, all of which have pros and cons.

Benefit Rating: Moderate
Market Penetration: More than 50% of target audience
Maturity: Mature mainstream
Sample Vendors: CA Technologies; CommVault; Dell; EMC; IBM; Kroll Ontrack; Microsoft; Mimecast; NetApp; Neverfail; Symantec; Vision Solutions
Recommended Reading:
"What You Need to Know About Exchange 2010 HA/DR and Storage"

WAN Optimization Controllers
Analysis By: Bjarne Munch; Joe Skorupa
Definition: WAN optimization controllers (WOCs) in the enterprise WAN enable application centralization by mitigating latency effects and reducing bandwidth costs through the use of bandwidth reduction algorithms, network-level optimization, and other application layer protocol spoofing and optimization techniques that may compensate for network links with high packet loss. SoftWOCs provide these benefits to individual devices to support remote or mobile users. WOCs are also used for data-center-to-data-center WAN links to optimize data replication.

Position and Adoption Speed Justification: While we consider WOCs mature and at the Plateau of Productivity, they continue to evolve and offer a wider array of functions and deployment options. In terms of functionality, we see improved data reduction algorithms that include a richer set of caching, a broader set of storage protocol optimization, enterprise content delivery network, and video streaming optimization functionality. Vendors are improving the level of detail of monitoring and reporting capabilities, and they are upping their game on WAN path control (direct to Internet as well as IP VPN functionality). Optimization for hosted virtual desktops is becoming more widely available. A growing number of WOCs support guest applications via an integrated hypervisor, allowing the integration of fine-grained monitoring and SLA management, as well as other remote requirements, such as DNS, Dynamic Host Configuration Protocol and security functions. WOC deployment options have been very appliance-centric and often cost-prohibitive for smaller branch offices. However, some vendors are implementing specialized hardware in an effort to deliver limited-function devices priced well under $1,000. We are also seeing increased interest in virtual WOC solutions in the data center, for both internal and external data center services.
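The bandwidth reduction algorithms mentioned in the definition typically combine dictionary compression with byte caching (deduplication) across the link. The following toy sketch (invented payloads and token format; no resemblance to any vendor's implementation) shows why repeated transfers shrink dramatically once a WOC peer has cached the content:

```python
# Illustrative sketch of two WOC data reduction techniques: compression
# for first-seen content, and a byte cache that replaces repeated content
# with a short reference token on subsequent transfers.
import hashlib
import zlib

seen = {}  # digest -> payload; a toy "byte cache" shared by both WOC peers

def optimize(payload):
    """Return the bytes that would actually cross the WAN link."""
    digest = hashlib.sha256(payload).digest()
    if digest in seen:                 # repeated content: send a token only
        return b"REF" + digest
    seen[digest] = payload
    return b"RAW" + zlib.compress(payload)  # first sight: compress and cache

report = b"quarterly sales report, region north, " * 200
first = optimize(report)    # compressed copy crosses the WAN
second = optimize(report)   # repeat transfer: a 35-byte reference token
print(len(report), len(first), len(second))
```

Compression helps the first transfer of this highly repetitive payload; the byte cache collapses every later transfer to a fixed-size token, which is the effect that makes WOCs attractive for chatty, repetitive traffic such as file and backup replication.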
In addition, we are now only beginning to see growing enterprise interest in these virtualized solutions in the branch office. This means that vendors are beginning to place greater focus on solutions that can more easily incorporate virtual appliances in the branch office, whether in a router, a switch or off-the-shelf servers. There is still a slow but growing interest in SoftWOCs that can provide optimization services to individual mobile clients, including smartphones and tablets, but not all vendors have yet placed a focus on this as a deployment option. We are also seeing the beginning of a new trend where service providers deploy WOCs or WOC functionality within their WAN infrastructure as an alternative to deploying appliances on-site, as well as to support cloud-delivered applications, where both physical and virtual WOCs can be difficult to deploy. Although protocol spoofing, route control, compression and application-specific acceleration can be delivered effectively via network-based services, last-mile bandwidth limitations will force clients to use on-premises solutions for more data-intensive applications. A hybrid approach is possible, with on-premises equipment teamed with network-based services. New algorithms and technologies have significantly improved the performance gains available with these technologies.

User Advice: Because WOCs continue to evolve with greater functionality and new deployment options, as described previously, there continue to be differences among the various vendors' offerings in the marketplace. This means that enterprises must be specific in their needs and requirements specifications, and WOCs must be matched with the specific problems that an organization faces. Organizations wishing to consolidate branch office servers or dealing with real-time applications should consider WOCs as a way to deal with branch optimization requirements. Organizations may also consider high-end WOCs as well as SoftWOCs as part of their business continuity/disaster recovery (BC/DR) strategy. For WOCs used for data-center-to-data-center WAN links for data replication (block- or file-based), support for storage-specific protocols, high-bandwidth links (10 Gbps or more), quality of service (QoS) and low insertion latency is important. They may also be used to support long-distance virtual machine migrations for planned outages. In these use cases, WOCs are often used in conjunction with wide-area load balancing to transparently direct clients to the active data center. WOCs can be expensive for smaller locations, although sub-$1,000 devices have emerged from some vendors, and we have seen router-integrated solutions at attractive cost points. Implementing WOC solutions in complex networks that employ techniques such as asymmetrical routing, mesh forwarding and deep-inspection firewalls can be complicated. When WOCs enable server centralization, loss of the WAN link results in loss of access to applications and other centralized services. As a result, redundant WAN links, often deployed with link-load balancing (sometimes integral to the WOC), or a local instance of business-critical applications, may be required.

Business Impact: WOCs enable consolidation of infrastructures and improved compliance through the centralization of data. However, they also enable deployment of external cloud services, as well as mobilization of applications with more control and predictable performance.
WOCs can also reduce the cost of WAN bandwidth while delivering significant gains in application and data replication performance. Although penetration in the small- or midsize-business segment, as well as small-enterprise branch offices, has been limited, new alternatives are emerging (including low-cost appliances, virtualized solutions and nascent service-based offers) to meet their price/performance requirements.

Benefit Rating: High
Market Penetration: More than 50% of target audience
Maturity: Mature mainstream
Sample Vendors: Aryaka; Blue Coat Systems; Cisco; Citrix; Ipanema Technologies; Riverbed Technology; Silver Peak
Recommended Reading:
"Magic Quadrant for WAN Optimization"
"How to Pick the Right WAN Optimization Solution for Your Organization"
"RFP Template for WAN Optimization Controllers"

Work Area Recovery
Analysis By: John P Morency
Definition: Work area recovery ensures that an adequate employee work environment can be accessed should the primary work environment become unavailable. Several options can be used for work area recovery, including fixed work area recovery services, mobile work area recovery services (supported through mobile trailers) and rentable office space in large metropolitan areas.

Position and Adoption Speed Justification: Some work area recovery needs are automatically addressed through a recovery plan. For example, the IT disaster recovery management (DRM) plan must, by definition, address the work area needs of IT recovery team members for recovering the data center. However, unless the organization has made the transition from an IT DRM-centric program to a business continuity management (BCM) program, enterprises typically have not addressed all the business unit aspects of their work area recovery needs. Multiple recovery strategies may be required: one for office-based staff, one for specialized workgroups (such as customer service teams and call centers), one for work-at-home professionals, and one for mobile sales and service employees. Implementing each of these strategies may require different combinations of in-house support efforts; the use of fixed or mobile work area recovery locations, which may either be remote office locations or work area recovery locations supported by an external service provider (ESP); and sufficient network capacity to support a large number of networked endpoints. As more workers become mobile and decentralized, and they use multiple devices to manage their work processes, fewer paid-for work area recovery seats are needed. A decentralized workforce means that fewer people may be affected by a disaster, which reduces the need for large quantities of paid-for work area recovery seats. For 2014, we slightly adjusted the positioning of work area recovery to plateau.
Although some of the large recovery service providers (such as IBM and SunGard Availability Services) are increasing recovery seat capacity (primarily in Asia/Pacific countries), the supporting technology is still fundamentally the same. As in 2013, this is because service provider offerings and the use of corporate access VPNs (which may or may not be supplemented by desktop virtualization) for supporting work area recovery continue to be increasingly commoditized or built into the day-to-day operating model of business delivery through the use of mobile devices. Another key reason for the limited growth of provider-supported work area recovery services is the growing use of a work-at-home option for both day-to-day and recovery operations. Despite their general availability, hosted virtual desktop (HVD) services have not proven to be a viable alternative to more-conventional work area recovery services, primarily because of cost and a lack of customizability. The increased use of mobile devices (especially tablets) is the key reason why we continue to keep the penetration level at 20% to 50% of the target audience. User Advice: If the business process is mission-critical, then dedicated recovery space may be required. For instance, during a regional disaster, when a third-party recovery facility may be filled

with a subscription service, your mission-critical personnel will not have a place to work unless a dedicated contract is in place. Special equipment needs must be identified before a facility is selected. For example, call centers and trading desks are unique technical environments that not every service provider can supply. Not all work area recovery locations are colocated with the data center disaster recovery site; therefore, network connectivity to the data center recovery site is often required. Key network service considerations include the availability of last-mile connectivity, bandwidth options and end-to-end WAN stability. Understand the proportion of campus workers, traveling workers, work-at-home workers and day extenders in your enterprise to correctly set work area recovery requirements, and expect to have to support multiple options. Leverage internal locations for work area recovery, because they are usually less costly than third-party services. Evaluate application suite platforms, multichannel access gateways and mobile point solutions based on how well their features meet these user recovery requirements. Adjustments to the application delivery model may need to be made to accommodate mobile devices. During the BCM planning stages, take the ongoing rate of technology and business change into account. Understanding how these changes are progressing (that is, the rising number of workers with laptops and work-at-home capabilities) provides a basis for thinking about changing work area recovery needs. For example, the scope of a contract with a third-party provider in place today could be reduced or eliminated within three years, as organizations position themselves to adopt other, sometimes less costly, approaches. Business Impact: Having a work area recovery plan is critical to ensuring that the organization can recover from a disaster, whether it is local or regional.
Without your workforce, there is no business. Benefit Rating: High Market Penetration: More than 50% of target audience Maturity: Mature mainstream Sample Vendors: Agility Recovery; Caps; HP; IBM; Regus; Rentsys Recovery Services; SunGard Availability Services Appendixes

Figure 3. Hype Cycle for Business Continuity Management and IT Disaster Recovery Management, 2013. [Figure: expectations plotted against time, with technologies positioned from the Innovation Trigger through the Peak of Inflated Expectations, the Trough of Disillusionment and the Slope of Enlightenment to the Plateau of Productivity; each entry is marked with its years to plateau (less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, or obsolete before plateau). As of July 2013.] Source: Gartner (July 2013)

Table 1. BCM Standards Adoption (BCM standard: percent of organizational adoption)
ISO 22301:2012: %
DRII Generally Accepted Practices for Business Continuity Practitioners (International): 31%
Proprietary: 30%
BCI The BCI Good Practice Guidelines (International): 23%
FFIEC Business Continuity Planning IT Examination Handbook: March 2008 (U.S.): 21%
Federal Emergency Management Agency (FEMA) National Incident Management System (NIMS)/Incident Command Systems (ICS): 19%
BSI Business Continuity Management Specification (BS Part 1): 17%
NFPA 1600 Standard on Disaster/Emergency Management and Business Continuity Programs (2013 Edition, U.S.): 15%
HIPAA: 15%
BSI Business Continuity Management Specification (BS Part 1): 15%
ITIL v.3 (International): 14%
Other: 12%
Source: Gartner (July 2014)
The 2013 Gartner Security and Risk Management Survey asked the following question regarding the adoption of BCM methodologies, standards and frameworks: "For which of the following are you planning on obtaining organization-level BCM certification within the next 12 months?" See Figure 4.

Figure 4. Plans for BCM Certification by Standard. Source: Gartner (July 2014)

Hype Cycle Phases, Benefit Ratings and Maturity Levels
Table 2. Hype Cycle Phases
Innovation Trigger: A breakthrough, public demonstration, product launch or other event generates significant press and industry interest.
Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the technology is pushed to its limits. The only enterprises making money are conference organizers and magazine publishers.
Trough of Disillusionment: Because the technology does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.
Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process.
Plateau of Productivity: The real-world benefits of the technology are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters this phase.
Years to Mainstream Adoption: The time required for the technology to reach the Plateau of Productivity.
Source: Gartner (July 2014)

Table 3. Benefit Ratings
Transformational: Enables new ways of doing business across industries that will result in major shifts in industry dynamics.
High: Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise.
Moderate: Provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise.
Low: Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings.
Source: Gartner (July 2014)
Table 4. Maturity Levels (maturity level: status; products/vendors)
Embryonic: In labs; none.
Emerging: Commercialization by vendors, pilots and deployments by industry leaders; first generation, high price, much customization.
Adolescent: Maturing technology capabilities and process understanding, uptake beyond early adopters; second generation, less customization.
Early mainstream: Proven technology, with vendors, technology and adoption rapidly evolving; third generation, more out of box, methodologies.
Mature mainstream: Robust technology, not much evolution in vendors or technology; several dominant vendors.
Legacy: Not appropriate for new developments, cost of migration constrains replacement; maintenance revenue focus.
Obsolete: Rarely used; used/resale market only.
Source: Gartner (July 2014)
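The appendix scales above can double as a small data structure for tracking your own technology portfolio against this Hype Cycle. A minimal sketch (the Profile class and field names are our own illustration; the two entries are taken from this report's profiles):

```python
from dataclasses import dataclass
from enum import Enum

class Benefit(Enum):
    """Benefit ratings from Table 3, ordered for ranking."""
    TRANSFORMATIONAL = 4
    HIGH = 3
    MODERATE = 2
    LOW = 1

@dataclass
class Profile:
    name: str
    phase: str            # Hype Cycle phase (Table 2)
    benefit: Benefit      # benefit rating (Table 3)
    maturity: str         # maturity level (Table 4)
    penetration: str      # market penetration of target audience

# Two entries populated from this Hype Cycle's own profiles.
portfolio = [
    Profile("Work Area Recovery", "Entering the Plateau",
            Benefit.HIGH, "Mature mainstream", "More than 50%"),
    Profile("WAN Optimization Controllers", "Entering the Plateau",
            Benefit.HIGH, "Mature mainstream", "More than 50%"),
]

# Rank the portfolio by benefit rating, highest first.
ranked = sorted(portfolio, key=lambda p: p.benefit.value, reverse=True)
```

A structure like this makes it straightforward to filter for, say, high-benefit technologies that have not yet reached mature mainstream when prioritizing investments.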

Gartner Recommended Reading Some documents may not be available as part of your current Gartner subscription. "Understanding Gartner's Hype Cycles" "Managing IT Resilience Is Much More Than Simply Failing Over Applications" "Predicts 2014: Business Continuity Management and IT Disaster Recovery Management" "Research Roundup: Business Continuity Management and IT Disaster Recovery Management, 2Q13" "Cool Vendors in Business Continuity Management, 2014" "Business Continuity Management Key Initiative Overview" Evidence: Data collected from Gartner client inquiries and data contained in the Gartner ITScore for BCM and IT DRM database. 1. World Economic Forum Global Risks 2014 report. 2. Adoption of BCM Methodologies, Standards and Frameworks from the 2014 BCMP Magic Quadrant customer reference survey results. 3. Adoption of BCM Methodologies, Standards and Frameworks from the 2013 Gartner Security and Risk Management Survey. More on This Topic This is part of an in-depth collection of research. See the collection: "Gartner's Hype Cycle Special Report for 2014"

GARTNER HEADQUARTERS Corporate Headquarters 56 Top Gallant Road Stamford, CT USA Regional Headquarters AUSTRALIA BRAZIL JAPAN UNITED KINGDOM For a complete list of worldwide locations, visit gartner.com. Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This publication may not be reproduced or distributed in any form without Gartner's prior written permission. If you are authorized to access this publication, your use of it is subject to the Usage Guidelines for Gartner Services posted on gartner.com. The information contained in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This publication consists of the opinions of Gartner's research organization and should not be construed as statements of fact. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see "Guiding Principles on Independence and Objectivity."


More information

The Difference Between Disaster Recovery and Business Continuance

The Difference Between Disaster Recovery and Business Continuance The Difference Between Disaster Recovery and Business Continuance In high school geometry we learned that a square is a rectangle, but a rectangle is not a square. The same analogy applies to business

More information

Zerto Virtual Manager Administration Guide

Zerto Virtual Manager Administration Guide Zerto Virtual Manager Administration Guide AWS Environment ZVR-ADVA-4.0U2-01-23-07-15 Copyright 2015, Zerto Ltd. All rights reserved. Information in this document is subject to change without notice and

More information

June 2009. Blade.org 2009 ALL RIGHTS RESERVED

June 2009. Blade.org 2009 ALL RIGHTS RESERVED Contributions for this vendor neutral technology paper have been provided by Blade.org members including NetApp, BLADE Network Technologies, and Double-Take Software. June 2009 Blade.org 2009 ALL RIGHTS

More information

What you need to know about cloud backup: your guide to cost, security, and flexibility. 8 common questions answered

What you need to know about cloud backup: your guide to cost, security, and flexibility. 8 common questions answered What you need to know about cloud backup: your guide to cost, security, and flexibility. 8 common questions answered Over the last decade, cloud backup, recovery and restore (BURR) options have emerged

More information

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com

Global Headquarters: 5 Speen Street Framingham, MA 01701 USA P.508.872.8200 F.508.935.4015 www.idc.com BUYER CASE STUDY EMC IT Increasing Efficiency, Reducing Costs, and Optimizing IT with Data Deduplication Sponsored by: EMC Corporation Robert Amatruda February 2011 Laura DuBois Global Headquarters: 5

More information

Protecting enterprise servers with StoreOnce and CommVault Simpana

Protecting enterprise servers with StoreOnce and CommVault Simpana Technical white paper Protecting enterprise servers with StoreOnce and CommVault Simpana HP StoreOnce Backup systems Table of contents Introduction 2 Technology overview 2 HP StoreOnce Backup systems key

More information

Deployment Options for Microsoft Hyper-V Server

Deployment Options for Microsoft Hyper-V Server CA ARCserve Replication and CA ARCserve High Availability r16 CA ARCserve Replication and CA ARCserve High Availability Deployment Options for Microsoft Hyper-V Server TYPICALLY, IT COST REDUCTION INITIATIVES

More information

Dell Data Protection Managed Service Provider Program

Dell Data Protection Managed Service Provider Program Dell Data Protection Managed Service Provider Program Grow your business in the data protection marketplace, a sector that Gartner estimates will see 40 percent of organizations augmenting or changing

More information

How To Backup With Ec Avamar

How To Backup With Ec Avamar BACKUP AND RECOVERY FOR MICROSOFT-BASED PRIVATE CLOUDS LEVERAGING THE EMC DATA PROTECTION SUITE A Detailed Review ABSTRACT This white paper highlights how IT environments which are increasingly implementing

More information

Agentless Cloud Backup and Recovery Software for the Enterprise

Agentless Cloud Backup and Recovery Software for the Enterprise Recovery is Everything Agentless Cloud Backup and Recovery Software for the Enterprise T: 020 7749 0800 E: [email protected] W: www.datrix.co.uk Your company s single most valuable asset may be its data.

More information

Table of contents 3 4 4 5 5 6 7

Table of contents 3 4 4 5 5 6 7 Business white paper Unified data protection with HP Data Protector Leverage on-premise, cloud, and hybrid backup and recovery strategies Table of contents 3 Introduction 4 Are legacy approaches meeting

More information

Top 5 Disaster Recovery Reports IT Risk and Business Continuity Managers Live For

Top 5 Disaster Recovery Reports IT Risk and Business Continuity Managers Live For Whitepaper Top 5 Disaster Recovery Reports IT Risk and Business Continuity Managers Live For 1. Disaster Recovery Runbook Report 2. Disaster Recovery Compliance Report 3. Disaster Recovery Listing: Virtual

More information

Using HP StoreOnce Backup systems for Oracle database backups

Using HP StoreOnce Backup systems for Oracle database backups Technical white paper Using HP StoreOnce Backup systems for Oracle database backups Table of contents Introduction 2 Technology overview 2 HP StoreOnce Backup systems key features and benefits 2 HP StoreOnce

More information

SAFETY FIRST. Emerging Trends in IT Disaster Recovery. By Cindy LaChapelle, Principal Consultant. www.isg-one.com

SAFETY FIRST. Emerging Trends in IT Disaster Recovery. By Cindy LaChapelle, Principal Consultant. www.isg-one.com SAFETY FIRST Emerging Trends in IT Disaster Recovery By Cindy LaChapelle, Principal Consultant www.isg-one.com INTRODUCTION Against a backdrop of increasingly integrated and interdependent global service

More information

Enable unified data protection

Enable unified data protection Business white paper Enable unified data protection HP Data Protector Table of contents 3 The latest backup and recovery strategies 3 Are legacy approaches meeting current challenges? 4 The deployment

More information

Whitepaper Continuous Availability Suite: Neverfail Solution Architecture

Whitepaper Continuous Availability Suite: Neverfail Solution Architecture Continuous Availability Suite: Neverfail s Continuous Availability Suite is at the core of every Neverfail solution. It provides a comprehensive software solution for High Availability (HA) and Disaster

More information

The Growth Opportunity for SMB Cloud and Hybrid Business Continuity

The Growth Opportunity for SMB Cloud and Hybrid Business Continuity WHITE PAPER The Growth Opportunity for SMB Cloud and Hybrid Business Continuity Sponsored by: Carbonite Raymond Boggs Laura DuBois April 2015 Christopher Chute EXECUTIVE SUMMARY To mitigate the economic

More information

Redefining Microsoft SQL Server Data Management. PAS Specification

Redefining Microsoft SQL Server Data Management. PAS Specification Redefining Microsoft SQL Server Data Management APRIL Actifio 11, 2013 PAS Specification Table of Contents Introduction.... 3 Background.... 3 Virtualizing Microsoft SQL Server Data Management.... 4 Virtualizing

More information

Financial Services Need More than Just Backup... But they don t need to spend more! axcient.com

Financial Services Need More than Just Backup... But they don t need to spend more! axcient.com Financial Services Need More than Just Backup... But they don t need to spend more! axcient.com Introduction Financial institutions need to keep their businesses up and running more than ever now. Considering

More information

IBM Spectrum Protect in the Cloud

IBM Spectrum Protect in the Cloud IBM Spectrum Protect in the Cloud. Disclaimer IBM s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM s sole discretion. Information regarding

More information

Vodacom Managed Hosted Backups

Vodacom Managed Hosted Backups Vodacom Managed Hosted Backups Robust Data Protection for your Business Critical Data Enterprise class Backup and Recovery and Data Management on Diverse Platforms Vodacom s Managed Hosted Backup offers

More information

7 Essential Benefits of Hybrid Cloud Backup

7 Essential Benefits of Hybrid Cloud Backup Datto Whitepaper 7 Essential Benefits of Hybrid Cloud Backup Datto is a leading provider of backup, disaster recovery (BDR), and business continuity solutions targeted to the small to medium business (SMB)

More information

Backup, Recovery & Archiving. Choosing a data protection strategy that best suits your IT requirements and business needs.

Backup, Recovery & Archiving. Choosing a data protection strategy that best suits your IT requirements and business needs. A N AT L A N T I C - I T. N E T W H I T E PA P E R Choosing a data protection strategy that best suits your IT requirements and business needs. 2 EXECUTIVE SUMMARY Data protection has become the hydra

More information

SYMANTEC NETBACKUP APPLIANCE FAMILY OVERVIEW BROCHURE. When you can do it simply, you can do it all.

SYMANTEC NETBACKUP APPLIANCE FAMILY OVERVIEW BROCHURE. When you can do it simply, you can do it all. SYMANTEC NETBACKUP APPLIANCE FAMILY OVERVIEW BROCHURE When you can do it simply, you can do it all. SYMANTEC NETBACKUP APPLIANCES Symantec understands the shifting needs of the data center and offers NetBackup

More information

HRG Assessment: Stratus everrun Enterprise

HRG Assessment: Stratus everrun Enterprise HRG Assessment: Stratus everrun Enterprise Today IT executive decision makers and their technology recommenders are faced with escalating demands for more effective technology based solutions while at

More information

Reduce your data storage footprint and tame the information explosion

Reduce your data storage footprint and tame the information explosion IBM Software White paper December 2010 Reduce your data storage footprint and tame the information explosion 2 Reduce your data storage footprint and tame the information explosion Contents 2 Executive

More information