White Paper
Amazon Web Services: Enabling Cost-Efficient Disaster Recovery Leveraging Cloud Infrastructure
By Lauren Whitehouse and Jason Buffington
January 2012
This ESG White Paper was commissioned by Amazon Web Services LLC and is distributed under license from ESG.
© 2012, Enterprise Strategy Group, Inc. All Rights Reserved
Contents
The Cloud is Ideal for Disaster Recovery
Risks are Abundant
Challenges to Implementing DR
Hot, Warm, or Cold Standby - Which to Choose?
DR Testing Can Be Problematic
Virtualization and Cloud Infrastructure Services Enable DR
Is the Cloud for Everyone?
Amazon Web Services
Durability: Amazon Web Services Regions and Availability Zones
Amazon S3
Amazon EC2
Amazon EBS
No-Fee Data Import
AWS Direct Connect
Amazon Virtual Private Cloud
AWS Import/Export
AWS Storage Gateway
Security and Compliance
Leveraging AWS Components for DR
The Bigger Truth

All trademark names are property of their respective companies. Information contained in this publication has been obtained from sources The Enterprise Strategy Group (ESG) considers to be reliable but is not warranted by ESG. This publication may contain opinions of ESG, which are subject to change from time to time. This publication is copyrighted by The Enterprise Strategy Group, Inc. Any reproduction or redistribution of this publication, in whole or in part, whether in hard-copy format, electronically, or otherwise to persons not authorized to receive it, without the express consent of The Enterprise Strategy Group, Inc., is in violation of U.S. copyright law and will be subject to an action for civil damages and, if applicable, criminal prosecution. Should you have any questions, please contact ESG Client Relations at (508)
The Cloud is Ideal for Disaster Recovery
Cloud infrastructure is gaining in popularity among organizations that have very real business needs that rely on information technology (IT) infrastructure but struggle with the costs, both capital and operational, of expanding their data centers. But before even discussing the cloud, it's worthwhile to settle on a definition. ESG defines a cloud infrastructure as follows: a computing model in which the equipment (including servers, storage, and networking components) used to support an organization's IT operations is hosted by a service provider and made available to customers over a network, typically the Internet. The service provider owns the equipment and is responsible for housing, running, and maintaining it, with the client typically paying on a per-use basis.
Today, more than three-quarters (82%) of organizations have plans to leverage cloud-based services to some extent over the next five years.1 The cost, agility, and flexibility benefits are too obvious to deny, particularly for tasks such as disaster recovery (DR).2 DR is an ideal use case for taking advantage of the cloud. While many organizations remain cautious about placing production services in the cloud, they are often more comfortable testing those waters for DR, especially since the cloud alters the economics of DR so radically. Some organizations implement DR only for their most critical applications to minimize risk and keep expenses in check; using the cloud enables companies to extend DR services to additional workloads, further reducing their exposure to business interruption. Others that currently operate failover sites are finding the costs skyrocketing because of continual data growth. But for many (particularly smaller) organizations, the cloud actually makes DR possible for the first time.
This paper outlines the problems organizations face when implementing DR, describes how the cloud changes the game, and provides some insight into a suite of components from Amazon Web Services (AWS) that make DR in the cloud simple and cost-efficient.
Risks are Abundant
Why is DR so important? Every organization is vulnerable to a range of outages and disasters. Computer viruses are everywhere; applications and disk drives are vulnerable to faults; data can become corrupted; and, of course, human error is an ever-present threat. While none of these seem like disasters, the interruptions they cause can wreak havoc on daily business activities. Given that, imagine the impact of real disasters such as fires, floods, power failures, or weather-related outages. Whether caused by technical problems or natural phenomena, this unplanned downtime must be immediately addressed by IT organizations in order to restore business to a fully operational state.
The absence of a DR strategy and plan comes with significant risk. Business downtime can result in huge losses in productivity, and breaches of customer service agreements, whether explicit or implied, can cause irreparable damage to reputations and/or financial assets. In ESG research on data protection, 74% of survey respondents state they could withstand three hours or less of downtime for tier-1 (business-critical) data before experiencing adverse business effects, and more than half (53%) could only tolerate one hour or less of tier-1 downtime.3 Aggressive recovery objectives are common in today's business environment, where global operations and 24/7 productivity are typical. IT must deliver on contracted recovery time objectives (RTOs, defined as the amount of time between an outage and operational resumption) and recovery point objectives (RPOs, defined as the amount of data loss that the organization can tolerate in the transition).
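Whether a given outage met its RTO and RPO targets can be checked mechanically against the outage timeline. A minimal sketch of the two definitions above (the function name and the example timestamps are illustrative, not from the paper):

```python
from datetime import datetime, timedelta

def meets_objectives(outage_start, service_restored, last_recovery_point,
                     rto: timedelta, rpo: timedelta):
    """Return (rto_met, rpo_met) for a single outage.

    RTO: allowed time between the outage and operational resumption.
    RPO: allowed data loss, measured as time since the last usable recovery point.
    """
    downtime = service_restored - outage_start
    data_loss_window = outage_start - last_recovery_point
    return downtime <= rto, data_loss_window <= rpo

# Example: tier-1 data with a 1-hour RTO and a 15-minute RPO.
rto_met, rpo_met = meets_objectives(
    outage_start=datetime(2012, 1, 9, 14, 0),
    service_restored=datetime(2012, 1, 9, 14, 45),    # 45 minutes of downtime
    last_recovery_point=datetime(2012, 1, 9, 13, 30), # snapshot 30 minutes prior
    rto=timedelta(hours=1),
    rpo=timedelta(minutes=15),
)
# rto_met is True (45 min <= 1 h); rpo_met is False (30 min of loss > 15 min)
```

Note that the two objectives are independent: a fast restore from a stale copy meets the RTO while still blowing the RPO, which is exactly the trade-off the protection tiers discussed below are meant to balance.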
Once the implications of downtime and data loss are understood, IT organizations can determine application availability requirements and the proper protection mechanisms to apply to ensure compliance. To meet service level agreements (SLAs), organizations often utilize various layers of data protection, using a mix of replication/mirroring technologies for primary system and storage IT continuity. Snapshots and daily backups are commonly used for operational recovery and DR. However, the result is often over-insurance for top-tier applications and data (resulting in extra up-front expenses that may not be needed) and under-insurance for lower tiers (resulting in extra expenses after a disaster).

1 Source: ESG Research Report, Cloud Computing Adoption Trends, May 2011.
2 DR is defined as the act or process of invoking pre-planned procedures after a pre-determined period of time has elapsed in order to recover critical business system functions, application servers, and applications from loss, which may include retrieving data from backup copies or replication targets as a result of a catastrophe.
3 Source: ESG Research Report, Data Protection Trends, April 2010.

Challenges to Implementing DR
The value of DR is not in question; every organization is concerned about its ability to get back up and running after an outage or disaster. But implementing DR can be expensive and complex, as well as tedious and time-consuming. Consider the alternative, however: numerous organizations have simply vanished because they were unable to get back to full operations rapidly after a disaster. Others have been impacted by lost revenue and damage to their reputation. One example of a DR debacle is the August 2010 incident in the Commonwealth of Virginia. A major IT system failure disrupted services statewide, affecting 27 of the state's 89 agencies. As a result of the SAN going down, 13% of the state's file servers failed. The Commonwealth lacked a continuity procedure, which delayed recovery of failed systems for 18 hours and resumption of services for over a week. State agencies, such as the Department of Motor Vehicles and the Department of Taxation, were heavily impacted by the outage and unable to process requests or book revenue for the state for days to weeks.
Virginia's IT contractor was cited for its responsibility in the incident and recently paid the Commonwealth of Virginia $4.7M in damages to cover the cost of improvements to Virginia's network infrastructure and data protection systems and to repair the losses incurred by the outage. Having a DR strategy in place could have avoided this outcome.
Hot, Warm, or Cold Standby - Which to Choose?
Traditionally, staging for disaster recovery requires access to physical resources and data copies housed at a geographically remote location. Three common remote site strategies are cold, warm, and hot standby sites. Their thermal descriptions match up with how long it takes and (inversely) how much it costs to recover applications and data in order to resume IT operations: cold takes longer but is less expensive; hot is faster but more costly.
Cold standby typically involves maintaining offsite copies of data on tape, as well as access to similar hardware systems on an as-needed or rental basis for bare metal recovery. The network at a cold standby site must be configured to match the primary network configuration, including VLAN, VPN, DNS, and firewall rules. After an interruption at the primary site, cold standby restores application availability within hours to days, often with delays during the recovery process because tape operations are time-consuming and prone to errors. Many organizations neglect to take on the expense and effort of testing a cold standby site and, consequently, recovery is done via trial and error and may not succeed. Other disadvantages of a cold standby approach are that failing back is more complex and, since systems and data are not available until needed for recovery, the site cannot be used for other tasks such as testing one's DR plan or mitigating smaller system outages.
Warm standby maintains data copies on disk (often using virtual machine images to speed restore and recovery), enabling access to infrastructure for application delivery within minutes to hours of a primary site outage. Periodic testing is easier because resources are on disk. A warm site is often used for replication or mirroring of data and systems for recovery purposes. Virtual machine images that encapsulate the operating system, application, data, and configuration settings make it simpler to synchronize between the primary and warm standby site. Hot standby offers the fastest restore as data and applications are maintained offsite on running systems. This enables continuous application availability within seconds of an interruption at the primary site.
Multiple application instances can be running and receiving regular updates, while only a single instance provides access to services and content. The hot standby site is available to immediately take over operations in the event of a failure at the primary site. Server clustering and synchronous replication are good DR options, but are also the most expensive to maintain. In addition, hot standby solutions require routine upgrades and maintenance to the offsite systems, along with a high-speed network running between sites for replication, adding further complexity and expense.
Consequently, certain DR strategies require a complete set of additional hardware at a geographically remote site that is capable of serving nearly the same IO demands as one's original production server farm, effectively doubling infrastructure and operational costs for many organizations and rendering DR impossible for some. Moreover, having sufficient staff and time available to manage a remote site on top of the primary site is often deemed not feasible. Some organizations are blocked from DR simply by their inability to attract or retain sufficient skilled IT personnel to manage growing storage and the associated backup, archiving, and DR scenarios. Many companies, mostly small and mid-sized firms, don't have a remote-site option because they lack access to corporate-owned or co-located properties to house duplicate resources. ESG research indicates that less than 20% of midmarket companies (defined as those with 100 to 999 employees) have corporate-owned secondary sites available to them4 (see Figure 1). This research also indicates that 48% of midmarket companies and 23% of enterprise organizations lease space to house DR infrastructure. Maintaining a second corporate-owned or leased site often results in high costs and significant waste.
Costs include the purchase and maintenance of additional server/storage/networking infrastructure, along with the rental fees for data center floor space, energy to power and cool the systems, IT staff time, etc. In many cases, these expensive resources then sit underutilized or even idle most of the time. That is a significant expense for something that may never get used.
Figure 1. Type of DR Site by Company Size
Which of the following best describes your organization's disaster recovery site? (Percent of respondents, N=183; Midmarket, 100 to 999 employees, N=21; Enterprise, 1,000 employees or more, N=124)
We have a secondary data center, which is our own dedicated facility: Midmarket 19%, Enterprise 56%
We lease data center space from a third party, but we own and operate the IT infrastructure: Midmarket 48%, Enterprise 23%
We outsource disaster recovery to a third-party provider (secondary data center space and IT infrastructure are owned and operated by service provider): Midmarket 24%, Enterprise 12%
We send backup tapes to an off-site facility as part of our disaster recovery process: Midmarket 10%, Enterprise 10%
Source: Enterprise Strategy Group, 2011.
4 Source: ESG Research Brief, The Impact of Virtualization on Disaster Recovery, publication pending, 2011.
DR Testing Can Be Problematic
Some consider DR testing optional because it can be inconvenient, disruptive, and expensive, but it should be a requirement; failing to test a DR plan and equipment introduces more risk. Many organizations have good intentions when it comes to testing their DR plans, but are deterred by the need to make the DR site available, spin up new hardware, and find off-peak time (often weekends, which may require additional staff compensation) to do the testing, since the testing procedures may require downtime. The Director of IT at AWS customer Sage Manufacturing described their previously time-consuming process just to set up for DR testing: "In the past, spinning up a test environment would consist of procuring new hardware and turning it on. This would typically take about a month from the word go to bring the hardware in and set it up. Now it takes about 12 hours." Testing DR systems may disrupt normal service delivery, and it can take significant time as IT organizations practice recovering a lost site, documenting the steps, and checking configurations. Oftentimes, those that do test use unrealistic workloads and subsets of relevant data in order to simplify and speed the process. However, this can render the tests unreliable.
Virtualization and Cloud Infrastructure Services Enable DR
In the face of these challenges, server virtualization and cloud-based services are like knights in shining armor. The separation of physical devices from computing and data services makes DR feasible where it wasn't before, primarily because of the reduced hardware costs and the associated expenses of maintaining that hardware. Virtualization reduces the amount of remote site hardware necessary, as well as the energy and management resources needed to run it.
Cloud computing offers the ability to scale up and down easily, delivering a truly elastic capacity that optimizes resources while minimizing expense. There are no capital expenses in a cloud-based solution; the point is to pay as you go, pay only for what you use, and not be tied to a long-term commitment. In addition, cloud computing can improve time to market because service creation is streamlined and delivery automatic. Organizations can focus their IT resources on business differentiators instead of spending time managing general infrastructure resources. Flexibility is also increased because virtual machines are portable: organizations can fail back from a disaster to the original location or to a new one, in the cloud or onsite, without regard to what the original hardware was.
With cloud-based computing and storage, organizations have access to a DR platform without building one. They don't need additional corporate-owned infrastructure assets like servers and storage arrays, and they can leverage massive deployments that provide economies of scale to benefit all subscribers, whether the cloud is hosted in house or with a service provider. Redundancy can be automatic, with stored objects replicated across multiple geographies without significant additional cost. A subscription-style model makes an outsourced DR services approach highly attractive, particularly in comparison to a do-it-yourself (DIY) model: resources are available on demand, so there is no need to over-provision assets. Instead, resources are more fully utilized and can seamlessly scale. In addition, using the cloud as a DR platform makes it more affordable to create a truly durable implementation by replicating systems and data across multiple geographies. The cloud can also render the DR platform geographically agnostic, eliminating time constraints and making the location of equipment irrelevant.
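The structural difference between a DIY standby site and a pay-as-you-go model is easy to sketch with simple arithmetic. The figures below are purely hypothetical, chosen only to illustrate standing costs versus usage-based costs; they are not drawn from AWS pricing:

```python
# DIY warm standby: hardware amortization plus floor space, power, and staff,
# paid whether or not a disaster ever occurs (hypothetical annual figures).
diy_annual = 120_000 + 60_000  # infrastructure + facility/staff overhead

# Pay-as-you-go DR: a modest storage fee runs year-round, while compute is
# billed only for the hours actually used in tests and failovers
# (hypothetical rates, not AWS price-list numbers).
storage_annual = 500 * 12        # $500/month to keep replicated data in the cloud
compute_hourly = 8.0             # hourly rate for the recovery instances
hours_used = 48 + 24             # one annual DR test plus one real failover

cloud_annual = storage_annual + compute_hourly * hours_used

print(diy_annual, cloud_annual)  # 180000 6576.0
```

The exact numbers matter less than the shape of the equation: the DIY figure is constant regardless of use, while the usage-based figure scales with hours consumed, which is why idle standby capacity is where the cloud model wins.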
Of course, no conversation about cloud-based services would be of interest without discussing its fundamental benefit: reduced costs. With the cloud, the capital costs of equipment, as well as the operational costs of floor space, energy, staff, updates, and maintenance can be eliminated. As Figure 2 shows, cost reduction is the great expectation of IT organizations. Of those ESG survey respondents who cite the use of cloud computing as an
explicit 2011 IT cost reduction strategy, nearly one-third (31%) expect cloud services to significantly impact their IT strategies.5
Figure 2. Expected Impact of Cloud Computing Services, by Usage of Cloud as a Cost Reduction Strategy
In your opinion, to what extent will public cloud computing services impact your organization's IT strategy over the next five years? (Percent of respondents: organizations citing the use of cloud computing services as a cost containment/reduction strategy, N=113, vs. all other organizations, N=372)
Significant impact (we have a formal strategy to use numerous cloud computing services, which will result in major changes to our IT infrastructure and/or processes): 31% vs. 8%
Moderate impact (we will tactically deploy some cloud computing services, which will result in some, but not major, changes to our IT infrastructure and/or processes): 58% vs. 40%
Little impact (we will use a small number of cloud computing services, which will not result in any notable changes to our IT infrastructure and/or processes): 10% vs. 35%
No impact (we have no plans to utilize cloud computing services over the next five years): 1% vs. 17%
Source: Enterprise Strategy Group, 2011.
Another significant cost impact of cloud computing is the pay-as-you-go subscription model, which creates an opportunity for more predictable budgeting and eliminates the need to make a long-term commitment. Upfront capital investments are eliminated since organizations only pay for DR infrastructure when they use it. Staffing costs are minimized since the responsibility for purchasing and managing the DR infrastructure is outsourced to a cloud service provider. Monthly payments are based on services used, making it easier to budget. Replication to a cloud infrastructure makes DR processes much simpler and lowers RTOs.
Cloud-based DR enables organizations to asynchronously replicate data and rapidly fail over to an instance of their application(s) at a cloud infrastructure service provider. Since virtual machine images can be stored there, recovery time is dramatically reduced; applications and data can come online in minutes. ESG survey respondents confirm this, as more than half (51%) agree or strongly agree that replicating virtual machine images for DR enabled them to lower RTOs6 (see Figure 3). Testing for DR is simpler as well. DR simulations can be created more quickly and less expensively in the cloud versus an on-premises or co-location facility. In ESG research, 43% of respondents found that deploying server virtualization simplified DR testing, with larger sites (45%) more likely than smaller sites (32%) to reap the benefits.7
5 Source: ESG Research Report, Cloud Computing Adoption Trends, May 2011.
6 Source: ESG Research Brief, The Impact of Virtualization on Disaster Recovery, publication pending, 2011.
7 Ibid.
Figure 3. Sentiment Regarding Virtual Machine Replication for DR Lowering RTOs
Please indicate whether you agree or disagree with the following statement on a scale of 1 (strongly agree) to 7 (strongly disagree). (Percent of respondents, N=49)
Replicating virtual machine images for disaster recovery will enable us to lower our recovery time objectives (RTOs): 1 (strongly agree) 18%, 2 33%, 3 18%, 4 8%, 5 10%, 6 10%, 7 (strongly disagree) 2%
Source: Enterprise Strategy Group, 2011.
Security and compliance initiatives can also be improved in the cloud. Service providers enjoy economies of scale that make it more cost-effective to build data centers that conform to stringent security qualifications, implement extensive data encryption algorithms, and more. Lastly, many service providers specialize in delivering features and certifications that are needed to address regulatory requirements in financial and other industries. When looking at security and compliance with cloud-based solutions, ESG highly recommends that every customer conduct its own due diligence in order to fully understand any geo-political or industry regulations that might apply to how or where its data is stored.
Is the Cloud for Everyone?
Despite the obvious benefits, many organizations have not yet adopted cloud-based models. As Figure 4 shows, the key factors preventing wide-scale adoption of cloud computing tend to be issues regarding security, loss of control, and investments in existing infrastructure. Many organizations are reluctant to give up control of their assets, based on fears regarding data security and privacy. In addition, some feel that their investments in infrastructure and staff should be sufficient to do the job without adding to the cost by implementing a cloud deployment.8
However, with the right service features, proper education, and customer validations, cloud service providers can overcome both the real and the perceived barriers. Attitudes are changing as more and more organizations adopt cloud-based strategies and demonstrate their value. A common way for organizations with concerns to initiate cloud computing is by using it to augment their existing computing environments. They may start by offloading certain applications to a software-as-a-service (SaaS) model, or by experimenting with adding compute or storage capacity in the cloud instead of buying additional on-site equipment. These are critical assets and important business processes, and organizations want to attain a certain comfort level before committing wholeheartedly to a cloud-based deployment. Others dip their toe in the water by implementing a virtual private cloud: a private cloud that exists within a shared or public cloud. Security-sensitive companies get a flexible, secure IT resource pool transparently connected to their enterprise via a virtual private network (VPN).
8 Source: ESG Research Report, Cloud Computing Adoption Trends, May 2011.
Figure 4. Factors Preventing Wide-Scale Adoption of Public Cloud Computing
Why do you believe that public cloud computing services will have little or no impact on your organization's IT strategy over the next five years? (Percent of respondents, N=256, multiple responses accepted)
Data security/privacy concerns: 43%
Feel like we would be giving up too much control: 32%
Too much invested in current IT infrastructure and staff: 32%
Cloud computing offerings need to mature: 29%
Satisfied with existing infrastructure and process: 28%
Have plenty of in-house skilled resources: 27%
Regulatory compliance/audit concerns: 25%
Data availability/recovery concerns: 21%
Company philosophy against outsourcing: 20%
Performance concerns: 20%
Don't see any financial or operational benefits: 19%
Network connectivity and costs: 17%
Source: Enterprise Strategy Group, 2011.
Amazon Web Services
AWS has been offering Infrastructure-as-a-Service (IaaS) to organizations of all sizes using a consumption-based business model since 2006. Consisting of a collection of modular cloud service components, AWS provides a global, elastic IT infrastructure that is well known for scalability, reliability, and security. These modular services can be used independently or combined to meet specific computing and storage requirements. AWS makes it easy to get started. One option to seed the cloud repository is to import large amounts of data into AWS using portable storage devices transported via third-party logistics; data can be exported in a similar fashion, too. Ongoing data movement thereafter occurs over network links. Subscribers to AWS find that cost reduction is achieved in a number of ways. First, companies will not incur costs associated with building out a second data center or committing to a co-location lease.
For example, What's Up Interactive, an AWS customer, claims that it was able to avoid an estimated $1 million in costs outfitting a DR site. Second, there are no fees for inbound data transfer, which eliminates costs associated with moving data to a secondary and/or tertiary site. Next, AWS pay-as-you-go pricing reduces monthly costs since only resources actually used are paid for. Finally, there are a variety of purchase options that allow subscribers to reserve capacity for a DR failover while further lowering the cost of the utilized resources.
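Why seeding the cloud repository by shipping storage devices matters is easy to quantify: pushing an initial full copy over a WAN link can take weeks. A back-of-the-envelope estimate, with illustrative data size, link speed, and utilization assumptions:

```python
def transfer_days(data_tb: float, link_mbps: float, utilization: float = 0.8) -> float:
    """Days needed to push data_tb terabytes over a link_mbps line at the
    given sustained utilization (decimal TB, i.e., 1 TB = 10**12 bytes)."""
    bits = data_tb * 1e12 * 8                       # terabytes -> bits
    seconds = bits / (link_mbps * 1e6 * utilization)  # effective bits per second
    return seconds / 86_400                         # seconds -> days

# Seeding 10 TB over a 100 Mbps line at 80% utilization:
print(round(transfer_days(10, 100), 1))  # 11.6 (days)
```

At that rate the initial seed alone ties up the link for over a week and a half, which is why bulk import via shipped devices, with only incremental changes sent over the wire afterwards, is often the faster and cheaper path.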
Durability: Amazon Web Services Regions and Availability Zones
Safeguarding data in the cloud is of utmost importance, not only for data protection but to gain customer trust. AWS Regions and Availability Zones are key to its durability and availability. When data is copied to the AWS cloud, it is stored in one of the eight global AWS Regions.9 Within each Region are multiple Availability Zones, each in a distinct geographic area with data centers built on different floodplains, weather patterns, power grids, etc. Availability Zones are interconnected using fiber, so data is copied between Availability Zones over fiber rather than the Internet, offering better performance and security/privacy. Each snapshot or file is copied multiple times across the Availability Zones. Combining the Availability Zones with AWS's regular integrity checks and ability to self-heal underpins its 11 nines durability design. Customers should consider this when calculating costs per gigabyte: most organizations develop their cost estimates assuming they will make duplicate copies of everything, but with AWS this is already in place. AWS never moves customer data outside of the user-defined Region, to meet customers' regulatory and/or legal compliance requirements.
The breadth of AWS services is extensive, but three services in particular are key for DR: Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), and Amazon Elastic Block Store (Amazon EBS).
Amazon S3
Amazon S3 is a cloud-based object store that is available through Web services interfaces such as REST and SOAP. It is used as a cloud storage container for backup data and images. Organizations can write, read, and delete a virtually unlimited number of objects containing from one byte to 5 TB of data each.
Amazon S3 is similar to a traditional on-premise SAN or NAS device but, as an AWS cloud implementation, is far more agile, flexible, and geographically redundant. Amazon S3 is designed to deliver 11 nines of durability per year; this is accomplished by automatically making redundant copies in multiple Availability Zones, reducing the chance of data loss to one in 150 billion. Amazon S3 is also designed to offer 99.99% availability of objects, equal to just under one hour of yearly downtime.
Amazon EC2
Amazon EC2 provides on-demand computing power for which subscribers pay by the hour with no long-term commitment. It enables organizations to boot up new AWS virtual servers in minutes and rapidly scale up or down. Supporting Windows, Linux, FreeBSD, and OpenSolaris, Amazon EC2 can be used with all major Web and application platforms. An Amazon EC2 environment includes the operating system, services, database, and application platform stack required for a cloud-hosted application service. The virtual application stack can be started, stopped, restarted, or rebooted from a Web-based console using Web service APIs, with 99.95% availability per region using Availability Zones.
No-Fee Data Import
Some of the biggest expenses of disaster recovery are those associated with physical secondary sites. When transitioning to cloud-based DR, moving backup data can represent a large up-front cost. However, AWS has eliminated fees for data import into Amazon S3. This can represent a significant cost savings at the onset, as well as long term.
Amazon EBS
Amazon EBS provides persistent, block-level storage volumes for Amazon EC2 applications within the same Availability Zone. Amazon EBS is well suited to Amazon EC2 applications that require a database, file system, or access to raw block-level storage. Amazon S3 backups can be used to reconstitute applications in the cloud using Amazon EC2 computing power with attached Amazon EBS storage.
In addition, data copies backed up in Amazon S3 can leverage Amazon EBS to run additional computation or test/development workloads. 9 At the time of publication, AWS maintains eight regions: US East (N. Virginia), US West (N. California), US West (Oregon), GovCloud (US), EU (Ireland), Asia Pacific (Tokyo), Asia Pacific (Singapore), and South America (Sao Paulo).
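The availability figures quoted above for Amazon S3 (99.99% of objects) and Amazon EC2 (99.95% per region) translate directly into expected yearly downtime. A quick check of the arithmetic:

```python
def yearly_downtime_minutes(availability: float) -> float:
    """Expected downtime per year implied by an availability fraction."""
    return (1 - availability) * 365 * 24 * 60  # unavailable fraction of a year, in minutes

# 99.99% object availability works out to just under an hour per year,
# matching the "just under one hour of yearly downtime" figure above:
print(round(yearly_downtime_minutes(0.9999), 1))   # 52.6

# Amazon EC2's 99.95% per-region figure, for comparison:
print(round(yearly_downtime_minutes(0.9995), 1))   # 262.8
```

Each additional nine of availability cuts the expected downtime by a factor of ten, which is why the small-looking difference between 99.95% and 99.99% amounts to roughly three and a half hours per year.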
Amazon EBS is used just like any other block storage. One or more volumes (1 GB up to 1 TB) can be mounted as devices by Amazon EC2 instances. Data can be striped across multiple Amazon EBS volumes for IO and performance improvements, just like onsite storage, and data can optionally be imported or exported using physical bulk transfer instead of network connections. For DR, point-in-time snapshots of Amazon EBS volumes can be copied and maintained in Amazon S3 storage, limiting any data loss to changes made since the last recovery point. Pricing is consumption-based, determined by allocated volume size and IO requests, so costs are incurred only for what is used (Amazon S3 usage costs for snapshots are a separate fee).
An example demonstrates how simple it is to fail over to the AWS cloud: an application that is backed up to Amazon S3 can be copied over to Amazon EBS storage and attached to an Amazon EC2 instance. Just like that, the application with its current data set is restored and can operate in the cloud. The application is up and running while the necessary work of reconstituting the downed system or storage is performed in the data center, virtually eliminating downtime. Monthly fees are incurred to store the snapshot on Amazon S3, as well as for Amazon EC2 time and Amazon EBS storage while in use.
AWS Direct Connect
Enterprise data can also be backed up to the cloud via AWS Direct Connect, a dedicated network connection that links a primary customer data center or co-location environment directly to AWS. AWS Direct Connect allows the public Internet to be bypassed when connecting to AWS, which may reduce network costs, improve bandwidth throughput, and provide a more consistent network experience.
Amazon Virtual Private Cloud

Amazon Virtual Private Cloud (Amazon VPC) enables subscribers to create an encrypted virtual private network for connecting to the AWS cloud infrastructure, making the AWS cloud an extension of the corporate data center. Subscribers can provision an area of the AWS cloud that is isolated from others and define a customized virtual network with complete control: choosing private IP addresses in the range of choice, creating public or private subnets, configuring gateways and route tables, managing inbound and outbound access via access control lists, and more. Resources can be seamlessly scaled, and usage fees are based on VPN connection hours and standard transfer charges.

AWS Import/Export

To seed the cloud, large amounts of data can be imported into AWS (and exported from it) using subscribers' portable storage devices, shipped via third-party logistics, through the AWS Import/Export option. AWS transfers the data directly into Amazon S3 using its high-speed internal network, bypassing the Internet. For large data sets, this is often faster than Internet transfer and can help avoid the need to upgrade connectivity.

AWS Storage Gateway

The new AWS Storage Gateway is a service that connects an on-premises software appliance with cloud-based storage. The appliance is downloaded from the Amazon website and launched on an on-premises virtualization host. The AWS Storage Gateway enables customers to use existing applications to store data on Amazon S3 by exposing a standard iSCSI interface to the customer's on-premises application server. The customer points the application at the Gateway, which stores a primary copy locally on on-premises storage (SAN or DAS) and also stores a snapshot in the cloud on Amazon S3 as an Amazon EBS snapshot. Subsequent snapshots store only differential changes.
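The value of storing only differential changes, as the Gateway does after its first snapshot, is easy to quantify. A minimal sketch, assuming an illustrative 2% daily block change rate:

```python
# Sketch: cloud storage consumed by daily snapshots when only changed
# blocks are uploaded after the first full copy. The 2% daily change
# rate and the 500 GB volume size are assumptions for illustration.

def snapshot_storage_gb(volume_gb, daily_change_rate, days):
    """First snapshot stores the full volume; each subsequent daily
    snapshot stores only the blocks changed since the previous one."""
    full = volume_gb
    diffs = volume_gb * daily_change_rate * (days - 1)
    return full + diffs

naive = 500 * 30                                   # 30 full copies
differential = snapshot_storage_gb(500, 0.02, 30)  # 1 full + 29 diffs
print(f"30 full copies: {naive} GB; differential: {differential:.0f} GB")
```

Under these assumptions a month of daily recovery points consumes well under a fifth of what thirty full copies would, which is what keeps the ongoing Amazon S3 fee modest.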
Should a disaster occur, customers can restore their applications using Amazon EC2, pull their snapshots into Amazon EBS, and have the application up and running right away. Instead of paying for servers and infrastructure that sit idle in a disaster recovery site, customers pay only for the snapshots stored in Amazon S3. If they need to recover the application in the cloud using Amazon EC2 and Amazon EBS, those subscription-based fees are incurred only when needed, in the case of a disaster.
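This economic shift, paying for stored snapshots year-round but for compute only during an actual failover, can be made concrete with a sketch. None of the dollar figures below are AWS prices; they are assumptions chosen for illustration:

```python
# Sketch: annual cost of an always-on DR site vs. a snapshot-based
# cloud model where compute runs only during failover. All figures
# are illustrative assumptions, not quoted rates.

def traditional_dr_annual(servers, per_server_month):
    """Standby hardware and hosting billed every month, used or not."""
    return servers * per_server_month * 12

def cloud_dr_annual(snapshot_gb, per_gb_month, failover_hours,
                    instances, per_instance_hour):
    """Snapshots stored all year; instances billed only while failed over."""
    storage = snapshot_gb * per_gb_month * 12
    compute = failover_hours * instances * per_instance_hour
    return storage + compute

trad = traditional_dr_annual(servers=10, per_server_month=400)
cloud = cloud_dr_annual(snapshot_gb=1000, per_gb_month=0.10,
                        failover_hours=72, instances=10,
                        per_instance_hour=0.50)
print(f"traditional: ${trad:,.0f}/yr; cloud (one 72-hour failover): ${cloud:,.0f}/yr")
```

Even granting a full three-day failover event in the year, the cloud model's cost is dominated by snapshot storage rather than idle servers.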
Security and Compliance

Security concerns are often a top barrier for those considering cloud implementations. Organizations depend on their data and applications to run their businesses, and their level of trust that data is secure from intrusion, as well as fully protected from loss, is critical. The economies of scale that service providers enjoy can make it easier for them to provide the highest levels of physical and digital security, levels that most organizations cannot implement on their own. AWS offers some of the most advanced security features in the industry. It starts with shared responsibility between the subscriber and AWS, enabling flexibility and the levels of customer control required for certain industry-specific compliance requirements. AWS operates and manages the physical infrastructure and its security, as well as the host operating systems and the virtualization layer. Customers assume responsibility for guest operating systems and application software, including updates and security patches. Customers are also responsible for configuring the AWS-provided firewall. Subscribers can enhance security as they choose, meeting more stringent compliance requirements with additional host-based firewalls and intrusion detection features, as well as encryption and encryption key management. AWS services are built on an environment with extensive and validated security and controls, including:

Service Organization Controls 1 (SOC 1) Type 2 report 10 (formerly the SAS 70 Type II report 11), with periodic independent audits to confirm security features and controls that safeguard customer data.

ISO 27001 certification, an internationally recognized security management standard that specifies leading practices and comprehensive security controls following ISO 27001 best practice guidelines.
PCI DSS 12 Level 1 compliance, an independent validation of the platform for the secure processing, transmission, and storage of credit card data.

Relevant government agency and public sector compliance qualifications, such as an ITAR-compliant 13 environment. To support customers with FIPS 140-2 14 requirements, the Amazon VPC VPN endpoints and SSL-terminating load balancers in AWS GovCloud (US) operate using FIPS 140-2 validated hardware.

10 The audit for this report is conducted in accordance with the Statement on Standards for Attestation Engagements No. 16 (SSAE 16) and the International Standards for Assurance Engagements No. 3402 (ISAE 3402) professional standards. This dual-standard report can meet a broad range of auditing requirements for U.S. and international auditing bodies. The SOC 1 report audit attests that AWS control objectives are appropriately designed and that the individual controls defined to safeguard customer data are operating effectively.

11 Statement on Auditing Standards No. 70.

12 Payment Card Industry Data Security Standard.

13 International Traffic in Arms Regulations, required to manage data related to defense articles or services.

14 Federal Information Processing Standard publication; FIPS 140-2 defines four levels of security (1-4).
Leveraging AWS Components for DR

These components and features combine to make DR viable in AWS for both on-premises workloads and AWS-based workloads. Below are some standard use cases that demonstrate the range of AWS options.

Offsite Backup. Amazon S3, with its eleven-nines durability design, provides a good destination for backup data. On-premises data is typically transferred over the Internet and is accessible from any location. Other data movement options include AWS Direct Connect and the AWS Import/Export service. Snapshots of Amazon EBS volumes can be stored in Amazon S3, providing a point-in-time backup that can be restored on premises or in the AWS cloud, so systems running in AWS can be backed up this way as well. In addition, numerous commercial backup services use Amazon S3 as their target.

Pilot Light Service. In this approach, the most critical elements of a computing environment are already configured in AWS for fast recovery, should the need arise. For example, core database servers would replicate to Amazon EC2, and the other infrastructure components in AWS could be quickly provisioned to restore the complete system. Other business services could be pre-configured on servers that can be started up quickly. Additional Elastic IP addresses can be pre-allocated for networking, and Elastic Load Balancing can be used to distribute traffic to multiple instances to enhance performance. Less critical systems can have installation packages and configuration information stored in Amazon EBS snapshots to speed application server setup. AWS CloudFormation allows users to create and manage AWS resources, automating provisioning and updates to further automate failover procedures. It turns the entire DR process into a version-controlled script, simplifying testing, restore, and failover while introducing cost savings.
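The idea of capturing the DR process as a version-controlled, repeatable script, which is what AWS CloudFormation provides as a managed service, can be illustrated with a toy provisioner. The template format and resource names below are invented for illustration and are not CloudFormation syntax:

```python
# Toy illustration of declarative, scripted DR provisioning: resources
# declare their dependencies, and the "stack" is brought up in a safe
# order. Resource names and template format are INVENTED, not AWS APIs.

template = {
    "db-replica":    [],              # pilot light: already running
    "app-server":    ["db-replica"],
    "web-server":    ["app-server"],
    "load-balancer": ["web-server"],
}

def provision_order(resources):
    """Return a start-up order that respects declared dependencies."""
    order, done = [], set()

    def visit(name):
        if name in done:
            return
        for dep in resources[name]:  # start dependencies first
            visit(dep)
        done.add(name)
        order.append(name)

    for name in resources:
        visit(name)
    return order

print(provision_order(template))
```

Because the template is plain data, it can live in version control and be exercised repeatedly against a test environment, which is exactly what makes scripted DR both testable and auditable.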
With AWS, automated provisioning and configuration can speed recovery time.

Warm Standby. This solution extends the Pilot Light elements and preparation, further decreasing recovery time by keeping some services always running. Business-critical systems can be duplicated and always on, running on a minimal set of Amazon EC2 instances of the smallest practical size, making the systems fully functional but not scaled to production levels. They can be scaled quickly to handle production loads in a disaster, but in the meantime can be used for testing, QA, and other non-production tasks.

These are just a few examples of the many ways that AWS components and services can provide DR. Many other services are available, and AWS will work with customers to identify the most appropriate implementations.
The Bigger Truth

DR is a critical piece of any data protection plan, but its cost and complexity often hinder deployments. There are tremendous advantages to using cloud infrastructure to reduce costs and simplify DR operations. Services such as AWS help eliminate upfront capital investments, since there is no need to purchase a complete set of recovery and failover hardware for a secondary site. The on-demand, subscription-based model allows organizations to expand and collapse levels of infrastructure as needs change, matching costs to actual need instead of making a large investment that may never be used. In many cases, the cost of each replicated server can be funded from a monthly operational budget. Significant operational savings come from eliminating the need for skilled staff to procure, install, configure, and maintain a secondary site. By outsourcing to an expert such as AWS, organizations can free in-house staff to focus on more business-oriented projects instead of deploying and maintaining a DR infrastructure. In addition, a cloud-based infrastructure can have far greater durability than a home-grown one because of automated geographical distribution of assets. All an administrator needs to do is establish policies, monitor resources, and run reports from the AWS portal. It is simple to manage while offering robust and durable DR services. For many organizations, the cloud opens the door to DR, enabling a significant reduction in risk. Many organizations simply do not have access to the resources or facilities needed for a failover site; after years of worrying about being unprepared to deal with data loss, the emergence of cloud-based DR brings a sigh of relief.
Using a public cloud, DR can finally be introduced or extended to workloads that have been left unprotected. These subscription-based services make it affordable for organizations to fully protect all of their applications and data, not just the top tier.

So what are the next steps an organization should take to start or expand its DR efforts using the cloud? First, take stock of what would help the organization reduce risk and make outages easier to handle. Is the current protection plan sufficient? Can DR testing that is realistic and non-disruptive to production operations be executed? What are the risks? Data corruption? Component failure? Power outage? Are there specific natural hazards in the geographic area, such as seasonal tornadoes or flooding? Next, review compliance and security needs; levels of control required; and computing, storage, and network needs. Be sure to check on regional and industry-specific compliance requirements. Then, look at budget requirements. Can the organization afford self-maintained DR? Finally, investigate services such as AWS, and compare them on levels of security, scalability, reliability, and flexibility. Will they be easily incorporated into the current paradigm? Compare the cost of establishing a DIY standby site to one where cloud services from AWS are leveraged. How much added protection can be attained using AWS on a pay-as-you-need subscription basis versus setting up and managing a remote data center and keeping it available at all times?

While there are certainly a number of cloud-based services available, AWS offers economies of scale as well as extraordinarily high-level services. AWS has extensive investments in security and protection and, because of these features, has been selected by some of the most demanding and security-conscious customers. Organizations such as NASA, the National Renewable Energy Laboratory at the U.S. Department of Energy, and NASDAQ all leverage AWS services for DR.
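The DIY-versus-cloud comparison suggested above can be made concrete with a simple break-even sketch. Every dollar figure here is hypothetical, chosen only to show the shape of the calculation:

```python
# Sketch: months until an upfront DIY standby site becomes cheaper than
# a pay-as-you-go subscription. All figures are illustrative assumptions.

def cumulative_diy(upfront, monthly, months):
    """Total DIY spend: hardware bought up front plus running costs."""
    return upfront + monthly * months

def breakeven_month(diy_upfront, diy_monthly, cloud_monthly, horizon=120):
    """First month DIY undercuts the subscription, or None within horizon."""
    for m in range(1, horizon + 1):
        if cumulative_diy(diy_upfront, diy_monthly, m) < cloud_monthly * m:
            return m
    return None

# $200k of hardware plus $3k/month to run it vs. an $8k/month service:
print(breakeven_month(200_000, 3_000, 8_000))
```

The sketch ignores hardware refresh cycles and the risk that the DIY site is never exercised; folding those in generally pushes the break-even point further out, which is the economic argument the paper is making.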
AWS offers fine-grained control to the customer, as well as numerous building blocks for constructing the DR solution most appropriate to given recovery time objectives, recovery point objectives, and budget. Services are available on demand, and customers pay only for what they use, so costs remain low. In many ways, services such as AWS are a perfect match for DR, where significant infrastructure components are seldom needed, but when they are needed, they must be available quickly. The ability to spin up virtual machines and
storage and then collapse them back down when they're no longer required provides an insurance policy without breaking the bank. AWS offers highly scalable, reliable, secure, high-performance infrastructure resources for DR, and ESG expects that many organizations, once they have experienced these services, will begin to trust the cloud enough to expand their use even beyond DR. This utility-type design can eliminate underutilization and overprovisioning while helping organizations become more flexible in response to business challenges.