Expert Guide on Backing up Windows Server in Hyper-V
By John Savill, Microsoft MVP

About the author: John Savill is a Windows technical specialist, an 11-time Microsoft MVP, an MCITP: Enterprise Administrator for Windows Server 2008, and is ITIL certified. He is the author of the popular FAQ for Windows and a senior contributing editor to Windows IT Pro. John wrote The Complete Guide to Windows Server 2008 (Addison-Wesley), and he is currently writing his latest book, Microsoft Virtualization Secrets (Wiley).

Table of Contents
Virtualization Benefits and Dangers
The Need for Virtual Environment Protection
Intelligent Guest-Level Backup Advantages
Backup Storage Considerations
Restoration Processes
Protecting Your Protection

Synopsis

Microsoft's Hyper-V platform is quickly gaining market share on its competitors. In this paper we will examine what you need to know to protect your virtualized workloads successfully.

The new infrastructure landscape

Looking at the IT infrastructure of most organizations today, and the initiatives at the top of their priority lists, you will see a completely different approach to virtualization and management than just two years ago. Operating system provisioning that once took many weeks now happens in minutes. We see a new IT infrastructure landscape with the potential to fully leverage our IT assets and provide flexibility, availability and portability for our services, all contained in a smaller datacenter footprint. The enabler for this complete rethinking of our datacenters can be summed up in one word: virtualization.

Virtualization has driven a complete paradigm shift for every IT infrastructure. Some of the major improvements include:

1. Multiple operating systems run on a single physical host. This saves money on hardware, licensing, power, datacenter space and more.

2. Hardware presented to the guest operating systems is virtualized and abstracted from the real physical hardware, so virtual machines can easily be moved between completely different physical hosts. This flexibility is great for day-to-day operations and even better for disaster recovery scenarios.

3. A higher quality of service is attainable. Operating system instances can be provisioned in real time as needed, allowing business units and end users to self-serve through portals provided by solutions such as System Center Virtual Machine Manager.

4. Increased standardization simplifies management and adherence to regulatory and compliance requirements, while templates simplify provisioning and maintaining operating system instances.
With all its advantages, a challenge to data safety

With all of its advantages, virtualization introduces new challenges for IT departments, and it can highlight existing gaps in your data infrastructure that need closing. One of the most significant gaps is likely to be how well you're protecting your newly virtualized data and systems. Failure to do so exposes organizations to potentially huge data loss, financial loss, longer system RTOs and even criminal prosecution if regulatory requirements are not met. Is the virtual machine you created actually being backed up and replicated? Is its data being archived in a manner that meets your organization's needs? If you don't know, your business faces a huge risk.

To begin with, virtualization will likely increase the number of operating system instances you need to protect if you want greater resiliency per instance. More instances also mean more management overhead unless you automate the configuration process. Finally, you need to protect the virtualization hosts themselves.

Do I even need to back up?

Nearly every service has some kind of replication built into it today: database mirroring or log shipping, mail store replication, or multi-master replication of a directory service. Many systems also have trashcans where deleted objects go before they are permanently removed. With this in mind, one could be forgiven for assuming that replication alone is plenty good enough. The answer is no, and a big no at that! While replication can deliver high availability and trashcans can help quickly restore objects, neither will save you if your data is corrupted, and neither can rebuild a system. Backups are more important than ever to achieve complete protection.

Can't I just back up the Hyper-V box?

This may seem like a reasonable approach, requiring the least amount of work. With Hyper-V we have a management partition which is used to manage the virtual machines, perform configuration of the virtual environment and enable communication to certain types of resource, like storage and network, using standard Windows drivers. We can log in to this Windows Server management partition locally, remotely through protocols like RDP, or manage it remotely through tools such as Server Manager. Since this management partition is running Windows Server, why not just perform the backup in the management partition and tell it to back up all the virtual machines?

A note on Cluster Shared Volumes

One quick consideration when backing up Hyper-V machines is Windows Server 2008 R2 Cluster Shared Volumes (CSV). CSV provides shared storage in a cluster, enabling all nodes in the cluster to read and write to the same NTFS LUN simultaneously. There are a few special considerations for backing up virtual machines on a CSV-enabled volume, so be sure to check whether your backup software supports CSV.
If it doesn't, make sure the vendor is planning to add CSV support (it is still quite new) or remove that product from consideration.

What happens when we back up?

Every modern Windows backup application leverages the Volume Shadow Copy Service (VSS). This architecture allows application developers to ensure their applications and associated data are correctly backed up and can be restored. Application vendors provide VSS Writers which perform the steps needed to ensure that (1) the application's data on disk is in a backup-ready state and (2) writes to the data are suspended during the backup. With this, we can restore the application if needed, with no risk of inconsistent data rendering the restored environment unusable. Most major applications provide VSS Writers, in addition to those provided as part of the Windows operating system and key Microsoft operating system roles like Hyper-V.

1. To leverage these VSS Writers, a backup application acts as a VSS Requestor and asks the Volume Shadow Copy Service, which coordinates all the actions needed for the backup, to create a shadow copy of the requested volumes. A shadow copy is a point-in-time snapshot view of the volume which, once created, can be backed up to another disk or tape.

2. The Volume Shadow Copy Service enumerates all the registered VSS Writers on the system and then initiates the commit action that triggers the snapshot.

3. The VSS Writers are notified of the commit, and each writer performs actions like flushing transactions to disk and quiescing changes to ensure its data on disk is in a backup-ready state.

4. The Volume Shadow Copy Service then tells a VSS provider, which can be software-based or hardware-based, to actually create the shadow copy of the now-frozen data. The VSS provider is responsible for maintaining the shadow copy until it is deleted, typically after the backup software has finished copying it to another location.

5. Once the shadow copy is taken (which can take a maximum of 10 seconds), the Volume Shadow Copy Service thaws the system, allowing the VSS Writers to unfreeze writes to the data and resume normal operations.

6. The Volume Shadow Copy Service checks with each VSS Writer to ensure that all writes were held during shadow copy creation. If they were not held, the shadow copy is deemed inconsistent and deleted. If shadow copy creation is successful, its location is given to the VSS Requestor, i.e. the backup application, which can now do whatever it wants with it.

This whole process can create a shadow copy in seconds, which can then be backed up to various media over a far longer period of time without affecting production availability and performance.

With Hyper-V, a VSS backup is actually extended beyond where we take the backup. Hyper-V has integration services installed in the guest virtual machines which allow rich communication between the Hyper-V management partition and the guest operating systems, providing a smooth mouse/keyboard experience, heartbeat, time synchronization, shutdown execution and snapshot integration. That's right: we can perform a VSS backup on the Hyper-V host, and the host will notify each virtual machine that has the integration services installed that a VSS shadow copy is being taken.
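The freeze/snapshot/thaw/verify sequence described above can be illustrated with a toy simulation. This is only a sketch of the coordination logic; the class and function names are invented for illustration and bear no relation to the real Win32 VSS API.

```python
# Toy simulation of the VSS backup sequence. All names are illustrative;
# a real backup application uses the Win32 VSS COM interfaces.

class VssWriter:
    """Stands in for an application's VSS writer (e.g. SQL Server, Exchange)."""
    def __init__(self, name):
        self.name = name
        self.writes_held = True  # set False to simulate a writer that failed to hold writes

    def on_freeze(self, log):
        # Step 3: flush transactions, quiesce changes, pause writes
        log.append(f"{self.name}: flushed transactions, writes paused")

    def on_thaw(self, log):
        # Step 5: resume normal operations
        log.append(f"{self.name}: writes resumed")


def create_shadow_copy(writers, log):
    """Plays the role of the Volume Shadow Copy Service coordinating a snapshot."""
    for w in writers:                               # step 3: freeze every writer
        w.on_freeze(log)
    log.append("provider: shadow copy created")     # step 4: provider snapshots frozen data
    for w in writers:                               # step 5: thaw
        w.on_thaw(log)
    # Step 6: verify all writes were held, else the copy is inconsistent
    if all(w.writes_held for w in writers):
        log.append("shadow copy consistent -> handed to requestor")
        return True
    log.append("shadow copy inconsistent -> deleted")
    return False


events = []
ok = create_shadow_copy([VssWriter("SQL Server"), VssWriter("Hyper-V")], events)
```

Note the ordering guarantee the simulation preserves: every writer freezes before the provider snapshots, and the consistency check happens only after the thaw.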
The VSS Writers inside the guest operating systems are then called to ensure that data within the guests is in a backup-ready state and that writes are paused while the backup is taken. A backup taken at the Hyper-V host level is therefore integrity-assured, provided the integration services are installed in the guest and the guest operating system supports VSS. This means it may be entirely possible to back up at the Hyper-V host level, because we have the VSS notifications to the guests.

But ultimately we don't care about the backup; we care about the restore, and that is where backing up at the Hyper-V host level may not be enough. When I back up the VM at the host, I know I can restore that virtual machine and it will be functional, but that is all I am restoring: the entire virtual machine.

You may actually be able to go a little further. When evaluating a Hyper-V backup solution, it is good to choose one that allows item-level restore from your Virtual Hard Disk (VHD) backups. Item-level restore means I have backed up the entire VHD, but when I perform a restore action I can look inside the VHD and, rather than restore all of it, restore only selected files from the contained file system. This has become much simpler with Windows Server 2008 R2, since mounting a VHD is now part of the operating system. Item-level restore from a VHD would be very useful if you were backing up a file server, for example, and just wanted to restore a single file.

Now imagine the virtual machine was running SQL Server, SharePoint, Exchange or any workload that allows granular levels of restoration. Opening up the VHD and trying to perform an item-level restore will not work for SQL or Exchange data; the restore would not understand the data. Instead we need backup agents that understand the data, so it can be restored in ways that make the most sense for the data and the service being protected.
Great examples are restoring a table from a database, restoring a mailbox or even a single mail item from a mail database, and restoring a document from a SharePoint library. None of these restore types would be possible if we only performed the backup at the Hyper-V host level; we need backup agents within the guest operating systems for these types of workloads to enable the greatest functionality and the finest granularity of restore.
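The trade-off described above, between host-level backup and app-aware in-guest agents, can be summarized in a small lookup. The workload names and granularity labels here are examples chosen for illustration, not a real product's API.

```python
# Illustrative mapping of restore granularity per approach, following the
# discussion above. Workload names and option sets are example data only.

# Host-level VHD backup: whole VM, or raw files via mounting the VHD
HOST_LEVEL = {"entire VM", "single file"}

# App-aware agents inside the guest understand the data they protect
APP_AWARE_AGENT = {
    "SQL Server": {"entire VM", "database", "table"},
    "Exchange":   {"entire VM", "mailbox", "mail item"},
    "SharePoint": {"entire VM", "site", "document"},
}

def restore_options(workload, agent_in_guest):
    """Return the restore granularities available for a protected VM."""
    if agent_in_guest and workload in APP_AWARE_AGENT:
        return APP_AWARE_AGENT[workload]
    # Without an in-guest agent, item-level restore of application data
    # (a table, a mailbox) is not possible: the backup does not understand it.
    return HOST_LEVEL
```

For example, an Exchange VM backed up only at the host level offers whole-VM or raw-file restore, while the same VM with an in-guest agent offers mailbox and mail-item restore.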

The need for agents in the guest, and the power that brings

While backups can be taken at the Hyper-V host level, we have seen that we lose granularity in our ability to restore. That is ultimately why we perform backups: so that when things go wrong, and they will, we don't lose state and information. This does not mean we need to install backup agents in all our guests. For guests running fairly basic workloads, where we only ever need to restore the entire VM or (on rare occasions) a file, performing the backup at the Hyper-V host level is fine. What you give up is granular restores, full backups of I/O-intensive applications, and the ability to restore individual files without restoring an entire virtual machine.

Installing a backup agent inside the guest operating system gives the backup software intelligence about exactly what is running inside the virtual machine. This gives the backup administrator full flexibility over what needs to be protected, how to protect it and, if a disaster happens, how to restore only what is really needed rather than the entire virtual machine.

What to protect and how to protect it: what do those two phrases mean? Don't we just create a VSS shadow copy and back it up somewhere? With an agent running inside the virtual machine, we can be very particular about what we back up rather than backing up everything. We may decide not to back up the operating system volumes, or to back up only particular databases on a SQL Server instead of the entire data disk. Additionally, through its knowledge of our services, the backup solution may be able to go the extra mile: besides regularly backing up the data, it may harvest transaction logs at a more frequent interval for services that use them, like databases and mail systems, to minimize data loss.
Configuring how often we back up the data may vary for each protected workload, and agents inside guests give us that flexibility. This is an important consideration when picking your backup solution: what types of data can it back up, and what are its capabilities for restoration? Often the latter is the most important aspect. If a user just deleted a mail message, I want to restore that one mail item, not the entire 8 GB mailbox. Is that possible?

Don't jump to your backup solution for every type of restoration. As a best practice, many applications have their own trashcan where deleted items sit for a period of time before they are removed. Items can be restored from this application trashcan very quickly, so always look at the capabilities of your systems before instantly reaching for your backup/recovery solution. Where possible, use a hybrid approach; these trashcan capabilities are no replacement for a backup solution that gives full protection from corruption and disasters. Whenever we want the most flexibility, the most control and the most restore granularity, we should be thinking agent inside the guest.

Where do we back it up to, and what are we really backing up each time?

I remember my first job when I left school 18 years ago. I was a VAX/VMS systems administrator, and one of my tasks was performing the nightly backup. Backing up a system was a multi-step process: I had to remember to put the square-shaped tape in the machine before I left the office and, when I got in the next morning, switch it for the second tape because the backup was larger than a single tape. Two hours later, I would get both tapes, put them in an envelope and leave them at reception for the offsite backup service to collect for secure storage.

In the event we needed to perform a restoration, I would have to contact the archive company, get the tape back (which might not be until the next day), then hope the restore would actually work (which it often didn't, due to "solar flares").

Times have changed

The cost of disk drives has come down remarkably in recent years while their capacity has greatly increased, which makes disk a viable storage medium for backups instead of tape. Using disk as the backup target is faster than tape, means the data is readily available for restorations, and removes many of the complexities commonly associated with tape drives and tape media.

Using disk for backup storage also allows us to store more than one backup of a data set. We can keep many backups at different points in time, which is great when we come to restore: we can choose which point in time to restore from rather than just the last backup. If the last backup were corrupted, additional restore points let you step back in time to find the last good, corruption-free backup.

This brings us nicely to another advantage of disk as the backup medium. With traditional tape backup, it was common to perform a full backup at the start of the week containing all the data, then each day perform an incremental backup containing only the files that had changed that day. The incremental was much faster and required less storage than a full backup, but if you had a loss on Friday you would have to restore Monday's tape, then Tuesday's, then Wednesday's, then Thursday's: a huge waste of time and a huge headache. Backing up efficiently matters even more with today's architectures, since the backup data is now sent over the network to the backup server, which connects to the storage.
We can actually go one better than an incremental backup that copies entire changed files. That approach would not work well today, when files can be gigabytes in size but may have only a few kilobytes of change. Look for a backup solution that uses block-level backups, which reduce the amount of disk space needed to store the backups and, more importantly, cut down on the network bandwidth used during the backup. The way block-level backups work varies, but is essentially the following:

1. The first time a new backup source is protected, a full backup of all the data is performed and stored on the backup server. This is the only time the entire data set is copied over the network.

2. For subsequent backups, only the blocks of the protected data that have changed are copied to the backup server, so the data copied over the network directly relates to the amount of change. These backups should occur at whatever interval meets your targets for the maximum acceptable data loss.

3. If your organization has a Recovery Point Objective (RPO) of 4 hours, you need to perform these backups at least every 4 hours. This frequency also applies when you come to restore: if you back up every 4 hours, you can select restore points in 4-hour increments.

4. Many backup solutions give you the option of merging point-in-time views older than x days or y weeks, since it is unlikely you would need 4-hour restore granularity from 3 weeks ago.

5. The backup server looks at the blocks in the current view of the data that are being replaced, moves them to a previous point-in-time slice so they remain available for restorations to that point in time, then writes the latest blocks so a current view is available.
Organizations may still want tape or Blu-ray disc copies for offsite storage or very long-term archiving, so support for tape may be a factor in your backup selection. Often, solutions allow a point-in-time view of the data to be exported to tape.
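The block-level scheme described above can be sketched in a few lines: hash each block, send only blocks the server has not seen, and record each backup as a sequence of block references so any point in time can be reconstructed. This is a minimal toy model (tiny 4-byte blocks, in-memory storage), not a real backup engine.

```python
# Minimal sketch of block-level backup: only changed blocks cross the
# "network", and every backup yields a restorable point-in-time view.

import hashlib

BLOCK = 4  # tiny block size for illustration; real solutions use KB-sized blocks

def blocks(data):
    return [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]

class BackupServer:
    def __init__(self):
        self.store = {}            # block hash -> block contents (deduplicated)
        self.restore_points = []   # each backup = ordered list of block hashes

    def backup(self, data):
        sent = 0
        point = []
        for b in blocks(data):
            h = hashlib.sha256(b).hexdigest()
            if h not in self.store:      # only new/changed blocks are transferred
                self.store[h] = b
                sent += 1
            point.append(h)
        self.restore_points.append(point)
        return sent                      # number of blocks that crossed the network

    def restore(self, point_index=-1):
        """Reassemble the data exactly as it was at the chosen restore point."""
        return b"".join(self.store[h] for h in self.restore_points[point_index])

srv = BackupServer()
first = srv.backup(b"AAAABBBBCCCC")    # initial full backup: all 3 blocks sent
second = srv.backup(b"AAAAXXXXCCCC")   # one block changed: only 1 block sent
```

Note that restoring an older point needs no chain of incrementals to be replayed: each restore point is simply a list of block references, so any view is reassembled directly.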

For offsite backup needs such as disaster recovery, it is very common to run an instance of the backup solution at the disaster recovery location, with its own disks, protecting the primary backup instance and its data at the main datacenter. The DR backup instance may protect everything from the primary backup instance or only the most important data and servers. In the event of a disaster, restores at the backup site are just as fast as at the primary.

The final piece: restoration!

We have already discussed the granularity of restoration possible with a top-grade backup and recovery solution: being able to restore not only from multiple points in time but also only the data we want, instead of entire containers of data. There are other restoration considerations too.

Disasters and total server losses bring us to another important capability: performing bare-metal recoveries of our servers. If a server has to be completely restored from backup, or has to be replaced, we need to be able to restore the latest complete backup of the server and then apply the latest protected data from it. We may want to perform this restoration by booting the physical box over the network using PXE boot, or by booting from a CD or USB key to initiate the restore. Consider what works best for your organization and make sure you choose a solution that supports it.

Complete server loss, or even worse site loss, brings up another interesting capability some backup solutions offer: the ability to run in a virtual standby mode. This can be very useful when a physical server-hosted OS instance (not a virtual machine) fails and you don't have spare physical hardware available. Some solutions allow you to restore the backup of a physical server to a virtual machine, automatically taking care of hardware differences, so your server can be back up and running even without physical boxes available.
This can be critical when you have aggressive Recovery Time Objectives (RTOs) to meet.

Remember when I recounted my early days as an IT admin, hoping after getting my tapes back that the restore would actually work? That is not acceptable. The scenario of a failed restore can be avoided with two actions:

1. Have a solid restore process in place and test it routinely. With any change in hardware, software or personnel, run the restore test again to ensure it is still fully functional.

2. Make sure your backups' integrity is assured. The best way is for the backup solution to perform regular integrity checks when the backup data is received, either using integrity-checking capabilities provided by the application, such as Exchange Eseutil, or, for file system protection, through actions like Chkdsk. By performing these integrity checks on the backup system, we are assured our protected data will be usable, and we are not burdening the source systems with the workload of performing the integrity validation. We get the best of both worlds.
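The second action above, verifying backups as they are received, can be illustrated with a simple checksum comparison. This stands in for the application-aware checks the paper mentions (Eseutil, Chkdsk); the function name and use of SHA-256 are illustrative choices, not a specific product's mechanism.

```python
# Sketch of receive-time integrity checking on the backup server:
# reject any backup whose contents do not match their recorded checksum.
# SHA-256 here stands in for app-aware checks like Eseutil or Chkdsk.

import hashlib

def verify_backup(received: bytes, expected_sha256: str) -> bool:
    """Return True only if the received backup data matches its checksum."""
    return hashlib.sha256(received).hexdigest() == expected_sha256

payload = b"backup-data"
checksum = hashlib.sha256(payload).hexdigest()   # recorded at the source
```

Because the check runs on the backup server rather than the protected system, the source carries none of the validation workload, which is exactly the "best of both worlds" point made above.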

Who watches the watchmen?

We know how important our data and systems are; we have invested great resources and effort to get a comprehensive and functional backup solution in place. But what is protecting that backup system from corruption or loss? The use of a second backup solution at the DR site to protect the primary backup solution at the datacenter has already been discussed; however, using the DR backup solution for normal day-to-day restorations in the main datacenter would not be desirable, as all the data would have to flow over the WAN between sites. Instead, understand the dependencies of your backup solution. For example, if the backup solution uses SQL Server to store its configuration and metadata, and that SQL Server failed, your backup solution would be useless; mitigate this by using a SQL instance that is part of a highly available SQL cluster. Whatever the dependencies are, try to mitigate any single point of failure.

Final thoughts

Virtualization is a fantastic revolution in the way we look at IT, enabling completely new ways to manage, provision and think about our infrastructure. But virtualization does not simplify our backup approach; as we have seen in this paper, there are actually more considerations in protecting our virtual environments and ensuring no loss of capability or granularity. By investigating the backup solutions available and choosing the right one, we can make backup, and more importantly restore, an intuitive and functional part of our infrastructure.