Live Migration across data centers and disaster tolerant virtualization architecture with HP StorageWorks Cluster Extension and Microsoft Hyper-V

Table of contents

Executive summary
Applicability and definitions
Target audience
Introduction to Cluster Extension (CLX)
Server virtualization
  Benefits of virtualization
Introduction to Hyper-V
Introduction to Hyper-V live migration
Enabling Live Migration across data center sites: the CLX value add
  Multi-site Live Migration benefits
HP-supported configuration/hardware for Windows Server 2008 and above
  Server support
  Storage support
Hyper-V implementation considerations
  Disk performance optimization
  Storage options in Hyper-V
    Virtual Hard Disk (VHD) choices
    Pass-through disk
    Cluster Shared Volumes (CSV)
  Hyper-V integration services
  Memory usage maximization
  Network configurations
Live Migration implementation considerations
  Live Migration: a managed operation
  Virtual machine and LUN mapping considerations
  Distance considerations
  Storage replication groups per VM considerations
CLX and Hyper-V scenarios
  Scenario A: Disaster recovery and high availability for virtual machine
  Scenario B: Disaster recovery and high availability for virtual machine and applications hosted by virtual machine
  Scenario C: Failover cluster between virtual machines
    Cluster between virtual machines hosted on the same physical server
    Cluster between virtual machines hosted on separate physical servers
Step-by-step guide to deploy CLX solution
  Deploying Windows Server 2008 Hyper-V
  Installing Hyper-V role
  Creating geographically-dispersed MNS cluster
  Configuring network for virtual machine
  Provisioning storage for a virtual machine
  Creating and configuring virtual machine
  Installing Cluster Extension software
  Configuring Cluster Extension software
  Making a virtual machine highly available
  Creating a CLX resource
  Set the dependency
  Test failover or quick migration
Appendix A: Bill of materials
References
Executive summary

Server virtualization, also known as hardware virtualization, is a hot topic in the information technology (IT) world because of its potential for significant economic benefits. Server virtualization enables multiple operating systems to run on a single physical machine as virtual machines (VMs). With server virtualization, you can consolidate workloads from multiple underutilized servers onto a smaller number of machines. Fewer physical machines can reduce costs through lower hardware, energy, and management overhead, and help create a more dynamic IT infrastructure.

Windows Server 2008 Hyper-V, the next-generation hypervisor-based server virtualization technology, is available as an integral feature of Windows Server 2008 and enables you to implement server virtualization with ease. One of the key use cases of Hyper-V is achieving business continuity and disaster recovery through complete integration with Microsoft Failover Cluster. The natural extension of this ability is to complement it with HP products such as HP StorageWorks Cluster Extension and HP StorageWorks Continuous Access to achieve a comprehensive and robust multi-site disaster recovery and high availability solution. This also allows you to use Hyper-V R2 in combination with Cluster Extension to live migrate VMs across data centers, between servers and storage systems, without significantly affecting user access.

This paper briefly covers the key features, functionality, best practices, and use cases of Hyper-V and Cluster Extension (CLX). It also describes the various scenarios in which Hyper-V and CLX complement each other to achieve the highest level of disaster tolerance, and concludes with step-by-step details for creating a CLX solution in a Hyper-V environment from scratch. References to relevant technical details are provided throughout.
Applicability and definitions

This document applies to implementing both HP StorageWorks Cluster Extension EVA and HP StorageWorks Cluster Extension XP on the Microsoft Hyper-V platform. The following table lists the acronyms used in this document and their definitions:

CLX         Cluster Extension
XP CLX      HP StorageWorks XP Cluster Extension
EVA CLX     HP StorageWorks EVA Cluster Extension
MNS         Majority Node Set
MSCS        Microsoft Cluster Service
OS          Operating System
HP Storage  XP and EVA
SCVMM       Microsoft Systems Center Virtual Machine Manager
Hyper-V     Virtualization technology from Microsoft
Host OS     The physical server on which Hyper-V is installed
Guest OS    Virtual Machine created on Host OS
Figure 1: Hyper-V terminology (virtual machine; guest OS/guest partition/child partition; host OS/host partition; Hyper-V server)

Target audience

This document is for anyone who plans to implement Cluster Extension on a multi-site or stretched cluster with an array-based HP Continuous Access solution in a Hyper-V environment. The document describes Hyper-V configuration on HP ProLiant servers, explains how to create a virtual machine, details the method of adding a virtual machine to a Failover Cluster, and finally presents the CLX-supported scenarios with an example. To learn more about Cluster Extension, see the documentation available on the Cluster Extension website (http://www.hp.com/go/clxxp), using the Technical documentation link under Support.

Introduction to Cluster Extension (CLX)

HP StorageWorks Cluster Extension (CLX) software offers protection against system downtime from faults, failures, and disasters. It enables seamless integration of the remote mirroring capabilities of HP StorageWorks Continuous Access with the high-availability capabilities provided by clustering software in Windows. This extended-distance solution allows for more robust disaster recovery topologies as well as automatic failover, failback, and redirection of mirrored pairs for a truly disaster tolerant solution. CLX provides efficiency that preserves operations and delivers investment protection. The CLX solution monitors and recovers disk-pair synchronization at an application or VM level and offloads data replication tasks from the host. CLX automates the time-consuming, labor-intensive processes required to verify the status of the storage as well as the server cluster, allowing the correct failover and failback decisions to be made to reduce downtime. HP Cluster Extension offers protection against application downtime from faults, failures, and disasters by extending a local cluster between data centers over large geographical distances.
To summarize, the key features of CLX are as follows:

Protection against transaction data loss: The application cluster is extended over two sites and the storage is replicated to the second site using array-based replication. The data therefore exists at both sites with virtually no difference (depending on the replication method) between the data stored at site A and site B, providing a no-single-point-of-failure solution that increases the availability of company and customer data.

Fully automatic failover and failback: Automatic failover and failback reduces the complexity involved in a disaster recovery situation. It protects against the risk of downtime, whether planned or unplanned.

No server reboot during failover: Disks on the servers at both the primary and secondary sites are recognized during the initial system boot in a CLX environment. Therefore, LUN presentation and LUN mapping changes are not necessary during failover or failback, for a truly hands-free disaster tolerant solution.

No single point of failure: An identical configuration of the established SAN infrastructure redundancies is implemented at site B, providing complete redundancy.

MSCS Majority Node Set integration: CLX Windows uses the Microsoft Majority Node Set (MNS) and Microsoft File Share Witness (FSW) quorum features available in Microsoft Windows Server. These quorum models provide protection against split-brain situations without the single point of failure of a traditional quorum disk. Split-brain syndrome occurs when the servers at one site of the stretched cluster lose all connection with the servers at the other site, and the servers at each site form independent clusters. A serious consequence of split-brain is corruption of business data, because the data is no longer consistent.

EVA array family support: The entire EVA array family is supported on either end of the disaster tolerant configuration.
Whether repurposing legacy units or simply adding newer-generation units to an existing CLX EVA configuration, both sites are covered. See the Software Support section for configuration requirements.

XP array support: The entire range of XP arrays, including the XP24000/20000 and XP12000 series, is supported on either end of the disaster tolerant configuration. For support details, refer to the SPOCK website.
One of the failover use cases of CLX is described in the figures below.

Before site failure: Applications or VMs are hosted on the primary site, and the business-critical data is replicated to the recovery site for disaster recovery.

Figure 2: Before site failure (arbitrator node with quorum; Apps A and B running on the primary site's Microsoft Failover Cluster nodes; Continuous Access replicating to the recovery site)
After site failure: CLX automates the application or VM storage failover and the change in replication direction in conjunction with the Microsoft clustering solution, achieving business continuity and high availability.

Figure 3: After site failure (Apps A and B failed over to the recovery site; the storage failover and replication-direction change automated by CLX)
Server virtualization

In the computing realm, the term virtualization refers to presenting a single physical resource as many individual logical resources (platform virtualization), as well as making many physical resources appear to function as a single logical unit (resource virtualization). A virtualized environment may include servers and storage units, network connectivity and appliances, virtualization software, management software, and user applications. A virtual server, or VM, is an instance of an operating system platform running on a given configuration of server hardware, centrally managed by a virtual machine manager, or hypervisor, and consolidated management tools.

Figure 4: Server virtualization (multiple virtual machines on a virtualized server, managed by a virtualization manager)

Benefits of virtualization

A virtualized infrastructure can benefit companies and organizations of all sizes. Virtualization greatly simplifies a physical IT infrastructure to provide greater centralized management of your technology assets and better flexibility in the allocation of your computing resources. This enables your business to focus resources when and where they are needed most, without the limitations imposed by the traditional one-computer-per-box model. Virtualization enables you to create VMs that share hardware resources and transparently function as individual entities on the network. Consolidating servers as VMs on a small number of physical computers can save money on hardware costs and make centralized server management easier. Server virtualization also makes backup and disaster recovery simpler and faster, providing a high level of business continuity. In addition, virtual environments are ideal for testing new operating systems, service packs, applications, and configurations before rolling them out on a production network.
Introduction to Hyper-V

Windows Server 2008 Hyper-V, the next-generation hypervisor-based server virtualization technology, is available as an integral feature of Windows Server 2008 and enables you to implement server virtualization with ease. Hyper-V allows you to make the best use of your server hardware investments by consolidating multiple server roles as separate virtual machines (VMs) running on a single physical machine. With Hyper-V, you can also efficiently run multiple different operating systems (Windows, Linux, and others) in parallel on a single server, and fully leverage the power of x64 computing. Hyper-V offers great advantages in scenarios such as server consolidation, business continuity and disaster recovery (through complete integration with Microsoft Failover Cluster), and testing and deployment (through its inherent ease of use and scaling capability). For more details on Hyper-V technology, refer to the Microsoft Hyper-V documentation.

Introduction to Hyper-V live migration

Hyper-V Live Migration is integrated with Windows Server 2008 R2 Hyper-V and Microsoft Hyper-V Server 2008 R2. This feature allows you to move running VMs from one Hyper-V physical host to another without any disruption of service or perceived downtime. Moving running VMs without downtime provides better agility in a data center in terms of performance, scaling, and efficient consolidation without affecting users. It also reduces cost and increases productivity.

Windows Server 2008 Hyper-V and Windows Server 2008 R2 Hyper-V also offer a feature called Quick Migration. Live Migration and Quick Migration both move running VMs from one Hyper-V physical computer to another. The difference is that Quick Migration saves, moves, and restores VMs, which results in some downtime, while Live Migration uses a different mechanism for moving the running VM to the new physical computer. The Live Migration steps are summarized as follows:

1. When a Live Migration operation is initiated, all VM memory pages are transferred from the source Hyper-V physical computer to the destination Hyper-V physical computer. The VM is pre-staged at the destination Hyper-V physical computer.
2. Clients continue to access the original VM, which results in memory being modified. Hyper-V tracks the changed data and re-copies the incremental changes to the destination Hyper-V physical computer. Subsequent passes get faster as the data set becomes smaller.
3. When the data set is very small, Hyper-V pauses the VM and suspends operations completely at the source Hyper-V physical computer. Hyper-V moves the VM to the destination Hyper-V physical computer, and the VM resumes operations at the destination host. This window is very small and fits within the TCP/IP timeout window, so clients accessing the VM perceive the migration as live.
4. The VM is online on the destination Hyper-V physical computer and clients are redirected to the new host.

This makes live migration of VMs preferable, as users have uninterrupted access to the migrating VM. Because a live migration completes in less time than the TCP timeout for the migrating VM, users experience no outage during steps 3 and 4 of the migration. The TCP timeout is a time window; if the operations are performed within it, clients do not need to reconnect to the VM (and therefore to the application hosted within the VM). Achieving steps 3 and 4 within the TCP timeout window is critical for a live migration to be successful in the sense of uninterrupted access to the VM.
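The iterative pre-copy behavior described in steps 1 through 3 can be sketched as follows. This is an illustrative model only, not Hyper-V's actual implementation; the page count, dirty rate, and stop-and-copy threshold are assumed values chosen for demonstration.

```python
def live_migrate(memory_pages, dirty_rate=0.1, stop_copy_threshold=8):
    """Toy model of iterative pre-copy live migration.

    memory_pages: total pages in the VM's memory (pass 1 copies all of them).
    dirty_rate: assumed fraction of copied pages re-dirtied per pass while
                clients keep writing to the still-running VM.
    stop_copy_threshold: when the remaining dirty set is this small, the VM
                is paused and the final pages are moved (stop-and-copy).
    """
    to_copy = memory_pages  # pass 1: transfer the entire memory image
    passes = 0
    while to_copy > stop_copy_threshold:
        passes += 1
        # Pages modified during this pass must be re-sent next pass;
        # each pass the data set shrinks, so passes get faster.
        to_copy = int(to_copy * dirty_rate)
    # Final step: the VM is paused, the small remaining set is moved, and
    # the VM resumes on the destination. This final window must complete
    # within the TCP timeout for clients to perceive the migration as live.
    return passes, to_copy

passes, final_pages = live_migrate(memory_pages=262144)  # 1 GiB of 4 KiB pages
```

With these assumed numbers the dirty set shrinks by roughly an order of magnitude per pass, which is why only the last, very small transfer requires pausing the VM.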
Enabling Live Migration across data center sites: the CLX value add

The Hyper-V Live Migration process as explained above does not involve any storage-related step, making it mandatory to have shared storage access from all hosts. This means that VM data is stored on a SAN storage LUN that is provisioned to all physical hosts in the cluster. This works perfectly well in a single data center cluster, but what about a solution that spans sites? This is the usual situation in multi-site disaster recovery solutions, where a cluster is stretched across sites and each site has a dedicated storage array. The data is made available at both sites using storage array-based data replication solutions such as Continuous Access.

This is where CLX adds great value. As explained in the introduction, CLX seamlessly integrates with Failover Clustering and enables storage-level failovers as part of cluster resource movement across sites, thereby extending high availability and business continuity to the multi-site disaster recovery solution. However, to enable Hyper-V Live Migration across sites, CLX had to fulfill additional requirements:

1. Develop a mechanism in CLX to be aware of the Hyper-V Live Migration process while continuing to be seamlessly integrated into the Failover Clustering solution
2. Perform storage failover as part of the VM Live Migration process
3. Achieve the storage failover-related operations within the TCP timeout window without impacting the VM Live Migration process
4. Manage storage-layer conditions unfavorable for a VM live migration

CLX was re-architected and developed to address these requirements. Now CLX can be used to live migrate a VM from one site to the other, across the hosts within the stretched Microsoft Failover Cluster and across the replicated arrays at the same time, without any perceived VM downtime.
Besides this, there is absolutely no change from previous versions in how CLX is installed, integrated into the cluster, and configured. In summary, CLX enables Hyper-V Live Migration across sites, servers, and storage in a completely transparent manner, without any need to learn new products or practices. Together, CLX and Live Migration enhance the fundamental benefits of Hyper-V live migration, as explained in the section below.

Multi-site Live Migration benefits

The Live Migration integration of CLX into Microsoft Failover Clustering provides much greater flexibility and agility for your computing environment and enhances the benefits derived from server virtualization. Some of the key benefits are:

1. Zero-downtime maintenance: The ability to move a VM from one server to another without any downtime removes one of the biggest headaches for an IT administrator: planning a maintenance (downtime) window. With the CLX Live Migration solution, updates to firmware, HBAs, servers, and so on can be performed for one or more servers within a data center without having to worry about user disruption. It is now possible to do virtually any maintenance work in and around a data center during regular working hours without impacting end users.
2. Zero-downtime load balancing: CLX Live Migration can help enhance the performance of a given solution through zero-downtime load balancing. A VM can be migrated to another site, server, or storage system depending on resource utilization and availability. For example, a VM can be migrated to a storage system in the remote data center site depending on IOPS, cache utilization, response time, power consumption, and the like.

3. Follow-the-sun data center access: In a solution where the users of a given application are spread across the globe, working in different time zones, productivity is greatly enhanced if the application is closer to its users. This is also known as follow-the-sun/moon computing. Hyper-V Live Migration and CLX take a first step toward making this possible by providing the ability to move VMs and applications within and across data centers so that they are closest to users for faster, more reliable access.

4. No additional learning cost: Using CLX and Hyper-V Live Migration, all operations, such as failover/failback across sites, servers, and storage, live migration, quick migration, and VM and CLX configuration, can be performed through well-known management software: the Microsoft Failover Cluster GUI or Microsoft SCVMM. There is no need to learn many different tools and products and their respective features and limitations.

HP-supported configuration/hardware for Windows Server 2008 and above

This section covers the HP hardware and related components for Windows Server 2008 Hyper-V. As a prerequisite, it is important to ensure that hardware components are officially supported by HP. Although this document lists supported servers and StorageWorks arrays, as a best practice the support streams should be reviewed for the latest information. Support streams list support for various other solution components such as Host Bus Adapters (HBAs), Network Interface Cards (NICs), and so on.
The various available resources are listed in the References section at the end of this document. Alternatively, contact your local HP account or service representative for definitive information about supported HP configurations.

Server support

Refer to the HP white paper listed in the References section to identify the supported HP servers for Hyper-V. It also covers various other supported components such as HBAs, NICs, and storage controllers, as well as the steps to enable hardware-assisted virtualization for Hyper-V.

Storage support

Refer to the References section for a detailed list of HP storage products supported with Windows Hyper-V.
Hyper-V implementation considerations

Disk performance optimization

Hyper-V offers two different kinds of virtual disk controllers: IDE and SCSI. Both have distinct advantages and disadvantages. IDE controllers are the preferred choice when compatibility matters most, because they support the largest range of guest operating systems. However, SCSI controllers provide a virtual SCSI bus that can process multiple transactions simultaneously. If the workload is disk intensive, consider using only virtual SCSI controllers if the guest OS supports that configuration. If this is not possible, add additional SCSI-connected VHDs. It is most effective to place SCSI-connected VHDs on separate physical spindles or arrays on the host server.

Storage options in Hyper-V

Virtual Hard Disk (VHD) choices

There are three types of VHD files. The performance characteristics and tradeoffs of each VHD type are discussed below.

Dynamically expanding VHD: Space for the VHD is allocated on demand. The blocks in the disk start as zeroed blocks but are not backed by any actual space in the file. Reads from such blocks return a block of zeros. When a block is first written to, the virtualization stack must allocate space within the VHD file for the block and then update the metadata. This increases the number of disk I/Os needed for the write and increases CPU usage. Reads and writes to existing blocks incur both disk access and CPU overhead when they access the block mapping in the metadata.

Fixed-size VHD: Space for the VHD is allocated when the VHD file is created. This type of VHD is less apt to fragment; fragmentation reduces I/O throughput by splitting a single I/O into multiple I/Os. A fixed-size VHD has the lowest CPU overhead of the three VHD types because reads and writes do not need to look up the block mapping.

Differencing VHD: The VHD points to a parent VHD file.
Any write to a block never written before causes space to be allocated in the VHD file, as with a dynamically expanding VHD. Reads are serviced from the VHD file if the block has been written to; otherwise, they are serviced from the parent VHD file. In both cases, the metadata is read to determine the mapping of the block. Reads and writes to this VHD type can consume more CPU resources and result in more I/Os than a fixed-size VHD.
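The allocate-on-first-write behavior of a dynamically expanding VHD can be sketched with a toy block allocation table. This is a conceptual model, not the real VHD on-disk format; the class, block size, and method names are assumptions chosen for illustration.

```python
BLOCK_SIZE = 4096  # illustrative block size for the sketch

class DynamicVHD:
    """Toy model of a dynamically expanding VHD's block allocation."""

    def __init__(self):
        # Block allocation table: block index -> backing data.
        self.bat = {}

    def read(self, block):
        # Unallocated blocks are not backed by any space in the file;
        # reads from them simply return a block of zeros.
        return self.bat.get(block, bytes(BLOCK_SIZE))

    def write(self, block, data):
        # The first write to a block allocates space and updates the
        # metadata, costing extra I/O and CPU versus a fixed-size VHD,
        # where the block mapping never needs to be looked up.
        self.bat[block] = data

    def allocated_blocks(self):
        return len(self.bat)

vhd = DynamicVHD()
zeros = vhd.read(7)              # zeroed block, no space consumed yet
vhd.write(7, b"x" * BLOCK_SIZE)  # first write allocates the block
```

The dictionary lookup on every read and write mirrors the block-mapping overhead described above, which a fixed-size VHD avoids entirely.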
Pass-through disk

One of the storage configuration options for Hyper-V child partitions is the ability to use pass-through disks. While the previous options all refer to VHD files, which are files on the file system of the parent partition, a pass-through disk routes a physical volume from the parent to the child partition. The volume must be configured offline on the parent partition for this operation. This simplifies the I/O path and, by design, pass-through disk overhead is lower than VHD overhead. However, Microsoft has invested in enhancing fixed-size VHD performance, which has significantly reduced the performance differential. Therefore, the more relevant reason to choose one over the other is the size of the data sets: if you need to store very large data sets, a pass-through disk is typically the more practical choice. In the virtual machine, the pass-through disk is attached either through an IDE controller or through a SCSI controller.

Note: When presenting a pass-through disk, the disk should remain offline on the host OS. This prevents the host and guest OS from using the pass-through disk simultaneously. Pass-through disks do not support VHD-related features such as VHD snapshots, dynamically expanding VHDs, and differencing VHDs. Further considerations for choosing pass-through disks versus VHDs, and detailed information about Hyper-V storage performance based on measurements from the Microsoft performance lab, are available on the Microsoft TechNet blogs.

Cluster Shared Volumes (CSV)

With Windows Server 2008 R2, Hyper-V can use CSV storage to simplify and enhance shared storage usage. CSV enables multiple Windows servers to access SAN storage using a single consistent namespace for all volumes on all hosts, so multiple hosts can access the same Logical Unit Number (LUN) on SAN storage.
CSV enables faster live migration and easier storage management for Hyper-V when used in a cluster configuration. Cluster Shared Volumes is available as part of the Windows Failover Clustering feature of Windows Server 2008 R2.

Hyper-V integration services

The integration services provided with Hyper-V are critical to incorporate into your general practices. These drivers improve performance when virtual machines make calls to the hardware. After installing a guest operating system, be sure to install the latest integration services drivers. Integration services target very specific areas that enhance the functionality or management of supported guest operating systems. Periodically check for integration service updates between major releases of Hyper-V.

Memory usage maximization

In general, allocate as much memory to a virtual machine as you would for the same workload running on a physical machine, without wasting physical memory. If you know how much RAM is needed to run the guest OS with all required applications and services, begin with that amount. Be sure to include a small amount of additional memory for virtualization overhead; 64 MB is typically sufficient. Insufficient memory can create many problems, including excessive paging within a guest OS. This issue can be confusing because it might appear to be a problem with disk I/O performance. In many cases, the primary cause is that an insufficient amount of memory has
been assigned to the virtual machine. It is very important to be aware of application and service needs before making changes throughout a data center. Various sizing considerations can be reviewed in the Hyper-V sizing white paper.

Network configurations

Virtual networks are configured in the Virtual Network Manager in the Hyper-V GUI, or in System Center Virtual Machine Manager in the properties of a physical server. There are three types of virtual networks:

External: An external network is mapped to a physical NIC to give virtual machines access to one of the physical networks that the parent partition is connected to. Essentially, there is one external network and virtual switch per physical NIC.
Internal: An internal network provides connectivity between virtual machines, and between virtual machines and the parent partition.
Private: A private network provides connectivity between virtual machines only.

The network requirements of the specific scenario should be understood before configuring networks of the various types.

Live Migration implementation considerations

Besides the Hyper-V-specific implementation considerations, a few Live Migration-specific considerations are summarized below.

Live Migration: a managed operation

Live migration is a managed failover operation of VM resources. That is, it should be performed when all the solution components are in a healthy state: all servers and systems running and all links up. It is recommended to ensure that the underlying infrastructure is healthy before performing a live migration. However, CLX does have the capability of discovering storage-level conditions unfavorable for performing live migrations. In response to such a situation, CLX stops or cancels the live migration process and informs the user. This is handled in a way that does not cause VM downtime.
For example, if a live migration is initiated while the VM data residing on the storage arrays is still merging and the data between the two storage arrays is not in sync, CLX proactively cancels the live migration and informs the user to wait until the merge is finished. Without this feature, the live migration could fail, or the VM could come online in the remote data center with inconsistent data.
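The behavior described above can be sketched as a pre-flight check performed before a live migration is allowed to proceed. The state names and function below are hypothetical, for illustration only; CLX's actual state model and interfaces differ.

```python
# Hypothetical replication states; CLX's real state names differ.
IN_SYNC, MERGING, SUSPENDED = "in-sync", "merging", "suspended"

def preflight_live_migration(replication_state):
    """Allow a live migration only when the replication group is safe to
    fail over. If the arrays are still merging, cancel the migration
    rather than risk bringing the VM online with inconsistent data.
    The VM keeps running at the source either way, so no downtime."""
    if replication_state == IN_SYNC:
        return True, "proceed with live migration"
    if replication_state == MERGING:
        return False, "merge in progress: wait until arrays are in sync"
    return False, f"replication state '{replication_state}' is unsafe for migration"

ok, reason = preflight_live_migration(MERGING)
```

The key design point mirrored here is that an unfavorable storage state results in a cancelled migration and a message to the user, never in a partially migrated VM.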
Networking considerations

There should be a dedicated network configured in the cluster for live migration traffic. It is recommended to dedicate a 1 Gigabit Ethernet connection for the live migration network between cluster nodes to transfer the large number of memory pages typical for a virtual machine.

Configure all hosts that are part of Failover Clustering on the same TCP/IP subnet. Failover Clustering supports different IP addresses in different data centers through the OR-relationship implementation of IP address resources. However, to use Live Migration, the VM needs to keep the same IP address across data centers in order to achieve continuous access from clients to the virtual machine during and after the migration. That means the same IP subnet must be stretched across data centers for Live Migration to work across data centers without connectivity loss. Several options are available from HP to achieve this goal.

Virtual machine and LUN mapping considerations

Currently, Microsoft Failover Clustering allows live migration of only one VM between a given source and destination server within a cluster. In other words, any cluster node can be part of only one live migration at a time, as either source or destination. In a maximum configuration of 16 nodes in a cluster, at most eight live migrations can happen simultaneously.

On the storage side, a replication group is the atomic unit of failover. It is important to maintain a 1-to-1 mapping between a VM and a storage replication group. Configurations where multiple VMs are hosted on the same LUN (or replication group) are not recommended, because live migration of one VM can affect the others.

Note: With Microsoft Cluster Shared Volumes (CSV), it is possible to host multiple VMs on the same CSV. CSV is not supported by CLX at this time.
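The per-node concurrency limit above reduces to a simple calculation, since each migration consumes two nodes (one source, one destination). The function name is illustrative only.

```python
def max_concurrent_live_migrations(cluster_nodes):
    """Each node can take part in only one live migration at a time,
    as source or destination, so migrations consume nodes in pairs."""
    return cluster_nodes // 2

limit = max_concurrent_live_migrations(16)  # the 16-node maximum configuration
```

This matches the stated limit of eight simultaneous live migrations in a 16-node cluster.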
Distance considerations

The supported distance for Hyper-V Live Migration in a CLX configuration depends on various parameters, such as inter-site link bandwidth and latency (distance), the VM configuration, and the type of storage-based replication. For specific information, refer to the CLX product streams on the SPOCK website.

Note: Currently, CLX supports synchronous storage-based replication only.

Storage replication groups per VM considerations

Typically, a virtual machine uses a single dedicated LUN for VM data (the guest operating system) and one or two LUNs for application data. All of these LUNs can be part of the same replication group or, if isolation between VM and application data is a requirement, they can belong to separate replication groups. Hence, a typical VM configuration needs a maximum of two replication groups, and CLX supports such configurations. However, for the specific number of replication groups per VM supported with CLX Live Migration, refer to the CLX product streams on the SPOCK website.
CLX and Hyper-V scenarios

This section lists various scenarios in which CLX and Hyper-V Failover Clustering can together achieve disaster recovery and high availability. For each of these scenarios, it is important to note that the VM data has to reside on a disk that is part of the storage replication group (known as a DR group for Continuous Access EVA and a device group for Continuous Access XP). This ensures that the VM data is replicated to the recovery site as well. As explained above, the main job of the CLX software is to respond to any site failover or Live Migration request by evaluating the required storage action and reversing the Continuous Access replication direction between sites accordingly. Through integration with Failover Clustering, this functionality is performed automatically and completely transparently to user applications (the VM in this case).

For proper functioning, the VM cluster resource, the VM data disk cluster resource, and the CLX resource managing the corresponding replication group should be in the same Failover Cluster resource group, and they should follow a specific order. The order is determined by the dependency hierarchy of the cluster resources: the CLX resource should be at the top of the hierarchy, the disk resource should depend on the CLX resource, and in turn, the VM resource should depend on the disk resource.

When multiple VMs are created on a single vdisk (LUN), or on multiple vdisks that are part of a single replication group, all the VMs and their corresponding physical disks should be part of the same service group of the Failover Cluster. CLX performs the storage failover at the replication group level, and since all the disks that host the VMs are in a single replication group, only one CLX resource needs to be configured for all the VMs in a service group.
All the VM resources should depend on their respective physical disks, and all the disks should in turn depend on the single CLX resource. For more detailed information, refer to the CLX Installation and Administrator Guide at the link provided in the References section at the end of this document.
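The dependency hierarchy described above can be sketched as a small dependency graph. This is an illustrative model only, with hypothetical resource names; real configuration is done through Failover Cluster Manager or the FailoverClusters PowerShell module, not user code. The sketch derives the order in which the cluster would bring the resources online:

```python
# Models the required Failover Cluster dependency hierarchy
# (VM -> physical disk -> single CLX resource) and derives the
# online order using a topological sort.
from graphlib import TopologicalSorter

# Each key lists the resources it depends on; a dependency must
# come online before its dependents.
dependencies = {
    "VM1":   ["Disk1"],   # each VM depends on the disk hosting it
    "VM2":   ["Disk2"],
    "Disk1": ["CLX"],     # every disk depends on the one CLX resource
    "Disk2": ["CLX"],
    "CLX":   [],          # CLX sits at the top of the hierarchy
}

online_order = list(TopologicalSorter(dependencies).static_order())
print(online_order)  # CLX first, then the disks, then the VMs
```

The topological order makes the failover behavior concrete: the CLX resource comes online first (reversing replication and write-enabling the disks), then the disks, and only then the VMs.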
Scenario A: Disaster recovery and high availability for virtual machine

This is a prominent scenario in which the VM is hosted on a standard Windows Server 2008 installation (the host OS, or host partition) that is clustered with a similar host OS. Whenever a failure is detected by the cluster, the VM can fail over from one host OS to the other. Every time a failover is performed across the nodes, CLX takes the appropriate actions to make the replicated storage accessible and highly available.

The following diagram illustrates this scenario. A multi-site replicated disk is presented to two Hyper-V parent partitions that are clustered using Windows Failover Clustering, with one parent OS residing at each site. The VM is created on the replicated disk and added to the Failover Cluster as a service group on the parent partition. When a failure is detected by the Failover Cluster (for example, a host failure), the Failover Cluster moves the VM to the other cluster node, and CLX ensures that the storage replication direction is reversed and the storage disk is made write-enabled.

Figure 5: Scenario A Before Failover
Figure 6: Scenario A After Failover
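The Scenario A failover sequence can be sketched as follows. This is a minimal illustrative model assuming a two-site synchronous replication pair; the site and LUN names are hypothetical, and in reality the work is done by Failover Clustering and the CLX resource, not by user code:

```python
# Sketch of the Scenario A failover: on a site failover, CLX reverses
# the replication direction so the recovery site's copy of the disk
# becomes write-enabled, and only then can the VM come online there.

class ReplicatedLun:
    def __init__(self, source, destination):
        self.source = source            # site with read/write access
        self.destination = destination  # site receiving replication

    def reverse(self):
        # What CLX does on failover: swap the replication direction.
        self.source, self.destination = self.destination, self.source

def fail_over_vm(lun, target_host):
    lun.reverse()  # 1. CLX reverses replication, write-enabling the disk
    # 2. Failover Clustering then brings the VM online on the new node.
    return f"VM online on {target_host} (disk write-enabled at {lun.source})"

lun = ReplicatedLun(source="SiteA", destination="SiteB")
message = fail_over_vm(lun, target_host="SiteB")
print(message)
```

The ordering in `fail_over_vm` mirrors the cluster dependency hierarchy: the disk cannot be write-enabled at the recovery site until the replication direction is reversed, and the VM cannot start until its disk is available.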
Scenario B: Disaster recovery and high availability for virtual machine and applications hosted by virtual machine

This is an extension of Scenario A. This scenario assumes that an application is hosted by the VM and that this application uses replicated SAN storage for disaster recovery. When the VM fails over, the application also fails over along with it to the other host OS. After failover, for the VM and the application to come online, read/write access to the disks must be made available. This is achieved through replicated disks and failover through CLX. The configuration uses a minimum of two physical disks with multi-site replication: one disk hosts the VM, and the other hosts the application data.

The following diagram illustrates Scenario B. Two multi-site replicated disks are presented to two Hyper-V parent partitions that are clustered using Windows Failover Clustering. On the parent partition, a VM is created on one of the replicated disks, and the data for the application hosted by the VM is stored on the other replicated disk. The VM is added to the Failover Cluster as a service group. When a failure is detected by the Failover Cluster (a host failure, for example), the Failover Cluster moves the VM, and with it the application, to the other cluster node, and CLX ensures that the storage replication direction is reversed, thereby achieving automatic failover and high availability.

Figure 7: Scenario B Before Failover
Figure 8: Scenario B After Failover

For this scenario to function properly, it is important that both the VM and the application hosted by the VM use disks (LUNs) that belong to the same replication group. In that case, a single CLX resource can manage the failover of both disks as a single atomic unit. The resource hierarchy within the cluster should be maintained as described above. In the configuration above, for example, both LUN 1 and LUN 2 should be part of the same replication group.
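The reason both LUNs must share one replication group can be sketched as follows. This is an illustrative model with hypothetical names: CLX fails over at the replication-group level, so every LUN in the group changes direction together, as a single atomic unit:

```python
# Sketch of replication-group-level failover: one operation moves the
# whole group, so the VM's LUN can never be write-enabled at one site
# while the application's LUN is still write-enabled at the other.

class ReplicationGroup:
    def __init__(self, luns, source="SiteA"):
        self.luns = luns
        self.source = source  # write-enabled site for ALL member LUNs

    def fail_over(self, target_site):
        # A single atomic action for every LUN in the group.
        self.source = target_site
        return {lun: self.source for lun in self.luns}

# LUN 1 hosts the VM, LUN 2 hosts the application data (Scenario B).
group = ReplicationGroup(["LUN1_vm", "LUN2_app"])
result = group.fail_over("SiteB")
print(result)  # both LUNs now write-enabled at SiteB
```

Had the two LUNs been placed in separate replication groups with separate CLX resources, a partial failover could leave the VM and its application data write-enabled at different sites, which is exactly what the single-group requirement prevents.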